Spanish scientists are therefore currently exploring a completely different approach, according to a report in the journal Science. In the future, the machine translator should be able to translate content without ever having seen parallel sample texts in the two languages. Instead, the system first learns each required language entirely on its own. If the two learning outcomes are then combined, the computer should be able to reproduce the meaning of a text written in one language in the other. Above all, this means that it is not so much rigid terms that get translated, but meanings.
Internally, this works by having the AI build a map of each language. By training on very large amounts of text, it can statistically analyze the relationships between words and phrases. The result is a vocabulary in which words stand in closer and more distant relationships to one another and refer to each other in a variety of ways, ultimately forming a vast network that represents the language. Once such a map has been created for each of two languages, the two maps simply need to be superimposed, which should then yield very good translations.
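The "superimposing" of two language maps can be sketched with a small toy example: treat each map as a set of word vectors and find the rotation that best aligns one set with the other (a technique known as orthogonal Procrustes alignment, commonly used in this line of research). All vectors here are invented for illustration; real systems learn their maps from huge monolingual corpora, and the article does not specify which alignment method the researchers use.

```python
import numpy as np

# Toy "maps" of two languages: each row is a word vector that would be
# learned from monolingual text alone. The second map is a rotated copy
# of the first, mimicking the assumption that the relational structure
# of the two vocabularies is similar even though the coordinates differ.
rng = np.random.default_rng(0)
src = rng.normal(size=(5, 3))        # map of language A (5 words, 3 dims)

q, _ = np.linalg.qr(rng.normal(size=(3, 3)))  # unknown offset between maps
tgt = src @ q                        # map of language B

# "Superimposing" the maps = finding the rotation w that best aligns
# src with tgt; orthogonal Procrustes solves this in closed form via SVD.
m = src.T @ tgt
u, _, vt = np.linalg.svd(m)
w = u @ vt

# After alignment, each source vector lands on its counterpart, so a
# nearest-neighbour lookup would pair up translation candidates.
aligned = src @ w
print(np.allclose(aligned, tgt))     # → True
```

With only five made-up "words" the recovery is exact; with real embeddings the maps only roughly overlap, and translation candidates are read off as nearest neighbours in the aligned space.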
However, there are still some pitfalls that will show how flexibly the systems can ultimately respond to linguistic peculiarities. After all, natural languages are not rigid systems that reflect an objective reality. They are also strongly shaped by the underlying culture, which means that the networks can look quite different even for the same terms. Anyone learning a foreign language has to deal with the same problems – and in the end these difficulties have proved solvable, provided the corresponding experience can be incorporated into the learning process.