"Babel Tower" project

Source: www.science.org/doi/10.1126/science.abg0818#fa


The "Babel Tower" project aims to deliver a language technology that allows people on Earth to communicate in all currently spoken languages without any barriers.

The project is a sub-project of the Asteroids project.


Information from the "Law of One" points towards 52+1 star systems that had an effective protection technology in place for a long cosmic period. According to that information, the technology was disabled about half a million years ago when an energy strike, aimed from inside our solar system at a planet outside that cluster of 53 star systems, was set up erroneously with Jupiter as a reflector. As a result, the energy returned to the initiator and destroyed a planet inside our solar system.

There seems to be an ongoing effort to re-establish the protection technology of the 53 star systems, which involves rebuilding the destroyed planet in our solar system. Could it be that representatives of the other 52 star systems were more present on Earth in the past than today, and that their activities somehow relate to some of the 52 historic languages involved in the above analysis of language divergence?

In general, the picture is reminiscent of the "Tower of Babel" story, isn't it?

First stability calculations of what will happen in our solar system after the destroyed planet has been rebuilt point towards precisely one possible orbit that would not put the orbits of other planets, such as Earth, at risk. Earth's orbit would change slightly at that time in the far future, possibly resulting in 53 weeks per year; whether there is a relationship to the 53 star systems in our cluster remains speculation. That assumption would have to be backed up by running model simulations once the bookkeeping task for millions of asteroids has progressed further than it has today.


The "Babel Tower" project aims at reversing the language divergence process that happened on Earth. The current approach is, as a first step, to build a language technology that instantly translates from any currently actively spoken language into any of those languages, depending on which language the user of the envisioned technology is familiar with.

The hardware implementation would be a headset that analyses the sound arriving at the microphone from nearby speakers, cancels out the sound components of each speaker, and replaces them with the translated information stream while preserving the pitch and voice characteristics of the speaker.

The basic linguistic concept is to translate the source language into an intermediate language in a first step and then translate the intermediate language into the target language in a second step. While it is well understood from a linguistic point of view that the highest quality could be achieved only by translating a source language directly into a target language, that approach would result in far more than 10,000 translation directions (language pairs).

Following the intermediate-language approach, the effort can be reduced to twice the number of languages actively spoken on Earth today. That looks like a good trade-off that could deliver the intended result in a much shorter time while still maintaining a high quality level, despite the use of an intermediate language. Obviously, the intermediate language has to be chosen carefully, and it should offer all language concepts found in the source/target languages.
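The trade-off above can be made concrete with a quick count. The language totals used below are only illustrative assumptions, since estimates of the number of actively spoken languages vary widely:

```python
# Compare the number of translation directions needed for direct pairwise
# translation versus the intermediate-language approach. The language
# counts below are illustrative assumptions, not a definitive census.

def direct_pairs(n: int) -> int:
    """Ordered source-to-target directions when every pair is translated directly."""
    return n * (n - 1)

def via_intermediate(n: int) -> int:
    """One direction into and one out of the intermediate language per language."""
    return 2 * n

# e.g. a set of well-resourced languages vs. roughly all living languages
for n in (150, 7000):
    print(n, direct_pairs(n), via_intermediate(n))
```

Even for 150 languages, direct translation already needs over 22,000 directions, while the intermediate-language route needs only 300; the gap grows quadratically with the language count.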

What intermediate language would serve the purpose best? Does it still have to be invented in order to meet the mentioned requirements, or does a good candidate language already exist?

Intermediate language

Sanskrit has actually already been discussed as a possible "language container" for other languages.

One could leave out the Sandhi rules, since those come into play only for spoken Sanskrit.

An advantage of Sanskrit may be that it is still spoken and is an official language in Karnataka, India.

Yoda the proposal appreciated has :)


Please feel free to contribute to the project if you want to.

Project structure - language-dependent tasks

Syntax rules

The rules for generating valid sentences and statements have to be expressed in a form suitable for automatic language recognition and for subsequent machine translation to and from the chosen intermediate language.
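As a sketch of what such a "suitable form" could look like, the fragment below encodes a handful of syntax rules as a tiny context-free grammar and checks candidate sentences against it. The categories, rules, and toy lexicon are invented for illustration only and are not part of any real grammar used by the project:

```python
# A minimal machine-readable grammar: nonterminal rewrite rules plus a
# lexicon, and a naive recursive recognizer. All rules and words here
# are illustrative placeholders.

GRAMMAR = {
    "S":  [["NP", "VP"]],            # sentence = noun phrase + verb phrase
    "NP": [["Det", "N"]],            # noun phrase = determiner + noun
    "VP": [["V", "NP"], ["V"]],      # verb phrase, transitive or intransitive
}
LEXICON = {
    "Det": {"the", "a"},
    "N":   {"dog", "cat"},
    "V":   {"sees", "sleeps"},
}

def parse(symbol, tokens, pos):
    """Yield every token position reachable after matching `symbol` from `pos`."""
    if symbol in LEXICON:
        if pos < len(tokens) and tokens[pos] in LEXICON[symbol]:
            yield pos + 1
        return
    for rule in GRAMMAR.get(symbol, []):
        positions = [pos]
        for part in rule:
            positions = [q for p in positions for q in parse(part, tokens, p)]
        yield from positions

def is_valid(sentence):
    """True if the whole sentence derives from the start symbol S."""
    tokens = sentence.lower().split()
    return any(p == len(tokens) for p in parse("S", tokens, 0))

print(is_valid("the dog sees a cat"))  # True
print(is_valid("sees the dog"))        # False
```

A production system would need far richer formalisms (agreement, morphology, ambiguity handling), but the principle of storing rules as data that both a recognizer and a generator can consume stays the same.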

Dictionaries and translation memories

High-quality translations require translation templates derived from high-quality sources, such as books that have been translated into other languages. The result will be dictionaries and translation memories that are used for translating, on demand, any text to and from the chosen intermediate language.
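A translation memory at its core is a store of previously translated segments with approximate lookup. The sketch below uses Python's standard difflib for fuzzy matching; the stored segments and the placeholder target strings are invented examples, not real project data:

```python
# Minimal translation-memory lookup: exact match first, then fuzzy match
# against stored segments. Entries are illustrative placeholders; a real
# memory would be built from translated books as described above.
import difflib

translation_memory = {
    "good morning": "<stored intermediate-language rendering 1>",
    "thank you":    "<stored intermediate-language rendering 2>",
}

def lookup(segment, threshold=0.8):
    """Return (translation, match_score) for the closest stored segment."""
    if segment in translation_memory:
        return translation_memory[segment], 1.0
    best = difflib.get_close_matches(segment, translation_memory,
                                     n=1, cutoff=threshold)
    if best:
        score = difflib.SequenceMatcher(None, segment, best[0]).ratio()
        return translation_memory[best[0]], score
    return None, 0.0
```

Fuzzy matching lets a memory built from books still serve slightly varied everyday phrasings, at the cost of a similarity threshold that has to be tuned per language.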

Project structure - general tasks


Hardware

There are three major components that the system will be made of:

Language recognition requires decomposing the voice signal from a microphone into frequency patterns. That task will be supported by running a Fast Fourier Transform (FFT) on the input signal, implemented on an FPGA circuit. The resulting patterns may be analyzed further on-chip, or they can be forwarded to a connected smartphone, which sends them to a server on the Internet for final language recognition and decomposition into words and sentences in one or more source languages, which are then translated into the target language in the next step.
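The frequency-decomposition step can be sketched in software with NumPy's FFT standing in for the FPGA implementation; the sample rate and the test tone below are arbitrary illustration values:

```python
# Decompose one audio frame into frequency components and find the
# dominant frequency. A pure tone stands in for a voice signal here.
import numpy as np

sample_rate = 16_000                        # Hz, a typical speech sample rate
t = np.arange(0, 0.064, 1 / sample_rate)    # 64 ms frame = 1024 samples
signal = np.sin(2 * np.pi * 220 * t)        # 220 Hz test tone

# Window the frame to reduce spectral leakage, then take the real FFT.
spectrum = np.fft.rfft(signal * np.hanning(len(signal)))
freqs = np.fft.rfftfreq(len(signal), 1 / sample_rate)
peak_hz = freqs[np.argmax(np.abs(spectrum))]
print(peak_hz)  # within one FFT bin of 220 Hz
```

On the FPGA the same transform would run continuously on overlapping frames, with the per-frame spectra forming the "frequency patterns" passed on for recognition.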

The context-dependent translation service may be implemented partly locally, on specialized circuits, for some frequently used languages, or by servers on the Internet that can provide a round-trip signal delay of less than about 100 ms, in order to allow live translation of less frequently used languages.
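Whether the 100 ms round-trip target is achievable depends on how the budget is split across the pipeline. The per-stage figures below are invented placeholders for illustration, not measurements:

```python
# Hypothetical latency budget for one round trip through the pipeline.
# All stage figures are assumed placeholder values.
latency_budget_ms = 100

stage_latency_ms = {
    "microphone capture + FFT":   10,
    "uplink to server":           25,
    "recognition + translation":  30,
    "downlink to headset":        25,
    "audio resynthesis":           8,
}

total = sum(stage_latency_ms.values())
print(total, "ms -", "OK" if total <= latency_budget_ms else "over budget")
```

A budget like this makes the design pressure explicit: network transit alone can consume half the allowance, which is why frequently used languages may warrant local, specialized circuits.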

The composition of the output audio signal will be based on the input frequency patterns. These will be mixed with the intended target-language audio stream such that the frequency components of the source language are replaced by target-language components matching the pitch and voice characteristics of the original signal.
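One ingredient of matching the speaker's pitch is estimating it per frame; a classic approach is autocorrelation. The sketch below uses a pure tone as a stand-in for a voiced frame, and all parameters are illustrative assumptions:

```python
# Pitch estimation via autocorrelation: the fundamental period shows up
# as the first strong peak in the frame's autocorrelation.
import numpy as np

def estimate_pitch(frame, sample_rate, fmin=80, fmax=400):
    """Estimate the fundamental frequency (Hz) of a voiced frame."""
    frame = frame - frame.mean()
    corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo = int(sample_rate / fmax)          # shortest plausible period, in samples
    hi = int(sample_rate / fmin)          # longest plausible period, in samples
    lag = lo + np.argmax(corr[lo:hi])
    return sample_rate / lag

sample_rate = 16_000
t = np.arange(0, 0.05, 1 / sample_rate)
voiced = np.sin(2 * np.pi * 150 * t)      # stand-in for a 150 Hz voice
print(estimate_pitch(voiced, sample_rate))  # close to 150 Hz
```

A real system would track this estimate over time and shift the synthesized target-language audio accordingly, alongside transferring the spectral envelope that carries the speaker's voice timbre.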


Software

Software will be needed to implement the tasks mentioned in the Hardware section. Additionally, dealing with the language-dependent tasks requires software that lets scientists with a linguistic background collaborate on generating syntax rules and translation memories for all spoken languages on Earth.

Started: 1997 by Eckhard Kantz