
In Conversation: How AI is Redefining Language Barriers in the Digital Age



At a time when Artificial Intelligence (AI) and Natural Language Processing (NLP) are making groundbreaking advances, Benjamin Muller, who earned his Ph.D. at Inria Paris and is currently a postdoctoral researcher at Meta in New York, is pioneering efforts to overcome language barriers globally.

Muller’s research focuses on making AI systems understand and generate a wide range of languages. This challenge is not just technical: it is a quest to improve human connectivity and democratize access to technology.

We sat down with Benjamin to explore his journey and uncover insights on AI’s role in redefining language barriers in the digital age.

Q: Can you share your journey into AI and NLP with us?

A: I got into AI in 2015, when Deep Learning, a machine learning approach that uses trainable artificial neural networks, started to massively impact Natural Language Processing by improving the quality of AI systems on many tasks, such as machine translation and document classification. At the time, we had to design and train one model per NLP task, so any progress was confined to a particular domain and use case.

Fast-forward to 2018, when I started doing full-time research as a PhD student: Google released the BERT model. It was the first genuinely general-purpose language model, and a revolution for the field.

That’s when I started working intensively on scaling language models multilingually. More specifically, our idea was to make BERT a better multilingual model by adapting it to languages it was not designed for. We adapted it to Bambara, Nigerian Pidgin, and North African Arabic, and we were among the first teams in the world to show that this was feasible with only a limited amount of data.

Q: What inspired you to focus specifically on scaling AI technologies to multiple languages?

A: First, I’m fascinated by the idea that, while I will never be able to speak or understand more than a few languages (I speak about three now), AI systems can help us understand, speak, and generate text in dozens. When seen this way, AI expands what we can understand. It opens up our awareness of the world.

Next, my passion for mathematics and an interest in languages naturally led me to this field. The possibility of AI helping us understand and communicate in numerous languages beyond our human capacity truly fascinated me.

For example, when I got into AI, I was captivated by one statistical property of AI systems. To build these systems, words are typically represented as vectors. Around 2015, it was shown that learning a geometric transformation between these vector spaces makes it possible to map one language onto another. I explored this technique in 2016 and used it to improve the classification of French documents using training data in English.
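To make this concrete, here is a minimal sketch of one common formulation of that idea: orthogonal Procrustes alignment between two embedding spaces. The interview does not specify which method Muller used, and the vectors below are random placeholders standing in for real English and French word embeddings.

```python
# Sketch: learning an orthogonal map between two word-embedding spaces.
# The embeddings here are random placeholders, not real trained vectors.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 300))  # English vectors, one row per word
Y = rng.normal(size=(5000, 300))  # French vectors of their translations

# Orthogonal Procrustes: find W minimizing ||X W^T - Y|| with W orthogonal.
# Solution: W = U V^T, where U S V^T is the SVD of Y^T X.
U, _, Vt = np.linalg.svd(Y.T @ X)
W = U @ Vt

def map_to_french(english_vec: np.ndarray) -> np.ndarray:
    """Project an English word vector into the French embedding space."""
    return W @ english_vec
```

Run in the other direction (for an orthogonal map, simply apply the transpose of W), the same construction projects French vectors into the English space, which is one way a classifier trained only on English data can be made to handle French documents, as in the experiment described above.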

Q: What are your primary research goals?

A: There are about 7,000 languages in the world today. Africa alone illustrates this diversity, with estimates ranging between 1,250 and 3,000 languages spoken across the continent. One of my primary research goals is to extend AI’s reach to cover as many of these languages as possible, including languages that are primarily spoken rather than written. Essentially, my goal is to make AI systems robust and general across the immense linguistic diversity of human languages. To put things into perspective, AI and digital technologies currently cover, at most, about 5% of the world’s languages, and beyond the 30 most studied languages, the quality of NLP technologies, such as search engines, machine translation systems, and speech recognition, degrades drastically.

With more than 500 languages, Nigeria is a fascinating example of such linguistic diversity. Even the languages with the most speakers, such as Hausa, Igbo, and Yoruba, benefit from only limited technology coverage. Other languages, like Bade or Reshe, have nearly no technological coverage: they are spoken by a few hundred thousand people and are in danger of extinction.

For these languages, NLP technologies can help preserve, and potentially revitalize, them by providing tools for their communities. For example, NLP can help translate content into these languages, and it can transcribe and digitize content to help linguists describe them.

My second goal is to develop AI systems that understand and generate multiple modalities, like speech and images. My hope, and my intuition, is that processing speech and images alongside text will also improve our models’ textual abilities. For instance, multimodal models could help us achieve better multilingual capabilities.

Q: How do you see the current state of NLP technology’s ability to handle multiple languages?

A: Until a few years ago, scaling NLP technology to new languages was very challenging. We had to collect a large quantity of training data for each language and use case, such as human-annotated examples, from which the system would learn. This data collection is usually time-consuming and costly.

Self-supervised learning techniques have been a game-changer, allowing us to build multilingual systems covering about 100 languages without requiring extensive human-annotated data. However, many challenges remain. Scaling to the next thousand languages is still unsolved, and making AI systems work for, and adapt to, the socio-cultural context of users across all these languages is an open technological problem.
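As one concrete illustration of this shift: models such as XLM-R were pretrained with self-supervised masked-language modeling on raw text in roughly 100 languages, with no human-annotated examples. The snippet below is a minimal sketch using the Hugging Face transformers library (a toolkit and model the interview itself does not name), showing one such model filling in masked words across languages.

```python
# Sketch: a multilingual model pretrained purely by self-supervision
# (masked-language modeling) filling masked tokens in several languages.
from transformers import pipeline

fill = pipeline("fill-mask", model="xlm-roberta-base")

# The same single model handles many languages, with no task-specific
# annotated training data involved in its pretraining.
print(fill("Paris is the capital of <mask>.")[0])      # English
print(fill("Paris est la capitale de la <mask>.")[0])  # French
```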

Q: What NLP breakthroughs do you anticipate seeing in the next few years?

A: I anticipate even more progress in what we call multimodal modeling. Multimodal models integrate text, speech, and images within a single model, potentially enhancing the reasoning abilities of AI and its proficiency in multiple languages.

Q: What are the biggest challenges in scaling AI for a broader language spectrum?

A: The primary challenge is creating evaluation benchmarks for new languages and collecting sufficient raw textual data.

To address this need, I recently worked on releasing the Belebele benchmark (Belebele means “big, large, fat” in Bambara). Belebele is a first-of-its-kind evaluation benchmark: before its release, no massively multilingual benchmark existed for evaluating reading comprehension, one of the foundational skills of Large Language Models.

Thanks to Belebele, it is now possible to measure progress and hill-climb toward better AI systems in up to 122 language variants. I hope this will pave the way for future progress in multilingual AI systems. Belebele reached hundreds of thousands of downloads within a few months, which shows how interested researchers and developers are in making progress on this topic.
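For readers who want to try it, Belebele is publicly available; the sketch below loads one language configuration with the Hugging Face datasets library. The dataset ID, configuration name, and field names reflect the public release as best understood here, and are not details taken from the interview.

```python
# Sketch: loading one Belebele language configuration.
# Dataset ID, config name, and field names are assumptions based on the
# public Hugging Face release (facebook/belebele).
from datasets import load_dataset

data = load_dataset("facebook/belebele", "bam_Latn", split="test")  # Bambara

example = data[0]
print(example["flores_passage"])      # the passage to read
print(example["question"])            # a multiple-choice question
print(example["mc_answer1"])          # first of four candidate answers
print(example["correct_answer_num"])  # 1-4 index of the correct answer
```

Each of the 122 language variants is a separate configuration, so the same evaluation harness can be run unchanged across all of them.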

Q: How does your work contribute to bridging the digital language divide?

A: AI technology is impacting our daily lives. It has a massive influence on the workplace by improving productivity and automating repetitive tasks. It also has the potential to revolutionize how we learn and access information.

By building better multilingual AI systems, I hope my research will contribute to democratizing access to digital experiences, ensuring that AI technologies offer the same quality of experiences to speakers of all languages.

Q: How do you perceive the evolution of AI and NLP in the global tech landscape?

A: AI has evolved from rule-based systems to models capable of learning from raw data. This trend towards minimal human intervention in the learning process is set to continue, paving the way for more autonomous and sophisticated AI systems.

Q: Where do you see your research leading you in the next five years?

A: I’m enthusiastic about the potential of multimodal modeling. My future projects will focus on designing better multimodal language models (i.e., models that can understand and generate images, audio, and potentially videos). I hope that multimodal models will unlock better multilingual modeling by integrating richer socio-cultural contexts and providing better experiences to users in languages different from English.

Final Thoughts

Benjamin Muller’s contributions to AI and NLP represent a significant step towards a world where language no longer divides but unites. As AI continues to improve and evolve, Muller’s work paves the way for a future where digital experiences are universally accessible and transcend linguistic boundaries.
