Optical technology enabling the growth of artificial intelligence

We face complex and dire challenges in today’s world. The only certainty is change. To predict future developments for the good of all, we will need to absorb and analyze information on an unprecedented scale. Artificial intelligence (AI) has a pivotal role to play.

But for AI to expand, we need new networking technology that boosts transmission speeds and improves responsiveness.

The answer? Innovative Optical and Wireless Network (IOWN). IOWN is a communication infrastructure that uses optical and photonic technologies to deliver ultra-high-capacity, ultra-low-latency and ultra-low-power communications.

The expectations and demands placed on AI align closely with what IOWN provides. Consider asynchronous distributed learning, in which AI systems in diverse domains collaborate and share knowledge.

This approach helps solve the problem of centralized data collection, which is impractical when information comes from so diverse an array of sources: vehicles, factories, individuals, the environment and countless other sensors.

IOWN will serve as the communication infrastructure for such systems, enabling AIs to process vast amounts of information and enhance their interactions.
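
To make the idea concrete, here is a minimal sketch, in Python, of how an asynchronous scheme like this can learn without pooling raw data: each node trains on data it keeps locally and pushes a model update to a shared server whenever it is ready, without waiting for the others. This is purely illustrative, not NTT’s design; all names and parameters are assumptions.

```python
# Illustrative sketch of asynchronous distributed learning (not NTT's
# implementation): nodes keep their raw data private and share only
# model updates with a central parameter server.
import numpy as np

rng = np.random.default_rng(0)

# Shared model: weights of a simple linear regressor y = X @ w.
w = np.zeros(3)

# Each "node" (e.g. a vehicle, factory or sensor hub) holds private data.
nodes = []
for _ in range(4):
    X = rng.normal(size=(100, 3))
    y = X @ np.array([1.5, -2.0, 0.5]) + rng.normal(scale=0.1, size=100)
    nodes.append((X, y))

def local_gradient(w, X, y):
    """Gradient of mean-squared error computed on one node's local data."""
    return 2 * X.T @ (X @ w - y) / len(y)

# Asynchronous updates: nodes report in an arbitrary order, and the
# server applies each update immediately rather than waiting for all.
for step in range(200):
    X, y = nodes[rng.integers(len(nodes))]  # whichever node is ready next
    w -= 0.05 * local_gradient(w, X, y)     # server applies its update

print("learned weights:", np.round(w, 2))   # approaches [1.5, -2.0, 0.5]
```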

What are Large Language Models (LLMs)?

Large language models (LLMs) are a rapidly evolving branch of natural language processing, enabling AI to understand and generate everyday human language.

Progress hinges on expanded data availability, enhanced computational capabilities, and the development of new training algorithms.
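
The core mechanic is that an LLM generates text one token at a time, each choice conditioned on everything generated so far. The toy character-level bigram model below runs the same greedy generation loop; it is purely illustrative and vastly simpler than any real LLM, whose next-token predictor is a large neural network trained on enormous corpora.

```python
# Toy illustration of autoregressive generation: real LLMs share this
# loop, but predict the next token with a large neural network rather
# than this tiny bigram frequency table.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat ran"

# "Training": count which character tends to follow which.
follows = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    follows[a][b] += 1

def generate(prompt, length=20):
    text = prompt
    for _ in range(length):
        candidates = follows[text[-1]].most_common(1)  # likeliest next char
        if not candidates:
            break
        text += candidates[0][0]
    return text

print(generate("th"))
```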

Some early LLMs are already beginning to gain widespread adoption and are expected to have substantial impacts on business and society at large.

NTT, for example, has long been committed to research and development in natural language processing technology.

The company has launched its own LLM, called ‘tsuzumi,’ in anticipation of AI’s potential to improve people’s well-being around the world. Tsuzumi combines an energy-efficient design with strong language processing capabilities and adaptability to various user needs.

Rapid advancements mean LLMs can now interact naturally with humans. But they are not without ethical and technical challenges; for instance, LLMs are susceptible to learning biases from training data, which can result in inappropriate outputs.

Despite their remarkable capabilities, LLMs still have difficulty collaborating seamlessly with humans.

The inner workings of LLMs are also not yet entirely clear, making it difficult to understand how they generate their output. More research and development will therefore be needed for some time to come.

NTT and the future of AI

NTT’s goal is to develop an AI cognitive engine that can collaborate naturally with people and contribute to societal and individual well-being.

This means developing AIs that interact through the same interfaces as human beings, such as vision and language. NTT is now developing VisualMRC, a model designed to interpret language within web pages visually, much as a human does, and SlideVQA, a model that answers questions based on multiple images, such as slide decks. NTT is also building a Japanese visual reading comprehension model.
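
NTT has not published these models in a form shown here, but the task they address, answering a question about document images, can be sketched with a generic, publicly available document-QA model via the Hugging Face transformers library. The model choice and file names below are illustrative assumptions, not NTT’s models.

```python
# Illustrative only: a generic open document-QA model standing in for
# the SlideVQA-style task of answering questions over slide images.
# Requires the `transformers`, `Pillow` and `pytesseract` packages,
# plus the Tesseract OCR binary.
from transformers import pipeline

doc_qa = pipeline("document-question-answering",
                  model="impira/layoutlm-document-qa")

# Hypothetical image files standing in for a slide deck.
slides = ["slide_01.png", "slide_02.png", "slide_03.png"]
question = "What is the projected launch date?"

# Ask the same question of every slide and keep the answer the model
# is most confident about across the whole deck.
best = max(
    (doc_qa(image=path, question=question)[0] for path in slides),
    key=lambda ans: ans["score"],
)
print(best["answer"], f"(score {best['score']:.2f})")
```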

With these models in place, NTT will then be able to create versatile software robots capable of interactive collaboration with human beings.

The idea is that everyone will be able to collaborate with this software as an assistant. The world will then see an increase in communication not only between people and AI, but also between AI and AI, and between AI and objects.

IOWN will be instrumental in enabling all of these advancements. Imagine the vast volumes of data that will need to be processed in real time, including text and all the audiovisual information perceived by humans.

IOWN will connect the massive amounts of data generated by people, devices, sensors and the digital world, enabling collaboration between humans and cutting-edge AI technologies.



