
Google may have created the first artificial general intelligence, one that “competes” with the human mind


DeepMind, a Google subsidiary specializing in artificial intelligence, has just unveiled its new AI, called “Gato”. Unlike “classic” AIs, which specialize in a single task, Gato can perform more than 600 tasks, often much better than humans. Controversy is emerging over whether this really is the first “artificial general intelligence” (AGI). Experts remain skeptical about DeepMind’s announcement.

Artificial intelligence has changed many disciplines for the better. Highly specialized neural networks are now capable of producing results in many areas that far exceed human capabilities.

One of the great challenges in AI is building a system that achieves artificial general intelligence (AGI), also called strong AI. Such a system must be able to understand and master any task a human being could. It could therefore compete with human intelligence, and even develop a measure of consciousness. Earlier this year, Google unveiled an AI that can code like an average programmer. Recently, in this AI race, DeepMind announced the creation of Gato, an artificial intelligence presented as the world’s first AGI. The results were published on arXiv.

An unprecedented generalist agent model

A single AI system that can solve many tasks is not new. For example, Google recently started using a system for its search engine called MUM (Multitask Unified Model), which can process text, images, and video to perform tasks ranging from researching cross-language variations in the spelling of a word to associating searches with relevant images.

Incidentally, Google Senior Vice President Prabhakar Raghavan gave an impressive example of MUM in action, using the mock query: “I have climbed Mount Adams and now want to climb Mount Fuji next fall; what else should I do to prepare?” MUM enabled Google Search to show the differences and similarities between Mount Adams and Mount Fuji. It also surfaced articles about the gear needed to climb the latter. Nothing impressive, one might say, but where Gato is genuinely innovative is in the diversity of the tasks it tackles and in the way it is trained, from one single system.

Gato’s guiding design principle is to train on the widest possible variety of relevant data, spanning diverse modalities such as images, text, proprioception, joint torques, button presses, and other discrete and continuous observations and actions.

To process this multimodal data, the researchers encode it into a flat sequence of “tokens”. These tokens represent data in a form Gato can understand, allowing the system, for example, to work out which combination of words in a sentence makes grammatical sense. These sequences are batched and processed by a transformer neural network, the architecture typically used in language processing. The same network, with the same weights, is used for all the different tasks, unlike the traditional approach of training a separate network, with its own weights, for each task. Simply put, the weights determine how much importance each piece of incoming information carries when the network computes its output.
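To make the idea concrete, here is a minimal sketch in Python (PyTorch), not DeepMind’s code: text tokens and discretized continuous sensor readings are flattened into one shared id space and fed through a single transformer with one set of weights. All names, vocabulary sizes, and layer shapes are illustrative assumptions, and the toy uses a plain encoder where the real model is a causal decoder.

```python
import torch
import torch.nn as nn

TEXT_VOCAB = 32_000      # assumed subword vocabulary size
DISCRETE_BINS = 1_024    # assumed bins for continuous values (e.g. joint torques)
VOCAB = TEXT_VOCAB + DISCRETE_BINS

def tokenize_continuous(values: torch.Tensor) -> torch.Tensor:
    """Map continuous observations/actions in [-1, 1] to discrete token ids
    placed after the text vocabulary, so every modality shares one id space."""
    bins = ((values.clamp(-1, 1) + 1) / 2 * (DISCRETE_BINS - 1)).long()
    return TEXT_VOCAB + bins

class TinyGeneralist(nn.Module):
    """One transformer, one set of weights, for every task."""
    def __init__(self, d_model: int = 256, n_layers: int = 4, n_heads: int = 4):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.trunk = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, VOCAB)  # scores for the next token

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        return self.head(self.trunk(self.embed(tokens)))

# Text tokens and discretized sensor readings end up in the same flat sequence.
text_tokens = torch.randint(0, TEXT_VOCAB, (1, 5))
sensor_tokens = tokenize_continuous(torch.tensor([[0.1, -0.7, 0.3]]))
sequence = torch.cat([text_tokens, sensor_tokens], dim=1)
logits = TinyGeneralist()(sequence)  # shape (1, 8, VOCAB)
```

The point of the design is visible in the last lines: once everything is a token, one network can consume a sentence, a joystick press, or a joint reading through exactly the same weights.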

Seen this way, Gato can be trained and sampled from in the same way as a standard large-scale language model, on a wide variety of datasets, including agents’ experience in simulated and real environments, in addition to a variety of natural language and image datasets. At run time, Gato uses context to assemble these sampled tokens and determine the form and content of its responses.

Example of Gato running. The system “consumes” a sequence of previously sampled observation and action tokens to produce the next action. The new action is applied by the agent (Gato) to the environment (a game console in this image), a new set of observations is obtained, and the process repeats. © S. Reed et al., 2022.
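A toy, self-contained sketch of the loop described in the caption follows; the environment, the observation tokenization, and the action sampler here are all illustrative stand-ins, not the paper’s interfaces.

```python
import random

class DummyEnv:
    """Illustrative stand-in for the environment (the game console in the figure)."""
    def reset(self):
        return [0.0, 0.0]  # initial observation
    def step(self, action):
        obs = [random.uniform(-1, 1), random.uniform(-1, 1)]
        done = random.random() < 0.05  # episode ends at random in this toy
        return obs, done

def sample_action(context):
    """Stand-in for Gato autoregressively sampling its next action tokens."""
    return [random.randrange(4)]

context = []  # running history of observation and action tokens
env = DummyEnv()
obs = env.reset()
for _ in range(100):
    context.extend(int(x > 0) for x in obs)  # toy observation tokenization
    action = sample_action(context)
    context.extend(action)                   # actions join the same token history
    obs, done = env.step(action)
    if done:
        break
```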

The results are quite heterogeneous. When it comes to dialogue, Gato falls far short of rivaling the prowess of GPT-3, OpenAI’s text-generation model. It can give wrong answers during conversations, for example replying that Marseille is the capital of France. The authors point out that this could probably be improved with further scaling.

Yet it proved to be very capable in other areas. Its designers claim that, half the time, Gato outperforms human experts on 450 of the 604 tasks listed in the research paper.

Examples of the tasks performed by Gato, as strings of tokens. © S. Reed et al., 2022.

“The game is over”, really?

Some AI researchers see AGI as an existential catastrophe for humans: at worst, a “superintelligent” system surpassing human intelligence would replace humanity on Earth. Other experts believe we will not see the emergence of these AGIs in our lifetime. This is the skeptical view that Tristan Greene argued in his editorial on TheNextWeb. He explains that it is easy to mistake Gato for a genuine AGI. The difference, however, is that a general intelligence could learn to do new things without prior training.

The response to this article was not long in coming. On Twitter, Nando de Freitas, a researcher at DeepMind and professor of machine learning at the University of Oxford, declared that the game was over (“The game is over”) in the long quest for artificial general intelligence. He added: “It’s about making these models bigger, safer, more compute-efficient, faster at sampling, with smarter memory, more modalities, innovative data, online/offline… Solving these scaling challenges is what will deliver AGI.”

Still, the authors caution against the development of these AGIs: “While generalist agents are still an emerging field of research, their potential impact on society calls for a thorough interdisciplinary analysis of their risks and benefits. […] Harm mitigations for generalist agents are relatively underdeveloped and require further research before these agents are deployed.”

Moreover, generalist agents, capable of performing actions in the physical world, present new challenges that require new mitigation strategies. For example, physical embodiment can lead users to anthropomorphize the agent, leading to misplaced trust in the event of a system failure.

Beyond these risks of seeing AGI tip into behavior harmful to humanity, no data currently demonstrates its ability to consistently produce solid results. This is mainly because human problems are often hard, do not always have a single solution, and often allow no prior training.

Despite Nando de Freitas’ reaction, Tristan Greene stands just as firm in his opinion on TheNextWeb: “It’s nothing short of amazing to watch a machine perform feats of misdirection and conjuring à la Copperfield, especially when you realize that machine is no smarter than a toaster (and clearly dumber than the dumbest mouse).”

Whether or not we agree with these statements, and however optimistic we may be about the development of AGIs, it nevertheless appears that the scaling up of such intelligences, capable of competing with our human minds, is far from complete.

Source: arXiv