Can Artificial Intelligence imagine things? Imagination is a complex process. Considering how many elements can make up an imaginary scenario gives a sense of how profound this mental process can be.
A team of researchers from the University of Southern California, USA, presented a project that seeks to endow AI with a similar capability: systems able to create new concepts from known elements.
A new system that allows Artificial Intelligence to imagine
At first glance, the idea of training an artificial intelligence system with “human capabilities” may sound strange. However, this has a practical, interesting, and approachable purpose.
Imagination is generally defined as a creative process in which new mental images are constructed from previously perceived elements. Applied to AI, a system trained on a large set of drug formulas, for example, could decompose them into their components and functions and then recombine those parts into new candidate formulas.
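As a toy illustration of that recombination idea, the sketch below enumerates novel combinations of components extracted from known formulas. The component slots and names are invented for the example; a real system would learn these decompositions rather than read them from labeled dictionaries.

```python
# Toy sketch (assumed example, not the researchers' code): decompose known
# "formulas" into component slots, then enumerate every recombination,
# including ones never seen together during training.
from itertools import product

known = [
    {"base": "aspirin", "binder": "starch", "coating": "none"},
    {"base": "ibuprofen", "binder": "cellulose", "coating": "sugar"},
]

# Collect the distinct values observed for each component slot.
slots = {k: sorted({f[k] for f in known}) for k in known[0]}

# Every cross-product of known component values is a candidate new recipe.
candidates = [dict(zip(slots, combo)) for combo in product(*slots.values())]
print(len(candidates))  # 2 * 2 * 2 = 8 combinations
```

Only two of the eight combinations appeared in the training set; the other six are "imagined" by recombining familiar parts.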
Mechanisms of this kind have been presented before, but their action was limited to a specific context, such as the drug example just mentioned. Unlike those, the AI developed by the USC researchers can be extended to different applications: in each new scenario, the system should be able to define its own rules and variables and configure as many combinations of attributes as possible.
To achieve this versatility, the researchers used a mechanism similar to the one behind deepfakes. Just as a deepfake algorithm can identify a person’s face and gestures and emulate them on a digitally replaced face, this Artificial Intelligence can recognize the components of each scenario it analyzes.
In a conversation with his university, student Yunhao Ge, part of the team behind this development, illustrated this process with the movie Transformers. “It can take the shape of a Megatron car, the color and pose of a yellow Bumblebee car, and the background of New York’s Times Square. The result will be a colored Bumblebee Megatron Car, driving in Times Square, even if this sample was not witnessed during the training session,” he commented.
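The Transformers example above can be sketched as attribute swapping across examples. In the real system these are learned, disentangled latent vectors produced by neural encoders and rendered by a decoder; here each "latent" is just a dictionary of named attributes and the attribute values are invented, so only the recombination logic itself is shown.

```python
# Minimal sketch (assumed structure, not the paper's implementation):
# build a new example by taking each disentangled attribute from a
# different donor example.

def recombine(donors):
    """Build a new latent by taking each attribute from its donor.

    donors maps attribute name -> the example (dict) that supplies it.
    """
    return {attr: example[attr] for attr, example in donors.items()}

megatron = {"shape": "Megatron", "color": "gray", "background": "desert"}
bumblebee = {"shape": "Camaro", "color": "yellow", "background": "highway"}
nyc_scene = {"shape": "taxi", "color": "white", "background": "Times Square"}

novel = recombine({
    "shape": megatron,        # shape of a Megatron car
    "color": bumblebee,       # color of a yellow Bumblebee car
    "background": nyc_scene,  # background of Times Square
})
print(novel)  # a yellow Megatron in Times Square, never seen in training
```

The point of disentanglement is that each attribute occupies its own partition of the representation, so swapping one does not disturb the others.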
In the same conversation, another member of the team, Professor Laurent Itti, commented that “deep learning has already shown unsurpassed performance and promise in many domains, but too often this has happened through superficial mimicry and without a deeper understanding of the separate attributes that make each object unique,” adding that “this new approach to disentanglement, for the first time, really unleashes a new sense of imagination in AI systems, bringing them closer to human understanding of the world.”
With systems like this, autonomous cars could imagine as many scenarios as possible based on weather and environmental factors, strengthening their safety. And as more needs compatible with what this system offers are identified, the catalog of possible applications could continue to grow.
Details of this research were made public in a paper entitled “Zero-Shot Synthesis with Group-Supervised Learning”, presented at this year’s International Conference on Learning Representations.