DeepMind’s Gato AI is a “generalist agent” that can complete a wide range of complex activities, from stacking blocks to writing poetry.
After DeepMind unveiled an AI system capable of completing a broad range of complex activities, from stacking blocks to writing poetry, Dr Nando de Freitas declared that “the game is over” in the decades-long search for artificial general intelligence (AGI). Described as a “generalist agent”, DeepMind’s Gato AI simply needs to be scaled up to create an AI capable of rivalling human intelligence, Dr de Freitas says.
DeepMind’s research director was responding to an opinion piece published in The Next Web that predicted “humans will never achieve AGI”; he argued the opposite, saying he believed AGI was now inevitable. “It’s all about scale now! The Game is Over!” he wrote on Twitter, adding: “It’s all about making these models bigger, safer, compute efficient, faster at sampling, smarter memory, more modalities, innovative data, on/offline… Solving these challenges is what will deliver AGI.”
When asked by machine learning researcher Alex Dimakis how far he thought Gato was from passing a real Turing test – a test of computer intelligence that requires a human to be unable to tell the difference between a machine and another person – Dr de Freitas replied: “Far still.”
What is DeepMind Gato AI?
DeepMind is a well-known AI research company focused on solving hard problems in artificial intelligence. Through a range of projects, it aims to advance machine learning, engineering, simulation, and computing infrastructure. With Gato, it has unveiled a cutting-edge multi-modal AI system that can perform more than 600 different tasks. The all-in-one model has attracted considerable attention from the global tech industry. So, is it good for humanity? Could DeepMind’s Gato AI be dangerous?
Could DeepMind Gato AI be dangerous?
Leading AI experts have expressed fears that the advent of AGI might lead to an existential catastrophe for humanity, with Oxford University Professor Nick Bostrom suggesting that a “super-intelligent” machine with cognitive abilities rivalling or exceeding biological intelligence could supplant humans as the dominant life form on Earth.
One of the most significant worries about an AGI system that can learn and improve exponentially faster than humans is that it would be impossible to turn off. When asked on Twitter about safety concerns raised by AI researchers, Dr de Freitas responded: “It’s critical to prioritize safety when developing AGI.” “It’s without doubt the most difficult issue we face,” he added. “Everyone should consider it. I’m concerned about a lack of diversity as well.”
Google, which acquired London-based DeepMind in 2014, is already working on a “big red button” to avert the dangers of an artificial intelligence explosion. In a 2016 paper titled “Safely Interruptible Agents”, DeepMind researchers outlined a framework for preventing sophisticated artificial intelligence from ignoring stop signals.
“Safe interruptibility can be useful to take control of a robot that is misbehaving and may lead to irreversible consequences,” the paper stated, adding: “If such an agent is operating in real-time under human supervision, now and then it may be necessary for a human operator to press the big red button to prevent the agent from continuing a harmful sequence of actions—harmful either for the agent or for the environment—and lead the agent into a safer situation.”
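The paper itself is about training reinforcement-learning agents that neither resist nor learn to exploit human interruptions, but the core “big red button” intuition can be pictured as an override signal that always takes priority over the agent’s own policy. The toy Python sketch below illustrates only that intuition; it is not the paper’s algorithm, and every name in it (`Agent`, `SAFE_ACTION`, and so on) is hypothetical.

```python
SAFE_ACTION = "stop"  # hypothetical action that halts the agent safely


class Agent:
    """Toy agent that normally follows its own policy."""

    def __init__(self):
        self.interrupted = False  # the "big red button" flag

    def press_big_red_button(self):
        # A human operator sets the interrupt; the agent cannot unset it.
        self.interrupted = True

    def policy(self, observation):
        # Placeholder for the agent's learned behaviour.
        return "work"

    def act(self, observation):
        # The interrupt takes priority over the agent's own policy,
        # so the operator can always force a safe action.
        if self.interrupted:
            return SAFE_ACTION
        return self.policy(observation)


agent = Agent()
print(agent.act("obs"))        # "work": normal operation
agent.press_big_red_button()
print(agent.act("obs"))        # "stop": human override
```

The hard part the paper actually addresses is absent from this sketch: ensuring the agent’s learning process gives it no incentive to disable or route around the override in the first place.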
What DeepMind’s Gato AI means for humanity remains to be seen. DeepMind’s systems can already forecast rain up to two hours in advance and produce code at a competitive programming level. Hopefully we won’t end up like the Terminator movies.