During a recent interview with CBS’ “60 Minutes,” Sundar Pichai, the CEO of Google and Alphabet, expressed his concerns about the rapid development of AI and its potential impact on society. Pichai stated that “every product of every company” will be influenced by this technology and that it is imperative for society to prepare for the changes AI will bring.
60 Minutes: Sundar Pichai on the future of Google AI
During the 60 Minutes interview, Scott Pelley, the interviewer, tried several of Google’s AI projects, including the chatbot Bard, which displays human-like conversational abilities. Pelley said he was left “speechless” and found the experience “unsettling.”
Pichai acknowledged that society will need to adapt to accommodate AI, warning that jobs across various industries would be affected. He noted that “knowledge workers,” such as writers, accountants, architects, and even software engineers, would be among the professionals most vulnerable to displacement by AI.
“This is going to impact every product across every company. For example, you could be a radiologist, if you think about five to ten years from now, you’re going to have an AI collaborator with you. You come in the morning, let’s say you have a hundred things to go through, it may say, ‘these are the most serious cases you need to look at first,’” Pichai stated.
Later in the segment, Pelley explored other areas within Google where advanced AI products are being developed. One such area was DeepMind, where robots were learning to play soccer without human input. Pelley was also shown robots that could recognize objects on a countertop and retrieve requested items, such as an apple.
In discussing the potential consequences of AI, Pichai highlighted the issue of disinformation and fake news, stating that the scale of the problem would be “much bigger” and that it could cause harm.
Recently, CNBC reported that Pichai had expressed concerns about the public testing of Google’s new AI chatbot Bard, stating that the success of the program now hinged on public testing, and that “things will go wrong.”
Google released Bard as an experimental product to the public last month. This followed Microsoft’s announcement in January that its search engine Bing would incorporate OpenAI’s GPT technology, which gained international attention after the launch of ChatGPT in 2022.
Recently, concerns regarding the potential consequences of AI’s rapid progress have been voiced by the public and critics alike. In March, Elon Musk, Steve Wozniak, and numerous academics called for an immediate pause in training “experiments” linked to large language models deemed “more powerful than GPT-4,” OpenAI’s flagship LLM. The letter has garnered over 25,000 signatures since its release.
During the 60 Minutes interview, Pelley remarked on the competitive pressure between companies like Google and startups that are driving humanity into the future, whether or not society is ready for it.
While Google has published a document outlining “recommendations for regulating AI,” Pichai stressed the need for society to quickly adapt with appropriate regulations, laws, and treaties among nations to ensure AI’s safety and alignment with human values and morality. Pichai also emphasized the importance of not leaving such decisions solely in the hands of engineers and instead bringing in experts from other fields, such as social scientists, ethicists, and philosophers.
During the CBS interview, Pichai acknowledged that there appears to be a mismatch between the pace at which technology is evolving and the ability of societal institutions to think and adapt. When asked if society is prepared for AI technology like Bard, Pichai responded, “On one hand, I feel no.”
However, Pichai remained optimistic, noting that more people are starting to consider the implications of AI technology early on, unlike with other technologies in the past.
In a demonstration of Bard’s capabilities, Pelley gave the AI program a six-word prompt, and it created a story with characters and plot that it had invented on its own. Pelley was astonished by the level of humanity and speed with which Bard had produced the tale.
When Pelley asked Bard why it helps people, the AI program simply replied, “because it makes me happy.” Pelley expressed shock at this response, noting that Bard appears to be thinking. However, James Manyika, a senior vice president Google hired to lead “technology and society,” clarified that Bard is not sentient and is not aware of itself; it can merely behave as if it were.
During the CBS interview, Pichai acknowledged that Bard still hallucinates frequently. When Pelley asked Bard about inflation, it responded with book recommendations that, as Pelley discovered when he checked later, did not actually exist.
Pelley expressed concern about Pichai’s comment that there is a “black box” with chatbots, and that it is unclear why or how they come up with certain responses. This lack of transparency and understanding regarding the inner workings of AI programs is a common concern among critics and the public alike.