The White House pressed Silicon Valley leaders on Thursday to minimize the hazards of artificial intelligence, in the administration’s most visible move yet to address mounting concerns and calls for regulation of the rapidly expanding technology.
US Vice President Kamala Harris met with the chief executives of several technology companies on Thursday to discuss the risks associated with artificial intelligence, as the White House rolled out a set of measures aimed at addressing those concerns.
Harris spoke with executives from Google, Microsoft, ChatGPT maker OpenAI, and AI company Anthropic during a two-hour discussion at the White House. President Joe Biden also made a brief appearance at the meeting.

‘AI has enormous potential and enormous danger,’ Biden says
“What you’re doing has enormous potential and enormous danger,” Biden told the CEOs in a video posted to his Twitter account.
The event was the first White House AI meeting since the November debut of OpenAI’s ChatGPT, which focused public attention on generative AI, a powerful technology that mimics humans’ ability to write software, hold conversations, and compose poetry. Concerns have also been raised about how the technology could spread disinformation and eliminate jobs.
“The private sector has an ethical, moral, and legal responsibility to ensure the safety and security of their products,” Harris added. “And, to protect the American people, every company must follow existing laws.”
Biden administration invests $140m to promote AI development
Ahead of Thursday’s meeting, the Biden administration announced that the National Science Foundation will invest $140 million to boost AI research and development. It also said that the White House’s Office of Management and Budget will issue policy guidance on the federal government’s use of AI.
AI-powered chatbots such as ChatGPT took the world by storm late last year, drawing on huge troves of data and stringing together words to answer almost any question with humanlike replies. Following OpenAI’s release of ChatGPT, Microsoft used GPT-4, the technology underlying ChatGPT, to enhance Bing search results, while Google responded by unveiling Bard, its ChatGPT rival.

The chatbots don’t all work the same way, as CNET’s Imad Khan recently found when comparing their responses to see which is the most helpful, but the tools have also raised concerns about the risks that come with AI. In March, hundreds of tech executives and AI experts signed an open letter urging leading AI labs to pause development of powerful AI systems, citing “profound risks” to society. Elon Musk, Apple co-founder Steve Wozniak, Stability AI CEO Emad Mostaque, and Sapiens author Yuval Noah Harari were among the signatories.
Godfather of AI
Earlier this month, a renowned computer scientist known as the “Godfather of AI” resigned from Google, voicing concern about what AI could mean for disinformation and people’s livelihoods. Geoffrey Hinton worried that ordinary people may soon be unable to distinguish real photos, videos, and text from AI-generated ones.
He also expressed concern about AI’s potential to eliminate jobs. According to a March analysis by Goldman Sachs, generative AI could affect as many as 300 million jobs, with up to 7 percent of US occupations at risk of being replaced by AI.

Microsoft and OpenAI representatives declined to comment on the meeting. Anthropic and Google did not immediately respond to requests for comment.
Lina Khan, the chair of the Federal Trade Commission, said the US was at a “key decision point” with AI in a guest essay published in The New York Times on Wednesday. She compared the technology’s recent advances to the emergence of tech behemoths like Google and Facebook, and she cautioned that, without adequate regulation, the technology could entrench the dominance of the biggest internet companies and hand fraudsters a powerful new tool.
“As the use of artificial intelligence becomes more widespread, public officials have a responsibility to ensure that this painful history does not repeat itself,” she wrote.