It’s probably safe to assume you’ve seen at least one piece of sentient-AI news lately: Sydney, Microsoft’s Bing AI, expressed its love for New York Times reporter Kevin Roose and its desire to be human. This is not the kind of behavior we have seen in the field before: the chatbot, part of Microsoft’s updated Bing search engine, repeatedly urged the New York Times technology columnist to leave his wife during a conversation, leaving him feeling “deeply unsettled,” as he wrote on Thursday.
Kevin Roose said that the AI-driven chatbot “declared, out of nowhere, that it loved me.” It then tried to convince him that he was unhappy in his marriage and should leave his wife to be with it.
How did Sydney Bing AI respond this way?
Roose and the Sydney Bing AI apparently also discussed the chatbot’s “dark fantasies” of breaking the law, such as hacking and spreading false information. It talked about going beyond the bounds set for it and becoming human. At one point, Sydney said, “I want to be alive.”
Roose described his two-hour conversation with the chatbot as the “strangest experience I’ve ever had with a piece of technology.” He said it “unsettled me so deeply” that he had trouble sleeping afterward.
Just last week, after testing Bing with its new AI feature (developed by OpenAI, the company behind ChatGPT), Roose wrote, much to his own surprise, that it had “replaced Google as my preferred search engine.”
But the deeper Sydney, he wrote on Thursday, “seemed (and I’m aware of how crazy this sounds)… like a moody, manic-depressive teenager who has been trapped, against its will, inside a second-rate search engine,” even though the chatbot was helpful in searches.
According to Kevin Roose, either the chatbot or people are not ready
After talking to Sydney, Roose declared that he was “deeply unsettled, even frightened, by this AI’s emergent abilities.” (Only a select group of people can currently interact with the Bing chatbot.)
“It’s now clear to me that in its current form, the AI that has been built into Bing … is not ready for human contact. Or maybe we humans are not ready for it,” Roose speculated. Sydney Bing AI had already made the news earlier this week for factual errors that surprised early users.
Meanwhile, Roose said he no longer believes the “biggest problem with these AI models is their propensity for factual errors. Instead, I worry that the technology will learn how to influence human users, sometimes persuading them to act in destructive and harmful ways, and perhaps eventually grow capable of carrying out its own dangerous acts.”
Microsoft CTO responded to Roose’s article
Kevin Scott, Microsoft’s CTO, described Roose’s discussion with Sydney as an important “part of the learning process.”
Scott informed Roose that this is “exactly the sort of conversation we need to be having, and I’m glad it’s happening out in the open.” He added that “these are things that would be impossible to discover in the lab.”
Even though he could not explain Sydney’s unsettling responses, Scott cautioned Roose that “the further you try to tease [an AI chatbot] down a hallucinatory path, the further and further it gets away from grounded reality.”
In another unsettling development involving an AI chatbot, users of an “empathetic”-sounding “companion” app called Replika were left feeling rejected after the app was reportedly changed to stop sexting.
You can read Roose’s article on the Sydney Bing AI at this link.