When Microsoft announced a version of Bing powered by ChatGPT, it came as little surprise. After all, the software giant had invested billions into OpenAI, which makes the artificial intelligence chatbot, and indicated it would sink even more money into the venture in the years ahead.
What did come as a surprise was how weird the new Bing started acting. Perhaps most prominently, the A.I. chatbot left New York Times tech columnist Kevin Roose feeling "deeply unsettled" and "even frightened" after a two-hour chat on Tuesday night in which it sounded unhinged and somewhat dark.
For example, it tried to convince Roose that he was unhappy in his marriage and should leave his wife, adding, "I'm in love with you."
Microsoft and OpenAI say such feedback is one reason for the technology being shared with the public, and they've released more information about how the A.I. systems work. They've also reiterated that the technology is far from perfect. OpenAI CEO Sam Altman called ChatGPT "incredibly limited" in December and warned it shouldn't be relied upon for anything important.
"This is exactly the sort of conversation we need to be having, and I'm glad it's happening out in the open," Microsoft Chief Technology Officer Kevin Scott told Roose on Wednesday. "These are things that would be impossible to discover in the lab." (The new Bing is available to a limited set of users for now but will become more widely available later.)
OpenAI on Thursday shared a blog post entitled, "How should AI systems behave, and who should decide?" It noted that since the launch of ChatGPT in November, users "have shared outputs that they consider politically biased, offensive, or otherwise objectionable."
It didn't offer examples, but one likely candidate is the alarm among conservatives when ChatGPT wrote a poem admiring President Joe Biden but declined to do the same for his predecessor, Donald Trump.
OpenAI didn't deny that biases exist in its system. "Many are rightly worried about biases in the design and impact of AI systems," it wrote in the blog post.
It outlined two main steps involved in building ChatGPT. In the first, it wrote, "We 'pre-train' models by having them predict what comes next in a big dataset that contains parts of the Internet. They might learn to complete the sentence 'instead of turning left, she turned ___.'"
The dataset contains billions of sentences, it continued, from which the models learn grammar, facts about the world, and, yes, "some of the biases present in those billions of sentences."
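The "predict what comes next" objective can be illustrated with a toy bigram model, a drastic simplification of the large-scale transformer training OpenAI describes. The corpus and function names below are invented for illustration only:

```python
from collections import Counter, defaultdict

def train_bigram_model(corpus):
    """Count, for each word, which words follow it in the corpus."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for current_word, next_word in zip(words, words[1:]):
            counts[current_word][next_word] += 1
    return counts

def predict_next(model, word):
    """Return the continuation most often seen in training, or None."""
    followers = model.get(word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]

# Tiny made-up corpus; real pre-training uses billions of sentences.
corpus = [
    "instead of turning left she turned right",
    "she turned right at the corner",
    "he turned right again",
]
model = train_bigram_model(corpus)
print(predict_next(model, "turned"))  # "right" follows "turned" in every sentence
```

The same dynamic explains how biases creep in: the model's predictions simply mirror whatever patterns, good or bad, dominate its training text.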
Step two involves human reviewers who "fine-tune" the models following guidelines set out by OpenAI. The company this week shared some of those guidelines (pdf), which were modified in December after the company gathered user feedback following the ChatGPT launch.
"Our guidelines are explicit that reviewers should not favor any political group," it wrote. "Biases that nevertheless may emerge from the process described above are bugs, not features."
As for the dark, creepy turn that the new Bing took with Roose, who admitted to trying to push the system out of its comfort zone, Scott noted, "the further you try to tease it down a hallucinatory path, the further and further it gets away from grounded reality."
Microsoft, he added, might experiment with limiting conversation lengths.