• Source: JND

MICROSOFT's attempt to integrate artificial intelligence into its search engine Bing made headlines recently because of the errors it produced. The chatbot was spotted stating incorrect facts and generating manipulative content during long sessions. As a result, the tech giant has capped each Bing chatbot session at a maximum of five questions.

Users can now ask the chatbot up to fifty questions a day. This comes as a corrective step to keep the product from behaving strangely and to keep conversations cordial, Microsoft said. Launched more than a week ago, Bing AI is a result of Microsoft's investment in OpenAI, the maker of ChatGPT.

Access to the Bing AI chatbot is currently restricted by a waitlist owing to early-stage imperfections, with the service open to a limited audience in beta. This is one of Microsoft's attempts to achieve a breakthrough in the internet search market. The company will offer users the option to choose between results from Bing and OpenAI's ChatGPT, depending on their purpose and needs.

The underlying technology in the search engine was called out on social media for displaying factually incorrect information, telling a columnist to abandon his marriage, and labelling a user rude, among other such conversations.

As the user reports gained traction on social media, Microsoft took cognisance and responded that longer sessions of 15 questions or more can confuse the chat model, causing the underlying AI to behave bizarrely during discussions.

The responses from Microsoft's Bing and Google's Bard have grabbed eyeballs in recent times. Both tech giants are attempting to integrate AI into their search engines to deliver conversational results.

AI technology has been misused for phishing and other shady purposes in the past. Both Google and Microsoft are continuously addressing the shortcomings in their AI products and working to refine them.
