Google’s new AI chatbot, Bard, has gotten off to a rocky start after its first demo contained a factual error. The bot was announced on Monday as a rival to OpenAI’s ChatGPT and was set to become “more widely available to the public in the coming weeks.” In a demo shared by Google, Bard was asked about new discoveries from the James Webb Space Telescope and replied with three bullet points, including a statement that the telescope “took the very first pictures of a planet outside of our own solar system.” However, this statement was incorrect.
Several prominent astronomers were quick to point out that the first image of an exoplanet was taken in 2004, not by the James Webb Space Telescope.
Astrophysicist Grant Tremblay tweeted that while Bard is “impressive,” AI chatbots like ChatGPT and Bard have a tendency to confidently state incorrect information. The systems are trained on massive amounts of text and analyze patterns to determine word sequences, but they do not query a database of proven facts. As a result, they can “hallucinate” and create false information, leading one AI professor to label them as “bullshit generators.”
Later, Tremblay shared a screenshot showing that the same query on ‘old’ and ‘busted’ Google Search returned the correct answer. Alphabet’s shares reacted quickly, sliding as much as 9 per cent during regular trading on volumes nearly three times the 50-day moving average.
Microsoft’s Early Release of AI-Powered Bing Is Not Making It Easy for Google
Earlier this week, Microsoft announced new versions of Bing and its Edge browser that will use a more advanced version of the same AI that powers ChatGPT. Google, by contrast, has released Bard only to “trusted testers.” The company says it is precisely because of mistakes like these that it will test Bard extensively before making it available to a broader audience.
In response to Wednesday’s Bard debacle, Google acknowledged the importance of a rigorous testing process. According to a report by The Verge, a company spokesperson said the program would combine external feedback with internal testing to ensure Bard’s responses meet a high bar for “quality, safety, and real-world information”.