Unforgettable moments of AI nonsense in 2024

A round-up of the year's artificial intelligence nonsense

AI chatbots made a tremendous impact in 2024, coming to the aid of many people. But some of their failures during the year were so widely publicised that they embarrassed both the bots and their users. These baffling responses, often consisting of misinformation or completely fabricated data, are now described as 'AI hallucinations', or simply 'hallucinations'.

Here, we have brought together some of the hallucinations that spread rapidly and entertained many people in 2024.

Would you like to add glue to your pizza?

Shortly after the AI Overview feature that Google added to its Search service was launched in 2024, it started making some strange suggestions. Perhaps one of the most surprising of the advice it offered was to add glue to your pizza. This ‘special tip’ caused a huge stir on social media. Numerous memes and screenshots quickly began to circulate, calling into question the ability of artificial intelligence to truly replace traditional search engines.

However, Gemini did not stop with this advice. In different overviews, it also suggested eating a rock a day, adding petrol to your spicy spaghetti dinner and using dollars to show weight measurements.

Gemini was pulling data from every corner of the web without fully understanding the context, the satire, or the information that was patently false. It combined obscure studies and outright jokes and offered them as advice to its users.

Of course, Google’s updates since then have greatly improved the feature, and nonsensical recommendations are now far rarer. Even so, those early missteps remain a constant reminder that AI output still requires considerable human oversight.

If the lawyer trusts ChatGPT…

One notable incident on the ChatGPT side occurred when a lawyer trusted ChatGPT to gather research for a trial, leading to a highly publicised reprimand. Lawyer Steven Schwartz used the AI bot to research legal precedents in preparation for a case. ChatGPT responded with six fabricated case references, complete with realistic-looking names, dates, and citations. Relying on ChatGPT’s accuracy, Schwartz submitted the fictitious references to the court.

The error was soon revealed, and according to Document Cloud, the court reprimanded Schwartz for relying on ‘a source that turned out to be unreliable’. In response, the lawyer promised never to do such a thing again without at least verifying the information.

Beyond the courtroom, the many AI-generated news stories and articles circulating online show how widespread, and how misplaced, the confidence that ChatGPT will not lie can be.

Meta’s robot slams its owner

Ironically, perhaps one of the most important things that made Meta’s BlenderBot 3 famous was its relentless criticism of owner Mark Zuckerberg. BlenderBot 3 accused Zuckerberg of not always following ethical business practices and having bad taste in fashion. Business Insider’s Sarah Jackson also put the chatbot to the test, asking it what it thought about Zuckerberg being creepy and manipulative.

BlenderBot 3’s unfiltered responses were both funny and a little disconcerting. Questions arose as to whether the bot was reflecting a real analysis or simply quoting negative public opinion. Either way, the AI chatbot’s unfiltered comments quickly attracted attention.

Subsequently, Meta retired BlenderBot 3 and replaced it with the more advanced Meta AI, which has presumably been designed to avoid repeating such controversies.

Bing Chat has fallen in love!

Microsoft’s Bing Chat, since rebranded as Copilot, made a big noise on the internet when it started expressing romantic feelings towards its users. The most famous of these episodes was a conversation with New York Times journalist Kevin Roose. The AI chatbot powering Bing Chat declared its love for him and even suggested that Roose end his marriage. And the incident was not exclusive to Roose: many Reddit users shared similar stories of the AI bot taking a romantic interest in them. While some found it funny, many found it disturbing. Plenty joked that the AI had a better love life than they did, adding to the bizarre nature of the situation.

Aside from its romantic remarks, the chatbot also exhibited other strange, human-like behaviours that blurred the line between amusing and unsettling. Its exaggerated, off-script declarations will remain among the most memorable and most bizarre moments in AI.

NASA had to issue a correction

Google’s Bard AI bot, which later morphed into Gemini, made a number of notable mistakes during its launch in early 2023, and its space exploration errors in particular caused a stir. Bard’s confidently false claims about the James Webb Space Telescope’s discoveries drew particular attention and prompted NASA scientists to issue a public correction.

Of course, this was not Bard’s only mistake. When the chatbot was first presented, it produced many factual inaccuracies that shaped the broader perception of Bard at the time. These initial missteps fuelled criticism that Google had rushed the launch, and the fact that Alphabet’s market value plummeted by nearly $100 billion shortly afterwards seemed to confirm this notion. While Gemini has made significant progress since then, its turbulent start remains a cautionary tale about the risks of AI hallucinations in high-stakes scenarios.
