OpenAI's latest AI models are generating alarming levels of misinformation
OpenAI's latest AI models appear to have a serious problem. A report reveals that its o3 and o4-mini reasoning models are producing misinformation at an alarming rate.
AI-generated misinformation, better known as hallucination, is common across most artificial intelligence services. The New York Times has reported on testing conducted by OpenAI which found that the company's own newest models generate more fabricated content than their predecessors, raising serious concerns about their reliability.
o3 and o4-mini are designed to mimic human reasoning, working through problems step by step. When they were tested on OpenAI's PersonQA benchmark, which asks questions about public figures, o3 hallucinated on roughly one-third (33%) of its answers; by comparison, o1, tested last year, had less than half that error rate. o4-mini fared even worse, hallucinating on 48% of its tasks. On SimpleQA, a general-knowledge benchmark, hallucinations soared to 51% for o3 and a staggering 79% for o4-mini.
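For readers curious how such figures are tallied, here is a minimal sketch of computing a hallucination rate over a set of graded answers. This is illustrative only: the data and grading below are hypothetical, not OpenAI's actual PersonQA or SimpleQA evaluation harness.

```python
# Minimal sketch of how a benchmark hallucination rate is tallied.
# Hypothetical data; real benchmarks grade each model answer against
# a verified reference before counting it as a hallucination.

def hallucination_rate(graded_answers: list[bool]) -> float:
    """graded_answers: True where the model's claim was judged false or fabricated."""
    if not graded_answers:
        return 0.0
    return sum(graded_answers) / len(graded_answers)

# Example: 33 hallucinated answers out of 100 questions -> 33%.
graded = [True] * 33 + [False] * 67
print(f"{hallucination_rate(graded):.0%}")  # prints "33%"
```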
OpenAI suggests the hallucination problem is not a sign that the reasoning models are worse, but that they may simply be more verbose and adventurous in their answers, speculating about possibilities rather than restating well-established facts. Developers aimed for these systems to think critically and reason through complex queries; that ambitious approach appears to have increased creativity at the expense of factual accuracy.
This could pose a big problem for OpenAI's ChatGPT, as rival services such as Google's Gemini and Anthropic's Claude are designed to prioritize accuracy in their answers. Unlike simpler models tuned toward high-confidence predictions, o3 and o4-mini often speculate, blurring the line between plausible scenarios and outright fabrication. That raises red flags for users in high-stakes settings, from legal professionals to educators and healthcare providers, where relying on AI could lead to significant missteps.
The more useful AI becomes, the greater the potential for critical errors. While AI models may outperform humans at certain tasks, persistent inaccuracies undermine their overall credibility. Until these hallucination issues are effectively addressed, users are advised to treat AI-generated information with caution and skepticism.
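One practical, if imperfect, safeguard is to sample the same question several times and flag answers that disagree with one another; inconsistency across samples is a common rough signal of hallucination. Below is a minimal sketch using the official `openai` Python client. The model name and question are placeholders, and exact-match comparison is a crude stand-in for real answer verification.

```python
# Sketch: ask the same question N times and check whether the answers agree.
# Assumes the official `openai` Python client and an API key in the environment.
from collections import Counter
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def consistency_check(question: str, n: int = 5, model: str = "gpt-4o-mini") -> None:
    answers = []
    for _ in range(n):
        resp = client.chat.completions.create(
            model=model,  # placeholder model name
            messages=[{"role": "user", "content": question}],
            temperature=1.0,  # sampling variation helps expose unstable claims
        )
        answers.append(resp.choices[0].message.content.strip())
    counts = Counter(answers)
    top, freq = counts.most_common(1)[0]
    print(f"Most common answer ({freq}/{n} samples): {top}")
    if freq < n:
        print("Answers disagree across samples; verify before trusting.")

consistency_check("In what year was the Eiffel Tower completed?")
```

Agreement across samples does not prove an answer is true, but disagreement is a cheap warning sign that a claim deserves independent checking.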
Source: TechRadar