As hallucination rates in AI models climb, can technologists truly grasp how these systems actually work?
The new reasoning systems built into AI models are producing more errors than ever. OpenAI's own recent tests found that its newer models hallucinate more often than their predecessors.
Even before those results, the answers users received from chatbots such as ChatGPT and Gemini were a persistent source of concern: it is often impossible to tell which information is factual and which is not.
A tool designed to boost productivity and convenience is gradually losing its shine. For example, an AI bot handling tech support for Cursor, a programming tool, recently told users that the company had changed its policy so that each customer could use the service on no more than one device.
This caused an uproar. Cursor's customers took to social media to complain about the unexpected change. The company's response is what brought the AI's error to light: Cursor had never changed the policy. No such rule existed.
Although the incident itself was minor, it exposed a genuine problem with relying on AI systems. For most people such a mix-up is merely amusing, but in other settings it could cause significant harm.
On this problem, OpenAI has stated that its newer models with advanced reasoning hallucinate at a higher rate than earlier ones, reaching almost 79% in some of its own tests, which is why they generate more errors than before.
In response, Vectara's CEO said that "AI systems will continue hallucinating," a tendency that undermines their value.
Meanwhile, an OpenAI spokesperson asserted that AI models operate on complex mathematical systems that analyze vast amounts of data. That opacity creates a wide disconnect between the models and their developers: no one can fully explain why the machines behave the way they do.
According to the tech powerhouse, there is only one step it can truly take: more research.