AI-induced hallucinations pose a major danger to scientific progress, warns Oxford study

In recent years, Large Language Models (LLMs) have been increasingly used in chatbots to provide helpful and convincing responses. However, researchers at the Oxford Internet Institute are warning about the dangers of these models, which can “hallucinate” and present false information as accurate.

The paper, published in Nature Human Behaviour, highlights that LLMs are designed to produce helpful and convincing responses without any guarantee that those responses are accurate or aligned with fact. While they can be useful for answering questions and prompts, the data they are trained on is not always reliable.

One issue is that LLMs often rely on online sources, which can contain false statements, opinions, and inaccurate information. Users may trust LLMs as a human-like source of knowledge due to their helpful design, leading them to believe that responses are accurate even when they have no basis in fact or present a biased or partial version of the truth.

To address this issue, the researchers suggest using LLMs as “zero-shot translators”: the user provides the model with the relevant data and asks it to transform that data into a conclusion or into code. Used this way, it becomes much easier to verify that the output is accurate and consistent with the input that was provided.
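To make the pattern concrete, here is a minimal sketch in Python. The `complete` callable is a hypothetical placeholder for whatever LLM API a researcher happens to use (it is not from the paper or any particular library); the point is simply that the model only reshapes data supplied in the prompt, so its output can be checked against that input.

```python
# A minimal sketch of the "zero-shot translator" pattern described above:
# the model only transforms data the user supplies, so its output can be
# verified against that input. The `complete` callable is a hypothetical
# stand-in for whatever LLM API is in use; it is not a specific library call.
import json
from typing import Callable


def summarise_measurements(
    measurements: dict[str, float],
    complete: Callable[[str], str],
) -> dict:
    # Put the data itself in the prompt rather than asking the model to
    # recall facts from its training data.
    prompt = (
        "Convert the following measurements into a JSON object with keys "
        "'min', 'max' and 'mean'. Use only the numbers provided.\n"
        f"Measurements: {json.dumps(measurements)}"
    )
    output = json.loads(complete(prompt))

    # Because the input is known, the model's claims are checkable:
    # recompute the statistics locally and reject any disagreement.
    values = list(measurements.values())
    expected = {
        "min": min(values),
        "max": max(values),
        "mean": sum(values) / len(values),
    }
    for key, value in expected.items():
        if abs(output[key] - value) > 1e-6:
            raise ValueError(f"LLM output disagrees with the input on '{key}'")
    return output
```

The specific task here is incidental; what matters is that everything the model asserts can be traced back to, and verified against, data the user already holds.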

While LLMs can undoubtedly assist with scientific workflows, it is important for the scientific community to use them responsibly and to maintain clear expectations of what they can contribute. Accuracy is crucial in science and education, and relying on LLMs as sole sources of knowledge could lead to incorrect conclusions and misinformation.

In conclusion, while large language models can be useful tools for generating helpful responses in chatbots, researchers and scientists need to be aware of their limitations and use them responsibly. By treating LLMs as zero-shot translators rather than as sources of knowledge in their own right, we can have far more confidence that the information we get from these powerful tools is accurate.
