We have been told by management that we should make use of AI at work to “improve efficiency”. I was initially reluctant, but a friend told me I’d be surprised by how useful chatbots could be with certain tasks, so I decided to give it a go.

To begin with, I was impressed. Then I asked a question related to something quite specific to my job and area of expertise. The answer included several patent inaccuracies. I pointed this out in the chat, and a bizarre conversation ensued. The AI denied that it was wrong, and when I politely explained its error, it attempted to gaslight me. Did I just get lied to by an AI language model?

AI chatbots may be sophisticated and slick, but they’re far from being a source of reliable information. Credit: AP

I think what you experienced is pretty common. Not just the positive initial impression, followed by the realisation that the dazzling fluency and responsiveness can mask major weaknesses, but also being confronted with what seems like deception.

I asked Dr Lingqiao Liu, an Associate Professor in the University of Adelaide’s School of Computer Science and an Academic Member of the Australian Institute for Machine Learning, about how these mistakes occur.

“Large language models [LLMs] are powerful tools that have demonstrated remarkable abilities in generating human-like text. However, like all AI technologies, they have limitations. One challenge with LLMs is ensuring factual accuracy,” he says.

“By design, these models are not repositories of truth but rather pattern-recognising systems that generate responses based on probabilities derived from vast datasets. While they can mimic the style and structure of factual discourse, the content generated is inherently probabilistic, not guaranteed to be true. In the research community, factually incorrect or nonsensical responses from an LLM are often called ‘hallucinations’.”

Liu says that developers and researchers are working on methods to improve the veracity of information provided by LLMs. “This includes refining training datasets, implementing fact-checking mechanisms, and developing protocols that enable models to source from and cite up-to-date and reliable information.”

In my own experience, I’ve found some of the assistants underpinned by these LLMs to be quite useful in answering questions that might take several – or even dozens of – traditional browser searches. But, like you, I’ve noticed inaccuracies, often followed by weird evasions.
