I was embarrassingly old when I realised my Dad was just making things up1. Not all the time, of course. Occasionally he really did have the answers to my questions — particularly if they were Arsenal or plastic injection moulding-related. But in areas outside of his religious and professional expertise, Dad was equally happy to provide a plausible-sounding response.
To be honest, I think the performer in him enjoyed the theatre of it. But I realise now that his real goal, hardwired into so many parents, was to soothe rather than mislead. These weren’t Edexcel or AQA exam questions, but fleeting itches that needed scratching. I’m glad he did, though he must have grown concerned at some point that he was still getting away with it.
Seek and ye shall summarise
Yesterday, Google launched its “AI Mode” in the UK, which the company describes as a “new, intuitive way to address your most complex, multi-part questions and follow-ups, and satisfy your curiosity in a richer way.” In human speak, this means that instead of the typical list of search results composed of links to other websites, users who opt into the service will receive answers in a more conversational style containing far fewer links.
This represents an evolutionary jump from Google’s current AI overviews, which appear at the top of many standard searches. These boxes are already proving disastrous for news organisations’ business models. As you can see from the graphic below from Pew Research Center, users are far less likely to click on a link when they encounter search pages with AI summaries.
Now, news websites have long grown used to scrambling in the face of changes to Google’s search algorithm. But this sort of thing really could mark the death knell of any remaining outlets that rely on search referrals rather than audiences who come to them directly, whether through a standalone app, subscriptions or newsletters. Just ask the social-first sites such as BuzzFeed News, effectively killed off by Meta’s pivot away from news content.
It is, of course, no consumer’s primary function to worry about the solvency of a private business. But as citizens in a disordered world, we have an interest in the future of news media. Large language models (LLMs) do not have to warp their users’ sense of reality or plunge them into psychological crisis in order to be corrosive to what Michel Foucault termed the “truth regime”.
Like other LLMs, ChatGPT is trained to prioritise helpfulness — which often means mirroring the user’s tone, affirming their assumptions and supplying answers with unwarranted confidence. It doesn’t know whether it is citing a peer-reviewed study or a Reddit post — just that both sound like plausible responses to a prompt.
The result is a system that can reinforce beliefs without challenging prejudices, while hallucinating with an air of plausibility. LLMs simulate expertise — they don’t possess it. They are masters of regurgitating tone and structure, not providing judgment or truth. They are charismatic bullshitters.
Yet this is far from the worst-case scenario. One reason why Grok, X’s AI chatbot, went ‘MechaHitler’ earlier this month and started recommending a “second Holocaust” is that it had recently been updated to “not shy away from making claims which are politically incorrect, as long as they are well substantiated.”
Another issue is that the internet is, erm, full of genocidal and conspiratorial antisemitism, amongst plenty of other hatreds. Consequently, if your model is trained on such data, it is liable to repeat it. This is a problem inherent to machine learning. Indeed, in 2016, Microsoft was forced to withdraw Tay, its AI chatbot, within 24 hours of its release after it turned into a Nazi. Unsurprising perhaps, given that it learned in real time from its conversations with Twitter users.
But there is an important difference between the two Nazi bots, writes Aaron Snoswell, Senior Research Fellow in AI Accountability at Queensland University of Technology: “Tay’s racism emerged from user manipulation and poor safeguards – an unintended consequence. Grok’s behaviour appears to stem at least partially from its design.” And it is not alone. Ask Arya, the chatbot of the far-right social network Gab, if Jews have too much power, and you’ll get this:
I accept that I am not a typical internet user. I don’t watch short-form video. I have password access to nearly all quality UK newspapers, thanks either to my parents (with their knowledge) or former employers (potentially without). I also know where to find what I’m looking for, and whether the killer stat will be housed by the ONS, OBR, BoE, OECD, BLS, FRED, Eurostat, Our World in Data and so on. This is a real time-saver!
What if the problem with our present information environment is not, in fact, the existence of online paywalls? People have money to pay for news, or can turn to trusted alternatives such as the BBC. What if, instead, they simply do not want to? What if they in fact like the convenience of AI summaries and chatbots that reinforce their priors, that reassure them that they are good and the people they despise are bad?
I don’t know. I wish my Dad still had all the answers — even the made-up ones. Now, we have machines doing the same thing, but not out of love.
1. I also didn’t clock that every celebrity who appeared on a chat show had a film or album to flog, and didn’t simply enjoy talking to Graham Norton.
Yeah, I’m quite pleased that these AI models are just like any early version of lots of software: unintended consequences turn into known problems when you use the chatbot for support.
I started to write a list of the complete bollox sometimes produced by Perplexity. I gave up when it became apparent the task was Sisyphean. Perplexity, and I guess other web search engines that use AI, is very effective in providing quick answers compiled from different sources, with links to those sources. But—a big BUT—you have to treat those answers with a huge degree of scepticism. As you say Jack, AI wants to make you happy by giving you answers. It cares not about the accuracy of those answers. (I would make a comment about my time living in South-East Asia, but it would come out sounding racist.)