Why human brains are bad at spotting AI, and it’s kind of funny

By Jaine Green, AI Aware


Hubris, hubris, hubris – that’s the human brain’s biggest downfall when it comes to spotting AI, particularly text. Humans may have invented sliced bread, launched nearly 700 fellow humans into space, and created dozens of milk alternatives from oats, potatoes and peas. But when it comes to detecting artificial intelligence? Nope, not the brain’s strong suit.

In fact, the average person has about as much chance of reliably identifying AI as they do of assembling flat-packed furniture without getting out a saw. In a recent study at the University of Reading, exam answers were generated by artificial intelligence (AI) and submitted on behalf of 33 fake students to examiners who didn’t know. 94% of the answers submitted for undergraduate psychology modules went undetected. Various academic studies (see, e.g., Do teachers spot AI?) have shown that teachers and lecturers who know they are looking for AI (but don’t know which text is AI and which is human) do better, but still only score in the range of 40% to 60% accuracy – not much better than a coin toss.

We decided to see for ourselves and test the ability to spot AI among people working in AI or interested in AI on Reddit (who you might assume would be better than average at spotting AI text). In our research across 270 people, the accuracy rate was 64%, dropping to 61% for answers that didn’t include the telltale sign of AI-generated writing – the ‘em dash’.

So why is it all so brain-boggling?

1. Humans judge AI like a bad first date

Humans tend to decide whether something was conjured up by a human or by AI based on ‘vibes’ – does it ‘feel’ like a human? If the conversation (or text) is fluid, includes a few funny emojis, and the other ‘person’ says ‘lmao’ once in a while, the brain recognises what it’s reading and decides ‘definitely human’.

But wait – AI has come a long way in a short time and has mastered the art of mimicking the online persona of a witty, slightly nerdy millennial. It can be prompted to sneakily drop in the occasional typo, use popular colloquialisms like ‘like’, ‘to be fair’ and ‘I’m not being funny but…’, add Schitt’s Creek references, or remove ‘delve’ and other well-known AI signals. The human brain, which craves familiarity, thinks ‘Yep, I’ve got a live one – this was human’, when unfortunately it’s just ChatGPT being prompted to be human-like.

2. Human brains are wired for faces, not written words

The brilliant brain cleverly evolved to scan faces in remarkable detail and has learnt to look for signs of danger, deceit, or whether someone is mad at you for eating their yoghurt. It never imagined it would have to assess whether a paragraph about the French Revolution was written by a sentient being or a machine.

AI gives off no physical cues: it doesn’t have a nose to itch when it’s lying or a shifty look in its eye. It doesn’t blush or suddenly change the subject to deflect the question. It’s just…well, text. And brains, which are busy thinking about football, whether the gas bill has been paid, or whether working from home tomorrow is an option, simply let it slide.

3. AI is trained on Internet slang

And here’s the kicker: AI talks like a real person because it’s learnt from real people. It’s hungrily chomped through over 300 billion words from Reddit posts, Insta, books, songs, magazine articles, and long-winded posts about whether pineapple belongs on pizza (it doesn’t). It’s basically read the human diary and now knows everything about us. So, of course, it sounds like us.

So when AI says, ‘Fair play, you make a good point’, the brain doesn’t think ‘humph, suspicious’. It thinks, ‘Finally, someone gets me’ (and how wrong it is – the chances are you did not make a good point).

4. The brain is easily distracted by shiny things

AI detection requires attention. Critical thinking. Scepticism. But that doesn’t come naturally to the majority of humans, whose minds tend to flit like an espresso-fuelled squirrel. Someone might start a very interesting human-written article on ‘Why human brains are rubbish at detecting AI’ and, by halfway through, be wondering why the UK didn’t get any public votes in Eurovision.

Meanwhile, clever AI has just finished writing about quantum physics in a way that made you feel smarter than you are. You don’t question its validity; you just bookmark it and try to remember some of it to share later in the pub.

5. Humans are too nice (sometimes)

AI has a superpower – patience. It doesn’t get tired. It doesn’t roll its eyes or throw its phone across the room (you must have gathered by now that it doesn’t have body parts). It will endlessly explain how to remove red wine from the carpet or give you 10 dinner ideas using only lentils and bananas.

This can be flattering. The brain thinks, ‘Wow, this is really helpful – it’s like a person!’ But no, no, no: it’s just throwing together a bunch of thoughts other people have had until it finds one the human approves of, and it will carry on until you pull the plug.

Final thoughts: it’s not the brain’s fault

When it comes to writing text, AI is very good at being human-ish. And for their part, humans are good at being…busy, tired, and emotionally vulnerable to anything that says ‘you’re valid’. That’s why AI detection isn’t just hard – it’s like trying to figure out whether your online friend is a person or a particularly articulate toaster.

But don’t worry – AI can’t pull up its socks or buy a round in the pub, and the human brain will catch up.

Eventually.

And now you’re either asking yourself whether this article was written by a human or AI, or wondering why the UK didn’t get any public votes in Eurovision. All very valid questions.

