This is your brain on AI
New research shows that using AI voices damages listener engagement.
Have you noticed a creeping sense of unease when you listen to AI voices?
It might take a couple of seconds, maybe even a sentence or two, but as an audio professional I’ll bet you’ll be feeling distinctly uncomfortable. And that feeling usually arrives just ahead of the realisation that you’re listening to AI-generated audio.
And now neuroscience suggests that we were right to trust our instincts.
Human brains react differently to AI voices
Putting to one side the ethics of choosing human voice talent over AI voices, new research measuring blood flow within the brain shows that humans respond differently to even the most sophisticated AI voices.
Doctoral researcher Christine Skjegstad and Professor Sascha Frühholz, from the Department of Psychology at the University of Oslo, presented their findings in June at the Federation of European Neuroscience Societies (FENS) Forum 2024.
The TL;DR version of their discovery is that human voices activated the brain areas associated with memory, emotional processing and empathy (the right hippocampus and right inferior frontal gyrus, if you’re a nerd like me), whereas AI voices prompted stronger responses in error detection and attention regulation areas (the right anterior mid cingulate cortex and right dorsolateral prefrontal cortex).
We may not consciously know the difference, but our brains do
As far as I can tell, this is the first research that supports something I’d long suspected.
Our brains do know the difference (there’s that sense of unease), even if we can’t consciously identify which is AI and which is human every time.
In fact, we’re surprisingly bad at correctly answering ‘human or AI’. Happy voices were easier to recognise, but the research participants correctly identified human voices only 56% of the time and AI voices only 50.5% of the time.
So, given a choice between an audio message that’s memorable and relatable, and one that prompts the listener to be on high alert for mistakes and work harder to focus, which would you go for…?