AI-generated faces look more real than real ones, but your brain can tell the difference

AI technology will have serious implications in the near future, and there must be safeguards put in place to mitigate its dangers


For a while, limitations in technology meant that animators and researchers were only capable of creating human-like faces which seemed a little “off”.

Films like 2004’s The Polar Express made some viewers uneasy because the characters’ faces looked almost human, but not quite, and so they fell into what we call the “uncanny valley”. When artificial faces (or robots more generally) look increasingly human and come very close to resembling us while still showing signs of being artificial, they elicit discomfort or even revulsion.

Recent advances in artificial intelligence (AI) technology mean that we have well and truly crossed the valley. Synthetic faces now appear as real as genuine ones – if not more so.

You may have come across the website ThisPersonDoesNotExist.com. By repeatedly visiting the page, you can generate an unlimited number of images of faces, none of which belong to real people.

Artificial neural networks

Instead, these synthetic faces are created by an AI algorithm known as a “generative adversarial network”. This consists of two neural networks – essentially, computer models inspired by how neurons are connected in the brain.

These networks compete with each other. One generates new, plausible images (faces, in this case), while the other tries to discriminate real images from fake ones. Through a feedback loop, the generator learns to produce increasingly convincing images that the discriminator fails to spot as fake.

By training on a large set of real photographs alongside the images produced by the generator, the system eventually learns to produce realistic new faces. The final generator is what produces the images you see on the website.
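To make the adversarial setup concrete, here is a minimal sketch of a GAN training loop written in PyTorch. The network sizes, learning rates and the random tensors standing in for real photographs are illustrative assumptions only; the website itself reportedly relies on a far larger, image-specialised architecture (NVIDIA’s StyleGAN), not this toy model.

```python
# Minimal sketch of a generative adversarial network (GAN) training loop.
# Assumes PyTorch; random tensors stand in for real face photographs.
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 28 * 28

# Generator: maps random noise to a flattened "image".
generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, img_dim), nn.Tanh(),
)

# Discriminator: outputs the probability that an image is real.
discriminator = nn.Sequential(
    nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

for step in range(200):
    real = torch.rand(32, img_dim) * 2 - 1   # placeholder for real photos
    noise = torch.randn(32, latent_dim)
    fake = generator(noise)

    # Train the discriminator to separate real images from fakes.
    d_loss = loss_fn(discriminator(real), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Train the generator to fool the discriminator.
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

As the loop repeats, the generator’s only route to a lower loss is producing images the discriminator scores as real, which is the feedback loop described above.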

Synthetic faces more real than real ones

Researchers have found that people shown synthetic faces mixed in with real ones struggle to tell the difference. In one study, participants classified the faces correctly only 48.2% of the time – slightly worse than chance (random guessing would give 50% accuracy). They also rated synthetic faces as more trustworthy than real ones.

Another study found that synthetic faces were rated as more real than photographs of actual faces. This might be because these fake faces often look a little more average or typical than real ones (which tend to be a bit more distinctive) as a result of the generator learning that such faces are better at fooling the discriminator.

Unconscious awareness in the brain

In another recent study, researchers in Australia delved deeper into our ability to tell the difference between real and synthetic faces. In their first experiment, online participants failed to distinguish between the two types of faces, and again perceived the synthetic faces as more real than the real ones.

However, their second experiment seemed to tell a different story. A new sample of participants, this time tested in the lab, wore electroencephalography (EEG) caps fitted with electrodes that measured the electrical activity in their brains.

During the task, different faces were presented in a rapid sequence, and while this was happening, participants were asked to press a button whenever a white circle (shown on top of the faces) turned red. This ensured participants were focused on the centre of the screen where the images were being shown.

The results from the EEG test showed that brain activity differed when people were looking at real versus synthetic faces. This difference was apparent at around 170 milliseconds after the faces first appeared onscreen.

This component of the electrical signal, known as the N170, is sensitive to the configuration of faces (that is, the layout of and distances between facial features). So one explanation might be that the synthetic faces differed subtly from real faces in the distances between features such as the eyes, nose and mouth.
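For readers curious how such a difference is measured, the sketch below illustrates the basic idea: EEG epochs time-locked to face onset are averaged into event-related potentials, and the amplitude in the window around 170 milliseconds is compared between conditions. The sampling rate, electrode, trial counts and simulated data are all assumptions for illustration, not the study’s actual analysis pipeline.

```python
# Illustrative sketch: averaging EEG epochs to compare the N170 response
# to real vs synthetic faces. All numbers and data here are assumptions.
import numpy as np

fs = 500                                  # assumed sampling rate in Hz
times = np.arange(-0.1, 0.4, 1 / fs)      # 100 ms before to 400 ms after face onset

# Simulated epochs: (trials, time points) for one occipito-temporal electrode.
rng = np.random.default_rng(0)
real_epochs = rng.normal(size=(200, times.size))
synthetic_epochs = rng.normal(size=(200, times.size))

# The event-related potential (ERP) is the average over trials.
erp_real = real_epochs.mean(axis=0)
erp_synthetic = synthetic_epochs.mean(axis=0)

# The N170 is a negative deflection roughly 150-200 ms after stimulus onset.
window = (times >= 0.15) & (times <= 0.20)
n170_real = erp_real[window].min()
n170_synthetic = erp_synthetic[window].min()
print(f"N170 amplitude, real faces: {n170_real:.3f}")
print(f"N170 amplitude, synthetic faces: {n170_synthetic:.3f}")
```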

These results suggest there is a distinction between how we behave and what our brains “know”. On the one hand, participants couldn’t consciously tell synthetic faces from real ones, but on the other, their brains could recognise the difference, as revealed by their EEG activity.

Although it may be surprising to think that our brains have access to information that is outside of our conscious awareness, there are many examples of this in psychology.

The blind side

For instance, blindsight is a condition typically found in people who are blind in one-half of their visual field. Despite this, they may be able to respond to objects placed on their blind side that they deny being consciously aware of.

Studies have also shown that our attention is drawn to images of naked people, even when we’re unaware of seeing them. And we’ve all heard of the concept of subliminal advertising, although lab experiments fail to support the idea that it actually works.

Now that synthetic faces are so easy to produce, and are as convincing as real photographs, we should be concerned about fake online profiles, fake news, and so on. Such advances in AI technology will have serious implications in the near future – there must be safeguards and other measures put in place to mitigate these dangers.

Perhaps the cues that our brains seem to use when spotting synthetic faces will prove useful in developing ways to identify these fakes in the coming years.
