How to spot a fake image

Every time you pick up your phone, you have to decide what’s real and what’s fake. Here are some tips to help you spot the difference.

Manipulated images may be more pervasive than ever, but they’re not new.

This image is famous for capturing the drama of the battlefield in World War I.

An old sepia photograph of soldiers on a battlefield with shells exploding in the background
Frank Hurley: ‘The Raid’. (Supplied: NSW State Library)

Only, it was branded a fake by Australia’s official historian at the time.

By zooming in on the soldiers in the bottom-right corner of the frame, we can uncover how acclaimed Australian photographer Frank Hurley was playing an almost imperceptible trick.

A black and white photo superimposed over a segment of the previous image
Frank Hurley: ‘The Raid’ with one source photograph on top. (Supplied: NSW State Library)

Here’s the original source photograph for this section of the frame.

Notice how the shell explosions are missing?

That’s because Hurley spliced multiple shots of exploding shells and Australian troops into a single composite image.

A battlefield photo with multiple frames superimposed over the elements
Frank Hurley: ‘The Raid’ with source frames included. (Supplied: NSW State Library)

Other fragments of the image, like the fighter planes, were cut in after the fact too.

Hurley never intended to deceive – he only wanted to more fully represent the chaos he witnessed in the war.

When the images were displayed in an exhibition in Sydney in 1919, they were accompanied by clear labels explaining that they were composites.

“In order to convey accurate battle impressions, I have made several composite pictures, utilising a number of negatives for the purpose,” he wrote in the catalogue foreword.

Splicing and dicing in the 21st century

More than 100 years on, the technologies available to manipulate imagery have become more advanced. And the resulting images rarely come with a disclaimer.

So, how can we pick a fake when we see one? And will these techniques continue to work as tools become more sophisticated?

We asked TJ Thomson, a senior lecturer in communications at RMIT, to help break down the telltale signs of image manipulation.

When Hurricane Dorian was approaching Florida in 2019, several high-profile celebrities and politicians shared this image – and most of them seemed to believe it was real.

To help assess its authenticity, Dr Thomson turned to the digital forensics tool WeVerify.

WeVerify’s algorithm adds green-blue blobs where it detects signs of manipulation in an image – in this case, it flags the storm as suspicious.

With half of the image covered in blobs and the other half clean, the result suggests that two separate images have been spliced together.
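
Forensic tools like this typically combine several filters. One of the simplest, error level analysis, re-saves the image as a JPEG and highlights regions that recompress differently from their surroundings – a hint that they may have come from another source. As a rough illustration of that general idea (not WeVerify’s actual algorithm), here is a minimal Python sketch; the filename is a placeholder.

# A minimal error level analysis (ELA) sketch -- an illustration of the
# general splice-detection idea, not WeVerify's actual algorithm.
from PIL import Image, ImageChops
import io

def error_level_analysis(path, quality=90, scale=15):
    original = Image.open(path).convert("RGB")

    # Re-save the image as a JPEG at a known quality, then reload it.
    buffer = io.BytesIO()
    original.save(buffer, "JPEG", quality=quality)
    buffer.seek(0)
    resaved = Image.open(buffer).convert("RGB")

    # Pixels that came from a different source often recompress differently,
    # so the difference map can light up around spliced regions.
    diff = ImageChops.difference(original, resaved)

    # The differences are faint, so amplify them to make them visible.
    return diff.point(lambda value: min(255, value * scale))

# "suspect.jpg" is a placeholder filename.
error_level_analysis("suspect.jpg").save("ela_map.png")

As with WeVerify’s blobs, the output is a clue rather than proof – ordinary editing and heavy compression can produce similar patterns.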

Interestingly, when we ran WeVerify on another version of this image, the algorithm flagged the buildings rather than the storm.

While both results suggest the same thing, it’s a reminder that these tools are imperfect and can’t be relied on completely to pick a fake.

Suspecting foul play, Dr Thomson next went looking for the source images used to create this composite.

Using a reverse image search tool, he found this stock image of a storm cell in Kansas.

It was flipped horizontally (as we’ve shown here).

It was then spliced onto another photo of Miami, taken on a calm day – a similar technique to the one used by Hurley all the way back in 1919.

Regardless of how an image has been altered or created, sometimes the best way of deciding whether it’s real is to look for clues outside the frame of the image itself.

In this case, a bit of research reveals that the hurricanes that regularly hit Florida look nothing like storms on the plains of Kansas.

A simple image search for ‘hurricanes in Florida’ would help the eagle-eyed reader to establish what Hurricane Dorian was likely to look like.

AI makes fake imagery easier than ever

AI image generators, which let anyone create a fake image simply by typing text into a box, bring a whole new set of problems.

Forensics tools like WeVerify will not help to identify AI-generated images, so experts say your best bet is to look out for logical inconsistencies in the image instead.

Let’s look at an example.

In May 2023, the US stock market briefly dipped as this suspicious image circulated online.

As with many images posted on social media, the quality of the image is poor, which makes close inspection difficult.

But, even so, there were telltale signs of AI.

Nick Waters, a journalist and online sleuth at Bellingcat, flagged “the way the fence melds into the crowd barriers” as immediately suspicious.

Dr Thomson also pointed out how “the sidewalk and grass fade into one another”.

The facade of the building is also unrealistic, with uneven lines and strange patterns where the windows should be.

But if this crude fake was enough to spook investors, what about something more sophisticated?

Imagine if they’d used this image instead – could you tell if it was real?

The caption reads: "Large explosion near The Pentagon Complex in Washington D.C. - Initial Report"
A made-up social media post showing an explosion at the Pentagon. (Supplied: DOD/Tech. Sgt. Cedric H. Rudisill)

For starters, this actually looks like the Pentagon (unlike the AI-generated image).

And the details appear to be logically consistent …

So, let’s slow down and take a closer look.

A zoomed-in version of the scene showing a fire truck and workers beside the building
A close-up of the scene. (Supplied: DOD/Tech. Sgt. Cedric H. Rudisill)

The people, cars and fences look pretty realistic.

The windows on the building facade are all perfectly aligned.

Even the American flag has all the right details …

The lack of logical inconsistencies suggests that this image was not generated by AI.

And that’s because it is real – only, the information surrounding it isn’t.

It’s a photo from the aftermath of the September 11 terrorist attacks in 2001, when a hijacked airliner was crashed into the Pentagon.

Posting an image with incorrect context is one of the oldest tricks in the book – but it’s also one of the most effective for those setting out to deceive.

It’s been pulled off with photos from natural disasters and wars in the past, and doesn’t require sophisticated tools or technical knowledge.

Running a reverse image search on the above image returns thousands of results dating back to 2001 – clear evidence that this photo was not taken recently.
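
For readers who want to script that step, the short Python sketch below simply opens a couple of reverse image search services in the browser for a given image URL. The endpoint patterns are assumptions based on how these services commonly accept image URLs and may change over time; the example URL is a placeholder.

# Open reverse image search results for an image URL in the browser.
# The URL patterns below are assumptions and may change -- verify them
# before relying on this.
import webbrowser
from urllib.parse import quote

def reverse_image_search(image_url):
    encoded = quote(image_url, safe="")
    engines = {
        "Google Lens": f"https://lens.google.com/uploadbyurl?url={encoded}",
        "TinEye": f"https://tineye.com/search?url={encoded}",
    }
    for name, search_url in engines.items():
        print(f"Opening {name}: {search_url}")
        webbrowser.open(search_url)

# Placeholder URL -- substitute the image you want to check.
reverse_image_search("https://example.com/suspect-photo.jpg")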

Each of the images we’ve looked at so far had flaws that gave them away – from incongruous visual patterns, to signs of digital manipulation, to pre-existing copies on the internet.

But as AI technology improves, these telltale signs of manipulation may not be around for much longer.

Dr Thomson fears AI tools will evolve to “the point where there are no logical inconsistencies that the naked eye can detect”.

Earlier iterations of generative AI tools – even those only a year old – struggled with hands and faces, but this is quickly becoming less of an issue.

This example, which went viral in March 2023, has some telling flaws.

A fake version of the pope wears a puffer jacket. His face and hand are zoomed in on to reveal logical inconsistencies
The flaws in a generative AI image that went viral. (ABC News: Teresa Tan)

However, these tools are improving so quickly that the once-obvious tells are far less apparent in newer AI models.

Human hands type on a keyboard. The background is a hazy red glow.
Dr Thomson created this image using generative AI with the prompt: ‘photorealistic stock image of a person’s hands on a laptop’. (Supplied: TJ Thomson)

While it’s not perfectly realistic, it’s less clear whether this is a doctored photo or a complete fiction.

And it’s not just the images themselves that have been blurring the lines between real and fake.

Social media is making it harder to tell

Despite being created through different methods, the doctored images we looked at had something in common: the way they were spread.

They all gained traction on social media, where sharing is frictionless and posts can go viral before they can be fact-checked.

Mathieu O’Neil, an expert in online media literacy at the University of Canberra, says social media platforms have many properties that make the spread of misinformation and disinformation easier.

Images are often compressed so they’re quick to load and cheap to host, but compression can also obscure the telltale signs of manipulation.

News feeds are also personalised, meaning no two users see the same feed.

When we see different presentations of the world “it makes it very difficult to have a common understanding of what is real”, he explains.

And then there’s the sheer quantity of information to sift through.

In general, anything shared by users – memes, jokes, lies, news articles, opinion columns, cat photos – is lumped together under a single category: content.

This means there is no explicit differentiation between real and fake, between news and AI hallucination, between fact and conspiracy.

Making a judgement for every image you see is a massive task, and can easily become overwhelming.

Less than half of Australian adults (39 per cent) are confident they can check if information they found online is true, recent research found.

And there’s reason to believe it’s only going to get harder.

Google’s latest phones are advertised with a feature called ‘Magic Editor’ that lets anyone quickly create realistic fake scenes. iPhones may soon have something similar built in, too.

Learning how to read ‘laterally’

Professor O’Neil believes the critical thinking skills we all need to navigate this new media environment aren’t being taught in schools.

“The education system hasn’t adapted to the attention economy,” he says. “It still teaches people that we need to have deep critical engagement with claims.”

On social media there is a significant chance that what you’re reading is untrue – which means that the time spent engaging with it deeply would be wasted.

At worst, this can lead to conspiratorial thinking, where you look so deeply that you inevitably find connections where none really exist.

In search of solutions, Professor O’Neil has been running pilot programs teaching school-aged children to use a method called ‘lateral reading’.

Lateral reading is about moving on quickly when something doesn’t seem right, rather than wasting your attention on a sea of false claims.

“You don’t go deep, you don’t go vertical, you don’t investigate a claim,” he explains. “You just look away and try to find reliable sources.”

Professor Thomson agrees that new forms of media literacy are “increasingly important in the digital age”.

“Looking beyond a single image and asking yourself critical questions … is vital to being a responsible digital citizen.”

Two important questions to ask yourself

Whether images have been doctored using technology invented last century or last week, the same rules apply.

One of the best ways to pick a fake is to focus on the context surrounding it.

So, aside from the visual cues in the image itself, here are two key considerations for deciding if an image is real or not.

1. Who posted it?

A bit of digging into Brent Shavnore, the digital artist who originally spliced the Kansas storm over the Miami skyline, would reveal he was not out to fool anyone.

His Instagram account is a gallery of dramatic storms looming over global cities.

An Instagram profile containing images of obviously fake storms on city skylines
Brent Shavnore’s Instagram account. (ABC News)

The account that shared the AI-generated image claiming to show an explosion at the Pentagon was also telling.

Despite carrying Twitter’s ‘verification tick’, the Bloomberg Feed account was a fake. It was not linked to the media outlet Bloomberg at all.

2. Are other sources reporting the same event?

While debunking the AI-generated image of the Pentagon, Bellingcat’s Nick Waters pointed to how other media was curiously silent about what would’ve been an attack on the heart of the US military.

“Whenever an event like this takes place, it will affect a large number of people,” he wrote.

“Most extreme physical events in populated areas (bombings, terrorist attacks, large fights) have a recognisable digital ripple.”

In the absence of other images showing the same explosion, the same meeting or the same natural disaster, chances are it’s a fake.

Can’t someone just fix the system?

With so much content out there, it’s asking a lot of people to constantly interrogate everything they see online. So, can anything be done to lighten the load?

Some social media platforms are making an effort to tag problematic posts with additional context.

Meta started applying “made with AI” labels on Facebook and Instagram to images, video and audio that it identified as AI-generated. Users could also self-disclose the use of AI.

But, when photographers complained that their work had been incorrectly flagged as AI-generated, the company watered down the label to “AI Info” only a few months later.

A longer-term solution, proposed by a consortium of technology and media companies called the Content Authenticity Initiative, is to build digital infrastructure that tracks where media comes from.

They propose adding what they call a “layer of tamper-evident provenance to all types of digital content”, including photos.

This would mean the entire life-cycle of a photo – from capture, through editing, all the way to viewing in the browser – would retain information about where and when it was taken.

Importantly, it would also show whether the photo has been edited or generated by AI tools along the way.
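
To make the idea of ‘tamper-evident’ provenance concrete, here is a toy Python sketch. It is not the initiative’s actual specification – just an illustration of the underlying principle that each step in a photo’s history can be chained together with hashes, so that rewriting the history later becomes detectable.

# A toy "tamper-evident provenance" chain: each record stores a hash of
# the image and a hash of the previous record, so altering any earlier
# step breaks the chain. An illustration only -- not the real standard.
import hashlib
import json

def record_step(history, action, image_bytes):
    previous_hash = history[-1]["record_hash"] if history else ""
    record = {
        "action": action,  # e.g. "captured", "cropped", "colour-corrected"
        "image_hash": hashlib.sha256(image_bytes).hexdigest(),
        "previous_record_hash": previous_hash,
    }
    record["record_hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    return history + [record]

def history_is_intact(history):
    # Recompute every hash and check the chain links still line up.
    expected_previous = ""
    for record in history:
        body = {k: v for k, v in record.items() if k != "record_hash"}
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if record["previous_record_hash"] != expected_previous:
            return False
        if recomputed != record["record_hash"]:
            return False
        expected_previous = record["record_hash"]
    return True

history = record_step([], "captured", b"raw sensor bytes")
history = record_step(history, "cropped", b"edited image bytes")
print(history_is_intact(history))        # True

history[0]["action"] = "nothing to see"  # try to rewrite the photo's history...
print(history_is_intact(history))        # ...and the tampering shows: False

The real proposal goes much further than this, adding cryptographic signatures and support built into cameras, editing software and browsers.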

Some question its practicality, and there’s no doubt it would be massively challenging to deploy at the scale necessary to make a difference across our new media ecosystem.

But without an end-to-end solution, we’re stuck with the honour system used by Hurley back in 1919, with all its pitfalls and vulnerabilities.
