How digital forensics can prove what’s real in the age of deepfakes

Imagine this scenario. The year is 2030. Deepfakes and AI-generated content are everywhere, and you belong to a new profession: reality notary. From your office, clients ask you to verify the authenticity of photos, videos, e-mails, contracts, screenshots, audio recordings, text message threads, social media posts, and biometric records. People are desperate to protect their money, their reputation, their sanity, and their freedom.

All four are at stake on a rainy Monday, when an elderly woman tells you that her son has been accused of murder. The evidence against him: a USB flash drive containing surveillance footage of the shooting. It arrives sealed in a plastic bag, stapled to an affidavit stating that the drive holds evidence the prosecution intends to present. At the bottom of the affidavit is a string of numbers and letters: a cryptographic hash.

A sterile lab


Your first step isn't to watch the video; that would be like trampling through a crime scene. Instead you connect the drive to an offline computer through a write blocker, a hardware device that prevents any data from being written back to the drive. It's like bringing evidence into a sterile laboratory. On that computer you hash the file. Cryptographic hashing, an integrity check used throughout digital forensics, has an "avalanche effect": even a tiny change, a deleted pixel or an audio adjustment, produces a completely different code. If you had opened the drive unprotected, your computer might have silently modified the metadata (information about the file), and you would no longer know whether the file you received is the one the prosecution intends to present. When you hash the video, you get the same string of numbers and letters printed on the affidavit.
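
The check itself takes only a few lines of code. Here is a minimal sketch in Python, with a hypothetical file path and a placeholder standing in for the affidavit's hash:

```python
import hashlib

def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash a file in chunks so even a large video fits comfortably in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

# Placeholder: the real value would be copied verbatim from the affidavit.
affidavit_hash = "3f4c9a..."
evidence_hash = sha256_of_file("/evidence/surveillance_clip.mp4")  # hypothetical path

# The avalanche effect means a single altered byte would change every character of the digest.
print("MATCH" if evidence_hash == affidavit_hash else "MISMATCH")
```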

You make a working copy and hash it, confirming that the codes match, then lock the original in a secure archive. At the forensics workstation you watch the copy: what appears to be security camera footage in which the woman's adult son approaches a man in an alley, raises a pistol, and shoots. The video is as convincing as it is boring, with no cinematic angles and no dramatic lighting. You have actually seen it before; it began circulating online a few weeks after the murder. The affidavit records the exact time the police downloaded it from the social platform.

Watching the grainy footage reminds you why you do this work. You were still at university in the mid-2020s, when deepfakes went from novelty to big business. Verification firms reported a tenfold jump in deepfakes between 2022 and 2023, and face-swapping attacks grew by more than 700 percent in just six months. By 2024 a deepfake fraud attempt was occurring every five minutes. You had friends whose bank accounts were emptied, and your grandparents wired thousands to a virtual kidnapper who had sent them morphed photos of your cousin during her trip to Europe. You entered this profession because you saw how a fabricated story can ruin someone's life.

Digital fingerprints

The next step in analyzing the video is to investigate its provenance. The Coalition for Content Provenance and Authenticity (C2PA) was founded in 2021 to develop a standard for tracking a file's history. C2PA content credentials act like a passport, collecting stamps as the file travels around the world. If the video carries one, you can trace its creation and every modification. But adoption has been slow, and content credentials are often stripped when files move across the internet. In a 2025 Washington Post experiment, journalists attached content credentials to AI-generated videos, and every major platform they uploaded them to stripped the data away.
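
Checking for content credentials can be as simple as asking the Content Authenticity Initiative's open-source c2patool to print whatever C2PA manifest a file carries. A minimal sketch, assuming the tool is installed and that invoking it on a file emits the manifest as JSON (output details vary by version):

```python
import json
import subprocess

def read_content_credentials(path: str):
    """Return the file's C2PA manifest as a dict, or None if no credentials are found."""
    result = subprocess.run(["c2patool", path], capture_output=True, text=True)
    if result.returncode != 0 or not result.stdout.strip():
        return None
    return json.loads(result.stdout)

manifest = read_content_credentials("surveillance_clip.mp4")  # hypothetical working copy
if manifest is None:
    print("No content credentials: provenance cannot be established this way.")
else:
    print(json.dumps(manifest, indent=2))
```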

You then open the file's metadata, although it rarely survives online transfers intact. The time stamps do not match the time of the murder; they were reset at some point, all now listed as midnight, and the device field is blank. The software tag shows that the file was last saved by a common video encoder used by the social platform. Nothing indicates that the clip came directly from a surveillance system.
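
Pulling that metadata is routine. One common approach is FFmpeg's ffprobe, as in this sketch (hypothetical file name, and assuming ffprobe is on the path):

```python
import json
import subprocess

def probe_metadata(path: str) -> dict:
    """Dump container and stream metadata (creation time, encoder tag and so on) as JSON."""
    out = subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json",
         "-show_format", "-show_streams", path],
        capture_output=True, text=True, check=True,
    )
    return json.loads(out.stdout)

info = probe_metadata("surveillance_clip.mp4")  # hypothetical working copy
tags = info.get("format", {}).get("tags", {})
print("creation_time:", tags.get("creation_time", "<missing>"))
print("encoder:", tags.get("encoder", "<missing>"))
```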

When you read the public court filings in the murder case, you learn that the owner of the property with the security camera was slow to respond to the police's request. The surveillance system was set to overwrite its data every 72 hours, and by the time police got to it, the footage was gone. That is what made the anonymous online posting of the video, which showed the murder from the exact angle of that camera, such a sensation.

The physics of deception

You turn to the internet for what investigators call open-source intelligence, or OSINT. You instruct an AI agent to hunt for an earlier copy of the video. After eight minutes it returns a result: a copy posted two hours before the police download, carrying a partial content-credential record indicating that the clip was uploaded from a phone.

The reason there is any C2PA data at all is that companies such as Truepic and Qualcomm have developed ways for phones and cameras to cryptographically sign content at the point of capture. What is now clear is that the uploaded video did not come straight from a security camera.

You watch it again, this time looking for physics that makes no sense. You step through the frames slowly, like flipping a flip-book. You stare at the shadows, at the lines of the doorway. Then, at the edge of a wall, light that shouldn't be there pulses. It is not the flicker of a light bulb but a rhythmic banding. Someone filmed a screen.

That banding is a sign of two clocks out of sync. A phone camera scans the world line by line, top to bottom, once for every frame it records, while a screen refreshes in its own cycle of 60, 90 or 120 times per second. When a phone records a screen, it can catch the screen's brightness mid-update, producing rolling bands. But the banding alone doesn't tell you whether the recorded screen was showing the truth. Someone may simply have filmed the original surveillance monitor to save the footage before it was overwritten. To prove a deepfake you have to look deeper.
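
The drift rate of those bands is roughly the beat between the two clocks. A toy calculation with assumed numbers (a 29.97 frames-per-second phone pointed at a 60-hertz screen), not measurements from the case:

```python
# Toy beat-frequency estimate: why filmed screens show slow rolling bands.
frame_rate = 29.97      # phone camera, frames per second (assumed)
refresh_rate = 60.0     # screen refresh, hertz (assumed)

# Nearest whole number of screen refreshes per camera frame.
refreshes_per_frame = round(refresh_rate / frame_rate)

# The leftover mismatch is the rate at which the bands appear to drift.
beat_hz = abs(refresh_rate - refreshes_per_frame * frame_rate)
print(f"bands drift at about {beat_hz:.2f} Hz "
      f"(one full cycle every {1 / beat_hz:.0f} seconds)")
```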

Fake artefacts

Now you check for watermarks, the invisible statistical patterns embedded inside AI-generated imagery. SynthID, for example, is Google DeepMind's watermark for content created with Google's AI models. Your software finds hints of a watermark but nothing definitive. Cropping, compressing, or filming a screen can damage a watermark, leaving only traces, like the impressions of erased words on paper. That doesn't prove an AI created the entire scene; it suggests an AI system may have altered the footage before it was recorded off a screen.

You then run the clip through a deepfake detector such as Reality Defender. The analysis flags anomalies around the shooter's face. You break the video into still frames, use the InVID-WeVerify plug-in to pull out a clean frame, and run a reverse-image search on the accused son's face to see whether it appears in any other context. Nothing comes up.
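
Splitting a clip into stills for that kind of search takes only a few lines with OpenCV. A minimal sketch, with a hypothetical file name and output folder:

```python
import os
import cv2  # pip install opencv-python

def extract_frames(video_path: str, out_dir: str, every_n: int = 10) -> int:
    """Save every Nth frame as a PNG so individual stills can be examined or reverse-searched."""
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    saved = index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % every_n == 0:
            cv2.imwrite(os.path.join(out_dir, f"frame_{index:06d}.png"), frame)
            saved += 1
        index += 1
    cap.release()
    return saved

print(extract_frames("surveillance_clip.mp4", "frames"), "frames written")
```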

There is other evidence on the drive, including recent footage from the same camera. The brickwork matches the video; the scene itself is not fabricated.

You return to the shooter's face. The streetlight is harsh, and under it something looks off. There is thick digital noise on his jacket, his hands, and the wall behind him, but not on his face. The face is a little too smooth, as if it came from a cleaner source.

Security cameras blur moving objects noticeably, and their footage is heavily compressed, giving everything a blurry, blocky quality, except the shooter's face. You watch the video again, zoomed in on the face alone. The outline of the jaw is slightly jagged, and the two layers are sometimes a touch misaligned.
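
That visual impression can be made quantitative by comparing local noise levels across a frame: a face pasted in from a cleaner source tends to carry less high-frequency noise than its surroundings. A rough sketch of the idea, using a hypothetical still from the clip rather than any particular forensic product:

```python
import cv2
import numpy as np

def noise_map(gray: np.ndarray, block: int = 32) -> np.ndarray:
    """Estimate per-block noise as the standard deviation of a high-pass residual."""
    img = gray.astype(np.float32)
    residual = img - cv2.GaussianBlur(img, (5, 5), 0)
    h, w = gray.shape
    rows, cols = h // block, w // block
    out = np.zeros((rows, cols), dtype=np.float32)
    for r in range(rows):
        for c in range(cols):
            patch = residual[r * block:(r + 1) * block, c * block:(c + 1) * block]
            out[r, c] = patch.std()
    return out

frame = cv2.imread("frames/frame_000120.png", cv2.IMREAD_GRAYSCALE)  # hypothetical still
levels = noise_map(frame)
# Blocks with unusually low noise relative to the frame's median are worth a closer look.
suspicious = levels < 0.5 * np.median(levels)
print(f"{suspicious.sum()} of {suspicious.size} blocks look anomalously clean")
```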

The final count

You pull back from the face and watch the shooter's whole body. He raises the weapon in his left hand. You call the woman. She tells you her son is right-handed and sends you videos of him playing sports as a teenager.

Finally you go to the street itself. The building's maintenance records list the camera's mounting height as 12 feet. You measure the camera's height and its downward angle and use basic trigonometry to estimate the shooter's height: three inches taller than the woman's son.
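
The estimate is simple geometry. Here is a sketch of the arithmetic in which only the 12-foot camera height comes from the records; the two downward angles are hypothetical measurements:

```python
import math

camera_height_ft = 12.0    # from the building's maintenance records
angle_to_feet_deg = 28.0   # hypothetical downward angle from camera to the shooter's feet
angle_to_head_deg = 14.5   # hypothetical downward angle from camera to the top of the head

# Horizontal distance to the shooter, from the sight line to the feet.
distance_ft = camera_height_ft / math.tan(math.radians(angle_to_feet_deg))

# The sight line to the head drops less; whatever it does not drop is the shooter's height.
shooter_height_ft = camera_height_ft - distance_ft * math.tan(math.radians(angle_to_head_deg))

feet = int(shooter_height_ft)
inches = round((shooter_height_ft - feet) * 12)
print(f"estimated height: about {feet} ft {inches} in")
```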

The video now makes sense. It was created by cloning the son's face, superimposing it on the real shooter with an AI generator, and then filming the screen with a phone to degrade the generator's watermark. Cleverly, whoever did this chose a phone that generates content credentials, so viewers would see a cryptographically signed claim that the clip was recorded on that phone, with no edits declared after capture. The video's creator had, in effect, manufactured a certificate of authenticity for the lie.

The notarized report you send to the public defender will read like a lab write-up, not a thriller. In 2030 the reality notary is no longer science fiction; it is the person whose services we rely on to make sure that people and institutions are what they seem.
