Stop accidentally sharing AI videos – 6 ways to identify real and fake before it’s too late

Are there any tools to detect AI videos?

Yes. Some AI detection tools exist.

AI cybersecurity firm CloudSEK released a deepfake analyzer that scans video frames for signs of manipulation. In tests, it successfully flagged viral AI videos, such as a fake polar bear rescue (it gave the clip a 57% "likely AI" score).


There are also other services, like Was It AI? and AI or Not, that let you upload frames or videos to check whether they're AI-generated. These tools look for the kinds of anomalies I described above, but in my experience, they can be hit or miss.

For example, a completely AI-generated Coca-Cola ad fooled CloudSEK's detector, which concluded the clip was probably human-made.

What about watermarks on AI videos?

I didn't include this as one of the tips because it should be obvious: if a logo like the Sora watermark is floating in a video, it's AI. Still, if you get caught up in the moment, it's easy to miss them. But once you start looking for them, they really stand out.

What is the difference between AI slop and deepfake?

“AI slop” usually refers to low-effort, mass-generated AI content — like the Sora videos you’ve seen in your social feeds. It is usually harmless and created purely for entertainment. A deepfake, on the other hand, can be a more realistic-looking fake. It is also built with AI, but it mimics real people or events and is often meant to deceive.

Want more stories about AI? Check out AI Leaderboard, our weekly newsletter.
