On December 11, OpenAI released ChatGPT 5.2, the latest version of its widely used AI chatbot.
Though the company ships minor updates fairly regularly, it hailed this latest version as a "significant improvement in general intelligence," calling it "the best model yet for real-world, commercial use." In a further display of hubris, OpenAI even went so far as to claim that 5.2 is its "first model that performs at or above human expert level."
Yet when we ran it through an incredibly simple prompt – to produce an alphabet chart of animals for schoolchildren – the world-beating AI model came up ridiculously short.
The alphabet shortfall was first noticed by Peter Berezin, chief global strategist at BCA Research. Using ChatGPT 5.1, which was released in November, Berezin asked the AI to "make a poster where you say A is for an animal that starts with the letter A, B is for an animal that starts with B, all the way to Z."
That version thought for six seconds and came up with an image containing just 25 letters, one short of the standard English alphabet's 26. Rough start aside, 5.1 handles "A" through "I" just fine, but things go sideways at "K," which it says "is for lion."
It continues in that vein: "O is for jellyfish," "Q is for penguin," and "R is for snake." By the time it reaches the end, "Z" stands for "urba," apparently a turtle, followed by a stray "B" paired with a picture of a pig.
"More capital spending is still needed," Berezin quipped, referencing the $1.15 trillion that OpenAI alone has committed to spend on hardware through 2025.
We were curious how ChatGPT 5.2 would measure up against its one-month-old predecessor – and sure enough, it didn't disappoint.
Although the latest version of ChatGPT performed slightly better on individual animals, it still produced only 24 letters of the English alphabet, dropping "U" and "Z" entirely. In 5.2, "Y is for yak" appears immediately after "T." This particular alphabet ends with "X," which of course stands for "X-ray fish" but is illustrated with a zebra.
The pictures themselves were also questionable: a kangaroo with a misshapen limb, an iguana with two tails, a narwhal with odd eyes, fins, and a bird's beak, and an elephant with a cat's face, to name a few of the low points.
A follow-up prompt only shuffled the mistakes around. This time there were 25 letters in total. "Y" remained a problem area, again standing in where "U" belongs, only now it reads "unicorn," which, last we checked, is not a real animal. Finally, there were two entries for "X": one "is for fish," followed by another "X" for "x-ray fish," both with the same zebra illustration.
The second poster also started mangling the instructions, inserting pieces of the prompt into the poster's title: "A is for Alligator, B is for Bear…"

As some users on X-formerly-Twitter pointed out, Google's Gemini produces much the same results, and a quick look at Grok shows that Elon Musk's AI isn't even close. But even if ChatGPT is the best at making an animal poster, that bar still seems absurdly low – and certainly nowhere near the level of any "human expert" we've ever met.