If a strongly worded email to your landlord about a roof leak needs a second pair of eyes, ChatGPT can be an excellent tool. It also excels at coming up with a rough first draft for non-mission-critical writing, allowing you to carefully take it apart and refine it.
But like all of its competitors, ChatGPT also suffers from a number of well-documented shortcomings, ranging from rampant hallucinations to a sycophantic tone that can easily lead users into serious misconceptions.
In other words, it’s not exactly a tool anyone should rely on to do important work – and that’s a lesson Marcel Bucher, professor of plant science at the University of Cologne, learned the hard way.
In a column for Nature, Bucher admitted that he “lost” two years of “carefully structured academic work” – including grant applications, publication revisions, lectures and exams – after turning off ChatGPT’s “data consent” option.
He disabled this feature because he “wanted to see if I would still have access to all of the model’s functions if I didn’t provide OpenAI my data.”
But to his dismay, the chats disappeared without a trace in an instant.
“No warning was forthcoming,” Bucher wrote. “There was no undo option. Just a blank page.”
The column drew sharp criticism and a fair amount of Schadenfreude on social media, with users questioning how Bucher had gone two years without creating any local backups. Others were angrier, calling on the university to fire him for relying so heavily on AI in his teaching.
Others, however, felt some sympathy.
“Well, congratulations to Marcel Bucher for sharing a story about a seriously flawed workflow and a silly mistake,” Roland Groms, a teaching coordinator at the University of Heidelberg, wrote in a post on Bluesky. “Many academics believe they can see the harm coming, but we can all be naive and run into problems like this!”
Bucher is the first to admit that ChatGPT “appears trustworthy but may sometimes make false statements,” arguing that he never “equated its credibility with factual accuracy.” Nevertheless, he “relied on the continuity and apparent stability of the workspace,” using ChatGPT Plus as his “everyday assistant.”
The use of generative AI has proven highly controversial in the scientific world.
Poorly sourced AI slop is flooding scientific journals, turning the peer review process into a horror show, The Atlantic reported this week. Entire fake scientific journals are springing up to cash in on researchers trying to get their AI slop published. The result? AI slop is being peer-reviewed by AI models, further polluting the scientific literature.
For their part, scientists keep being notified that their work has been cited in new papers – only to find that the referenced material was completely misrepresented.
To be clear, there is no evidence that Bucher was in any way pushing AI slop onto his students or getting them to publish questionable, AI-generated research.
Nonetheless, his unfortunate experience with the platform should serve as a warning sign to others.
In his column, Bucher accused OpenAI of selling him a ChatGPT Plus subscription without ensuring “basic protective measures” to keep his years of work from disappearing in an instant.
In a statement to Nature, OpenAI clarified that chats “cannot be recovered” after being deleted, and challenged Bucher’s claim that “there was no warning,” adding that “we provide a confirmation prompt before a user permanently deletes a chat.”
The company also helpfully recommends that “users maintain personal backups for professional work.”
More on AI slop in science: The more scientists work with AI, the less they trust it