Curious bosses, awkward mistakes and a looming threat: Employees are training AI to do their jobs

Workers grappling with the rapid development of artificial intelligence have said they feel “devalued” by the technology and warned of a decline in the quality of work.

A recent analysis by the International Monetary Fund revealed that AI will impact about 40% of jobs worldwide. Its head, Kristalina Georgieva, has said: “It’s like a tsunami hitting the labor market.”

Workers who have trained AI models to replace some or all of their roles tell the Guardian about their experiences.

editor

‘I now earn less while working long hours correcting the mistakes of AI editors’

Christie* edits papers for academics for whom English is a second language. She was asked to participate in a project to train new “assistant editors”, unaware that it was an AI program that would result in her receiving a lower salary.

“There was a huge shortage of qualified editors, so I assumed they were training more [people] to take some of the burden off,” says Christie, 55, who lives in the UK. “Then they asked me to fix these assistant editors’ mistakes. But the new editors were making weird mistakes, like inserting unnecessary full stops or changing the names of countries, producing nonsense.”

Christie says she “carefully and respectfully pointed out these errors”.

However, the errors continued and “sometimes they got worse”. Then, a few months later, she found out who the “editors” were.

“In a newsletter, the company acknowledged that these assistant editors were actually AI,” Christie says. “Going forward, all the work will be pre-edited and our fees will go down. So now I make less money fixing AI mistakes, which takes me more time than editing from scratch.

“There is this groupthink in the company that they have to implement AI.”

Christie says she feels “devalued, betrayed and angry” at the company.

She adds: “I would prefer to get work from any other source, but I’m stuck in this toxic cycle because they have the highest volume of work, and I still need to eat and pay rent. But a lot of people have quit.”

palliative care consultant

‘The AI struggled with patients’ pronunciation’

Mark Taubert, palliative care consultant and professor, said he was excited to work on a pilot chatbot project to explore how the technology could help patients deal with the complexities of metastatic cancer and palliative care.

Taubert, 51, who works at Velindre University NHS Trust in Cardiff, spent “several hours” recording material for the chatbot and gave it guidelines that would generally inform how it spoke to patients.

“We asked patients to write down all their questions, and added patient information sheets that we had previously written and agreed upon,” he says. “We also considered the questions I might get from the outpatient and inpatient palliative care community, like, ‘Can I drink alcohol while taking morphine?'”

The chatbot was mostly for patients at home who might have a question out of hours, for example about their medication.

Taubert says the chatbot got about “50% of the way, in a way I would have responded”, but it struggled with the vagaries of human pronunciation and human error.

“Patients don’t always use correct English and sometimes use the wrong names for medications; for example, they may say ‘morphium’ instead of ‘morphine’,” he says. “People structured their questions quite differently. We saw a need for the technology to learn about human misspellings, dialects, jargon, variations and pronunciations.

“Subsequent optimizations made the system safer, but we also had to consider how the machine would respond if a patient typed in a more disturbing question, for example, how to end one’s life.”

Taubert says the chatbot, called Rita, was used “with a lot of caveats around it” before funding ended.

“We would say: ‘Try it if you want,’ but we also put links to hospital information leaflets in each area,” he adds.

While Taubert is willing to “embrace new technologies”, he does not feel his role is threatened by AI.

“Everything we do depends on the nuances of language, body language, facial expressions and being in the room,” he says. “In the coming months or years, perhaps my work week could be enhanced by such systems by removing administrative duties and allowing me to actually talk to the patient more.”

translator

‘The overall effect is a decline in quality’

Philip*, 45, was required to train AI-based translation engines that his supervisors “want to replace us with because they will cost less”, but he says they are still unreliable four years later.

“At first, the results were essentially ridiculous,” he says. “But as we’ve refined the programs, they’ve improved. However, even after years of this, in addition to having a tendency to produce formulaic results, they’re still unreliable and insufficiently accurate, so we still need to review each AI-generated translation word by word and correct as necessary.”

Philip, who lives in New Jersey, says that in his experience, “it doesn’t save time compared with translating the content directly. I think the overall effect is a drop in quality. If you need a translation that gives a rough idea of what’s being said, AI is generally fine. But it’s not always reliable, and that’s the problem, because some of the time you’ll still run into things that are completely wrong.”

He says the moment when he will no longer be needed in his current role “has been looming over our heads for years, but we’re not there yet”.

marketing writer

‘Training your robot replacement feels like digging your own digital grave’

Joe*, 50, an award-winning marketing writer and content manager, says the company he worked for started exploring AI as a productivity tool in early 2024, but he was assured his job was safe.

“I should have seen the writing on the wall when they asked me to spend the first six months of 2025 building our comprehensive ‘AI process workflow’ and ‘best practice documentation.’ In my naivety, I thought I would be managing this system and I would be asked to oversee these processes.”

However, in August 2025, two weeks after handing in his best practice document, Joe was fired.

“In my exit interview I was told it had nothing to do with my work or performance; they attributed it to ‘market conditions,’ and some of that was undoubtedly true, but the timing of it was certainly suspicious,” says Joe, who lives in Milwaukee. “Working for this company and being asked to do this — training their robot replacement — feels like you’re digging your own digital grave.”

Joe has been told that much of his former workload has been handed over to junior staff.

“They are following my AI documentation, just entering prompts into AI clients to complete the work I was doing,” he says.

Joe is now considering a career in sales, but says it won’t be easy.

“I wouldn’t necessarily say AI has thrown me 100% off my career path, but at age 50, with the threat of AI constantly looming, I keep thinking: I could take another writing job, but am I just setting myself up for another layoff at 55?”

mathematician

‘Work will look completely different in 10 years, maybe even less’

Filippo, 44, an associate professor in mathematics, is collaborating with two startups on AI projects.

They are developing models that reason about mathematics and prove theorems with very little human input, verifying the results using the proof assistant Lean.

“It’s been three months, and although the results are still somewhat limited, it is clear that these tools are becoming stronger and more efficient by the day,” says Filippo, who lives and works in France. “As most of my colleagues are experimenting with this AI technology, we are convinced that a mathematician’s work will look completely different in 10 years’ time, or perhaps even less.

“AI will be able to replace us in mundane tasks that take up a large portion of our time, such as proving small auxiliary results necessary for our larger goals. Whether mathematicians will still be needed to prove these larger goals is debatable.”

Filippo, who works for a university, says he doesn’t think his role will become obsolete in the immediate future.

“Given that I work for a public institution, that I spend a lot of my time teaching and that these AI tools are not at the professional research level yet, I don’t feel any pressure or anxiety about my job,” he says. “But if I was 25 and had just finished my PhD, my perspective would be completely different.”

*Names have been changed
