The Guardian's view on AI: the departure of safety staff raises concerns about the industry's pursuit of profit at all costs | Editorial

Hardly a month goes by without some AI grandee warning that this technology poses an existential threat to humanity. Many of these warnings may be vague or naive; others may be self-serving. Calm, level-headed scrutiny is required. Some warnings, however, are worth taking seriously.

Last week, several notable AI safety researchers left their posts, warning that companies chasing profits are sidelining safety and pushing risky products. In the near term, this points to products being degraded in the rush for short-term revenue. Without regulation, public purpose gives way to profit. The growing role of AI in government and daily life – as well as the profit motives of its billionaire owners – certainly demands accountability.

The decision to make conversational agents – chatbots – the main consumer interface for AI was primarily a commercial one. Conversation and interactivity drive more intense user engagement than a Google search bar. The OpenAI researcher Zoe Hitzig has cautioned that introducing advertising into that dynamic risks manipulation. OpenAI says that ads will not influence ChatGPT's answers. But, as with social media, they may be less visible and more psychologically targeted, drawing on far more personal exchanges.

It is worth noting that Fidji Simo, who built Facebook's advertising business, joined OpenAI last year. And OpenAI recently fired its executive Ryan Biermeister for "sex discrimination"; multiple reports stated that the executive had strongly opposed the rollout of adult content. Together, these moves suggest that commercial pressures are shaping the direction of the firm – and perhaps that of the wider industry. The way Elon Musk's Grok tool was left running long enough to generate abusive content, then restricted behind a paywall, and halted only after investigations in the UK and EU, raises questions about the harms of monetisation.

It is harder to assess the more specialised systems being built for social purposes, such as education and government. But just as the frantic pursuit of profit introduces hard-to-resist biases into every human system it touches, the same will be true of AI.

This is not the problem of any one company. A vaguer warning came in the resignation letter of the Anthropic safety researcher Mrinank Sharma, which speaks of a "world in crisis" and of having "seen again and again how difficult it really is to let our values dictate our actions". OpenAI was founded as a non-profit; after it committed to commercialisation from 2019, Anthropic emerged promising to be a safer, more cautious alternative. Mr Sharma's departure suggests that even safety-minded companies struggle to resist the same pull of profit.

The reason for this realignment is clear. Companies are burning through investment capital at historic rates, their revenues are not growing fast enough and, despite impressive technical results, it is not yet clear what AI can actually do to generate profits. From tobacco to pharmaceuticals, we have seen how profit incentives can distort decisions. The 2008 financial crisis showed what happens when essential systems are driven by short-term pressures and weak oversight.

Strong state regulation is needed to address this. The recent International AI Safety Report 2026 offered a sober assessment of the real risks – from faulty automation to misinformation – and a clear blueprint for regulation. Yet despite its endorsement by the governments of 60 countries, the US and the UK refused to sign it. That is a worrying sign that they are choosing to shield the industry rather than rein it in.
