Why is it becoming harder to make AI predictions?


Inevitably, these conversations take a turn: There are all these ripple effects happening in AI right now, but what happens next as the technology gets better? That's usually when people turn to me, expecting either a prophecy of doom or a message of hope.

I probably disappoint them, if only because it's becoming harder to make predictions about AI.

That said, MIT Technology Review has an excellent track record of understanding where AI is going. We published a list of predictions for what's next in 2026 (where you can read my thoughts on the legal battles surrounding AI), and all the predictions on last year's list came true. But every holiday season, it gets harder and harder to work out the impact of AI, mostly because of three big unanswered questions.

For one, we don't know whether large language models will keep getting smarter in the near future. Since this particular technology underpins nearly all the excitement and concern in AI right now, powering everything from companions to customer service agents, a slowdown would be a huge deal. In fact, it's such a big deal that we devoted an entire series of stories in December to what a post-AI-hype era might look like.

Number two, AI is deeply unpopular with the general public. Here's just one example: About a year ago, OpenAI's Sam Altman stood next to President Trump and enthusiastically announced a $500 billion project to build data centers across the US to train large-scale AI models. The pair either did not anticipate or did not care that many Americans would strongly oppose such data centers being built in their communities. One year later, Big Tech is fighting an uphill battle to win public opinion while continuing to build. Can it succeed?

Lawmakers' response to all this frustration has been deeply confusing. Trump has appeased Big Tech CEOs by making AI regulation a federal issue rather than a state one, and tech companies now hope to codify that into law. But the coalition that wants to protect children from chatbots ranges from progressive California lawmakers to an increasingly Trump-aligned Federal Trade Commission, each with different objectives and approaches. Will they be able to overcome their differences and rein in AI firms?

When frustrating conversations at the holiday dinner table get this far, someone will usually say: Hey, isn't AI being used for objectively good things? Making people healthier, unlocking scientific discoveries, helping us better understand climate change?

Well, sort of. Machine learning, an older form of AI, has long been used in all kinds of scientific research. One branch, called deep learning, underpins AlphaFold, the Nobel Prize-winning protein-prediction tool that has transformed biology. And image-recognition models are getting better at identifying cancerous cells.
