“I think it’s going to take a long time for us to actually be like, OK, this problem is solved,” he says. “Unless you can really trust the system, you would definitely want to impose restrictions.” Pachocki believes that very powerful models should be deployed in a sandbox, cut off from anything they could break or use to cause harm.
AI tools have already been used to carry out new kinds of cyberattack. Some worry they will next be used to design synthetic pathogens that could serve as bioweapons. Insert any number of evil-scientist scare stories here. “I certainly think there are worrisome scenarios that we can imagine,” Pachocki says.
“It would be a very strange thing. It’s extremely concentrated power that is unprecedented in some ways,” says Pachocki. “Imagine you get to a world where you have a data center that can do all the work that OpenAI or Google can do. Things that in the past required large human organizations will now be done by a few people.”
He added, “I think it’s a big challenge for governments to figure this out.”
And yet some would say that governments are part of the problem. The US government, for example, wants to use AI on the battlefield. The recent confrontation between Anthropic and the Pentagon revealed how little agreement there is about where society should draw the red lines for how this technology may and may not be used – let alone who should draw them. Shortly after that controversy, OpenAI moved to sign a deal with the Pentagon in its rival’s place. The situation remains fraught.
On this I pushed Pachocki. Does he really trust other people to figure it out, or does he feel personal responsibility as the chief architect of the future? “I feel personal responsibility,” he says. “But I don’t think it can be solved just by OpenAI pushing its technology in a particular way or designing its products in a particular way. We will certainly need a lot more involvement from policymakers.”
Where does that leave us? Are we really on the path to Pachocki’s vision of AI? When I asked the Allen Institute’s Downey, he laughed. “I’ve been in this field for a few decades and I no longer trust my predictions about how close or far away certain capabilities are,” he says.
OpenAI’s stated mission is to ensure that all of humanity will benefit from artificial general intelligence (a hypothetical future technology that many AI boosters believe will be able to equal humans at most cognitive tasks). OpenAI aims to do this by being the first to build it. But when Pachocki mentioned AGI in our conversation, he immediately made clear what he meant, talking instead about “economically transformative technology.”
LLMs are not like human brains, he says: “They are superficially similar to people in some ways because they are mostly trained on how people talk. But they are not designed by evolution to be really efficient.”
He added, “Even by 2028, I don’t expect that we will be able to make systems every bit as smart as people. I don’t think that will happen. But I don’t think that’s necessary at all. What’s interesting is that you don’t need to be as smart as people in every way to be very transformative.”