My father spent his career as an accountant for a major public utility. He didn’t talk much about work; when he did engage in shop talk, it was usually with other public utility accountants, and it was incomprehensible to anyone who wasn’t one. But I remember one story from his work, and that story is relevant to our current engagement with AI.
One evening he told me about a problem at work. It was the late 1960s or early 1970s, and computers were relatively new. The operations division (the one that sends trucks out to fix things on poles) had acquired several “computerized” systems for analyzing engines – no doubt early versions of what your auto repair shop uses all the time. (And no doubt much larger and more expensive.) The question was how to account for these machines: Were they computing devices? Or were they truck maintenance tools? It turned into a kind of war between the operations people and the people we would now call IT. (My father’s work was more about adding up long columns of figures than about making decisions on accounting policy issues like this; I used to call it the “philosophy of accounting,” my tongue not entirely in cheek.)
My immediate thought was that this was a simple problem. The operations people probably wanted the machines classified as computer equipment to keep them out of their budget; no one wants to spend more than their budget. And the computing people probably didn’t want all this extra equipment eating into theirs. It turned out that this guess was absolutely wrong. Politics is all about control, and the computer group wanted control over these strange machines with new capabilities. Did operations know how to maintain them? Back in the late 1960s, these machines were probably relatively delicate, and may have included components such as vacuum tubes. Likewise, the operations group didn’t want the computer group controlling how many of these machines they could buy and where to put them; the computer people would probably do something more fun with the money, like leasing a big mainframe, and leave operations without the new technology. In the 1970s, computers were for processing bills, not for fixing broken lines from trucks.
I don’t know how my father’s problem was resolved, but I know what it has to do with AI. We’ve all seen that AI is good at many things – writing software, writing poems, doing research – we all know the stories. Human language may yet become a very high-level programming language, perhaps the highest possible level: the abstraction to end all abstractions. That could let us reach the holy grail: telling computers what we want them to do, not how to do it, step by step. But there’s another part of enterprise programming, and that’s deciding what we want the computer to do. That means taking into account business practices, which are rarely as uniform as we’d like to think; hundreds of cross-cutting and possibly contradictory rules; company culture; and even office politics. The best software in the world will go unused, or be used poorly, if it doesn’t fit its environment.
Politics? Yes, and this is where my father’s story matters. The conflict between operations and computing was politics: power and control, played out within the convoluted rules and standards governing accounting at a public utility. One group stood to gain control; the other stood to lose it; and regulators stood by to make sure everything was done properly. It is naïve for software developers to think that somehow this has changed over the past 50 or 60 years – that there is somehow a “right” solution that doesn’t take politics, culture, regulation, and more into account.
Let’s look (briefly) at another situation. When I learned about domain-driven design (DDD), I was shocked to hear that a company could easily have a dozen or more different definitions of “sale.” A sale? That’s simple. But to an accountant, a sale means bookkeeping entries; to the warehouse, it means moving items from stock to trucks, arranging deliveries, and recording changes in stocking levels; to the sales team, a “sale” is a certain kind of event that may even be hypothetical: something with a 75% chance of happening. Is it the programmer’s job to rationalize this, to say, “Let’s be adults; ‘sale’ can only mean one thing”? No, it is not. Understanding all the facets of “sale” and finding the best way (or, in Neal Ford’s words, the “least bad way”) to satisfy the customer is the job of a software architect. Who is using the software, how are they using it, and how do they expect it to behave?
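To make the DDD point concrete, here is a minimal sketch (in Python, with hypothetical class and field names of my own invention) of how three bounded contexts might each model their own “sale” without being forced into a single definition:

```python
from dataclasses import dataclass

# Each bounded context defines its own "Sale"; none of them is the "real" one.

@dataclass
class AccountingSale:
    """Accounting context: a sale is a pair of ledger entries."""
    debit_account: str
    credit_account: str
    amount: float

@dataclass
class WarehouseSale:
    """Warehouse context: a sale is stock moved from shelves onto a truck."""
    sku: str
    quantity: int
    truck_id: str

@dataclass
class PipelineSale:
    """Sales-team context: a "sale" may still be hypothetical."""
    customer: str
    expected_value: float
    probability: float  # e.g., 0.75 = "75% chance of closing"

# The same business event looks completely different in each context.
deal = PipelineSale("Acme Corp", 10_000.0, 0.75)
print(deal.probability >= 0.75)  # prints True: a "sale" that hasn't happened yet
```

The design choice here is the interesting part: instead of one universal `Sale` class that satisfies no one, each context keeps its own model, and translation happens at the boundaries between them.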
As powerful as AI is, this kind of thinking is beyond its capabilities. It might become possible with more “embodied” AI: AI that could sense and track its surroundings, interview people, decide whom to interview, analyze office politics and culture, and manage conflicts and ambiguities. It’s clear that, at the level of code generation, AI is better at dealing with ambiguity and incomplete instructions than earlier tools were. You can tell a coding assistant, “Just write me a simple parser for this document type; I don’t care how you do it.” But it still can’t deal with the ambiguity that is part of any human office. It can’t make a reasoned decision about whether those new devices were computers or truck maintenance equipment.
How long will it take for AI to make such decisions? How long before it can reason about fundamentally ambiguous situations and come up with “least bad” solutions? We will see.