(Like this article? Read more Wednesday Wisdom!)
Unless you spent the last year under a rock, you will have noticed that there have been significant breakthroughs in the field of Artificial Intelligence (AI). And like in the first era of AI, there have been many outsized predictions on how this is going to upend Everything and Everyone will be out of a job and Computers will make All the Big Decisions.
Please note that today computers are mostly really dumb
and they are already making all of the big decisions.
Personally I hope that this particular AI future will happen soon. If the cost of production drops to near zero, that means that everyone can have everything for almost nothing. Seems like a utopia to me. Maybe The Culture will actually happen sooner rather than later :-)
All joking aside, what will the future hold for software engineering?
Let's start by acknowledging that nobody really knows. Making predictions is hard, especially when they concern the future. There is an abundance of opinions out there, ranging from apocalyptic to not-quite-so-apocalyptic. A lot of people are putting their money where their mouth is, mostly by starting companies that represent a bet on a particular future for this technology. Some of them will win, but which ones?
It's a bit like standing outside of a casino and trying to spot the winners among the people going in. Probably the best course of action is to give a little bit of money to everyone and take a share of their winnings. Or, come to think of it, maybe it is better to invest in the casino itself, because it will inevitably take money from all the losers and will give only some of that money to the winners, turning a tidy profit while they are at it. This last strategy comes down to investing in the people who make the plumbing for AI, which explains why Nvidia stock went bananas recently...
I went to college in 1984, right during the first AI spring (which, as we all know, was followed directly by an AI winter without passing through AI summer or AI fall). This was the time when a lot of clever algorithms were invented that could do things that people until then assumed were truly the domain of the human mind.
One of my associates says that is the true definition of AI: The computer doing things that until very recently were assumed to be a monopoly of the human mind. For instance in the dim and distant past, when it was accepted that computers could do calculations on the trajectories of ballistic missiles, playing chess was thought to be solidly in the domain of the human mind. Then the computer was taught how to play a decent chess game and minds were blown. Turns out that alpha-beta pruning together with a book of chess openings, a bunch of heuristics, and a really fast CPU can really do well in that game. By now the computer routinely beats grandmasters and most people wouldn’t call that AI anymore.
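For the curious, the core trick is small enough to sketch. Below is a generic minimax search with alpha-beta pruning; the evaluate and move-generation functions are placeholders of my own invention, and a real chess engine would add an opening book and a pile of heuristics on top of this skeleton.

# A minimal sketch of minimax with alpha-beta pruning (not any particular chess engine).
# `evaluate`, `legal_moves`, and `play` are stand-ins for a real position evaluator
# and move generator.
def alpha_beta(pos, depth, alpha, beta, maximizing, evaluate, legal_moves, play):
    moves = legal_moves(pos)
    if depth == 0 or not moves:
        return evaluate(pos)
    if maximizing:
        value = float("-inf")
        for move in moves:
            value = max(value, alpha_beta(play(pos, move), depth - 1,
                                          alpha, beta, False,
                                          evaluate, legal_moves, play))
            alpha = max(alpha, value)
            if alpha >= beta:  # the opponent will never allow this line, so prune it
                break
        return value
    else:
        value = float("inf")
        for move in moves:
            value = min(value, alpha_beta(play(pos, move), depth - 1,
                                          alpha, beta, True,
                                          evaluate, legal_moves, play))
            beta = min(beta, value)
            if alpha >= beta:
                break
        return value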
Anyway, during that AI spring there was also an abundance of outsized predictions about what computers were going to be able to do. I was regularly advised against studying computer science because any day now computers were going to program themselves. And to the extent that they couldn't, powerful programming languages would allow end-users to write programs by simply telling the computer what they needed, and it would write the code that did it.
We all know how that ended up :-) But, new round, new chances. What will happen now?
Repeating what I said before: I don't know, and I don't think anybody does. The general population seems to follow Amara's law, which states that in the short term we tend to overestimate the impact of technology and in the long term we tend to underestimate it. Software engineers, however, seem to be prone to a particular modified version of that law: most of them correctly estimate the short-term impact of AI but still hugely underestimate the long-term impact.
Let us remember that every major technological invention wiped out the jobs of many people. The list of jobs that disappeared recently because of new technology includes human computers (yes, that was a thing), lift operators, assembly line workers, and travel agents. There is no reason to assume that this will not happen this time. And since this particular invention is right in the field of human thought, this time it will be the so-called “knowledge workers” who are going to be impacted, maybe for the first time in history.
Looking at what Large Language Models (LLMs) can do well, we can solidly predict that if your job consists of plowing through a bunch of information that already exists in order to reorganize and present it in a different form, your job will be wiped out soon.
This explains why LLMs have such an amazing impact in the world of education, because if there is one population whose job it is to take information that is already out there and spit it out again, it is students. Earlier this year I took a philosophy course; after the course had ended I fed the final exam into ChatGPT and got mostly perfect answers. This is not surprising, because LLMs are really good at generating text on topics about which there is lots of text already, like Jean-Paul Sartre's concept of bad faith.
Some people think AI will go nowhere because of the well-known fact that ChatGPT and friends are prone to making stuff up. Case in point: the recent disaster where two hapless lawyers in New York filed a motion packed with references to non-existent but absolutely plausible-sounding cases, courtesy of using ChatGPT for legal research.
There is undoubtedly more of that going on. Some time ago I came across an interesting algorithm question on LinkedIn that I decided to feed into ChatGPT. Here too, ChatGPT came up with an answer that sounded completely plausible but that upon inspection was total bullshit. As part of writing this week’s article I asked Google’s Bard the same question and it performed about as well, but as a bonus, Bard also came up with two bogus references (including URLs) to papers allegedly describing the “algorithm”.
Both in law and in software engineering you need to be really, really correct, and so people might be forgiven for thinking that this technology does not apply to these professions. However, I would advise caution. Yes, the examples point out some real weaknesses in the systems we have today, but I do not know whether making up random stuff is a fundamental aspect of the models or something that can be remedied.
Remember, most things in our field get cheaper, faster, and smaller.
Except for the software engineers.
In the case of the hapless lawyers, the court fined them $5,000 each and found that it is in itself not inappropriate to use AI technology to do legal research but that, given the current state of the technology, you should not copy the results into your motion without additional checks. The Economist added to this that the first law firms to figure out how to harness AI can make a killing because it would allow them to fire thousands of associates and paralegals who are currently plowing through centuries of legal texts to come up with support for a particular point of view or to find the legal status quo on a particular issue.
So what about software engineering then?
It seems obvious that AI technologies are going to change our jobs. But how exactly? And what can we do about it?
Per usual we can take some pointers from the past. There has been a ton of research into systems that would allow users with little or no coding skill to create powerful applications. All of these technologies failed, mostly for the following two reasons:
These kinds of systems are usually built around narrowly defined (and hence limiting) application architectures. The system can generate a blank application and the user then configures this application to do whatever it is they want it to do. If the user needs something else, the whole system breaks down. It's maybe great that you can easily build a calculator with your application builder (a common example), but how many of those do you need and why don't you just use a spreadsheet?
It turns out that even with powerful application builders, telling a computer what to do still requires “algorithmic thinking”. The person using the application builder needs to be able to convert a complex business process into a series of steps that combine simple actions to achieve the required result.
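To make that concrete: even a mundane request like "send a reminder to every customer with an overdue invoice" has to be decomposed into explicit steps before anyone, or anything, can turn it into code. Here is a made-up sketch; the invoice records and the send_email stub are invented purely for illustration.

from datetime import date

def send_email(to, subject, body):
    # Stand-in for a real mail API; printing is enough for this illustration.
    print(f"To: {to}\nSubject: {subject}\n{body}\n")

def overdue_reminders(invoices, today):
    # Step 1: select the invoices that are unpaid and past their due date.
    overdue = [inv for inv in invoices if not inv["paid"] and inv["due_date"] < today]
    # Step 2: group them per customer, because we want one mail per customer.
    per_customer = {}
    for inv in overdue:
        per_customer.setdefault(inv["customer_email"], []).append(inv)
    # Step 3: act once per customer.
    for email, invs in per_customer.items():
        send_email(email, "Overdue invoices",
                   f"You have {len(invs)} overdue invoice(s) with us.")

overdue_reminders(
    [
        {"paid": False, "due_date": date(2023, 5, 1), "customer_email": "a@example.com"},
        {"paid": True, "due_date": date(2023, 5, 1), "customer_email": "b@example.com"},
    ],
    today=date(2023, 7, 1),
)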
There is an interesting paper called "The camel has two humps" that seems to argue that at least some of the ability to code is innate and that for those who don't have it, taking a college class in basic computer science does not help much. This insight comports with my experience teaching people to code.
But, like I said, new round, new chances. Maybe new AI-driven tools can do a better job? To try this out I have enabled one of the available AI coding assistants in my coding environments.
Thereby helping the demise of my own profession by giving the machine more things to learn from :-)
It will be no surprise to most of you that the AI codes like an idiot on crack who happens to have a huge memory and can make associations really fast. It mostly shouts out things that might seem like a genius suggestion to someone who cannot code, but that often make little or no sense to someone who can. That said, I am still somewhat impressed. Not because the coding assistant is perfect (it’s not, that’s obvious), but because it is sometimes not entirely wrong. Unfortunately, you need to be a good software engineer to distinguish between the crappy suggestions, the decent looking (but wrong) suggestions, and the good suggestions.
Here is an example from one of my coding sessions: I needed a function that takes a dictionary (sourced from JSON) and writes out a GraphViz .dot file that shows the relationships between entries in the map. This was in Python so I had started typing my function as follows:
def gen_dot(m: Dict[str, Any]):
Yes I use Python3's typing annotations because if you don't you're a moron.
The next thing that the AI coding assistant suggested was this:
print("digraph G {")
Spot on! Not a great job on naming the graph, but okay, naming is one of the two hardest problems in computer science (together with cache invalidation and off-by-one errors :-).
Next, I started coding a for loop, which the assistant then completed like this:
for k in m:
    print(f'"{k}" -> "{m[k]}";')
print("}")
To a non-programmer this might look like genius. To a novice programmer this might look impressive. To an actual software engineer who knows what they’re doing this looks a bit hopeless but very nice of the assistant to get some of the boilerplate right.
Of course, one of the problems of coding assistants is that they do not have a lot of information on what you actually want to do; they have to try to glean that from the existing code and comments, and I can appreciate that this might be hard.
So I tried something else instead, asking the AI to write me that function. I went to Google’s Bard and asked this question: “Write me a Python function that takes a dictionary and generates a graphviz dot file that shows the relationships between the elements in the dictionary. Each dictionary value is itself a dictionary with a key called "related_id" that points to other elements in the main dictionary.”
Bard thought about this for a few seconds and then spit out this code:
import graphviz as gv

def generate_graphviz_dot(dictionary):
    """Generates a graphviz dot file that shows the relations between the elements of a dictionary.

    Args:
        dictionary: The dictionary to generate the graphviz dot file for.

    Returns:
        A string containing the graphviz dot file.
    """
    dot_file = ""
    for key, value in dictionary.items():
        if isinstance(value, dict):
            if "related_id" in value:
                dot_file += f"{key} -> {value['related_id']};\n"
    return dot_file

if __name__ == "__main__":
    dictionary = {
        "a": {"related_id": "b"},
        "b": {"related_id": "c"},
        "c": {"related_id": "a"},
    }
    dot_file = generate_graphviz_dot(dictionary)
    with open("graph.dot", "w") as f:
        f.write(dot_file)

    # Render the graph using Graphviz
    gv.render("graph.dot", "graph.png")
The result is really impressive and mostly right, but it has a few problems. First of all, it doesn't generate a valid GraphViz dot file (the "digraph" header and the closing brace are missing). Another problem is that this code does not work correctly because the call to "gv.render" does not run the "dot" command to turn the dot input file into a png. Additionally, the code ignores the fact that, depending on the format of the keys in the map, the resulting dot file might be invalid.
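For reference, here is a sketch of how those problems could be fixed. This is my code, not Bard's: it emits the digraph header and closing brace, quotes the node names so arbitrary keys don't break the syntax, and shells out to the dot command to actually produce the image (which assumes GraphViz is installed on your machine).

import subprocess

def generate_graphviz_dot(dictionary):
    # Wrap the edges in a digraph block and quote the node names,
    # so the output is valid dot regardless of what the keys look like.
    lines = ["digraph G {"]
    for key, value in dictionary.items():
        if isinstance(value, dict) and "related_id" in value:
            lines.append(f'  "{key}" -> "{value["related_id"]}";')
    lines.append("}")
    return "\n".join(lines) + "\n"

if __name__ == "__main__":
    dictionary = {
        "a": {"related_id": "b"},
        "b": {"related_id": "c"},
        "c": {"related_id": "a"},
    }
    with open("graph.dot", "w") as f:
        f.write(generate_graphviz_dot(dictionary))
    # Actually run dot to render the png (requires GraphViz on the PATH).
    subprocess.run(["dot", "-Tpng", "graph.dot", "-o", "graph.png"], check=True)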
More interesting than these problems is that it took me a few tries to get Bard to generate this code. The first time round I used the name "id" instead of "related_id" and then Bard got the logic of the "generate_graphviz_dot" function subtly wrong!
All of this is terrible in a world where correctness really matters. Like law. Like coding. And so you might be forgiven for thinking that this tech will not impact your job anytime soon. But, as I wrote above: in our profession things mostly get smaller, faster, and cheaper; Bard and other AIs will get better at this. AI coding assistants might suck today, but pretty soon they will suck less!
So if AI engines will be able to write good code for clearly defined problems, what will our role be? Fortunately, asking the question that way pretty much answers it too…
In order for Bard to generate something useful I had to give it a pretty precise explanation of what I wanted to achieve. And that explanation required a large amount of insight on my end. As software engineers will understand (but laymen typically do not), that is actually the hard part of coding: not the process of turning very specific designs into code, but coming up with those designs! All through my career, once I was able to write down in enough detail what needed to be done, writing the code was never a problem.
Most non-programmers think that programmers write code like the stuff that Bard generated all day every day, but that is not true. Instead what I do is take incomplete and badly specified user requirements, try to understand where the requirements come from, refine them by thinking them through, confirm my understanding with the user representatives, and then finally turn these requirements into statements that are precise enough so that they can be converted to code. Once I am done with all of that, the coding has become almost trivial. Even if an AI would do that perfectly, it is still only a small percentage of my job today.
Getting user requirements right in enough detail requires an enormous amount of understanding of the real world and a capacity to think through all the factors involved in a way that eventually allows the underlying problems to be solved with a computer. For non-trivial novel problems, LLMs will probably never be able to do that.
Remember though that new technology wipes out the jobs of people who do more or less exactly what the new technology does better. So if you are the kind of software engineer who takes a technical design that lays out what needs to be built and how it needs to be built in excruciating detail, and then writes code for it, an AI will come for your job really soon now. However, I do not know any engineers like that, and I have never ever seen a spec that was so detailed that it would allow the actual implementation to be a rote exercise.
AI will bring about a huge shift in many professions. It will eliminate certain jobs entirely and severely impact lots of others. Software engineers will not disappear, just as legal researchers will not disappear. But the successful legal researchers of tomorrow will be the ones who know how to use AI effectively, and in that same vein the successful software engineers of tomorrow will be the ones who know how to use AI effectively.
Make sure you know how to do that. Do not ignore AI coding tools today because they are not great yet, lest they become great while you are not paying attention and you end up not being positioned for it. It might even be worthwhile to take a class or two on how these AI technologies actually work.
I did not survive for 35 years in this field by ignoring new technologies and I sure as hell am not going to stop paying attention now!
Here's an 8-minute audio version of "Software engineering in the time of AI" from Wednesday Wisdom, converted using the recast app.
https://app.letsrecast.ai/r/aceff2d7-871f-4226-adaa-9494211a25d8