(Like this article? Read more Wednesday Wisdom! No time to read? No worries! This article is also available as a podcast. You can also ask your questions to our specially trained GPT!)
This is a follow-up to a Wednesday Wisdom article that was written more than two years ago. The wisdom hasn’t changed, or at least not much, but lots of other things have, so I think it is time for a review of where we are.
Transparency note: I am a Member of Technical Staff at OpenAI.
Many years ago, when furry creatures from Alpha Centauri were still real furry creatures from Alpha Centauri, all these real furry creatures were programming in assembler. Then, compilers came around and many of the real furry creatures were full of disdain because of the bloated machine code that the compilers generated.
As evidenced by the fact that many compilers did not seem to understand that setting a register to zero could be done by XOR-ing the register with itself and did not need a longer, more bloated instruction that explicitly stuffed all zero bits into the register.
But, times have changed and for more than twenty years now I have not been able to write better assembler code by hand than most compilers generate. The last time I wrote assembler for a production use case was probably over thirty years ago…
Back in those days, the real furry creatures wrote their code using powerful editors such as ISPF, vi, and emacs. Then Integrated Development Environments (IDEs) came along and quite a lot of real furry creatures abhorred the complicated and slow editors that tried to integrate the entire software engineering workflow into a single graphical user interface that was admittedly impossible to run over a telnet session. I was definitely one of the last holdouts there, but the times have changed and even I now regularly use the fantastic IDEs that JetBrains pumps out, or some flavor of VSCode with enough extensions to make it go voom.
Old habits die hard though and so I load the vi extension into all of them because otherwise there will be “ZZ”s or “:q”s all over my codebase.
Today, AI coding agents are here and many real furry creatures are really upset by the thought that some AI chat bot can write code for them, insisting that it is all terrible and that these agents will never replace humans when it comes to writing production code. Compared to the earlier furores about compilers, IDEs, fourth-generation programming languages, and other software engineering technology improvements, passions seem to run much higher now.
My guess is that much of that negative passion about AI coding agents is driven by a combination of real fear and equally real hubris.
Let’s start with the fear. Many people make an exceptionally good living being software engineers and the thought that that gravy train could grind to a halt is not very comforting. I have friends with kids in college and they are actively wondering if it will make sense for these kids to study computer science. Mutatis mutandis, I read a LinkedIn post the other day where someone was wondering if they should stop their kid from going to law school because the poster had himself founded a company that is creating an AI legal advisor. Obviously, in the best tradition of LinkedIn, this was more of an attempt to draw attention to that product than it was an honest question, but that doesn’t mean that the question is not real.
On top of that, there is hubris: Obviously, my skills are so amazing that no robot can imitate them. What I do requires exceptional insight, lots of knowledge, experience, a certain “je ne sais quoi”. The thought that a bag of numbers could do a lot of that too (and sometimes as well as you) is apparently deeply insulting to many people.
One of the aspects of the deep anxiety around artificial intelligence is probably that computers are now doing things that until recently were strictly in the domain of humans. This raises the complicated question of what it means to be human. Even that is a sliding scale though; nowadays, nobody is upset that computers play Chess and even Go better than the best humans, but in the 1990s there was a not insignificant amount of anxiety about what it would mean once computers started surpassing the best human players. Once that happened it turned out to be a nothingburger and by now we can safely say that that anxiety has completely gone away.
Today’s AI is different though. For me it was always obvious that computers would eventually beat humans in Chess and Go, both full-information games with a limited amount of state that is confined to a small board and a few pieces. Today’s models however do things that regularly amaze even their creators, and for many tasks they are already much better than most humans. That is a sea change: For our entire existence we have been confronted with other animals that were bigger, stronger, and faster than we are. But there is one thing that made us unique: Our intelligence. When a machine comes along that appears to be intelligent as well, then who or what are we, really?
Answer: Ugly bags of mostly water.
There is a natural tendency to diminish the stature of anything that threatens us. This is somewhat easy with today’s AI models that are not difficult to trick into saying something incredibly dumb or wrong.
I swear, if I see one more post on LinkedIn that some new AI model cannot count the number of r’s in “strawberry”, I am going to explode.
However, the power of AI is not that it will never say something wrong. The power of AI is that it is a force multiplier for the smartest humans. The fact that I can drive a car into a wall does not make the car useless. The fact that today’s AI chatbots are not always perfect does not make them useless.
Here is a realization that should inform your insight into every new technology: Our profession is incredibly good at making things better, smaller, faster, and cheaper (except for the engineers themselves). Compilers weren’t great at first, but now they are. IDEs used to be cumbersome, large, and slow, but by now they are good and almost indispensable for writing software (cue hate mail from the vim and Emacs crowd). Almost everything we invented was big, clunky, and expensive at first, but give us a few decades and we will make it small, efficient, and cheap. Again, except for the software engineers themselves 🙂
The case for AI is not helped by muppets making outsized and nonsensical predictions that are impossible to believe and easy to refute. But, realize this: The fact that some people are idiots about it, does not mean there is not a revolution going on. There is, and it will significantly alter the way we work and how many people we need for a given task. Remember that it took about 60 years from Thomas Savery’s first demonstration of a steam engine (1698) to the start of the industrial revolution (~1760). ChatGPT was launched three years ago. To quote one of Amazon’s principals: This is still day 1.
The enthusiasm for replacing software engineers with robots comes from the fact that nobody likes software engineers much: Every company needs lots of them, they are expensive, and a lot of them are prima donnas. A startup founder who just got $5 million in funding does not want to spend half of that on one year of ten junior software engineers, some of whom complain about the quality of the free sushi. One senior engineer and a bunch of ChatGPT credits sounds like a much better deal for that founder, doesn’t it?
It is a dirty secret that a lot of coding is pretty mundane. An awful lot of what I do is of the nature: “Add <this> flag to <these> function calls and when it is set, do <this>”. Or: “Take <this> file and depending on whether the virtual machine we are on is in Azure, AWS, or GCP, upload it to Azure Blob Storage, S3, or the GCP object store.” I haven’t done any research on this, but I would guess that more than half of the code that I write is not rocket science. Then there is the category of code that is more interesting, but that has a lot of plumbing around it. For instance: “Add <this> RPC to <this> service”. In that last example, before we get to the fun of writing the actual implementation of the RPC, there is a ton of boring plumbing to be done like adding the RPC and its parameters to the service description, generating a skeleton for the implementation, and checking the arguments. AI coding agents are already terribly good at all of this.
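To make the object store example concrete, here is a minimal sketch of that chore, the kind of plumbing an agent will happily bang out. The helper names and the way the cloud is detected are made up for illustration, not taken from any real project; the sketch assumes the official boto3, azure-storage-blob, and google-cloud-storage packages.

```python
# Minimal sketch of the "upload this file to whichever cloud we are on" chore.
# The detection helper and all names here are illustrative, not from a real project.
import boto3                                      # AWS SDK for Python
from azure.storage.blob import BlobServiceClient  # Azure Blob Storage SDK
from google.cloud import storage as gcs           # GCP object store SDK

AZURE_CONNECTION_STRING = "..."  # in real code this comes from your secret store


def detect_cloud() -> str:
    """Placeholder: real code would probe the instance metadata endpoints
    (or read its own config) to figure out where it is running."""
    return "aws"


def upload(local_path: str, bucket_or_container: str, object_name: str) -> None:
    cloud = detect_cloud()
    if cloud == "aws":
        boto3.client("s3").upload_file(local_path, bucket_or_container, object_name)
    elif cloud == "azure":
        service = BlobServiceClient.from_connection_string(AZURE_CONNECTION_STRING)
        blob = service.get_blob_client(container=bucket_or_container, blob=object_name)
        with open(local_path, "rb") as fh:
            blob.upload_blob(fh, overwrite=True)
    elif cloud == "gcp":
        gcs.Client().bucket(bucket_or_container).blob(object_name).upload_from_filename(local_path)
    else:
        raise ValueError(f"unknown cloud: {cloud}")
```

Nothing in there is hard; it is mostly looking up which call takes which argument in which SDK, which is exactly the sort of thing the agents get right on the first try.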
Then there is SDK complexity. Every time I need to authenticate to a cloud service, I am digging through pages and pages of cloud SDK documentation while ripping snippets of code from StackOverflow and other sources. It’s not necessarily hard, I have done it before, but every time I need to do it, it still takes me time because I don’t do it often enough for it to stay top of mind. And of course: SDKs change, or because of a legacy code base I am locked into an older version of the SDK, so whatever I remember might not be exactly on point.
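As an illustration of the kind of incantation I mean, here is a rough sketch of that authentication boilerplate. The profile name and account URL are made up; the calls themselves come from boto3 and the azure-identity and azure-storage-blob packages.

```python
# Sketch of typical cloud authentication boilerplate; all names are illustrative.
import boto3
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

# AWS: a session picks up credentials from the environment, the shared
# config files, or an instance role; the profile name here is made up.
session = boto3.Session(profile_name="my-dev-profile")
s3 = session.client("s3")

# Azure: DefaultAzureCredential walks a chain of credential sources
# (environment variables, managed identity, "az login", ...) until one works.
blob_service = BlobServiceClient(
    account_url="https://myaccount.blob.core.windows.net",  # made-up account
    credential=DefaultAzureCredential(),
)
```

None of this is rocket science either, but remembering whether it is a Session, a Client, or a Credential chain this week is precisely the sort of thing I am happy to outsource.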
AI coding agents are also already brilliant at this. Talking to an AI coding agent is like talking to somebody with amazing pattern matching skills and with perfect recall; someone who has read all books, all websites, and all publicly available code in the world. And to top it off, they can type much faster than I can. AI agents do not have to write all of my code to be useful; if they only write the mundane code that I totally know how to write but that still takes me time to write, it is already a huge value add.
Recently, I have spent quite some time writing code with the help of ChatGPT and, to be honest, I am very enthusiastic. Not because it never makes mistakes (hint: I make those too). Not because it invented a spectacular new algorithm for a particular problem (hint: I rarely do this myself). But simply because it was able to generate lots of working code very quickly: code that I had to write, that would not at all have been a problem for me to write, but that the bot produced faster and, in some cases, better because, guess what, it knows all the patterns, all the APIs, all the SDKs, and all the example code much better than I do.
In my other field, I recently had to write a paper about what Immanuel Kant would have thought about a particular legal problem. Obviously, entire encyclopedias have been written about what Kant would have thought about a whole range of topics, and ChatGPT has read them all. To prepare for the paper I spent a happy half hour with ChatGPT Deep Research, in which it read a few more books and articles and then wrote me a halfway decent paper about the topic. Not within the word limit, not with references formatted the way the university likes to see them, but it did come up with a few angles and concepts that I hadn’t thought of and that were interesting. I took those angles and concepts, researched them some more by reading additional articles, and then wrote my own paper, within the word limit and with references formatted according to Dutch legal standards.
The university was less than enthusiastic about this approach and, after a frank conversation with a professor who asked me if I had used an LLM (answer: “Yes, because I am neither stupid nor incompetent”), they awarded me the minimum passing grade for my efforts. It’s okay, I know they are Luddites, and I vividly remember having discussions with my professors during my law school undergrad about using Google Search and Wikipedia. Many lawyers do not only apply Roman law; they are also technologically still stuck in Roman times.
All of this goes to say that I think AI is here to stay. Like many technologies that came before, it will take some time to mature and settle in, but the successful people of the future will be the ones who know how to use these tools well. The question is not whether AI agents will replace all the work an individual software engineer does; the question is whether, in the future, today’s teams of seven software engineers will be able to get the same output using only three software engineers and a gaggle of AI coding agents. I would say that the odds of that future look very good.
This means that if you are a software engineer today and you are not learning how to be a better and faster coder with AI agents, you are doing yourself a disservice that might have significant ramifications for your career. Do not miss this bandwagon.