The agile waterfall
We are agile, tell me what you are delivering in December of next year!
(Like this article? Read more Wednesday Wisdom!)
Way back in college, they taught us the PanData System Development Methodology (SDM). This is a traditional waterfall method where you build software in sequential stages. Being sequential, under no circumstance should you start a stage before the previous stage’s outputs are complete and agreed upon.
Mutatis mutandis: Never start your Thursday before having read Wednesday Wisdom. Subscribe today!
In SDM, you start by engaging with the customer on the scope of the problem and the core requirements: The definition study. In the next stage, you start thinking about the problem at a high level and from that you create a functional design (a.k.a. “basic design”), which contains a logical data model and mocks of screens and reports. Once this is signed off, you go on to create the technical design (a.k.a. “detailed design”) which focuses on the low-level system design and the physical data model. When you get this document signed off, you start coding and then testing. The process finally culminates in a user acceptance test and the handover of the final product to the customer.
For a long time, this way of working was the holy grail because most software at the time was written without clear requirements and without much upfront thought. During my college’s "family night", one of our professors told a harrowing story about building a house without requirements and without plans. In his narrative, the builders just had a chat with the clients and went forth and built whatever they had in mind. Their “plans” continuously had to be modified when the clients said things like: “Can we have a pool on the roof?”, “we need a basement!”, and "of course we want a staircase from the first to the second floor!"
It was a hilarious and well-told story. At the end of his talk, when the building had fallen over and sunk into the swamp, the professor said: "Obviously nobody builds a house like this, but everyone still writes software this way!" He then explained that they were teaching the attendees' kids and siblings to build software like we build houses: From an approved design that captures all of the clients' requirements up-front and weaves them into a coherent whole.
One of the problems with this analogy is that most people understand that you cannot easily add a basement or remove a supporting wall once building has started, but unfortunately not many people have a clear understanding of why exactly you cannot add or change a feature in software. It is soft, is it not? How hard can it be? To make things even worse, some features are easy to add or change once coding has started, while others require you to go back to the drawing board, causing great delays and huge extra costs for the client.
If you want to listen to a great podcast on how this went for the biggest and most expensive US infrastructure project ever, listen to The Big Dig.
One of the big problems with the waterfall approach is that building software takes a long time and during that time clients often change their minds about the requirements (assuming for a moment that they had a good grasp on the requirements at the start of the project, which is a pretty wild assumption to begin with). This is not always the clients' fault because, as time moves on, the outside world changes and with that often come changes to the requirements.
Fixing the requirements is a problem though, because by doing that you use up one of the two dimensions you are allowed to fix.
When building software we have three important dimensions that govern its completion: Requirements, time, and money. Only two of these can be fixed, which inevitably means that the third must be variable. If requirements are fixed, that means that either time or money is variable, and this leads to systems being delivered way too late (money was fixed) or way over budget (time was fixed).
In some cases, fixed requirements also led to a system being delivered according to spec, but so late that the system was no longer usable because the external circumstances had changed.
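The trade-off above can be made concrete with a back-of-the-envelope calculation. The numbers below are entirely made up for illustration; the point is only that once you fix scope and team size, delivery time is no longer something you get to choose:

```python
# Back-of-the-envelope: fix any two of scope, money (people), and time;
# the third one follows. All numbers here are invented for illustration.

def months_needed(scope_points: float, engineers: int, velocity: float) -> float:
    """Time is the free variable once scope and budget (engineers) are fixed."""
    return scope_points / (engineers * velocity)

# Fixed scope (600 story points) and fixed budget (5 engineers doing
# 10 points per engineer per month): time is whatever falls out.
print(months_needed(600, 5, 10))   # 12.0 months

# The client doubles the scope but the budget stays fixed:
print(months_needed(1200, 5, 10))  # 24.0 months, like it or not
```

Real projects are of course not linear in people or points, which only makes the arithmetic more pessimistic, not more optimistic.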
Out of the ashes of the waterfall methods grew a family of "agile" methods. According to Bard, "Agile methodology is a project management approach that involves breaking down projects into smaller phases and guiding teams through cycles of planning, execution, and evaluation. It's an iterative way of managing projects and developing software that helps teams deliver value to their customers more quickly and effectively."
Agile methodologies took the world by storm and pretty much everyone I meet in the industry these days says they embrace an agile methodology.
To which my inevitable reply is: “Aha, and are you the scrumbag then?”
The whole crux of agile is that instead of dumping a huge pile of paper on the client's desk and telling them that you'll be gone for a year or two while you are building software, you build the software in iterative cycles (sprints of a few weeks each) and after every sprint you talk to the customer again to show them what you have done and to get some guidance on where to go next. Done right, agile methodologies are awesome because they get systems that do something into the hands of the clients sooner, while giving the client the opportunity to influence priorities and requirements during development.
There is unfortunately some stress between agile methodology and the realities of software engineering. Often when writing software, you make design choices that are really hard to undo if an unexpected feature request comes in that conflicts with that choice. For instance, if you choose a relational database as your core data storage layer and then later the customer says that they want a global footprint, that storage engine choice might turn out to be the wrong one, and an expensive one to undo at that.
Because of this, using agile requires more and better judgment on the part of the senior engineers overseeing the project. In the terminology of my current employer, they need to be aware which decisions are one-way doors and which decisions are two-way doors. A one-way door is a door that, once you pass through it, you cannot go back through without incurring huge costs. A two-way door, on the other hand, is a door that you can easily walk back out of again to walk through another door. For instance, your choice of build system is often a two-way door. If you start using Ant and then later figure out that you want Gradle, that is some hassle for sure, but a few days of work by a single engineer will probably fix it. Replacing your storage engine, on the other hand, probably means weeks, if not months, of work by the whole team.
In agile methodologies, the senior engineers need good judgment to recognize which doors are one-way and which are two-way. A lot of judgment, usually built on experience, is required to figure out what kind of door you are walking through and to make the right choice when you are about to walk through a one-way one.
When confronted with a one-way door, many engineers are liable to make the choice that will work no matter what the future requirements turn out to be. That is often the safe choice, but not typically the best one. Making the safe choice often means making a more expensive choice when in the end a cheaper one would have worked just as well, if not better. This is where the judgment comes in: It means knowing enough about the space and the unspecified future requirements to be confident that a particular choice is going to work out well.
I often see choices being made that overshoot any future requirements that you could imagine. I once worked with a team that wanted to use a fancy, expensive, and relatively unknown document database that could store petabytes of large blobs. When I asked them why they didn't use the pedestrian key-value store that we had lying around, they mentioned as yet unspecified future requirements. In response, I led them through an exercise that showed that even in the most expansive future that I could imagine the key-value store would still work for them. To top it off, I made them realize that the fancy document database they wanted to use was actually implemented on top of the key-value store, so if worst came to worst, they could probably implement the features they would need on top of the key-value store as well.
Despite the popularity of agile methodologies, the waterfall mindset persists, especially in the world of headcount planning and yearly roadmaps.
I regularly sit through yearly planning cycles where executives ask what they are going to get next year and how many engineers that will take. The real answer is of course that nobody knows, but you don't get to be an executive senior vice president with that mindset, so people are pressed for numbers.
The problem with pressing people for numbers is that, if you do it long enough, you will get a number. It might not be a good number. It might be a number that has no relation to reality. But it is a number.
This is an area where there is significant stress between agile methodologies and organizational realities. The crux of agile is that requirements are no longer fixed and that we are going to do the best we can given the time and money (people) given to us. However, organizational planning processes want us to say what we are going to get by when for a given set of people, in essence fixing all three dimensions.
I know we are agile and all that, but tell me what I will get in December of next year and how many engineers you need for that!
The only way to fill the gap is with judgment. And using that judgment comes with risk. As a senior engineer involved in these planning cycles, you use your superior judgment gained from experience and in the end you minimize, but then accept, the risk.
For instance, I once sat through a planning session where one of the line items for the coming year was "Support new payment formats" (this was at a bank). The core question here of course is which new payment formats. How complicated are they? What does it take to implement them in our system? What exactly do we expect to be able to do with these new payment formats? Support them in uploads? Batch payments? Transaction exports? Do they include German payment formats (which are hopelessly over-engineered and notoriously hard to implement)?
The answer of course is: Nobody knows, but the customer wants to know if they can expect that to be implemented by the end of the year. What to do?
This question is impossible to answer with any certainty, but we can take a stab at it. Taking this stab requires in-depth knowledge of the domain we are in (payments), of our system, of the client, and of our development environment. This is where you as a senior engineer come in. We know this question is impossible to answer with certainty, but we need your best answer based on your knowledge, experience, and judgment. If all the data was there for everyone to see, the answer would be easy. The value of seniority is to come up with the right answers in absence of data.
Hopefully nobody in your organization requires perfect answers in the face of unknowns and equally hopefully you will be in a position throughout the year to influence the project so that it can meet your best guesses.
When agile meets the waterfall, we are in a casino and there is some risk involved. And if things go horribly wrong because you misjudged the odds, you can always defend yourself by saying that you gambled and lost. As long as the bet was not crazy, you should be fine…
A bet that is not crazy is subscribing to Wednesday Wisdom. Really, you cannot lose! Subscribe today!