Reject the RAT and learn as you go
I recently read GeePaw Hill’s post The RAT: Rework Avoidance Theory. It’s a great post that really resonated with me, and you should go read it yourself. This post is my attempt to reframe that concept, and others GeePaw has written about, in my own words.
This part in particular really reflects how I and many other developers I’ve worked with conceive of making changes to an application:
> A second metaphor is that of seeing the change as a finished product, built in standard parts by isolated individuals, assembled at the end. It makes similar assumptions to the footrace idea, but also assumes free cost of parallelism and high bonuses from specialization.
We often think of what we’re doing as akin to crafting an intricate mechanical device, putting as much polish on it as we can to ensure that every button, switch, and lever works exactly as intended, including on whatever new functionality we’ve just bolted on. We want to deliver perfection in one fell swoop, a dramatic unveiling that solves the stated need without a single bug. We then want to move on and hopefully not need to revisit this change for weeks, months, or years, having delighted the user and switched to some other task.
Conceiving of software development with that mental model often leads one to adhere to Rework Avoidance Theory, or RAT. If we need to add fields A, B, and C to an existing form that a customer is filling out, possibly involving modifications to multiple APIs, multiple tables in the database, and all the associated code reviews, testing, and other required processes, we often want to get all of these changes done at the same time. That can take the form of including them all in the same story, to be moved along the software lifecycle as one unit. We tell ourselves that we want to avoid treading the same ground more than once within a short timeframe.
Or perhaps you’re trying to give smaller stories a go and put A, B, and C in tickets to be worked separately, but when it comes time to start work on A, the tickets for B and C are chilling in the backlog, gnawing at the back of your brain. You’re already touching components in the codebase that you’d need to modify again for B and C, so why not throw those in the same branch while you’re at it?
Where this line of thinking fails us, however, is in ignoring the benefits of first getting A all the way deployed to production, then B, then C.
We learn from the doing of A. We learn exactly which parts of the system we had to modify. We can add to our test suite and ensure that we don’t backslide on important business requirements, and the next time we traverse the territory we will possess a better map and a better idea of strategy. We validate even our broad-strokes assumptions that the feature is useful and worth sinking more time into. We refine our technique and approach, and we get the faster feedback that is supposed to be at the heart of an agile process.
Turning away from perfection towards “Do no harm”
On a related topic, we as developers have to get away from the idea that when feature A goes to production it has to be the most perfect version of A of which we’re capable. Much has been said about feature flags, which are pretty essential to rapid deployment of changes. What needs to accompany the use of feature flags is a shift in mindset away from “we will work on this new feature until we make it the absolute best we can, trying to catch all problems.” Instead, the feature flag should create a boundary (a minimal sketch in code follows the list below). The question to answer before each deploy then becomes:
> Have my changes broken anything outside of the boundary?

Where “broken” can mean:
- Have I made existing features stop working?
- Have I degraded performance?
- Have I opened up any security holes?
Essentially, “first, do no harm.” Not among these is “is the new feature working 100%?” Obviously, with each release the hope is that we are moving the needle closer to the feature being ready for prime time, to deliver value to the customer. To deploy often, however, we have to get away from the idea that each release has to be absolutely perfect. We want the code to be of reasonably good quality, and we want to get closer to meeting the needs of the users based on our current knowledge, but we know that we’ll learn more in the doing and delivering of one small piece of the whole. The whole might even change, or we might discover that the thing we were trying to achieve is best accomplished with a completely different solution.
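To make that boundary concrete, here’s a minimal sketch, assuming a simple environment-variable flag; the names (`feature_enabled`, `FEATURE_EXTRA_FIELDS`, `field_a`) are hypothetical illustrations, not anything from GeePaw’s post. Everything behind the flag is allowed to be half-finished on any given deploy; everything outside it must keep working:

```python
import os

def feature_enabled(name: str) -> bool:
    """Toggle a feature via an environment variable, e.g. FEATURE_EXTRA_FIELDS=1."""
    return os.environ.get(f"FEATURE_{name.upper()}", "0") == "1"

def form_fields() -> list[str]:
    fields = ["name", "email"]           # existing fields: these must keep working
    if feature_enabled("extra_fields"):  # the boundary: new work stays dark until ready
        fields.append("field_a")         # ship A first; B and C land in later deploys
    return fields
```

Field A can go to production incomplete, be exercised behind the flag, and only be turned on for everyone once it’s ready; B and C then repeat the same loop.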
This presupposes that you have support for such a process in your organization. Everyone has to be on board with a given feature not being “done” after a single release. You also need the ability to deploy changes quickly, or they will pile up until your next deploy finally happens, reintroducing the risk you were trying to minimize with the whole “small changes” philosophy.
Additionally, answering the above questions for each deploy candidate can be either extremely tedious and time-consuming or relatively quick and straightforward, depending on how much automation you have and how well you’ve defined what to look for, with respect to things like the OWASP lists of common security vulnerabilities. If you put in the underlying tooling and process, however, you’ll ultimately get new and useful functionality into the hands of your users faster and adapt to their changing needs in a much more agile way.
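As one example of what that automation can look like, here’s a hedged sketch of a “do no harm” smoke check run against a deploy candidate; the staging URLs and the two-second threshold are assumptions for illustration, and a real pipeline would layer a proper test suite and a vulnerability scanner on top of something like this:

```python
import time
import urllib.request

# Endpoints that existed before this change; these URLs are made up for illustration.
EXISTING_ENDPOINTS = [
    "https://staging.example.com/health",
    "https://staging.example.com/customer-form",
]

def check(url: str, max_seconds: float = 2.0) -> None:
    """Fail loudly if an existing page errors out or has slowed down."""
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=max_seconds) as resp:
        assert resp.status == 200, f"{url} returned {resp.status}"
    elapsed = time.monotonic() - start
    assert elapsed < max_seconds, f"{url} took {elapsed:.2f}s"

if __name__ == "__main__":
    for url in EXISTING_ENDPOINTS:
        check(url)
    print("Do-no-harm smoke check passed.")
```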
“Oh, Rats” by MTSOfan is licensed under CC BY-NC-SA 2.0