TODO_BEFORE(): A Clean Codebase for 2019


Published January 1, 2019

Today’s special guest post is written by Aurelien Regat-Barrel. Aurelien is a senior developer with 15 years of experience in C++, mostly spent working with legacy code. Over the years he has learned to find excitement and gratification in working on challenging code bases, learning great lessons from the journey of restoring enthusiasm and motivation in a team overwhelmed by the complexity of its project. You can find Aurelien online on Twitter @aurelienrb and on LinkedIn.

It all started with a joke: create a compilation “time bomb” based on static_assert and the __DATE__ macro to celebrate the new year. Quite fun to do! But also quite useless, isn’t it?

Well, the more I think about it, the more I’m convinced it can be used to control and reduce technical debt. So let’s see how to take a good New Year’s resolution based on that trick!

TODO_BEFORE C++ clean code 2019

The Genesis

How and why do we accumulate technical debt in the first place? How do we find ourselves in a situation where things seem to be out of control? And more importantly: how to recover and improve from such a situation?

In the beginning, Software was waste and without form. And the Creator said, Let there be a new project, and developers producing code, and code giving features, in which is their bugs after their sort: and it was so.

And there were many evenings, and there were many nights – but not many mornings. And the developers said: Let the code give birth to all sorts of comments: and it was so. And Developers made the TODO after its sort, and the FIXME after its sort, and everything highlighted on the face of the screen. And Developers saw that it was good.

And Developers gave them their blessing and said to them: Be fertile and have increase, be rulers over the files and over the dependencies and over every line moving on the screen.

And Developers said, See, we have given every reason to refactor that code. But the Creator had refused to give his blessing for their Big Rewrite Plan, and no one was ever assigned to the cleanup work. At that moment their eyes were opened, and they suddenly felt shame at their nakedness. They saw everything which they had made, and it was not very good.

And the Creator said to Developers, What is this you have done? And the Developers said, we were tricked by the deceit of the technical debt, and we took it. And the Creator said to them: Because you have done this, Your burn-up chart will crawl on its belly, and dust will be your food all the days of your life. And there will be enmity between you and your managers; with painful labor you will give birth to new features. And so it was.

Once we start to bite into the Forbidden fruit of technical debt, it becomes addictive. All of a sudden, it feels like being caught in quicksand: the more we try to move the code, the deeper we sink into a heavy mud, draining the enthusiasm and motivation of the entire team. This heavy mud is often called “legacy code”.

But an aging code base should not be confused with something we have lost control of. There are many valuable pieces of software out there that are both old and under good control. On the other hand, many teams tend to lose control of their own work in less than two years. How long can we keep working the way we do before losing control? Why does it happen that way? How can we improve the situation?

Somehow this is just an optimization problem. If the developers are the computers, and the code is the processed data, then the team workflow is the faulty algorithm. So let’s have a closer look at why at some point the garbage collector stops working, allowing more and more leaks to happen.

The broken windows principle

The Broken Windows principle is a criminological theory introduced in 1982 with the following words:

Consider a building with a few broken windows. If the windows are not repaired, the tendency is for vandals to break a few more windows. Eventually, they may even break into the building, and if it’s unoccupied, perhaps become squatters or light fires inside.

The basic idea is that if early signs of degradation of a structure are not immediately fixed, they are likely to encourage a “nobody cares” atmosphere which will pave the way for more serious degradation, including vandalism and crimes.

This is something we probably all experienced: we should not commit a specific change because it is hacky and dirty, but a quick glance at the other parts of the code gives us a comforting justification to do so because “there are other issues to be fixed in this file, so we’ll fix all of them at once, some day…”.

And here begins the downward spiral: the more we degrade the code, the less visible each degradation becomes, paving the way for more unnoticed degradations. For example: if your application requires 1 minute to start, will you even notice that a recent commit made it 5 seconds slower? When it could launch in 1 second, 5 seconds would have been a very noticeable change. But after 10 minor degradations, it has become invisible; “minor” being the sneaky word here, since it’s a moving scale.

The progressive and unnoticed accumulation of technical debt is a key factor in the degradation of software. Once a certain threshold has been reached, most developers will lose their motivation and switch into an “I don’t care anymore” mode. The worse it is, the worse it becomes.

The same applies to everything else: compilation time, number of warnings, size of classes, number of TODOs, etc. Until we reach the point where “why lose time doing things properly: it’s already completely messed up!”. And that’s how the broken window has evolved into a hazardous place that is working its way toward becoming a minefield. Last year was bad, this year has been terrible, 2019 will be like hell!

Or will it? Once we find ourselves in such a situation, how do we find an exit route?

Introducing the TODO_BEFORE() macro

Now that we are more aware of how we can accidentally turn ourselves into “vandals” of our own work, let’s try to teach self-defense to our code! Since C++11, not only do we have the __DATE__ macro, but we also have the static_assert and constexpr keywords. Used together, they can create compile-time bombs! Here’s a basic C++17 example which can be used as such:

Based on that simple example, I wrote a TODO_BEFORE() macro (C++14 implementation available here) to be used that way:

The idea is to force the developer to think more precisely about what should be done, and when. I’ve been using it for just a few weeks, and I can already confirm it forces me to think twice before postponing some more work: it reminds me that I’ll really have to do that work in a not-so-distant future (yes, the macro does not accept dates that are too far away from now).

Maybe you’re thinking: “Ok, it’s a fun trick to use, but is it really that easy? How to make sure developers (including myself) will actually use that new macro? And what about the hundreds / thousands of existing lines of code that need to be updated? So much work to catch up… And we miss so much time… and motivation too… We will never be able to improve anything that way!”.

Gaining back control and motivation

A good starting point for any kind of progress is measurement. How can you know you are improving if you can’t measure it? In our case it’s even more critical than with code optimization: it’s about making our good work visible and understandable to everyone, including the non-tech people of the team.

When a project is out of control, when deadlines are never met, when big refactoring plans have not produced much more than regressions, developers are no longer trusted. Eventually, they lose confidence and interest in building something they can be proud of. That’s a very uncomfortable situation.

On the other hand, being able to observe the problem as a simple graph is a simple and effective start for change. And seeing the dreadful curve reach a plateau, then adopt a decreasing trend for the first time ever, is a very powerful way to gradually restore confidence and motivation. Suddenly, the future is no longer feared but looked forward to: I can’t wait for 6 months from now, when we will have removed that big pain!

So here’s the starting point: choose something simple to measure that impacts all developers all the time. You know, the kind of “minor” broken window that tends to accumulate very quickly because, taken alone, none of them is a big deal. But when 10 developers each commit a new warning once a week, that makes 2 new warnings per day!

In my case, for 2019, I decided to tackle the growing number of TODOs in my code. I started with a quick measurement with the following command:
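It was along these lines (a sketch, not the exact command from the post; the paths and extensions are project-specific assumptions):

```shell
# Count the lines containing a TODO comment in the sources:
grep -rn "TODO" src/ --include="*.cpp" --include="*.h" | wc -l
```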

I ran it on the code as it was one year ago: I got 82 results. One year later, I get 153. So we clearly have a leak here. Actually I realized it was becoming serious when I caught myself writing the following comment:
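It was of this familiar kind (the original wording is not reproduced here; this is a hypothetical example):

```
// TODO: handle this error case properly (quick fix for now)
```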

Then the “broken window alarm” went off in my head: “Come on! It’s a two-minute task, I can do it right now!”. And indeed, I was able to do it immediately. In other words, I caught myself in the process of postponing quality because other parts of the code were doing the same thing. Broken window in action!

Now that we have a real problem, and an easy way to measure it, it’s very easy to stop its growth: add a script (in the CI or pre-commit hook) that rejects any change that worsens the current situation:
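A sketch of such a check (MAX_COUNT, paths, and messages are assumptions to adapt to your project):

```shell
#!/bin/sh
# Reject any commit that increases the number of TODOs in the code.
# MAX_COUNT is the count measured today; lower it as cleanups land.
# Note that staying *equal* is allowed: a temporary degradation can be
# legitimate, as long as it is balanced by an improvement elsewhere.
MAX_COUNT=153
count=$(grep -rn "TODO" src/ | wc -l)
if [ "$count" -gt "$MAX_COUNT" ]; then
    echo "error: $count TODOs found, max allowed is $MAX_COUNT" >&2
    exit 1
fi
echo "ok: $count TODOs (max $MAX_COUNT)"
```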

This is the first and easiest action to be taken: stop the leak! Starting from now, our dreadful curve has reached a limit that must be strictly enforced. Even if you don’t immediately improve the situation after that first action, simply share the graph showing how the leak has been stopped to send a powerful message to your team… and slowly spread the desire to see it grow downwards.

The comment in the script is self-explanatory: sometimes it can be legitimate to add some sort of temporary degradation into the code, especially during a transformation process. Our goal is not to make things harder in the already quite difficult cases, but to make sure the most complex parts of the system don’t propagate everywhere.

So if you really can’t do without decreasing the quality somewhere in the code, you can still balance the impact by improving the quality in another place. And with hundreds of places to improve, that’s an easy task!

Last but not least, we need to adjust the build process a little. Indeed, with this macro we introduced some sort of randomness into our build system: today it builds fine, but in 6 months the exact same code is likely to fail. This is not acceptable from a build system point of view: we want repeatable builds.

The approach I have chosen is to enable the macro by default but to explicitly disable it (via a CMake option) when building from the CI on the master or develop branch. In all other cases (local build on a developer’s machine, or CI build on a feature branch), the macro will be enabled. I think this is a strategy to be discussed and adapted by each team.
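In CMake terms, a minimal sketch of that switch might look like this (the option and symbol names are illustrative, not the ones from the linked implementation):

```cmake
# Enabled by default; the CI passes -DENABLE_TODO_BEFORE=OFF on master/develop.
option(ENABLE_TODO_BEFORE "Turn expired TODO_BEFORE() entries into build errors" ON)
if(NOT ENABLE_TODO_BEFORE)
    add_compile_definitions(DISABLE_TODO_BEFORE)
endif()
```

The macro’s implementation then expands to nothing when DISABLE_TODO_BEFORE is defined.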

What is measured is improved

The next action to be taken in order to gain back control is also very simple: book 30-minute sessions in your agenda (ideally twice a week) to work on decreasing the total number of defects. Note that if you suffer from long build times, you don’t need any booking in your agenda 🙂

Of course if you can spend more time, do it. The main point here is to find a free slot of time that won’t require any validation from your managers. Only your tech team should have something to say about what you do. To make things nicer for everybody, don’t hide your good work, focus on the least difficult tasks first, and submit small chunks to be reviewed so that nobody can seriously blame you.

Don’t forget it’s not just the code that you are changing but the working culture of your environment: it may require a bit of patience. Once the improvement is done, don’t forget to update the MAX_COUNT value in the CI script, and report the new value in a spreadsheet so that week after week you can track and share the progress.

Constancy is the key to success here: it’s like introducing frequent degradations in the code, but the other way round… to produce the opposite result!

If you doubt your capacity to be constant in this task, make sure to have some visible updates of your curve showing how long it has been stalled. Not seeing any improvement in a while is likely to give you a motivation boost. The best help you can get is from another member of your team who shares the same excitement about improving the situation.

That was for the general case. In our specific case, thanks to the TODO_BEFORE() macro, we can take a shortcut to speed up this second stage: rather than processing each “wild” TODO one by one, simply convert them into TODO_BEFORE() statements (if they are still relevant). This allows much faster sorting and cleaning of the legacy tasks, helping to show impressive progress on the “dreadful curve”.

So the grep command in the CI script needs to be adjusted:
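Something along these lines (again, a sketch with assumed paths):

```shell
# Count only the "wild" TODOs, i.e. those not yet converted to TODO_BEFORE():
grep -rn "TODO" src/ | grep -v "TODO_BEFORE" | wc -l
```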

That way, we only count the TODOs that have not yet been converted to TODO_BEFORE(). And we can go as far as forbidding any “wild” TODO in the code to force the use of the TODO_BEFORE() macro. Then we just let the source code remind us of the deadline… how comical to use C++ to say “it’s time to garbage collect me”!

Should we care about the total number of TODO_BEFORE() in the code? I decided to keep it simple for now and not put any max count: I believe the code will naturally limit how many of them can exist at the same time. But I’m curious to learn from different approaches: please send me your feedback in a comment once you’ve played a little bit with this macro!

Future work

We can imagine going even further. For example, we could write a simple command-line tool, used in CI, to scan all the TODO_BEFORE() macros in the code, sort them by expiry date, and post into a Slack channel the ones that require attention within the next 15 days.

That way, the team will be informed in advance about the tasks that should be processed or rescheduled before they trigger a build error. The same tool could also automatically publish TODO metrics to a database connected to a Grafana dashboard in order to track and follow the progress… These are some ideas we came up with in my team, and we plan to work on such tools this year. Well, at least it’s on our TODO list…
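A first cut of the scan itself could be as simple as this sketch (assuming deadlines are written as TODO_BEFORE(month, year, ...) and the sources live under src/):

```shell
# List all TODO_BEFORE() deadlines sorted chronologically (soonest first);
# a CI job could then compare them against today's date and notify the team.
grep -rhoE "TODO_BEFORE\([0-9]+, *[0-9]+" src/ \
  | sed -E 's/TODO_BEFORE\(([0-9]+), *([0-9]+)/\2-\1/' \
  | sort -t- -k1,1n -k2,2n
```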

Conclusion

In an ideal world, legacy code should be the pride of a team, the result of a careful respect for our own collective work that we managed to turn into a competitive advantage: “old is gold”. But in practice, the gold has to be carved out of the mud, and maybe that’s what makes it so precious: it requires hard teamwork to be found and isolated.

But I think this is why legacy code is precious: it is a great master that puts us face to face with ourselves: if we mistreat our own work, we will be rewarded with frustration and demotivation. But if we decide to value and protect what we do, we will be rewarded with pride and enthusiasm.

Or as the Tao of Programming says: “A well-written program is its own heaven; a poorly-written program is its own hell!”.

Happy new year 2019!
