
I've always found that writing about the things I've learned, in my own words, is an effective way to solidify the knowledge and remember the key themes later. For that reason this blog is as much for me as for anyone else. But if anyone else gets any value from the things I write, what a bonus that would be. In that spirit, all blog posts are written entirely by me, no AI here.

This book by Forsgren, Humble and Kim describes and summarizes the results and conclusions of four years of research by the authors, in an attempt to find a way to evaluate the modern software delivery performance of organizations using measurable indicators. It asks an important question: how can managers measure the performance of their teams objectively? This matters because only once we can evaluate performance in a trustworthy manner can we determine whether changes to our habits and processes are having a positive effect. The book is divided into three parts: the findings themselves, the research methods behind them, and a case study of transformation.
Although the research methods section is important and interesting, the first and third sections are what have provided me with the most lasting value.
The overarching theme of the conclusions is that organizations with high software delivery performance have measurably higher overall organizational performance. In other words, it affects the bottom line of the business. The next subsections describe what I consider the most valuable takeaways from this book.
The biggest lesson I have learned is that quality must be the number one focus of any leader, and sometimes that means making hard decisions and telling those above you in the org chart "no". It can even mean declining incentives such as time-based delivery bonuses (another lesson I've learned is that time-based incentives are a mistake). It can be difficult at times not to be distracted from the mission of quality. Customers and initiative sponsors want results: they get impatient, set deadlines and pressure teams to produce as quickly as possible. Although it is important for teams to produce quantity (we need to maximize value to our customers, beat competitors to market, etc.), in my experience it is vastly more important to produce quality. Something late but working is always better received than something on time but broken. I've learned that lesson the hard way and will now always use the quality/stability metrics of Mean Time to Restore and Change Fail Percentage as my north star.
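To make those two metrics concrete, here is a minimal sketch of how they can be computed from deployment records. The record shape and field names are my own illustration, not something prescribed by the book.

```python
from datetime import timedelta

# Hypothetical deployment history: whether each change failed in production,
# and how long it took to restore service when it did.
deployments = [
    {"failed": False, "downtime": timedelta(0)},
    {"failed": True,  "downtime": timedelta(hours=2)},
    {"failed": False, "downtime": timedelta(0)},
    {"failed": True,  "downtime": timedelta(minutes=30)},
]

failures = [d for d in deployments if d["failed"]]

# Change Fail Percentage: share of changes that required remediation.
change_fail_pct = 100 * len(failures) / len(deployments)

# Mean Time to Restore: average downtime across failed changes.
mttr = sum((d["downtime"] for d in failures), timedelta()) / len(failures)

print(f"Change Fail Percentage: {change_fail_pct:.0f}%")  # 50%
print(f"Mean Time to Restore: {mttr}")                    # 1:15:00
```

The point of tracking both together is that neither alone tells the story: a team can ship rarely-failing changes that take days to fix, or frequently-failing changes that recover in minutes.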
I've always been interested in unit testing and in producing the smallest, most atomic units of code possible so that they can be easily and properly tested. After reading this book I see the connection between that and loosely coupled architecture. I've still never personally seen a "mature" codebase that was perfectly loosely coupled; in fact, most architectures I've seen contain terribly tightly coupled components and systems. I have no data to back it up, but I've always wondered if the reason most large systems are in such poor shape is that the "winners" that become large products are those that got to market first. From what I've seen, product incubation happens in an environment where there hasn't yet been much investment and the resources needed to ensure quality do not exist. A prototype is built as quickly as possible and often doesn't need to scale to impress the first few customers. Only later, when products take off and companies scale, is the underlying technical debt noticed. Just my personal theory. But one fact I can confirm is that most codebases contain tightly coupled elements, and it's our job as effective leaders to identify where we can loosen this coupling (and, of course, to build out new initiatives with loose coupling in mind).
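The connection between small testable units and loose coupling can be shown in a few lines. This is a generic sketch with invented names, not an example from the book: the tightly coupled version hard-wires its dependency, while the loosely coupled version accepts it from outside and can be tested with a trivial stub.

```python
class ProductionDatabase:
    """Stand-in for a real database client (illustrative only)."""
    def fetch_sales(self):
        raise RuntimeError("needs a live database connection")

# Tightly coupled: the logic reaches directly into a concrete database,
# so testing it means standing up (or patching) that database.
class SalesReportTight:
    def total(self):
        db = ProductionDatabase()  # hard-wired dependency
        return sum(db.fetch_sales())

# Loosely coupled: the dependency is injected behind a minimal interface,
# so the logic can be tested in complete isolation.
class SalesReport:
    def __init__(self, fetch_sales):
        self.fetch_sales = fetch_sales  # any callable returning numbers

    def total(self):
        return sum(self.fetch_sales())

# A unit test needs no database at all:
report = SalesReport(lambda: [100, 250, 50])
print(report.total())  # 400
```

The same inversion applies at the system level: components that talk through narrow, explicit interfaces can be deployed and tested independently, which is exactly the architectural property the book's research associates with high performers.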
I've seen the "shift-left" transformation in a product company first hand. I am grateful to have had the opportunity to see it happen and to have had the support needed to do my part as a leader during that change. During my time at that company I saw us go from taking 9 months to deliver a particular release, to having the ability to release multiple times per week. A large part of that was reforming an archaic quality control process by "shifting left" on quality. In practice this meant a much higher level of collaboration between developers and quality analysts. I'm excited to see where I can apply these lessons in the future to become the primary driver of such a transformation.
One of the first "side of the desk" initiatives I ever undertook was during my time in consulting, and it was designed to reduce deployment pain, which we were experiencing regularly. I had simply assumed deployments always took until midnight and that manual steps were, to some degree, inevitable. I was working at a consulting company that had both developers and what we called "Cloud Technologists" (CTs). At this company, changes made by developers and CTs were rarely discussed during the course of a project, and when the change sets were integrated and deployed it always led to issues. One of the main issues was that configuration changes made by the CTs would break the automated tests the developers had created to test their code. One of the goals of my initiative was to create a testing framework that would be more resilient to such configuration changes. In terms of the ideas from this book, this was an example of one of the prescriptions for reducing deployment pain: detecting and tolerating failures in environments. Another prescription for reducing deployment pain, keeping all information needed to reproduce environments in version control, is exemplified in the next section.
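One way such resilience can look in practice: instead of hard-coding values that the configuration team is free to change, a test states its expectations relative to the currently deployed configuration. This is a generic sketch with hypothetical names, not the actual framework we built.

```python
import json

# Deployed configuration as the environment team might ship it (illustrative).
config = json.loads('{"page_size": 50}')

def fetch_page(items, config):
    """Application code under test: returns one page of items."""
    return items[: config["page_size"]]

# Brittle assertion, breaks whenever the configured page size changes:
#     assert len(fetch_page(data, config)) == 25
#
# Resilient assertion, states the invariant relative to the live config:
data = list(range(200))
page = fetch_page(data, config)
assert len(page) <= config["page_size"]
print("page respects configured size:", len(page))
```

The test now verifies behavior (pages never exceed the configured size) rather than a snapshot of one environment's settings, so a routine configuration change no longer reads as a code defect.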
Again thinking back to my days in consulting, I remember that my CT colleagues were responsible for delivering configuration changes, which could be captured and stored as XML metadata. At the time I proposed that we capture all of those changes in source control and deploy them automatically, rather than having CTs separately document and make the configuration changes manually on every deployment, as they were then doing. I received pushback because "CTs don't want to learn git". I accepted that uncritically at the time, thinking "they do their things their way, I do my things my way", a perfect example of Bureaucratic culture. I wasn't mature enough then to understand that my idea was good, and that it was in the organization's interest to implement this change. Armed with the concepts from this book, I would build my case much more strongly in a similar situation, communicating my idea with empathy while tying my reasoning back to our shared goals. Making that change would have been an example of Generative culture. These are ideas I now keep close at hand.
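The mechanics of what I was proposing are simple: once the XML metadata lives in version control alongside the code, a deployment script can parse it and apply each setting automatically. A minimal sketch, with hypothetical tag names and a stand-in apply function since the real metadata format varies by platform:

```python
import xml.etree.ElementTree as ET

# Versioned configuration metadata, as it might be committed to the repo.
metadata = """
<settings>
  <setting name="session.timeout" value="30"/>
  <setting name="feature.newCheckout" value="true"/>
</settings>
"""

def apply_settings(xml_text, apply_fn):
    """Parse versioned XML metadata and apply each setting via apply_fn."""
    root = ET.fromstring(xml_text)
    for s in root.findall("setting"):
        apply_fn(s.get("name"), s.get("value"))

# In a real pipeline apply_fn would call the platform's configuration API;
# here it just records what would be applied.
applied = {}
apply_settings(metadata, lambda name, value: applied.update({name: value}))
print(applied)
```

Every deployment then applies exactly what is in source control: no separate documentation to drift out of date, and no manual steps to forget.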