Software projects are fundamentally change programmes, and here in 2024, all business change programmes are really just software projects in a suit.
Some of these projects deliver successfully, but many don't. Why this happens is not at all why Dan Davies wrote The Unaccountability Machine, and yet he manages to shed a lot of light on the matter in passing.
Davies' actual focus is the 'polycrisis' – why everything seems to be going wrong all of the time in some weird, complicated, interlocking way. To address the causes of the polycrisis, he takes a necessary, meandering and weirdly fascinating journey through the history of a discipline called cybernetics (not about building robots) and the remarkable life of a man named, implausibly, Stafford Beer – who, if possible, looks even more unlikely than he sounds.
Forgotten by history
You will probably be aware of Beer at some level, even if you didn't know it. As well as founding the discipline of management cybernetics, he was central to Chile's attempt, in the early 1970s, to execute a project called Cybersyn.
Cybersyn was intended to deliver a planned economy through the power of computers. For many reasons (some of which Davies alludes to), it never quite worked and was then ended abruptly by the US-sponsored Pinochet coup. British-born Beer fled Chile and spent much of his later life working to rescue his colleagues from the Pinochet regime.
It is a great shame that Beer's work seems to have been mostly forgotten by history. Davies does a very good job of explaining the core concept of Beer's Viable System Model, and in a much better way than Beer himself does.
Beer's own work (which I felt compelled to try to digest after reading The Unaccountability Machine) is much harder to read because of how badly it has dated. It isn't helped by his love of weird terminology: he calls something an ‘anastomotic reticulum’ that we would probably just call a graph these days. To anyone who has lived through the computing revolution, much of what he laboriously explains is obvious; at the time, though, it was a remarkable feat of imagination.
Davies uses Beer's Viable System Model as a tool to analyse neoliberalism and, in particular, the way firms have changed since Beer's time. All of which is fascinating, and I urge you to read the book; there’s genuine insight into the global economy and the relationships between governments, private equity and the management of large firms.
And, to return to my point, it’s also relevant to the way software projects are conceived and run.
Filtering the information flow
Davies identifies a fundamental feature of management that Beer calls ‘attenuation’. This describes the way that management, unable to consume the full information flow from a team, must filter it.
In practice, this means coming up with particular methods for summarising or reporting on a business function. In our world, this often means things like dashboards.
Your choice of what to include on your dashboard fundamentally represents your understanding of the things you are managing. In many ways, it defines the semantics with which you relate to it.
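To make the idea of attenuation concrete, here is a toy sketch (all ticket names, fields and numbers are invented for illustration) in which a rich stream of ticket events is collapsed into a couple of dashboard figures, discarding everything else:

```python
from collections import Counter

# Invented ticket events: each carries far more context than any
# dashboard will ever show.
events = [
    {"ticket": "APP-101", "status": "done",        "comments": 14},
    {"ticket": "APP-102", "status": "in_progress", "comments": 3},
    {"ticket": "APP-103", "status": "done",        "comments": 41},
    {"ticket": "APP-104", "status": "blocked",     "comments": 7},
]

def attenuate(stream):
    """Collapse the full event stream into a dashboard summary.

    Everything not counted here -- the comments, the blockers, the
    context -- is simply discarded.
    """
    counts = Counter(e["status"] for e in stream)
    return {
        "done": counts["done"],
        "open": len(stream) - counts["done"],
    }

print(attenuate(events))  # the dashboard view: {'done': 2, 'open': 2}
```

The choice of which keys appear in the summary is exactly the semantic choice described above: whatever is not counted effectively ceases to exist for management.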
This is an extremely live topic in our field and it’s hotly debated. You will find a broad range of opinions from experienced, thoughtful people, and a huge range of practices in different organisations.
The hot ticket
The fundamental unit of work in software is the ubiquitous ‘ticket’. A friend of mine who works in TV has recently been moved onto a software project and said she couldn't believe everyone spent all their time in meetings discussing tickets.
“When do they ever actually do any work?” she asked.
These tickets get assigned to sprints or placed on kanban boards or both. They get refined and split and reworked. They gain huge histories of comments and then eventually get marked ‘done’, when they either fall off all the boards entirely, or have defects attached to them, causing them to rise again.
Those who have been fortunate enough to work for many years in business without ever encountering a software project typically react with something between disbelief, disgust and fury when they see how software projects are managed. To them, it feels so fundamentally bad.
“Why,” they ask, “is nobody able to tell me whether we are making any progress?”
Building the right thing
Of course, there are many ways of trying to measure and represent progress, including burndown, velocity, deployment frequency, lead time, restore time, change failure, work in progress, cycle time, throughput, points estimates, hours estimates, defect rates and many, many more.
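To make a couple of those metrics concrete, here is a toy sketch (invented tickets, dates and field names) computing average cycle time and throughput from ticket timestamps:

```python
from datetime import date

# Invented ticket records: when work started and when it was marked done.
tickets = [
    {"id": "APP-101", "started": date(2024, 3, 1), "done": date(2024, 3, 5)},
    {"id": "APP-102", "started": date(2024, 3, 2), "done": date(2024, 3, 12)},
    {"id": "APP-103", "started": date(2024, 3, 4), "done": date(2024, 3, 6)},
]

# Cycle time: average days from starting a ticket to finishing it.
cycle_times = [(t["done"] - t["started"]).days for t in tickets]
avg_cycle_time = sum(cycle_times) / len(cycle_times)

# Throughput: tickets completed per week over the observed window.
window_days = (max(t["done"] for t in tickets)
               - min(t["started"] for t in tickets)).days
throughput_per_week = len(tickets) / (window_days / 7)

print(f"avg cycle time: {avg_cycle_time:.1f} days")
print(f"throughput: {throughput_per_week:.1f} tickets/week")
```

Note that both numbers describe the machinery of delivery, not the value of what was delivered, which is precisely the first problem below.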
But there are, I think, two fundamental problems with these approaches, which are pretty broadly understood in the industry.
The first is that these metrics really only tell you if you are ‘building things right’, not if you are ‘building the right thing’.
And building the right thing is the primary measure of progress that we really care about. Diligently building the wrong thing to a high standard is just waste.
Typically, building the right thing is expressed as ‘business value’. If you could represent the production of business value over time, you might get the primary metric you want to see.
Unfortunately, the business value in a unit of work on software is usually impossible to discern in an objective way. It is either uncertain (in a Knightian sense) or so interdependent with other factors that you can't isolate it.
The part of the Viable System Model that is relevant here is what Beer calls System 5, which is responsible for policy decisions and for steering the organisation as a whole.
Beer calls System 5 ‘philosophy’ or ‘identity’ and this provides the context for any judgement about business value. Any part of the software is only valuable if it advances the mission of whoever caused it to be built, and that is very much a philosophical question.
Ultimately, why you want to build something informs the judgement of whether what you have is worthwhile.
Measuring value
The second problem with these metrics is what James Scott, in Seeing Like a State, calls the distinction between ‘techne’ and ‘metis’.
When you choose to measure some things, you inevitably choose to not measure others – attenuation is fundamentally a process of discarding data.
When you do this, there is always a bias to retain things that look objective and measurable (techne) and discard the fluffy ‘fingertip feel’ of metis. But the metis is critical to these sorts of activities. This is covered in great detail in the discussion of attenuation and Beer's use of the wonderfully Victorian-sounding ‘Ashby's Law of Requisite Variety’.
Together, the inability to measure real value and the discarding of crucial information just because it feels too fluffy can produce the ‘operation was successful but the patient died’ kind of software project that so many large organisations routinely experience.
Beer's Viable System Model is a genuinely valuable approach for analysing complex interlocking systems and processes, and I think it can be particularly relevant for software projects. This book is a great introduction to it, and you'll learn a lot about the polycrisis while you’re about it.