On Developer Productivity

When we think of productivity, the name “Alexei Stakhanov” likely doesn’t come to mind for most, yet his story provides valuable lessons in how to approach productivity, even in fields far removed from his own. Alexei was a Soviet coal miner who rose to fame in the 1930s, even landing on the cover of Time magazine in December 1935.

On August 30, 1935, Alexei allegedly mined 102 tonnes of coal in one day with just three colleagues, over 14 times the target quota. Alexei’s extraordinary output earned him significant recognition from the Soviet state and even personal praise from Stalin.

What set Alexei’s team apart? Essentially, they applied assembly-line principles to coal mining and adopted new technology—a shift in both mindset and approach. Although doubts have been raised about the accuracy of Alexei’s claim, there are still valuable insights we can draw from the story. One important question is: why couldn’t this level of productivity be scaled across the USSR’s coal mines? And what lessons can we apply to improving developer productivity today?

Developer productivity remains a critical goal for any organization because it directly fuels innovation, giving companies a competitive edge. However, as an engineering organization grows, maintaining or improving productivity can become increasingly difficult.

In this article, we will define what developer productivity means within engineering organizations and explore pathways to enhance it as teams scale and complexity increases.

Defining Developer Productivity

At its core, organizational developer productivity is about making progress quickly, together. Three main factors drive it:

  1. Speed
  2. Direction
  3. Scalability

In physics, velocity is defined as speed with a specific direction. Similarly, developer velocity is the result of both speed and direction.

Speed

Speed refers to how fast features are developed and shipped. This might be measured by how many tickets are resolved per sprint, how quickly pull requests are approved, or how frequently deployments occur.

Speed is essentially a measurement of time: from the creation of a ticket to when the corresponding code lands in production.
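As a minimal sketch of this measurement, assuming hypothetical ticket records with a creation timestamp and a production-deploy timestamp:

```python
from datetime import datetime
from statistics import median

# Hypothetical ticket records: creation time and production-deploy time.
tickets = [
    {"id": "T-101", "created": datetime(2024, 3, 1, 9, 0),
     "deployed": datetime(2024, 3, 4, 17, 30)},
    {"id": "T-102", "created": datetime(2024, 3, 2, 10, 0),
     "deployed": datetime(2024, 3, 3, 12, 0)},
    {"id": "T-103", "created": datetime(2024, 3, 3, 8, 0),
     "deployed": datetime(2024, 3, 5, 8, 0)},
]

# Lead time per ticket, in hours.
lead_times = [
    (t["deployed"] - t["created"]).total_seconds() / 3600 for t in tickets
]

# The median is more robust than the mean against one long-running ticket.
print(f"median lead time: {median(lead_times):.1f}h")  # → 48.0h
```

In practice these timestamps would come from your issue tracker and deployment pipeline; the point is simply that speed reduces to a duration you can aggregate.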

Direction

Direction pertains to code quality. In my Adaptive Systems Design framework, this is measured by _adaptiveness_, or the minimal amount of complexity added to the system. You don’t achieve developer velocity just by shipping more code—you need to deliver high-quality code that implements the required features while adding minimal complexity to the system.

We can model this by comparing actual complexity growth with the minimally viable complexity required for a feature. The deviation from the ideal direction of growth can be expressed conceptually in mathematics, for example, using cosine similarity to measure how close the team’s work is to the optimal path.
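A toy illustration of the cosine-similarity idea, assuming we can represent complexity growth as a vector of hypothetical per-module deltas (the numbers here are invented):

```python
import math

def cosine_similarity(a, b):
    # cos(theta) = (a . b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical per-module complexity deltas for one release:
# the minimally viable growth vs. what the team actually added.
minimal_growth = [2.0, 1.0, 0.0]   # modules A, B, C
actual_growth  = [3.0, 2.0, 4.0]

alignment = cosine_similarity(minimal_growth, actual_growth)
# 1.0 means growth exactly along the ideal direction; lower means drift.
print(f"direction alignment: {alignment:.2f}")  # → 0.66
```

Here the team shipped the needed growth in modules A and B but also added complexity to module C that the feature did not require, pulling the alignment below 1.0.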

Scalability

Scalability refers to the ability to replicate high developer velocity across an entire organization.

As a passage often attributed to Charles Darwin’s The Origin of Species puts it: it is not the most intellectual of the species that survives; it is not the strongest that survives; but the species that survives is the one that is able best to adapt and adjust to change. This principle applies to organizations as well: engineering teams must be adaptable. The challenge lies in scaling productivity: while individuals or small teams may work efficiently, integrating these gains across large teams or organizations can introduce bottlenecks. At an organisational level, integrations, both between software modules and between teams of humans, create the hidden barriers to achieving progress faster together.

It’s not enough to simply equip teams with “productivity” tools. The organization must also have the structures and practices that enable these tools to be used effectively as it grows.

What About Alexei Stakhanov?

Applying this model of productivity to Alexei’s story:

  • Speed: Clearly, Alexei’s team operated at incredible speed—producing 14 times the baseline quota.
  • Direction: Assuming the coal mined was of high quality, their direction was on target.
  • Scalability: However, their productivity was limited to Alexei’s team. Despite attempts to spread these practices, the USSR’s overall coal production did not see a significant increase.

Measuring Developer Productivity

We often assume that measuring productivity should be an objective process. Frederick Winslow Taylor was another prominent figure who dedicated a significant part of his career to this pursuit. Taylor was an American mechanical engineer focused on improving industrial efficiency. His method decomposed industrial workflows and optimized the individual parts in order to achieve a more productive overall process.

In one instance, Taylor made specific measurements of the optimal load per shovel swing for maximizing output per man per hour, concluding that the answer was 21.5 pounds.

Unfortunately, we cannot translate this practice directly into measuring developer productivity. Sure, we can isolate a section of a codebase and analyse its Big O complexity, but that doesn’t actually help developers become more productive at creating software, unlike more effective shovel swings, which directly contribute to more material being moved and economic value being created in the process.

The first challenge is that it is difficult to approximate the value of engineering output in its various shapes and forms. We cannot objectively determine the optimal implementation tactic, as too many factors contribute to the notions of quality, value and effectiveness. (Though one may be able to argue that pull requests with more than 500 LOC are the most efficient, as these PRs tend to get the fewest comments and revision requests.)

Measuring developer productivity is hard. It is seldom objective or effectively standardised. But we still must try: having a bad but consistent measurement still provides more insight than none. Because the nature and complexity of developers’ day-to-day tasks vary with the software module and specific specialisation, it is almost impossible to baseline every team in an organisation against each other. If we instead focus on opinionated measurement metrics for each team, we must compare them against that team’s historical performance. That said, a software module moves through different lifecycle stages that also influence the types and complexities of tasks, so we have to stay flexible.
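One simple way to sketch the “compare against the team’s own history” idea is a z-score of the current value against that team’s recent sprints; the metric and numbers below are hypothetical:

```python
from statistics import mean, stdev

def baseline_score(history, current):
    """Z-score of the current value against the team's own history."""
    return (current - mean(history)) / stdev(history)

# Hypothetical sprint-level metric (e.g. tickets resolved) for one team.
history = [12, 15, 11, 14, 13]
current = 18

score = baseline_score(history, current)
# A positive score means the team beat its own recent baseline;
# crucially, the comparison never crosses team boundaries.
print(f"z-score vs. own history: {score:+.2f}")  # → +3.16
```

The same mechanism works for any per-team metric, which sidesteps the impossibility of baselining dissimilar teams against one another.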

Key business performance metrics are also an important part of the equation. This may seem surprising at first glance, but at a team level, the business success KPIs of the applications a team is responsible for give us a glimpse of its direction. We are moving in the right direction if we are improving (in the build stage) or maintaining (in the operational stage) the key business metric indicators.

At an organisational level, the north star measurement is the Total Customer Value (TCV) of all your revenue-generating users, calculated as Customer Lifetime Value * the number of revenue-generating users the software system can support. This is an outcome influenced by team-based business goals and metrics.
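The arithmetic is straightforward; as a sketch with made-up figures:

```python
def total_customer_value(customer_lifetime_value, revenue_generating_users):
    # TCV = CLV * number of revenue-generating users the system can support.
    return customer_lifetime_value * revenue_generating_users

# Hypothetical figures: $1,200 CLV and 50,000 supported paying users.
tcv = total_customer_value(1_200, 50_000)
print(f"TCV: ${tcv:,}")  # → TCV: $60,000,000
```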

However, we must avoid driving productivity at the expense of developer well-being. Unrealistic goals can lead to burnout and resentment (even Alexei himself reportedly received death threats from fellow miners).

A Formula for Organizational Developer Productivity

We can express organizational developer productivity with a formula:

Organizational Developer Productivity = ( Speed * Direction ) ^ Scaling

In terms of more specific metrics:

= ( { TCV increase / actual added complexity } / { time period } ) ^ { scaling factor }

Alternatively, we can compare actual complexity to the theoretically minimal complexity needed:

= ( { TCV increase / time period } * { minimally viable complexity } / { actual added complexity } ) ^ { scaling factor }
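Both variants can be sketched as functions; the inputs below are invented for illustration, and in practice each term would be estimated from the measurements discussed earlier:

```python
def productivity_v1(tcv_increase, actual_complexity, time_period, scaling):
    # (Speed * Direction) ^ Scaling, with Direction ~ 1 / actual complexity.
    return ((tcv_increase / actual_complexity) / time_period) ** scaling

def productivity_v2(tcv_increase, time_period,
                    minimal_complexity, actual_complexity, scaling):
    # Variant normalising Direction by the minimally viable complexity.
    return ((tcv_increase / time_period)
            * (minimal_complexity / actual_complexity)) ** scaling

# Hypothetical quarter: $500k TCV gain over 90 days, actual complexity
# growth of 25 units against a minimally viable 10 units.
score = productivity_v2(500_000, 90, 10, 25, scaling=1.2)
```

Note that the scaling factor sits in the exponent: an organisation that replicates velocity across teams compounds its output, while one that cannot scale (scaling factor below 1) erodes even strong per-team speed and direction.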

Scaling Developer Productivity: The Roadmap

Unfortunately, there is no one-size-fits-all roadmap for improving organisational developer productivity. I create and reference many ‘best practices’, but everything must be contextualised to one’s unique challenges and operations. There is no single defined path to the productivity goals; it is all about experimenting and reviewing measurements to make sure we are always inching closer to the theoretical ideal.

1. Team Structure

Create team structures following distributed software architecture principles: high cohesion and low coupling. We want teams that are empowered to make fast decisions, with as little dependency on people and processes outside the team as possible.

Build cross-functional teams with the different skillsets required to both implement and operate software systems independently and rapidly.

2. Experimentation

Next, there must be an experimental culture in the organisation. One has to embrace failure and quickly iterate on the insights gained, especially in unknown-unknown territory. One must constantly start new experiments to explore new ways of operating that may yield greater productivity gains as an organisation.

3. Scaling Practices

Successful experiments should be shared across teams, and individuals who lead productivity improvements should be recognized. Sharing knowledge is essential, but adoption of best practices is even more important. Every new practice must be tested and adapted to fit each team’s context.

The outcomes of these sharing activities must be measured to gauge the adoption rate of the new practices. We want great presentations from the practitioners, and even more importantly, we want adoption from attendees: a mitosis-like replication of ways of working that work. Each adoption of a practice shared by a different team should take the form of a new experiment that feeds further tactical context from the adopting team back into the practice.

4. Platform Engineering Investment

Once a particular experiment or new way of working has produced sufficient productivity improvements across multiple adopting teams, we should document it and automate as much of it as possible as part of platform engineering.

The goal is to remove as much friction as possible for new teams adopting the practice within your organisational structure and policies. This involves clear documentation of the problem it solves, the benefits of the new practice, potential failure modes, a setup guide and automation scripts made available to all software engineers in the organisation.

Encourage opinionated ways of development: strike a balance between formalisation through organisational standardisation and customisation within each team. There must be foundational ground rules that are strictly enforced around critical topics such as security. But when development practices are too strictly defined and enforced, they diminish the opportunities for further experimentation. We want to create a culture like fertile soil that makes productivity experiments and adoption scaling as seamless as possible.

Conclusion

At an organizational level, developer productivity is a moving target. As individual practices evolve, an effective, bottom-up mechanism is needed to propagate these improvements to the wider organization, along with the engineering effort required to reduce the barrier to adoption. Every change unlocks new possibilities for improvement. That’s why we must keep practising gradient descent, finding the new minimum points of friction for shipping software. We must focus on the pursuit and allow the outcome to be crowd-sourced from the real builders.