💨 Hire more people or accelerate the team you have?

Originally published on LinkedIn

The struggle to attract, grow and retain software talent has, frankly speaking, lost all reason. Rampant salary inflation forces organisations into an ongoing reality check on what they can realistically hire and afford to retain. 

While the talent crunch is stressful for teams and leadership alike, we can treat it as a healthy constraint. It is a forcing function for understanding the performance of our current delivery system: do we need ten extra engineers, or can we find the capacity we need within our current team?

Imagine if a team could operate 20, 50 or 100% faster - the equivalent of an extra engineer per squad, or an additional two-plus months of delivery time each year. You can now produce a ball-park estimate of how much spare capacity the team has with a couple of hours of work and a spreadsheet.

⚠️ Your mileage may vary depending on current delivery performance, but these gains are possible, and each year, more teams realise them. While it can take time to reach the highest levels, your team may be sitting on substantial easy wins.

How fast is fast?

The Accelerate metrics, combined with our own experience working with teams, show that a two-pod, ten-person group (with a healthy backlog) deploys to production over one thousand times a year - roughly four deployments every working day. That's fast.

Not only fast: the practices and safety nets needed to deploy changes within minutes also mean these teams spend up to 50% less time on rework, incidents, compliance and security responses.

But wait, there's more: teams working in productive environments are happier and more motivated to keep things constructive.

How fast is your team?

DORA Quick-check tool

It takes two minutes to check, but you have to be honest! The DORA quick check tool is a great delivery performance measure in the absence of any formal metrics. When using it, be realistic: consider the range of services the team works on regularly - the young, shiny microservice, the intricate front-end app and the bloated older deployable that causes people to suck their teeth whenever modernisation is suggested.

Do the math.

With your current performance estimated, book an afternoon and work up a spreadsheet. Don't overthink this bit - do enough for a one-off assessment of your team and components benchmarked against the medium, high and elite performance metrics. Factor in your annual costs, number of pods and any anticipated growth.
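If it helps to see the arithmetic, here is a minimal sketch of the kind of back-of-the-envelope calculation that spreadsheet captures. The pod count, cost and time-lost percentages below are made-up placeholders - replace them with your own estimates and benchmarks.

```python
# Back-of-the-envelope capacity estimate (all numbers are hypothetical placeholders).

pods = 2                             # number of pods (squads)
engineers_per_pod = 5
annual_cost_per_engineer = 120_000   # fully loaded cost, in your currency

# Rough share of engineering time currently lost to slow delivery:
# waiting on builds/releases, rework, incident fire-fighting, manual checks.
time_lost_today = 0.30               # your estimate, e.g. informed by the DORA quick check
time_lost_at_elite = 0.10            # what high/elite performers typically tolerate

engineers = pods * engineers_per_pod
reclaimable_share = time_lost_today - time_lost_at_elite

reclaimed_engineer_equivalents = engineers * reclaimable_share
reclaimed_cost = reclaimed_engineer_equivalents * annual_cost_per_engineer
extra_delivery_months = reclaimable_share * 12

print(f"Capacity reclaimed: ~{reclaimed_engineer_equivalents:.1f} engineer-equivalents")
print(f"Roughly {reclaimed_cost:,.0f} per year, or ~{extra_delivery_months:.1f} extra delivery months per engineer")
```

With these placeholder numbers the result is about two engineer-equivalents (one per squad) and around 2.4 extra delivery months per engineer per year - which is exactly the kind of prize described above.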

This assessment will show you the size of the productivity prize relative to your team. You now have the inputs for a business case and a way to validate it. Depending on your organisation's context, one or more of the following is the appropriate motivation to lead the conversation:


  • Time to market (get it delivered sooner)

  • Retain market dominance (customer trust, startup-speed, enterprise-scale)

  • Attract, grow and retain talent (learn from the best, do your best) 💪

  • Reduce waste (more customer value, less extra headcount)

Taking Action - release the brake(s)

Learning we need to change doesn't necessarily lead to change - talk to a doctor who still smokes! So, as with our personal lives, to achieve a 10x improvement we need to lean into three straightforward but challenging steps. Although the tactics vary with team size and organisation culture, the pattern for any team is the same:

  1. Prepare the team for an open, objective conversation. Go hard on the problem, not the people. The path to elite performance depends, of course, on the starting point; however, at some point it is guaranteed to demand change across people, processes and technology.

  2. Call out the significant constraints and make a plan for them. Long-running systems often have technical, architectural or capability limits on how fast the team can safely change them. Use your math on the potential gains to prioritise these limits - and seek outside help if necessary.

  3. Promote the right culture. Acknowledge and reward reliability just as much as delivery - we ignore either at our peril. Look for and celebrate the small wins. Programmes of work may tackle the significant constraints; however, long-term continuous delivery relies on the team quickly identifying and acting on opportunities to make delivery smoother and ownership easier.

What needs to be true if we want to confidently deploy new changes within 30 minutes?

The power of measurement.

The most powerful step in any performance improvement is to benchmark your current state. Having context on which components are efficient to work on and which are time vampires for your team will lead to cultural change. 

I've yet to find a team that reaches a moment of realisation and is content to let things be. This is also echoed in the latest Thoughtworks Technology Radar.

Don't overcomplicate it

Metrics tooling for software delivery is an evolving space. If you're starting out, I recommend you start with some one-off manual measures:

✅ Delivery lead time - how long from code entering the master branch to a production deployment?

✅ Deployment frequency - how often do these deployments happen? (A minimal sketch of measuring both follows this list.)

❌ Change Failure Rate - how many deployments cause problems? Skip it. Measuring CFR has side-effects on the team, such as a reluctance to raise incidents promptly, and it doesn't cover naturally occurring failures.

❌ Mean Time to Recovery - how long does it take to restore service when problems occur? Skip it. MTTR is hard to measure and has little value when you're starting out.
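As promised above, here is a minimal sketch of the two one-off manual measures. The timestamps are hypothetical - in practice you would pull them from your version control and deployment history for a recent window.

```python
from datetime import datetime
from statistics import median

# One-off manual measure (hypothetical data): for each production deployment,
# record when the change entered the master branch and when it was deployed.
deployments = [
    # (merged to master,             deployed to production)
    (datetime(2023, 3, 1, 10, 0),  datetime(2023, 3, 2, 15, 0)),
    (datetime(2023, 3, 6, 9, 30),  datetime(2023, 3, 6, 11, 0)),
    (datetime(2023, 3, 13, 14, 0), datetime(2023, 3, 15, 10, 0)),
]

# Delivery lead time: median hours from master to production.
lead_times_hours = [(deployed - merged).total_seconds() / 3600
                    for merged, deployed in deployments]
print(f"Median delivery lead time: {median(lead_times_hours):.1f} hours")

# Deployment frequency: deployments per week over the observed window.
first, last = deployments[0][1], deployments[-1][1]
weeks = max((last - first).days / 7, 1)
print(f"Deployment frequency: {len(deployments) / weeks:.1f} per week")
```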

If you're going to invest in metrics, start with the quality of service customers are experiencing.

✅ SLIs, SLOs, SLAs - effective measures of the availability customers experience over time. Ensure you're the first to know when something is wrong, then track this to balance the delivery speed metrics. Availability is easier to measure and more valuable for teams starting out with metrics.
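To make that concrete, here is a minimal availability SLI/SLO sketch. The request counts and the 99.9% target are hypothetical; the point is simply that an availability SLI and its error budget are a couple of lines of arithmetic over data you probably already have.

```python
# Minimal availability SLI/SLO sketch (hypothetical request counts and target).
total_requests = 1_250_000   # requests served over the last 30 days
failed_requests = 640        # requests that returned server errors

slo_target = 0.999           # SLO: 99.9% of requests succeed

sli = 1 - failed_requests / total_requests           # the measured indicator
error_budget = 1 - slo_target                         # allowed failure share
budget_consumed = (failed_requests / total_requests) / error_budget

print(f"Availability SLI: {sli:.4%} (target {slo_target:.1%})")
print(f"Error budget consumed: {budget_consumed:.0%}")
```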
