SPACE Framework
Beyond Lines of Code
The SPACE framework came out of Microsoft Research, the University of Victoria, and GitHub in 2021. It exists for a simple reason: every previous attempt to measure developer productivity (lines of code, story points, commit counts) fell apart because it tried to squeeze a complex creative activity into a single number. SPACE takes a different approach: to get a useful signal, it says you need to measure at least three of its five dimensions.
The Five Dimensions
Satisfaction and Well-being captures how developers actually feel about their work, their tools, and their team. You measure it through quarterly surveys with questions like "I have the tools I need to be productive" and "I would recommend my team to a friend." This one matters more than most leaders realize. Declining satisfaction is a leading indicator: it typically shows up 6-12 months before resignations start.
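Turning those survey responses into a trackable trend is straightforward. A minimal sketch, assuming Likert-scale (1-5) responses and an illustrative 10-point drop as the alert threshold (both the data and the threshold here are made up):

```python
from statistics import mean

# Hypothetical Likert responses (1-5) to "I have the tools I need
# to be productive" from two consecutive quarterly surveys.
q1_responses = [4, 5, 4, 3, 5, 4]
q2_responses = [3, 3, 4, 2, 3, 4]

def satisfaction_score(responses):
    """Average Likert score, rescaled to 0-100."""
    return (mean(responses) - 1) / 4 * 100

q1 = satisfaction_score(q1_responses)
q2 = satisfaction_score(q2_responses)

# Flag a quarter-over-quarter drop of more than 10 points as a
# leading indicator worth investigating before attrition follows.
declining = (q1 - q2) > 10
print(f"Q1: {q1:.0f}, Q2: {q2:.0f}, declining: {declining}")
```

The point of the rescaling is only comparability across quarters; the absolute number matters far less than the direction of the trend.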
Performance measures outcomes, not raw output. For a platform team, that might mean API availability or how long it takes to onboard a new service. For a product team, it could be feature adoption rates or error rates on user-facing flows. The important thing: performance metrics should tie back to business or user impact, not to internal proxies that nobody outside engineering cares about.
Activity covers the countable stuff, like pull requests opened, code reviews completed, deployments shipped, and incidents responded to. These are the easiest things to instrument automatically, which is exactly why they're dangerous on their own. A developer doing deep architecture work might not open a single PR for two weeks while creating enormous value for the company.
Communication and Collaboration looks at how well knowledge flows through teams. Useful things to track include code review turnaround time, documentation contributions, cross-team PR reviews, and knowledge-sharing sessions. When collaboration breaks down, you see it in long review queues, siloed knowledge, and the same questions popping up in Slack over and over.
Efficiency and Flow is about uninterrupted focus time and pipeline wait times. You can measure this through calendar analysis (meetings per day, size of focus blocks), CI/CD pipeline duration, and developer self-reports on how often they hit a flow state. Most engineering orgs discover that meetings are their biggest efficiency killer, not technical debt.
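The calendar-analysis piece can be sketched as a gap-finding problem: given a workday and a list of meetings, how many uninterrupted blocks are long enough for deep work? A minimal sketch, assuming calendar events exported as (start, end) pairs and a two-hour minimum block (both are illustrative choices):

```python
from datetime import datetime, timedelta

# Hypothetical meetings for one 9:00-17:00 workday.
day_start = datetime(2024, 3, 4, 9)
day_end = datetime(2024, 3, 4, 17)
meetings = [
    (datetime(2024, 3, 4, 10), datetime(2024, 3, 4, 10, 30)),
    (datetime(2024, 3, 4, 13), datetime(2024, 3, 4, 14)),
]

def focus_blocks(start, end, meetings, min_block=timedelta(hours=2)):
    """Return gaps between meetings long enough to count as focus time."""
    blocks, cursor = [], start
    for m_start, m_end in sorted(meetings):
        if m_start - cursor >= min_block:
            blocks.append((cursor, m_start))
        cursor = max(cursor, m_end)  # handles overlapping meetings
    if end - cursor >= min_block:
        blocks.append((cursor, end))
    return blocks

blocks = focus_blocks(day_start, day_end, meetings)
```

On this sample day, two blocks qualify (10:30-13:00 and 14:00-17:00); note that the 9:00-10:00 hour does not, which is how a day can look "mostly free" on a calendar while yielding little real focus time.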
Implementation Strategy
Start by picking one metric from at least three different SPACE dimensions. Run satisfaction surveys quarterly. Hook into your toolchain for activity and efficiency data. Have engineering managers assess performance qualitatively during monthly team reviews. After two quarters of data, you'll have enough trend information to see which dimensions need investment. The point is never to produce a single productivity score. It's to build a multi-dimensional view that keeps you from optimizing one thing while accidentally making everything else worse.
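The "multi-dimensional view, no single score" idea can be made concrete with a per-team snapshot that keeps one metric per dimension side by side and flags regressions individually. A minimal sketch; all metric names, values, and the 15% threshold are illustrative assumptions:

```python
# One metric each from three SPACE dimensions, kept separate rather
# than collapsed into a single productivity score. Values are made up.
current = {
    "satisfaction_score": 72,           # Satisfaction: quarterly survey, 0-100
    "median_review_turnaround_h": 6.5,  # Collaboration: from the toolchain
    "avg_focus_hours_per_day": 3.2,     # Efficiency: calendar analysis
}
previous = {
    "satisfaction_score": 78,
    "median_review_turnaround_h": 5.0,
    "avg_focus_hours_per_day": 3.4,
}

def flag_regressions(current, previous, threshold=0.15):
    """Flag each dimension that regressed by more than the threshold,
    instead of averaging dimensions into one number."""
    flags = []
    for key, value in current.items():
        prev = previous[key]
        # "Lower is better" only applies to turnaround time here.
        worse = value > prev if "turnaround" in key else value < prev
        if worse and abs(value - prev) / prev > threshold:
            flags.append(key)
    return flags

print(flag_regressions(current, previous))
```

Here only review turnaround trips the threshold, so investment goes to the collaboration dimension without a composite score hiding the signal.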
Key Points
- Five dimensions: Satisfaction & well-being, Performance, Activity, Communication & collaboration, Efficiency & flow
- No single metric captures developer productivity. SPACE requires measuring across multiple dimensions
- Satisfaction surveys are a leading indicator; declining satisfaction predicts future attrition and velocity drops
- Activity metrics (PRs, commits) are only valid when combined with outcome metrics to avoid Goodhart's Law
- Developed by Nicole Forsgren, Margaret-Anne Storey, and others at Microsoft Research and GitHub
Common Mistakes
- ✗ Cherry-picking only the Activity dimension because it is easiest to measure automatically
- ✗ Running satisfaction surveys but never acting on the results, creating survey fatigue
- ✗ Measuring individual developers instead of teams; SPACE explicitly warns against this
- ✗ Treating SPACE as a replacement for DORA instead of a complementary framework