
There's No Such Thing As Software Productivity

Bill Caputo, through repeated conversations we've had, has convinced me of something very surprising. It was something that changed the way I think about the world, and how I do my job.

There is no such thing as software productivity.

As Martin Fowler observed almost a decade ago, productivity in software cannot be usefully measured. The reason is that it just doesn't exist in The Realm of Relevant Things. Put another way, productivity has no applicability as a metric in software. "How much did we create today?" is not a relevant question to ask. Even if it could be measured, productivity in software does not approximate business value in any meaningful way.

This is because software development is not an activity that necessarily produces anything. Here's a thought experiment: Let's say that you have a couple of developers working on the same project, and by accident, both of them pick up the same task on the same day. The first one, Frank, hauls off and writes a 1000 line framework that solves the problem beautifully. The code is well written, well tested, and the deployment and operation of it is well documented. The second developer, Peter, heads off to the park for the day, where he thinks about the problem while he feeds the pigeons. Around 4:45, Peter wanders back to the office, deletes 100 lines of code, deploys the change...and the problem is fixed.

Which of these two developers was more "productive" today? The answer is: It doesn't matter. What matters is that Peter solved the problem, while simultaneously reducing long term maintenance costs for the team. Frank also solved the problem, but he increased maintenance costs by producing code, and so (all other things being equal) his solution is inferior. To call Peter more "productive" is to torture the metaphor beyond any possible point of utility.

I would argue that what good software developers do is remove problems. The opposite, in fact, of production. The creation of technological artifacts such as code, documentation, and data is a necessary evil in pursuit of the goal of removing problems. That's why, sometimes, the most effective solution to a problem is a 5 minute conversation.

This post has been truncated. Everything after this paragraph was a rant, and not relevant to the central point. Kind of ironic, right? Thanks for reading!

Comments


MattRogish

"Programmer productivity *measurement* is a myth" oops

Isaac Gouy

@Ben Rady >> The act of production is not necessarily related to the delivery of business value <<

Programmers solving problems "is not necessarily related to the delivery of business value" either.

Programmers solve problems that didn't need solving, solve last year's problems, create solutions that destroy more business value than they deliver, ... etc etc

If "we're better off dropping metaphors altogether" then you should be talking about $$$ -- "As Martin Fowler observed", "If my four successes yield $1 million profit each, but Joe's one success yields $10 million more than the cost of all his projects combined - then he's the one who should get the promotion."

Right place, right time, right contacts - to be on the lucrative project - trumps technical ability.

Ben Rady

Isaac G -- Dropping metaphors and talking about money (savings or revenue) is great. We should all strive to do that, whenever and however we can. Absolutely. Also agree that focusing on valuable problems is more important than technical ability.

And I agree that solving problems that don't need solving can be an issue...but I do think it's easier to hide waste in "productivity" than it is in fake problem solving. When you're building something, it's really easy to hold it up, lie to yourself (and others), and say "I did something today"...even if what you did doesn't help anyone do anything.

On the other hand, if you've got someone who's paying you to solve their problems, and their problems aren't being solved, it's pretty clear that you're not doing your job.

William Payne

Just because productivity is hard to measure does not mean that it is nonexistent. Having said that, however, most of the proxies that we use to quantify productivity (because it is difficult to measure directly) are pretty useless on their own.

Productivity is a complex, multi-dimensional phenomenon that defies accurate measurement. There exist a number of metrics that appear to be correlated with productivity, but using one (or a small number) of these has historically proven to be a poor way to measure performance. Some people have spent time thinking about this: http://hackerboss.com/why-your-metrics-suck/ ... but it is clearly not an easy or straightforward problem to solve.

In my mind, productivity metrics are only a small facet of a larger problem: How do we conceptualise the management of the discipline of software engineering. What models do we use to organize our teams in the most effective way?

Whilst I am a mere dilettante in this area, and am nowhere near to reaching any conclusions, I have done some thinking of my own:
http://williamtpayne.blogspot.co.uk/2012/11/modeling-team-productivity-hacking.html

BenLinders

I tend to agree, you can't measure productivity directly, and express it in 1 figure. There are companies that measure things that they call productivity, I have my doubts if they are useful to provide insight, to manage and improve productivity (see also http://www.benlinders.com/2011/not-everything-that-can-be-counted-counts/).

However, as this article and several responses already mention, there are factors that influence how much value a software team delivers to its customers. Like problems that have been solved, user stories delivered, reduced maintenance costs, defects delivered, increased test coverage, team morale and happiness, etc. Careful measurement, analysis and discussion of some of these factors can certainly help to improve the value that is delivered by software teams.

Ceperez

Nice post. However, how then do we measure programmers so that they can be adequately compensated?

At current wages, an extremely skilled and experienced developer rarely commands a salary that is more than twice that of a recent graduate. Does that even make sense?

CuriousAgilist

Classic case of where removing a few lines of code would have saved weeks of a team's wasted work: remove the draw feature. http://p4r.buzzleberry.com/?p=481&utm_source=buffer&buffer_share=794fb

Jitterted

So let's take this further with a hypothetical situation:

Alice takes a story card (feature "Omega") and in 2 days implements a feature by writing very little code and doing some great refactoring work. Turns out, Bob took the same story card and took 5 days to implement it. Charlie takes another story (feature "Epsilon") and takes 4 days to implement it.

Turns out, however, that feature "Omega" (Alice's version, because Bob's missed the rollout train) caused dollar sales of product "O" to drop 12% over a 1 week period, whereas "Epsilon" ended up increasing product "E" dollar sales by 12% in the same week. (Yes, I'm suspending disbelief a bit here; as Matt says, we can't -- necessarily, or perhaps easily -- attribute sales to an individual implemented story, even if they're for different products.)

So, which developer was most productive? Charlie, because his feature increased business value, or Alice, because she finished the feature in time for the rollout?

Answer: it doesn't matter, because the question makes no sense.

The problem is: what are we trying to measure, and _why_ are we measuring it? Are we trying to objectively evaluate how well Alice, Bob, and Charlie are doing so we can reward (or fire) them? Or are we trying to figure out if the company is getting a good ROI for the team?

If it's individual productivity, how could we possibly say that the business value is what matters? After all, is it Alice's fault that the feature (which worked as the product manager wanted) caused sales to drop? Is it even the product manager's fault, who did lots of analysis and user testing? And what about Bob, is it his fault the story tracking system allowed him to pick up the same story as Alice?

This isn't to say that all developers are equal, but it's very dangerous to try to use metrics -- whether it's features implemented or business value gained -- to evaluate or compare individuals in a system.

;ted
