Wednesday, February 02, 2011

More facts and figures from Capers Jones

I continue my reading of Capers Jones’ Applied Software Measurement, as discussed a few entries ago - Software Facts - and I’d like to report some more of Jones’ findings.

These numbers are very insightful but, and it’s a but Jones acknowledges, the data is very shaky. As Jones says, “software measurement resembles an archeological dig. One sifts and examines large heaps of rubble, and from time to time finds a significant artifact.”

He readily admits that there is not really enough data to draw solid conclusions (less than 1% of the data needed is available) but this is where we are. This is as good as it gets. Before anyone rushes to say “Capers Jones is wrong” ask yourself: “Do I have better data to base conclusions on?”

I find two weaknesses in Jones’ work. Firstly, his reliance on Function Points. I agree with him that lines of code are not a good measure, but I have some doubts about function points for two reasons:
  • They do not incorporate algorithmic complexity
  • Automating function point counting is difficult and becomes an approximation. Thus true function point counts are expensive, which makes it hard to calculate them often
But function points, for all their faults, seem to work.
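
To make the measure concrete, here is a minimal sketch of how an unadjusted function point count is assembled from the standard IFPUG element weights. The element counts are invented for illustration, and a real count also applies a value adjustment factor plus a good deal of expert judgement - which is exactly why automating it is hard.

```python
# Illustrative sketch only, not Jones' method: unadjusted function point
# (UFP) counting using the standard IFPUG weights. The element counts
# below are made up.

# (low, average, high) complexity weights for each IFPUG element type.
IFPUG_WEIGHTS = {
    "external_input":          (3, 4, 6),
    "external_output":         (4, 5, 7),
    "external_inquiry":        (3, 4, 6),
    "internal_logical_file":   (7, 10, 15),
    "external_interface_file": (5, 7, 10),
}

COMPLEXITY = {"low": 0, "average": 1, "high": 2}

def unadjusted_function_points(elements):
    """elements: iterable of (element_type, complexity, count) tuples."""
    return sum(
        IFPUG_WEIGHTS[etype][COMPLEXITY[cplx]] * count
        for etype, cplx, count in elements
    )

# A hypothetical small system:
ufp = unadjusted_function_points([
    ("external_input",        "average", 12),  # 12 x 4  = 48
    ("external_output",       "high",     5),  #  5 x 7  = 35
    ("internal_logical_file", "average",  4),  #  4 x 10 = 40
])
print(ufp)  # 123
```

Note that nothing in the weight table says anything about algorithmic complexity - two systems with identical inputs, outputs and files score the same however hard their internals are - which is the first weakness above.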

My second issue with Jones’ work concerns his assumption of the waterfall model. Yes, he acknowledges Agile - he even says it is the most productive approach in some circumstances - but all his data and assumptions are shot through with waterfall thinking. I suppose this is only natural: it was (is) the dominant model in the industry for 30 years or more.

OK, on with some numbers and conclusions. Again, these are almost entirely US-based but are probably very similar elsewhere....

Productivity
  • Jones repeatedly states and shows how quality and productivity are related. The most productive teams have the lowest bug counts and shortest schedules.
  • The three biggest costs on a software project, in order: rework (bug fixing), paperwork, and meetings & communication. Sometimes managerial costs are actually greater than coding costs.
  • Jones states as a little-known fact that “excessive staffing may slow things down rather than speed things up”
  • Inspections (code, design, requirements) are the most efficient known ways of preventing problems.
  • Studies show that some developers are 20 times more productive than others, and some make 10 times as many errors as others.
Costs
  • Paperwork (requirements, technical specs, etc.) on the largest systems can exceed one person’s ability to read it in an entire career. (Think about that: you join a new project at 21 and by the time you hit retirement you haven’t finished reading the documentation.)
  • Most corporate effort (time and money) tracking systems are incorrect and miss between 30% and 70% of the real effort.
  • Thus this data is essentially useless for benchmarking, creates cost overruns because it is so inaccurate, and is actually dangerous because it puts future projects at risk when used for estimates. (Before you question this statement go and check: does your tracking system measure unpaid overtime? A back-of-envelope illustration follows this list.)
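
Here is the promised back-of-envelope illustration. The numbers are mine, not Jones’, but they sit inside his 30%-70% band: a team that works 1,500 hours while the tracking system records only 1,000 looks 50% more productive than it really is, and every estimate derived from that data inherits the error.

```python
# Back-of-envelope illustration (my numbers, not Jones'): how unrecorded
# effort, e.g. unpaid overtime, distorts productivity data and estimates.

function_points = 100     # size of the delivered work
actual_hours    = 1500    # what the team really spent
recorded_hours  = 1000    # what the tracking system captured (1/3 missed)

apparent = function_points / recorded_hours   # 0.100 FP per hour
true     = function_points / actual_hours     # 0.067 FP per hour

# Estimate the next, similar-sized project from the recorded figures:
next_fp  = 100
estimate = next_fp / apparent                 # 1000 hours promised
likely   = next_fp / true                     # 1500 hours in reality

print(f"Estimate {estimate:.0f}h, likely {likely:.0f}h, "
      f"built-in overrun {likely / estimate - 1:.0%}")   # 50%
```

Missing a third of the real effort (well inside Jones’ 30%-70% range) builds a 50% overrun into every subsequent estimate, which is why he calls the data dangerous rather than merely inaccurate.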
Projects
  • “More than half of large projects are scheduled irrationally” - a delivery date is set outside the development group, without reference to its capabilities.
  • Over 50% of large (more than 10,000 function points) projects are cancelled and the rest are almost always late and over budget.
  • In 2008 at least 25% of new projects used some elements of Agile.
  • Maintenance is now the dominant form of software development and is likely to stay that way.
  • There are several Y2K-like problems likely to occur in the next 40 years.
  • Even waterfall projects overlap phases - typically 25% of any phase is incomplete when the next starts. This rises to 50% when the handover from design to coding is considered.
  • It is even difficult to determine when a project really starts - less than 1% have certain start dates - and 15% have ambiguous end dates.
Outsourced & Offshore
  • Outsourced software development companies are generally better than their clients and have a better record of delivering large systems.
  • Offshore work tends to be less productive than onshore work in the US - that does not mean it is not cost effective, but it is less productive in terms of function points per person per time period
  • Inflation and other cost changes mean that by 2015 the economic advantages of offshoring development work are likely to be gone. (Considering that it takes time to transfer work and for offshore teams to become productive, it is likely that very soon it will not make sense to send work overseas.)
  • Worryingly, Jones reports that 75% of companies have deficient requirements practices. Since offshore work is particularly dependent on good requirements, I would expect offshore projects to be more troublesome in this regard.
Standards and Quality
  • Jones finds no solid evidence that ISO 9000 improves quality or the ability to deliver; almost the opposite: it increases the cost of projects because it increases the amount of paperwork. Ironically, some ISO 9000 advocates have defended the approach to him on the grounds that ISO 9000 is not designed to improve quality! (Sounds like many people have been misled over the years.)
  • Military systems are the largest systems in the world, some over 350,000 function points. They are also the most expensive and usually paperwork driven.
  • Department of Defense (US) standards do not seem to improve software quality, they have sometimes been impediments and have certainly raised costs.
  • Microsoft, Oracle and SAP are singled out as having very poor quality control.
  • Interestingly, he also suggests that once 25% of a large COTS application needs to be customised, it is less expensive to build the same application from scratch.
Jones says that as of 2008 there were 700 known programming languages, with about one a month appearing. The average life of a language is 15 years, but the average life of a major software system is 20 years. These figures might be a little misleading: C is now nearly 40 years old, while other languages survive only a few years. But the point is clear: there are lots of dead and orphaned languages out there (over 500, Jones thinks).

As noted before, most schedules are planned around an externally determined date. Jones gives this as the number one reason for project failure; number two is excessive schedule pressure, which is very similar. Later he suggests that some proposed schedules are so bad as to constitute professional malpractice.

This is interesting. On the one hand, businesses need those deadlines. On the other, if projects were left to run their natural duration, would the failure rate be so high?

He also comments that US development teams tend to be strong on technical skills but are let down by weak management and organisational issues. Later in the book he attributes much of this to Western management’s preoccupation with cost control, which he contrasts with Eastern concern for quality.

This argument seems reasonable but I’m not sure it stands up to analysis, although it might explain why Scrum can be “successful” even when technical practices are ignored.

In my experience US and European teams don’t always have the technical skills they should have - certainly, talk to Steve Freeman or Jason Gorman and you will hear some horror stories. And while India has more CMM-certified organisations than anywhere else, my personal experience is that, although India has some very good developers, the IT bubble has attracted not just second-rate developers but third-rate ones as well. It’s also worth pointing out that Japan isn’t a software powerhouse; even Toyota struggles to apply Lean thinking to software.

Jones believes, from his studies, that management is the greatest contributor to success and failure at the project and company level. In some ways that is reassuring: it shows that there might be something in good management.

Interestingly, Jones also supports one observation I have frequently made myself: when redundancies occur, Test and QA staff are usually the first to be let go, and this is counterproductive in anything except the very short run.

I’ll continue reading and maybe blog some more data. In general, Applied Software Measurement has a tendency to repeat itself, and I wish Jones used more graphs for his data, but I still think it is worth reading and I recommend it.
