Re: [sc] Origins of 10**-4 SW Failure Rate


From: Bev Littlewood (b.littlewood(at)
Date: Tue 26 Nov 2002 - 10:17:36 GMT

At 1:29 pm -0500 25/11/02, Nancy Leveson wrote:
>       Perhaps this question has already been discussed
>       by this mailing group, but I was wondering what
>       the origin was of the common notion that software
>       has at best a 10**-4 failure rate. I seem to recall
>       that there was a paper that stated that this was the
>       best failure rate that could ever be confirmed through
>       verification testing. However, since I don't have a reference
>       on this, I can't be sure of my recollection.
>There was a paper by Littlewood and someone else who did say this.
>But the origins are really fuzzy -- basically someone got away with
>putting this in a fault tree and everyone else started following
>suit.  Of course, this is nonsense.  If this worked, then we could
>just put down 10**-6 for hardware, 10**-2 for human error and build
>all fault trees with three boxes (and then have the same risk for all
>systems).  Actually, I've seen published papers that did this :-).
>Why would all designs (including software designs) have the same failure
>rate?  (good ones, bad ones, ones you tested, ones you did not test well,
>those written by skilled programmers, those written by monkeys, etc.).
>That doesn't make any sense any more than a single failure rate for
>all hardware makes sense.
>       Also, since software doesn't fail in the conventional sense
>       associated with mechanical and electrical components,
>       I'm assuming that this was a measure of failure to achieve
>       required system behavior due to software specification and
>       design errors.
>       Perhaps I'm wrong about this assumption.
>Software reliability is defined as not satisfying the specified
>requirements.  Therefore, software specification is not included --
>only coding errors.


I'm sorry to say that I think your description above is a travesty of 
the scientific position on this issue. Readers interested in an 
accurate account should go to the original papers: e.g.

R. W. Butler, G. B. Finelli, 'The infeasibility of quantifying the 
reliability of life-critical real-time software', IEEE Trans. Software 
Engineering, vol. 19, no. 1, pp. 3-12, 1993;

B. Littlewood, L. Strigini, 'Assessment of ultra-high dependability 
for software-based systems', Communications of the ACM, vol. 36, 
no. 11, pp. 69-80, 1993,

or for a more 'popular' account:

B. Littlewood, L. Strigini, 'The risks of software', Scientific 
American, November 1992, pp. 62-75.

The work reported in these papers addresses the limits to what might 
be claimed for the dependability of software, based upon feasible 
quantities of evidence. In both Butler and Finelli's work, and in our 
own, the emphasis is upon a rigorous and formal approach to this 
problem. Speaking only for myself, an important motivation was to 
move the debate away from the hand-waving informality and special 
pleading that seemed to characterise much work on this problem at the 
time. Sadly, we do not seem to have succeeded.
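For readers who want a feel for the arithmetic behind these limits: 
the following is a minimal sketch (not taken from the papers above; 
the function name and the simple Bernoulli model of independent, 
operationally representative demands are my own illustrative 
assumptions). It computes how many consecutive failure-free demands 
one would need to observe in testing before claiming, at a given 
confidence, that the per-demand failure probability lies below a 
stated bound.

```python
import math

def demands_for_claim(p_bound, confidence=0.99):
    """Failure-free demands needed to support the claim, at the given
    confidence, that the per-demand failure probability is below
    p_bound.  Assumes independent demands drawn from the operational
    profile (a deliberately simplified Bernoulli-trial model).

    If the true failure probability were p_bound, the chance of seeing
    n failure-free demands is (1 - p_bound)**n; we require that chance
    to fall below 1 - confidence.
    """
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - p_bound))

for p in (1e-4, 1e-6, 1e-9):
    print(f"bound {p:.0e}: {demands_for_claim(p):,} failure-free demands")
```

Under these assumptions, a 10**-4 bound already needs tens of 
thousands of failure-free demands, and a 10**-9 bound needs billions; 
this is the sense in which feasible quantities of testing evidence 
cannot support ultra-high dependability claims.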

Bev Littlewood
Professor of Software Engineering and Director
Centre for Software Reliability
City University, London EC1V 0HB

Phone: +44 (0)20 7040 8420  Fax: +44 (0)20 7040 8585

Email: b.littlewood(at)
