Re: [sc] Proposed RCFA International Standard

From: Peter Bernard Ladkin <ladkin_at_xxxxxx>
Date: Sun, 22 May 2011 09:11:19 +0200
Message-ID: <4DD8B717.4030207@xxxxxx>
On 5/21/11 4:23 PM, Derek M Jones wrote:
>> .....the responsible committee of
>> their national standards organisation. However, it can be abominably
>> hard to find out what that organisation may be or which committee may be
>> responsible. That is one way in which, as Martyn proposes, the
>> standardisation process is "broken".
> Standards organizations tend not to provide much information on the web.

On the contrary: you gave us, on the WWW, the entire list of projects being considered by TC 56. The 
information that there is an international effort to standardise, for example, RCFA is thus 
nominally public and accessible.

However, to repeat, it is woefully obscure. The RCFA standardisation project doesn't turn up in the 
first 12 pages of a Google search on "Root Cause Failure Analysis".

What does turn up is a couple of hundred references to companies and "guidelines" and courses which 
will show you how to find *the* root cause of an engineering failure.

There seems to be quite a lobby there for an obviously mistaken belief. And where there is a lobby, 
someone can try to standardise that belief, and they are obviously trying to do so, viz. the quote I 
gave earlier.

> The best approach is probably to spend a few hours on the telephone
> working your way around your local standards bureaucracy.

How does one know what question to ask? "Hello, here is Peter Ladkin again with my weekly call 
asking whether someone is thinking of instituting a standardisation effort on the following topics 
on which I am expert......" Given the thousands of engineers in Britain with true expertise in, say, 
half a dozen topics each, that is probably O(10^4) phone calls a week; at, say, an average of 6 
minutes each, that gives O(10^3) hours, which is O(10^2) people just to man the telephones for this 
alone. That doesn't seem to me like a practical modus operandi.
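For what it's worth, the estimate above can be checked in a few lines. This is only a back-of-envelope sketch: the call volume and call length are the rough figures from the text, and the 40-hour working week is my assumption, not part of the original.

```python
calls_per_week = 10_000    # O(10^4) weekly calls, per the estimate above
minutes_per_call = 6       # assumed average call length

# Total telephoning load per week, in hours: O(10^3).
hours_per_week = calls_per_week * minutes_per_call / 60
print(hours_per_week)      # 1000.0

# At a nominal 40-hour working week, the standing staff needed just to
# man the telephones: a few tens of full-time people at minimum.
staff = hours_per_week / 40
print(staff)               # 25.0
```

Even under these charitable assumptions, a permanent staff of this size solely for speculative enquiry calls is clearly not a workable process.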

So if that is the "best approach", then that is further proof that the process is broken.

>> A third way in which the process is "broken" is that any meagre
>> influence I might have (have had) on the German committee has taken
>> quite a lot of effort on my part. I have written papers with less. That
> This is not "broken", it is how the process should work.

It is made hard for someone like me, expert in causal failure analysis and known to be so, to engage 
in a critique by which I am trying to improve the state of engineering standards, and to have that 
critique evaluated (and, I hope, accepted) by peers. Under what construal of "should work" is this 
situation any advantage at all to anyone who cares about good standards?

Compare. A technical paper gets submitted for publication. Currently the process works as follows. 
It is routinely assigned to an editor, who actively attempts to find out who else in the world is 
expert in these matters, and actively to solicit their views on the paper.

Imagine now that it worked as standards work. The editor puts the paper on his or her list, and waits 
for people to discover it and volunteer to review it. Such a process would be hugely vulnerable to 
abuse. People with nothing else to do could devote their time to finding out what papers are 
available for review and writing critiques - the more the merrier - and thereby having influence, 
possibly widely exceeding their intellectual capacity, over what is published and what not.

I can't imagine anybody who cares about the scientific quality of publications advocating such a 
process. It is so obviously wide open to abuse. Why should it be different with standards?

As anyone who has worked in the review business will know, the major problem is finding appropriate 
people (that is, people who are expert and whose views on the work can be regarded prima facie as 
scientifically trustworthy) who will devote the time and effort, to a deadline, to review a paper. 
It is not a question of morals or intent, it is simply a practical question of behaviour. And 
reviewing a paper is a one-time thing, whereas working on a standard is a persistent devotion of 
time. Nancy has eloquently pointed out the result, and I note she has experience in this matter 
widely exceeding that of most other people on this list, including myself and you.

> Standards are written by the people who do the work.

This statement is a tautology but I assume you mean it to indicate rather more than that.

Let me try to interpret. People get interested and engage themselves in standards writing, without 
explicit quality control, and you think this is appropriate. I assume you think that technical 
quality is an emergent property of this process.

Well, to many of us, it obviously isn't an emergent property. And there is no general reason to 
expect it would be. Imagine a similar situation. You want to build a house. You advertise in the 
paper and get lots of people willing "to do the work". They come along and pile brick upon brick and 
put a roof on and you have your house. That's the way it is done in lots of countries in the world. 
And along comes a magnitude five quake and many of them collapse with hundreds dying. Turkey, 
Armenia, China, and so on. But not in California. I've been through plenty of magnitude 5 quakes, 
indeed a magnitude 7+, which broke a large bridge (but only one of four large ones in the region). 
Why not? Quality control. You can't just get people to pile bricks up. Indeed, you're mostly not 
allowed to use just bricks in California. Someone with expert knowledge (rather, many - an 
architect, as well as an engineer and a geologist) must review plans and opine objectively that your 
house is going to hold up. And the houses do! Unlike those in places without this quality control.

Now consider the case at hand. There is obviously a large group of people who believe the trope that 
there is just one root cause of any failure - twelve pages worth, maybe more, of Google references 
to all of them who will take your money to find "it" - and all reinforce each other's belief in 
"the" one (true) root cause. Enough to form self-appointed committees to standardise this trope, 
probably all over the world.

But the trope is wrong. More than that, it is *obviously* wrong to anyone who has made any study of 
the nature of causality - philosophers, statisticians, natural scientists, social scientists, the 
chairpeople of Royal Commissions or Presidential Commissions appointed to investigate severe 
accidents of significant social import, and even some engineers who have performed failure analysis 
such as myself, Nancy, John McD, Martyn Thomas, Rob Alexander, Marta Peralta, Tony Foord, Paul 
Stachour, Mike Holloway, Andy Loebl, Scott Jackson, James Inge, Jeff Joyce, Jan Sanders, Bernd 
Sieker, Jens Braband, Oliver Lemke, Tecker Gayen, Uli Maschek, Babette Fahlbruch, I Made Wiryana, 
and indeed anyone who took part in any of the Bieleschweig Workshops dealing with Root Causal 
Analysis. How does one ensure this mistake is detected and corrected?

I think the answer is obvious, because there is only one way known to work. Some of those people who 
know and can prove that this trope is wrong must be engaged at some point in the process to make 
their views known and to ensure that these views are taken into account.

The current standardisation process lacks such a formal step. It is only by taking such a step that 
one stands a chance of moving outside the opinion of the group of believers in the false trope.

The harder problem is to detect when something is wrong but everybody believes it nonetheless. This 
problem occurs regularly in the most significant mathematics. Andrew Wiles proved Fermat's Last 
Theorem, and he was in a better position than anyone else in the world to know that he had done so. 
Everybody believed it. Everybody. Except he hadn't. The gap was found by painstaking reconstruction 
of every single step of his proof. And he didn't fix it by himself. Engineering standards are so far 
away from such intellectual rigour that one might consider it a major ethical problem - and some of 
us do!

>> Simplest way to fix the process would seem to me procedurally to involve
>> recognised scientific experts in any area of proposed standardisation.
>> This - obviously - doesn't always happen by itself.
> This would prevent the standards' creation process from being open to
> all interested parties,

I don't see how requiring explicit quality control would prevent anything except poor standards. If 
that is the best argument you can come up with for why proposed standards should not be reviewed by 
internationally-recognised scientific experts, then I can consider my point to have been well taken.


Peter Bernard Ladkin, Professor of Computer Networks and Distributed Systems,
Faculty of Technology, University of Bielefeld, 33594 Bielefeld, Germany
Tel+msg +49 (0)521 880 7319
Received on Sun 22 May 2011 - 08:11:28 BST