Ken Frith (ken.frith(at)vega.co.uk)
Fri, 29 Sep 2000 11:09:35 +0100
I have been watching, over the past few months, the many erudite (and fascinating) discussions on various aspects of ensuring and maintaining software integrity. I have, however, noticed that we sometimes tend to avoid answering 'difficult' questions (possibly because they _are_ difficult). In particular, I notice that nobody has mentioned the problems of using so-called 'intelligent' software in safety-related systems.

I raise this because several of our clients seek to use varying degrees of machine intelligence - from KBS to neural nets - and have come to us for advice on how to implement them in safety-related systems. Particular current problems relate to data fusion techniques for sensors in combat systems.

So far we have usually resorted to the stock answer "you don't - at least not for safety-critical functions", but this becomes increasingly difficult to enforce, even if your legal and moral ground is sound. Customers are increasingly pleading the need for additional functionality, and for utility to take precedence over safety (!!).

The thought of having to apply formal proofs to intelligent systems leaves me cold. How do you provide satisfactory assurance for something that has the ability to change itself during a continuous learning process? I can only assume that one would resort to black-box testing, with all its inherent shortcomings and uncertainties - in particular, a black-box test would apply only to the version tested, and not to subsequent evolutions, there being no adequate facility for change management. (A rough sketch of what I mean by pinning a tested version appears at the end of this message.)

My fear is that the longer we ignore this problem, the more likely it is that users will simply ignore the safety community and press on regardless (precedents from US naval combat systems and commercial operating systems??). Can anyone offer pragmatic advice to customers who are likely to use IKBS anyway?

Personally, I think I prefer Nancy's view that we stand back from the system and deal with hazard and accident sequences from there ('...eliminate or mitigate the hazard at the system level...'), thus keeping unvalidatable systems out of the safety-critical firing line (second sketch below). But for how long can we continue to achieve this?

Does anyone have any useful views or grains of comfort?

Regards

Ken (ken.frith(at)vega.co.uk)
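
P.S. Two rough sketches, for concreteness. First, the change-management point: the only defence I can see for black-box evidence is to fingerprint the exact learned state that was tested and refuse to run anything else. This is a minimal sketch only, assuming the learned state (rule base, weights, whatever) can be serialised to a file; all names are invented for illustration.

    import hashlib

    # Digest of the exact learned state, recorded when the
    # black-box-tested build was frozen. Placeholder value here.
    CERTIFIED_DIGEST = "<digest recorded at test time>"

    def load_certified(path):
        data = open(path, "rb").read()
        if hashlib.sha256(data).hexdigest() != CERTIFIED_DIGEST:
            # Any on-line learning since the test invalidates the
            # test evidence, so refuse to run the changed version.
            raise RuntimeError("learned state differs from the tested version")
        return data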
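
Second, Nancy's point as I read it: don't try to certify the clever part at all; certify a dumb monitor around it that keeps the system inside a safe envelope whatever the clever part produces. Again a minimal sketch with made-up names and a one-dimensional output, not a claim about any real architecture.

    # Only this monitor need be assured; the intelligent
    # component's output is treated as advisory.
    SAFE_MIN, SAFE_MAX = 0.0, 100.0   # certified safe envelope
    FALLBACK = 0.0                    # simple, verified safe action

    def safe_command(intelligent_output):
        # Deliberately trivial: a range check that can be
        # exhaustively tested, independent of how the learning
        # component arrived at its value.
        if SAFE_MIN <= intelligent_output <= SAFE_MAX:
            return intelligent_output
        return FALLBACK

The attraction is that the monitor's behaviour does not change when the learning component evolves, so the assurance argument survives retraining.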