HISE Seminar Series


Tuesdays at 12.30, room CSE/082


If you are interested in delivering a seminar, please email Ana Cavalcanti


Summer Term 2014


22nd April: Frank Zeyda

29th April: Concurrency Workshop

6th May: Jim Woodcock

20th May: Tim Kelly

27th May: Alvaro Miyazawa

3rd June: Simon Foster

10th June: Richard Hawkins


Summer Term 2013


25th June: Developing a Common "Language" for Certification - practicalities and challenges - Philippa Conmy and Katrina Attwood

The OPENCOSS project has developed a meta-model representing many assurance concepts within a Common Certification Language (CCL). In this talk we present the CCL, and discuss the practicalities of its application to support cross-domain re-use of assurance/certification data. We also address wider "philosophical" questions as to the desirability and potential challenges of re-use.

18th June: Creating Environments to Give Robots a (Simulated) Bad Time - Rob Alexander

I'll be talking about an interesting project I did with an MEng student last year and why this kind of work is methodologically compelling.

11th June: Refining SCJ Mission Specifications into Parallel Handler Designs - Frank Zeyda

Safety-Critical Java (SCJ) is a recent technology that restricts the execution and memory model of Java in such a way that applications can be statically analysed and certified for their real-time properties and safe use of memory. The work I am going to present is towards the development of comprehensive and sound techniques for the formal specification, refinement, design, and implementation of SCJ programs, using a correct-by-construction approach. As part of this, I will give an account of laws and patterns that are of general use for the refinement of SCJ mission specifications into designs of parallel handlers used in the SCJ programming paradigm. For this, we use a combination of languages from the Circus family, supporting state-rich reactive models with the addition of class objects and real-time properties; overall, it is a first step towards eliciting laws of programming for SCJ.

4th June: What happened at Deepwater Horizon? - Ralph Eastwood

The Deepwater Horizon blowout that occurred in 2010 resulted in the largest accidental marine oil spill in history. Due to the extensive legal proceedings against BP and the associated companies implicated as contributing to the disaster, the spill is unique in that it has publicly available accident reports with very detailed accounts of the events leading up to the disaster. In this talk, I will provide a simplified summary of the events that occurred and the failures that eventually led to the disaster. I will conclude by explaining how this relates to my research into introducing Intelligent Decision Support for Safety-Critical Applications.

21st May: ISO26262 - The Automotive Functional Safety Standard - Roger Rivett

ISO26262 - Road vehicles: Functional safety - is the automotive version of the generic safety standard IEC61508. Although based on IEC61508, ISO26262 differs in many aspects, particularly in its approach to assessing and managing risk. This talk will briefly summarise the content and structure of the standard and then present a personal view of how meeting its requirements can be used to argue that a "safe" product has been produced.

7th May: Isabelle/UTP: Mechanised Theory Engineering for Computer Scientists - Simon Foster

In this talk I will introduce Isabelle/UTP, an Isabelle-based implementation of Hoare and He's Unifying Theories of Programming, which has been developed within the context of the COMPASS project. I will describe how the library works in terms of its core value, predicate and relation models, and show how we encode the denotational semantics of the basic operators of imperative programming and the associated laws. The talk will culminate in a brief tutorial on the mechanised theory of Designs (imperative programs with total correctness). I will outline how this library can be used to give a semantics to real programming languages, as exemplified by our effort to mechanise VDM and CML (the COMPASS Modelling Language).

30th April: Searching for Hazards in Complex Air Traffic Systems: the Results of the ASHICS Project - Rob Alexander

I'll present the results of the ASHICS project that I've been working on with Kester Clegg over the past two years. I'll talk about some methodological problems we hit, and lead a discussion about what we might do about them.

Spring Term 2013


5th March: Designs with angelic nondeterminism - Pedro Ribeiro

The UTP of Hoare and He is a predicative framework of relations suitable for the rigorous study of different programming paradigms. It promotes the reuse of results through the linking of theories. Particular aspects of programs can also be studied in isolation.

In the UTP, the theory of designs provides not only a model for terminating programs (where pre and postcondition pairs can be specified), but also a basis for characterising state-rich concurrent and reactive programs. These are programs whose interactions with the environment cannot simply be described by relations between inputs and outputs. In this context, process calculi such as CSP and Circus have been given semantics in the UTP through the theory of reactive designs.

Angelic nondeterminism is a useful specification construct that allows for a high degree of abstraction. It has traditionally been studied within the refinement calculus. Previous work has proposed a theory of angelic nondeterminism in the UTP through a predicative model of binary multirelations. Such models, however, can only capture terminating programs. In this work we propose a new UTP theory of designs with angelic nondeterminism, with the long-term aim of developing a model for process calculi.
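
As background (the standard refinement-calculus reading of the two choice operators, not the new theory proposed in this talk): in a weakest-precondition semantics, demonic and angelic choice are dual, as the following LaTeX sketch shows.

    % Demonic choice: the postcondition \psi must be achievable
    % whichever branch is taken.
    wp(P \sqcap Q, \psi) \;=\; wp(P, \psi) \land wp(Q, \psi)
    % Angelic choice: it suffices that some branch can achieve \psi.
    wp(P \sqcup Q, \psi) \;=\; wp(P, \psi) \lor wp(Q, \psi)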

26th Feb: Contracts in a State-rich Timed Process Algebra - Kun Wei

Circus is a comprehensive combination of Z, CSP and Morgan's refinement calculus, so that it can define both data and behavioural aspects of a system. Over the years, Circus has developed into a family of process algebras for specification, programming and verification, combining Z, for data modelling, with CSP, for behavioural modelling. Circus has a well-defined semantics based on Hoare and He's Unifying Theories of Programming. A refinement calculus has enabled its practical use for reasoning about implementations of control systems in a variety of languages, including Ada and Java. The work presented here enables the use of pre and postconditions for reasoning in the context of Circus Time, the timed version of Circus. It is based on constructs of Timed CSP, but takes advantage of the complete lattice defined by its UTP theory to cater for deadlines that capture restrictions on the environment. We present a new UTP theory that builds on the existing semantics of Circus Time by providing a simpler mathematical model for capturing observations of traces and refusals, and more rigorous operators for describing behaviours of processes. For example, we change the definitions of prefix and external choice by weakening their preconditions and strengthening their postconditions. Most importantly, all operators of our theory are defined in a way that allows the calculation of pre and postconditions of timed processes. This not only supports contract-based reasoning about models, but also simplifies proofs of Circus Time laws. We illustrate this point by exploring a comprehensive set of laws for the deadline operators.
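
For readers less familiar with the UTP, a design packages a precondition and a postcondition into a single predicate over the observation variables ok and ok'. The following LaTeX gives the standard definition and the standard design-refinement law (general UTP background from Hoare and He, not formulas specific to the Circus Time theory above); the law explains why weakening preconditions and strengthening postconditions is a refinement.

    % A design: if started (ok) with precondition P satisfied, the
    % program terminates (ok') having established postcondition Q.
    P \vdash Q \;\triangleq\; (ok \land P) \Rightarrow (ok' \land Q)

    % Designs are refined exactly by weakening the precondition and
    % strengthening the postcondition (brackets denote universal
    % closure over all variables):
    (P_1 \vdash Q_1) \sqsubseteq (P_2 \vdash Q_2)
      \;\Leftrightarrow\; [\,P_1 \Rightarrow P_2\,] \land [\,P_1 \land Q_2 \Rightarrow Q_1\,]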

Autumn Term 2012


13th Nov: Concurrent Library Correctness for Multicore C / C++ - Mike Dodds

I will discuss the problem of specifying concurrent library code on multicore processors. Most multicore systems permit relaxed-memory effects, where different threads can observe different, apparently contradictory orders of events. Concurrent library code must use synchronisation to avoid unwanted relaxed effects. However, to avoid expensive synchronisation, libraries may allow clients to observe some relaxed-memory effects, and library specifications must capture these. In this talk, I will focus on the relaxed-memory primitives offered by the new ISO C11 / C++11 standard. I will give an informal introduction to relaxed memory and the C11 model, and I will describe a criterion for abstraction (based on linearizability) which can relate library methods to relaxed specifications. This new criterion represents the first sound technique for specifying C11 / C++11 code.
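
As a concrete illustration of a relaxed-memory effect, here is a minimal sketch of the classic "store buffering" litmus test (my own example, not one from the talk): under C11 relaxed atomics, this program may end with r1 == 0 and r2 == 0, an outcome that sequential consistency forbids.

    /* Store-buffering litmus test using C11 relaxed atomics. */
    #include <stdatomic.h>
    #include <pthread.h>
    #include <stdio.h>

    atomic_int x, y;   /* zero-initialised shared flags */
    int r1, r2;

    void *thread1(void *arg) {
        atomic_store_explicit(&x, 1, memory_order_relaxed);
        r1 = atomic_load_explicit(&y, memory_order_relaxed);
        return NULL;
    }

    void *thread2(void *arg) {
        atomic_store_explicit(&y, 1, memory_order_relaxed);
        r2 = atomic_load_explicit(&x, memory_order_relaxed);
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, thread1, NULL);
        pthread_create(&t2, NULL, thread2, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        /* The C11 model permits r1 = 0 and r2 = 0 here. */
        printf("r1=%d r2=%d\n", r1, r2);
        return 0;
    }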

6th Nov: Modelling & Verifying FreeRTOS with Z/Eves - Shu Cheng

FreeRTOS is an open-source real-time microkernel that receives more than 70K downloads a year. We specify the behaviour of FreeRTOS in the Z notation, and we verify its consistency using the Z/Eves theorem prover. This includes a precise statement of the preconditions of all API commands. During the verification, we also detected a problem in FreeRTOS itself. In this talk, I will present the overall structure of our specification but, due to limited time, will focus on its first part. The problem we found in FreeRTOS will also be presented.

30th Oct: A technical introduction to TM1 in the context of formal definition and automated test generation - Duncan Pearson

TM1 is an in-memory online analytic processing (OLAP) database that has a flexible cross-dimensional calculation definition language and that, in contrast to many other OLAP engines, allows direct updating of the raw data. It is IBM's lead product in the Financial Performance Management segment, and its writeability allows it to be used for planning, budgeting and forecasting as well as the more traditional static multi-dimensional analysis.

The runtime characteristics of TM1 are relatively simple to describe informally in terms of the structure of the data and the application of simple declarative rules defining data within particular regions of the structures as derived from data in other regions. However, while simple, it is also flexible, and this flexibility allows for the generation of very complex models. This makes it difficult to manually generate sufficiently comprehensive tests to give high confidence in the integrity of an implementation. This is important both in the regression testing of developments to the existing engine implementation and in testing the integrity of any new implementation. Additionally, in the context of concurrent writes to and reads from the database, a certain consistency of calculation is expected. Any comprehensive test suite would be required to cover this aspect, with descriptions of all essentially different concurrent interactions and the expected behaviours in each.

Obviously, the full set of potential tests is infinite. However, it should be possible to generate a sequence of tests of increasing complexity and coverage, such that as more of these tests are passed, confidence in the implementation increases. There needs to be discussion of what constitutes a sufficiently comprehensive set of tests, and in particular the criteria for deciding that two TM1 models, or two interactions with them, are essentially similar.

It is hoped that the generation of a formal definition of TM1, in a language such as Z or Circus, could form a context in which such questions might be addressed and could potentially be the basis for automatic generation of those tests.

The talk will introduce TM1 with a comprehensive description of its multi-dimensional structure and its calculation definition language. This will be illustrated by simple examples in the product showing, among other aspects, the data aggregation effect of hierarchically structured dimensions and the application of calculation statements to overlapping rectilinear regions of the dimensional space. The talk will then cover the expected behaviour of the system in the context of concurrent reads and writes of cell data, with particular emphasis on what constitutes consistency with respect to the system definition.

Secondly, we will discuss testing, and in particular how to decide which models and interactions to test, whether those models and interactions cover all the characteristics of the system, and which tests we can ignore as being essentially duplicates of others.

The final part will be a discussion of the feasibility of defining the characteristics of TM1 using formal definition methods and whether it might be possible to use such a definition to drive creation of a set of comprehensive tests of the system, both for regression testing of developments of the existing implementation and for integrity testing of new implementations.

16th Oct: The Cardiac Pacemaker Case Study and its implementation in Safety-Critical Java and Ravenscar Ada - Neeraj Singh

The cardiac pacemaker has emerged as a case study for evaluating the effectiveness of techniques for the verification and design of embedded systems with complex control requirements. This talk reports on the experiences of using this case study to evaluate the concurrency model of two programming language subsets that target safety-critical systems development: Safety-Critical Java, SCJ (a subset of the Real-Time Specification for Java), and Ravenscar Ada (a subset of the real-time support provided by Ada 2005). Our conclusions are that for SCJ, the lack of explicit support for watchdog timers results in a software architecture where the time at which significant events occur must be saved, and polling must be used to detect their absence. Although this results in a less efficient system, the scheduling implications for the resulting software architecture are clear. In contrast, Ravenscar Ada's support for primitive timing events allows the construction of a highly optimized reactive solution. However, the timing properties of this solution are a little more complex to determine. Furthermore, the Ada solution requires a redundant task in order to prevent the program from terminating prematurely.
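
The polling workaround described above can be pictured with a small sketch (illustrative C with a hypothetical on_event callback and timeout, not code from the study): the time of the last significant event is recorded, and a periodic task detects the event's absence.

    #include <stdbool.h>
    #include <time.h>

    #define TIMEOUT_SECONDS 2        /* hypothetical deadline */

    static time_t last_event;        /* time of last significant event */

    void on_event(void) {            /* invoked when the event occurs */
        last_event = time(NULL);
    }

    /* Polled periodically, e.g. from a periodic event handler. */
    bool event_missing(void) {
        return time(NULL) - last_event > TIMEOUT_SECONDS;
    }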

Summer Term 2012


8th May: Formal Development of Cardiac Pacemaker using Refinement Approach - Neeraj Singh

Building high-quality, zero-defect medical software-based devices is a critical task, and formal modelling techniques can help to achieve this target to a certain extent. Formal modelling of high-confidence medical devices, which are highly error-prone in operation, is an international Grand Challenge in the area of Verified Software. Modelling and verifying the correctness of a cardiac pacemaker is one of the proposed challenges. We consider the formalisation of a cardiac pacemaker using an incremental proof-based development. The development is driven by refinement between an abstract model and its detailed design: a series of refinements adds functional and timing properties to the abstract system-level specification through intermediate models. The properties express system architecture, action-reaction and timing behaviour. This talk will cover elementary information about the cardiac pacemaker, its operating modes and timing properties, and the modelling approach using refinement of one- and two-electrode pacemakers. Models are expressed in the Event-B modelling language, which integrates conditions (called proof obligations) for checking their internal consistency with respect to invariants, safety properties and events. The generated proof obligations are proved using the RODIN tool, and the desired behaviour of the system is validated with the ProB tool.
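
For readers unfamiliar with Event-B, the proof obligations mentioned above include, for each event, an invariant-preservation condition of roughly the following shape (a simplified sketch of the standard INV obligation, omitting context axioms; not a formula from the talk), in LaTeX:

    % If the invariant I holds of the state v and the event's guard G
    % holds, then any state v' related to v by the event's before-after
    % predicate BA must satisfy the invariant again.
    I(v) \land G(v) \land BA(v, v') \;\Rightarrow\; I(v')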

12th June: The COMPASS base notations: CML and SysML - Alvaro Miyazawa

COMPASS (Comprehensive Modelling for Advanced Systems of Systems) is a research project aiming to extend industrial tools and practice in Systems of Systems modelling. It is based around two notations: CML (COMPASS Modelling Language) and SysML (Systems Modelling Language). The first is a formal notation based on VDM++ and CSP, and the second is a graphical notation closely related to UML. In this talk, I'll first present CML through an example and motivate the use of refinement as a central notion in CML. In the second part of the talk, I'll present our work on linking the two base notations, and how we envisage exploring this link.

Spring Term 2012


17th January: Research Priming - John McDermid

Are you full of good ideas for possible research, but unsure of how to take them further - i.e. how to develop ideas into bids, what kinds of things get funded, what sources of funding are available, how to get information about opportunities, and how to build up interest or attract a consortium or industry backer? John McDermid has agreed to use this slot to talk about 'Research Priming'. So please come along and take advantage of his experience and expertise, and hopefully kick-start a few initiatives.

7th February: Using Argumentation to Evaluate the Common Criteria - Patrick Graydon

Many people and organisations rely upon assurance standards to provide confidence in software intensive systems. For example, people rely upon the Common Criteria for Information Technology Security Evaluation to establish justified and sufficient confidence that an evaluated IT product's contributions to security threats and threat management are acceptable. Many standards contain both requirements for evidence and a discussion of the objective of those requirements. These can be reconstructed as an argument: the requirements supply evidence and the objective statements explain that evidence. I evaluated the Common Criteria by capturing and critically reviewing its implicit argument. The evaluation revealed 121 issues with the standard, ranging from vagueness in its text to failing to require evidence that would substantially increase confidence. In this talk, I will present the method, the findings, and some thoughts about how to repair the standard.

21st February: Using search based software engineering to develop robust safety critical software - Philippa Conmy

This talk will describe some research that looks at using search based software engineering to produce logically diverse solutions to control problems. A case study looking at the well-known inverted pendulum problem is presented, along with a discussion of the difficulties of resolving the tension between solutions that are both good and diverse within the fitness function.

28th February: What doth a good safety analysis method make? - Oleg Lisagor

Oleg will lead a discussion on how we can evaluate a safety analysis technique. His starting premise is that safety analysis methods cannot be evaluated as a whole, but rather have to be decomposed into smaller constituent parts, which can be assigned clearer quality attributes. Using Fault Tree Analysis as a concrete example, we will try to establish what the quality attributes of the different "parts" of FTA are, and what approaches can be used for evaluating or arguing about individual attributes.

The overall goal is to have a seminar that combines some introductory slides and relatively free discussion in equal proportions.

6th March: Reversing Babel? Towards a Common Language for Certification - Katrina Attwood

The recently-started EU-funded OPENCOSS project seeks to provide "a common certification framework to facilitate the reuse of safety arguments, safety evidence and contextual information about system components" across the railway, avionics and automotive domains and thereby to "make certification more cost-effective, precise and scalable" (OPENCOSS Project Description of Work, 2011). One of the enabling technologies for this work will be the 'Common Certification Language' (CCL), which will provide a synthesis of the requirements and terminologies used in different industry sectors and will inform intra- and inter-domain reuse of assurance artefacts.

This seminar will address the motivation for and challenges posed by this work, drawing partly on well-established theoretical work in semiotics and translation studies. Possible modelling approaches will be outlined, as will the potential use of the certification language in safety cases.

Work on the CCL is at a very early stage. It is hoped that the seminar will generate discussion within HISE, from which we can draw out some high-level requirements for the modelling approach and the usage of the language.

13th March: Evolving Robust Networks for Systems-of-Systems - Jonathan Aitken

This seminar will present some recent work investigating new techniques for evolving configurations for Systems-of-Systems. The focus of this work is to lay out a framework to design and model (using the Network Description Language) Systems-of-Systems networks which can provide strict guarantees about performance, using simulation and mathematical techniques such as Network Calculus.

As part of this talk I'll give an insight into some of the technologies that we are using in the initial stages of this work. This will include some initial results (on a simple demonstrator system) showing how we have coupled evolutionary techniques into the mix to enable automatic network design with hard guarantees. I'll also outline some of our ideas about how we can extend the work from here.

Autumn Term 2011


11th October: Circus Time Action Model for SCJ - Kun Wei

As a member of the Circus family of specification languages, Circus Time is applied to capture timing behaviours and properties in the development and verification of Safety-Critical Java programs. However, the old version of Circus Time is not sufficient for this task, since it lacks a key timed operator: deadline. In this talk, I will therefore introduce how we extend Circus Time to accommodate deadline operators and other useful timed operators. Verifying Circus and Circus Time programs is always hard work, so I will also introduce two threads of preliminary work on it. One is to collapse parallelism by developing a number of algebraic laws; the other is to model-check Circus programs by translating them into the input language of SAL (a model checker for LTL and CTL).

18th October: Model Driven Integration - Alek Radjenovic

An opportunity, and a potential value added, lies in providing support for model driven integration. This is particularly important when models are described using different platforms such as UML, SysML, MODAF, Matlab Simulink, and many others. The importance of detecting system integration problems very early on, during the system design phase - before any new code is written and before there is a buy-in from a supply-chain software manufacturer - cannot be overestimated. I will draw attention to the significance of combining system components that were not created by the same supplier or using the same platform.

The key technical requirements for this problem are twofold. Firstly, there is a need to be able to bring together system components at the model level in order to be able to reason about their compatibility, at the structural as well as the behavioural level. And secondly, a tool framework (for the above) that is compatible with the various modelling technologies in widespread use, as well as being extensible and scalable to provide future capability, would be hugely beneficial.

I will give an overview of the approach that was proposed (as a solution to the problems stated above) during the recent work carried out for SSEI.

25th October: The Safety-Critical Java Mission Model: A Formal Account - Frank Zeyda

Safety-Critical Java (SCJ) is a restriction of the Real-Time Specification for Java to support the development and certification of safety-critical applications. It is the result of an international effort from industry and academia to provide a version of Java that can be effectively used in the context of safety-critical real-time systems development. It is fairly new: a draft of the technology specification for SCJ was only officially published early this year. In this seminar, I will present the first formalisation of the SCJ execution model, covering missions and event handlers. The formal language we use is part of the Circus family; at the core, we have Z, CSP, and Morgan's calculus, but we also use object-oriented and timed constructs from the OhCircus and Circus Time variants. The formalisation of the SCJ mission model is a first step in the development of refinement-based reasoning techniques for SCJ, which is our main goal in the HiJaC project. I will try to give an overview of the modelling approach, and also explain some of its elements in more detail.

The talk is mostly intended as a practice run, as the work is going to be presented later in the week at ICFEM 2011. So, first, apologies for having given a somewhat similar talk already in February; secondly, I'll be very grateful for suggestions from those of you who can attend (and I promise not to overrun! ;-))

1st November: Diagnostic Monitoring in Anaesthesia using Temporal Pattern Matching - Richard Jones

The identification of temporal patterns plays an important role in many medical diagnostic applications. Template systems that identify events and landmark points directly from time-series information have been shown to work well in various applications. However, few such systems directly account for the uncertainty and vagueness often associated with medical decision-making. This seminar describes a template system that uses fuzzy set theory to provide a consistent mechanism for accounting for uncertainty in the existence of events, as well as vagueness in their starting times. The application addressed is the intelligent monitoring of patients in the operating theatre. Here, fuzzy trend templates are generated for temporal measurements taken from a patient. A specific diagnosis of a problem is made by recognising corresponding dynamic changes in some of the patient measurements. Evidence-based reasoning is used as the inference mechanism to provide belief and plausibility information on whether a certain patient condition has occurred.
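
A common building block for this kind of fuzzy timing is a trapezoidal membership function; the sketch below (my illustration in C, not the talk's code) returns the degree, between 0 and 1, to which a time t belongs to a fuzzy interval that ramps up from a to b, holds at 1 from b to c, and ramps down to d.

    /* Trapezoidal fuzzy membership: 0 outside (a,d), 1 on [b,c],
     * linear on the ramps; assumes a < b <= c < d. */
    double trapezoid(double t, double a, double b, double c, double d) {
        if (t <= a || t >= d) return 0.0;
        if (t < b)  return (t - a) / (b - a);
        if (t <= c) return 1.0;
        return (d - t) / (d - c);
    }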

8th November: Asking Questions that can be answered - Andrew Rae

Not all questions of importance in system safety or computer science have clear cut answers that could be independently replicated by other investigators. For questions such as "Is my system safe?" or "Which standard is best?" there may actually be no feasible research which could give an answer.

This is very easy to see in student projects. There are some student projects which, if they produce a result, have undeniably added to human knowledge. There are other projects where even completion produces no building blocks for further advancement.

There are other questions where any answer, however small, is at least a step in the right direction. "What are the experiences of starting safety engineers?". "Do most accidents involve hazards that have previously been identified?". "What are the common mistakes people make in risk assessment?".

There are two parts to this seminar. The first part involves thinking about what type of questions are most likely to result in answers - can we predict in advance, for instance, student projects which are capable of generating publishable results? The second part involves my experience in trying to put this into practice.

Plenty of time will be allowed for discussion and brainstorming.

22nd November: ASHiCS: Automating the Search for Hazards in Complex Systems - Kester Clegg

With increasingly complex systems to manage, safety analysts are starting to express concern that it is becoming too difficult to predict the behaviour of large complex systems, or to guarantee their safety, when part of the system is changed or placed under stress. In order to help analysts discover hazards within complex systems, we propose a new generation of tools that make use of search heuristics and simulation to uncover hazards that might otherwise be missed using traditional (manual) safety analysis.

The ASHiCS project is part of the WP-E long-term research and innovation programme managed by the SESAR (Single European Sky ATM Research) Joint Undertaking, which aims to bring gate-to-gate 4D trajectories to European airspace by 2020. The ASHiCS project is just 6 months old; however, we have already started work on an API into RAMS Plus, a fast-time air traffic control simulation software suite. This short talk looks at the approach taken, our choice of simulation software environment, and some of its capabilities.

29th November: Verifying SCJ Memory Safety - Chris Marriott

The introduction of SCJ has brought a novel programming paradigm that is designed specifically to make Java more applicable to safety-critical systems where certification is essential. Memory safety is an important property in any program; in a safety-critical system, a memory failure could have serious consequences. It is not possible to apply the existing tools and techniques for Java programs directly to SCJ; therefore, this presentation discusses what is necessary to reason about memory safety in SCJ programs. We describe a starting point that contains an abstract language and a series of rules that must be obeyed in order to guarantee memory safety.

Summer Term 2011


Thursday 21st April 11:15: Use of Models in High-Integrity Software Product Lines - Stuart Hutchesson

Academic and commercial approaches to software product line development have concentrated on the rapid instantiation of source code assets to minimise product time to market. Generative programming and model-based software engineering approaches have been suggested as effective ways of achieving this. For high-integrity software systems the instantiated product has to be accompanied by evidence that demonstrates and supports the product assurance arguments for the single product instance. This talk discusses the technical and economic challenges of using models and model transformation to develop software product lines for high-integrity applications.

17th May: A World Without Safety Standards - Philippa Conmy, Patrick Graydon and Richard Hawkins

The effectiveness of safety standards in ensuring safety is often questioned. So why do we bother having them? What would happen if we didn't? And is there a better or alternative way forward? In this seminar we will discuss these issues - please come along and share your views.

24th May: A New Approach to Creating Clear Safety Arguments - Richard Hawkins

In this seminar I will introduce assured safety arguments, a new structure for arguing safety in which the safety argument is accompanied by a confidence argument that documents the confidence in the structure and bases of the safety argument. This structure separates the major components that have traditionally been confused within a single safety argument structure. Separation gives both arguments greater clarity of purpose, and helps avoid the introduction of superfluous arguments and evidence. I will describe a systematic approach to establishing both arguments.

31st May: The Safety-Critical Java Memory Model: a formal account - Ana Cavalcanti

Safety-Critical Java (SCJ) is a version of Java for real-time programming, restricted to facilitate certification of implementations of safety-critical systems. Its development is the result of an international effort involving experts from industry and academia. What we discuss in this talk is, as far as we know, the first formalisation of the SCJ model of memory regions. We use the Unifying Theories of Programming (UTP). In this way, we enable the integration of our theory with refinement models for object-orientation and concurrency. In developing the SCJ theory, we also make a contribution to the UTP by providing a general theory of invariants (an instance of which is used in the SCJ theory). The results presented are a first essential ingredient to formalise the novel programming paradigm embedded in SCJ, and enable the justification and development of formal reasoning techniques based on refinement.

7th June: System of Systems Requirement Engineering and Guided Emergence - Kewei Yang

There are two parts to my presentation.

Firstly, I will introduce my research group, "System of Systems Requirement Modelling and Analysis", which includes four research areas: SoS Requirement Description and Specification, SoS Requirement Architecture Framework, SoS Requirement Management, and Acceptability of SoS Requirements and Assessment of SoS Architecture. I will outline some projects and research cases concerning the above areas.

Secondly, Guided Emergence in System of Systems Requirement Analysis is my new research area. Semi-autonomous agents will be introduced for steering emergent behaviour in the desired direction. We use Parallel Executive Experiment and Simulation to evaluate the SoS's plan.

14th June: Refinement-based verification of implementations of Stateflow charts - Alvaro Miyazawa

Simulink/Stateflow charts are widely used in industry for the specification of control systems, which are often safety-critical. This suggests a need for a formal treatment of such models. In previous work, we have proposed a technique for automatic generation of formal models of Stateflow blocks to support refinement-based reasoning. Here, we present a refinement strategy that supports the verification of automatically generated sequential C implementations of Stateflow charts. We discuss how this strategy can be specialised to take advantage of architectural features in order to allow a higher level of automation, and how it can be extended to support the verification of parallel implementations.

21st June: A Software Engineering Cock-Up - Ian Nussey (IBM)

This cautionary tale is of an apparently trivial software project which didn't go too well. After posing some questions, it explains what happened, mentions some useful survival tools and techniques that would certainly have made things easier, and ends with a couple of stories with enduring relevance. Something more than computer science was needed. Relevant are 'luck' and quality - cornerstones of success which are hard to define and even harder to achieve.

5th July: Just what the doctor ordered? Issues in risk communication in the healthcare sector - Katrina Attwood

In recent years, the healthcare domain has seen a considerable shift towards patient empowerment, patient responsibility and shared decision-making. Rather than simply accepting that 'the doctor knows best', a patient is encouraged to play an active role in understanding his/her condition, weighing up treatment options, managing his/her own treatment and maintaining his/her recovery. This presents several challenges, not the least of which is the communication of risk to a non-specialist audience.

In this seminar, I shall draw on theories of cognition and language to examine the barriers to public understanding of risk, and consider what types of evidence and rhetoric are likely to be persuasive (or to fail to persuade) in this context. I hope then to draw out a wider discussion of risk communication issues relevant in the context of safety-related systems.

Spring Term 2011


27th January: Reasoning about Safety Critical Java - Chris Marriott

Chris Marriott will present his literature review seminar on the work conducted so far in his PhD. He has been looking specifically into methods of reasoning about Safety Critical Java and will discuss various existing techniques related to Java, RTSJ and SCJ.

3rd February: A Simulated Annealing Approach to Task Allocation on FPGAs - Philippa Conmy

FPGAs are becoming increasingly popular for use in safety critical systems. They offer many benefits over traditional microprocessors such as predictable timing and parallel processing. These benefits are not being fully exploited due to over-estimates of the risk associated with hardware failure. This talk presents the initial results of a study using a simulated annealing approach to task allocation on an FPGA. The approach has been developed to use n-modular redundancy to achieve reliability targets, whilst attempting to minimise resource usage.
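
As a reminder of the algorithm's shape, here is a generic simulated-annealing skeleton in C (an illustrative sketch with a placeholder cost function, not the study's implementation): worsening moves are accepted with probability exp(-delta/T), and the temperature T is gradually lowered.

    #include <stdlib.h>
    #include <math.h>

    #define TASKS 16
    #define CELLS 64

    /* Placeholder cost: counts task collisions on the same cell; a real
     * fitness would model reliability targets and resource usage. */
    static double cost(const int alloc[TASKS]) {
        int clashes = 0;
        for (int i = 0; i < TASKS; i++)
            for (int j = i + 1; j < TASKS; j++)
                if (alloc[i] == alloc[j]) clashes++;
        return clashes;
    }

    void anneal(int alloc[TASKS]) {
        double current = cost(alloc);
        for (double t = 100.0; t > 0.01; t *= 0.99) {  /* cooling schedule */
            int task = rand() % TASKS;                 /* random move: */
            int old = alloc[task];                     /* re-place one task */
            alloc[task] = rand() % CELLS;
            double delta = cost(alloc) - current;
            /* Accept improvements; accept worsenings with prob e^(-d/t). */
            if (delta <= 0 || (double)rand() / RAND_MAX < exp(-delta / t))
                current += delta;
            else
                alloc[task] = old;                     /* undo the move */
        }
    }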

10th February: A Formal Account of the Mission Model in Safety-Critical Java - Frank Zeyda

Safety-Critical Java (SCJ) is a programming model and subset of classical Java that resulted from an international effort to provide a framework and platform for the development of safety-critical applications in the Java language. SCJ is built on the principles of the Real-Time Specification for Java (RTSJ) but further restricts them in order to make Java applications amenable to static checking, formal analysis and, ultimately, verification. This includes not just functional behaviours, but also real-time properties and memory-management issues.

The hiJaC EPSRC project (Ana Cavalcanti, Jim Woodcock and Andy Wellings) is concerned with the development of reasoning techniques for SCJ programs. As part of this wider agenda, I present our first efforts in formalising the execution model of SCJ in the Circus language. For this, I will give an overview of how SCJ safelets are executed by the framework, what the major components involved are, and how we translate this execution model into a Circus specification. We use Circus as our language of choice as it gives us benefits similar to those of CSP in terms of specifying concurrent behaviours and communication, but additionally supports state-rich models and data operations in Z or as guarded commands. In the seminar I will also talk about validation in FDR, and address current limitations of our existing notation, explaining the need for a new and shiny, highly integrated language based on the Circus family of languages.

The seminar mainly draws from a paper we submitted to FM 2011; a report that explains the formal mission model in detail will be made available shortly. The FDR model can be downloaded here.

17th February: Relating Assurance and Safety - Zoe Stephenson

Safety is concerned with an acceptably low rate of hazardous events and accidents from a system. Assurance is concerned with the confidence placed in any estimate of safety. This seminar is an exploration, through group discussion, of the relationships between assurance and safety, with questions such as: When does increased assurance mean increased safety? When does increased safety mean increased assurance? The aim is to expand the range of relationships that could be studied in future safety and assurance research. I hope to see many of you there!

24th February: Making An Autonomous UGV Dangerous - Black Hat Session - Rob Alexander

In this "black hat" session, we'll be trying to come up with as many ways as possible that an autonomous vehicle could be made dangerous, by actors who are malicious, misguided or just plain lazy, or just by bad luck and unforeseeable error.

The session is scheduled for about 90 minutes. We'll spend about 60 of those brainstorming attacks, then 30 on how we might counter them (e.g. through safety processes or through cleverness of design).

If you want to get the most out of the seminar itself, you might want to look at the attached document and start thinking about problems you could raise.

The notes from this session can be viewed on this wiki page.

3rd March: Assurance Based Development - Patrick Graydon

Assurance Based Development (ABD) is an approach to the synergistic construction of critical software systems and their assurance arguments. In ABD, the need for assurance drives a unique process synthesis mechanism that results in a detailed process for building both software and an argument demonstrating its fitness for use. A key element of ABD process synthesis is the success argument, an argument that documents the developers' rationale for believing that the development effort in progress will result in a system that demonstrably meets an acceptable balance of stakeholder goals. Such goals include safety and security requirements for systems using the software as a component, and time and budget constraints.

ABD has been evaluated through case study development of software for two systems. The University of Virginia LifeFlow artificial heart pump utilises active magnetic bearings and a carefully designed flow path to reduce damage to blood cells. Using ABD, my colleagues and I implemented control software for the pump's magnetic bearings in SPARK Ada and subjected this implementation to a full formal verification using PVS and other tools.

The Tokeneer enclave protection system is a National Security Agency challenge problem inspired by existing NSA systems. In 2003, Altran Praxis (then Praxis High Integrity Systems) implemented Tokeneer software using SPARK Ada and a process designed to comply with the Common Criteria's EAL5 package of Security Assurance Requirements. Using ABD and hypothesising a customer who insisted on the use of C, I implemented a portion of the Tokeneer software and created an argument demonstrating compliance with Tokeneer's Common Criteria Security Target.

17th March: Introduction to EPF, an open source framework for designing processes and process models - Malihe Tabatabaie

The report on which this talk is based can be found here. The talk will hopefully motivate discussion of examples that could benefit from this framework.

Autumn Term 2010


26th October: Specification coverage for Circus - Ana Cavalcanti

Circus is a state-rich process algebra for refinement based on a combination of Z and CSP. We have previously presented a theory of testing for Circus; it gives a symbolic characterisation of tests. Each symbolic test specifies constraints that capture the effect of the possibly nondeterministic state operations, and their interaction.
This is a sound basis for testing techniques based on constraint solving for generation of concrete tests, but does not support well selection criteria based on the structure of the specification. We will present a labelled transition system that captures information about a Circus model and its structure. It is useful for testing techniques based on specification coverage. The soundness argument for the new transition system relates the new transition relation to the Circus relational model and its operational semantics.

2nd November: A tool for the automatic generation of Circus specifications from Simulink diagrams - Chris Marriott

Previous work has described techniques to translate control law diagrams expressed in Simulink into Circus specifications. Several tools have been developed to automate parts of this translation; however, they remain standalone tools. This seminar introduces a new tool which incorporates the entire translation process and automates all possible stages. Modifications and extensions to existing tools and techniques, which have allowed a greater selection of Simulink diagrams to be expressed in Circus, will be discussed.

9th November: Evaluation and Integration of COTS in Evidence based Assurance Frameworks - George Despotou

COTS have increasingly been used in industrial practice as a means of maintaining low development costs of a product whilst offering significant capability upgrades. COTS are multi-purpose products driven by commonly used functionality. However, being general-purpose products raises certain challenges regarding their ability to be certified. Previously used (process-based) standards stipulated a process that the product needed to adhere to. This involved production of a generic set of evidence known as the certification pack (CertPack). Being the product of a generic test process, the available CertPack COTS evidence may not be sufficient or suitable to support the developers' safety claims. The challenges raised by the use of COTS in such assurance frameworks can have ramifications on a project from both a managerial and a safety assurance perspective. The paper presents an analysis of the challenges arising from the use of CertPacks and their impact on assurance and project management.

16th November: Safety Assessment Problems of China Railway Systems and MBSA methods used for Communication Based Train Control Systems - Niu Ru (Beijing Jiaotong University)

Just like aviation and aerospace, railway is a typical safety-critical system, especially its signaling system, which is used to protect trains from overspeed and collision. Our lab has been involved in several projects designing and building signaling systems for subways and main lines. We developed the first indigenous Communication Based Train Control (CBTC) system in China, the new generation of signaling system for subways, and applied it on the Yizhuang Line. Based on the experience of those projects, we identified several problems in applying safety assessment methods, and proposed a model-based method so that the behaviours of failure propagation can be analysed.

23rd November: The Evolution of Reputation - John Clark

In this seminar I will outline some work that started as an MEng student project (by John Atkinson) and has continued a little thereafter, funded by the International Technology Alliance (ITA) and EPSRC.

Assessing whether to trust an individual is hard. A variety of schemes are available. In particular we see a variety of "reputation based systems", such as the way eBay operates. Agents provide accessible information on the quality of their interactions with an individual (or otherwise provide some opinion). This can be used by agents considering further interactions. Thus, I may look at feedback on eBay before buying from a particular seller.

For tactical ad hoc networks, we may envisage agents wishing to assess the trust of other agents and might ask "What is a reasonable mechanism to use to do this?" There may be no centralised site recording interaction-related information. Furthermore there may be constraints on how long you can reasonably wait for information to be supplied. This work is the first to take a step back and form a "language of reputation", a language that allows reputation mechanisms to be stated. We evolve expressions in that language, simulate them on a Mobile Ad Hoc Network Simulator to assess how useful they are, and use evolutionary pressure to evolve high performing ones. We also put a spanner in the works by allowing self-regarding cliques to exist that habitually provide excessively glowing reports of each other's reliability when asked. This is still work in progress but it is fun.

30th November: A simulated annealing approach to task allocation on FPGAs - Philippa Conmy

FPGAs are becoming increasingly popular for use in safety critical systems. They offer many benefits over traditional microprocessors such as predictable timing and parallel processing. These benefits are not being fully exploited due to over-estimates of the risk associated with hardware failure. This talk presents the initial results of a study using a simulated annealing approach to task allocation on an FPGA. The approach has been developed to use n-modular redundancy to achieve reliability targets, whilst attempting to minimise resource usage.

7th December: Human Factors in Automatic Risk Assessment for Systems-of-Systems - Jonathan Aitken

The level of complexity in Systems of Systems (SoS) is increasing as more complex functionality emerges from the interaction of individual components. As networks become more complex it becomes more difficult for an individual to identify potential safety risks. We know, from previous accidents, that poor understanding of networks can be dangerous. SSEI Task 19 is investigating methods for automating risk assessment in such systems. This talk will focus on two areas of work already undertaken:
  • The impact of such a process on the decision making capabilities of network operators
  • Representation of Risk in a SoS


Summer Term 2009


23rd June: Escaping the Non-Quantitative Trap - Rob Alexander

Quantitative modelling and analysis is common in safety engineering, but it is often criticised. Objections include the difficulty in acquiring probabilities (e.g. for human error), the dubious assumptions often needed to manipulate them (e.g. independence of events), and the inherent uncertainty involved in making decisions based on probabilistic predictions. Clearly, poor predictions are of little value and may be dangerous. Faced with this danger, many people respond by eliminating quantities altogether. This is a trap, as we have no guarantee that the resulting model or predictions will be better; indeed, the subtlety of expression offered by numerical probabilities has been lost. This paper discusses some alternatives to the non-quantitative trap, and explores their significance for the issue of safety case assurance.

16th June: High Integrity Systems in the Health Environment - Mark Nicholson

Patient safety is an important issue to us all. In this talk I will present some of the statistics on patient safety and the ways in which these issues are being addressed for medical devices and Information Technology. The health service, through "Connecting for Health", is undertaking a massive programme of introducing IT, including electronic patient records using archetypes. I will give an overview of these approaches, what the department is already involved in, and the potential for future research in this area.

9th June: (Mis)Understanding formal specifications (Part 2) - Leo Freitas; Zoe Stephenson

Following on from last week's discussion on choosing appropriate formal methods and levels of detail, this week we present some examples of cases where, at first glance, the specifications looked correct and the tools seemed to process them correctly, but a devil was found to be lurking in the detail. Our aim is to give an idea of how such issues can arise even though the tools do their job 100% correctly. Our examples come from various domains, such as automotive suspension systems and electronic money.

2nd June: Understanding formal specifications (Part 1) - Leo Freitas; Zoe Stephenson

We use a great number of tools to create and manipulate specifications. These tools are based on some fairly routine mathematics, but the interesting parts that are shown in papers and presentations assume some background and may seem daunting to some.

In this seminar, we aim to chart out some of the main techniques used in writing specifications and show where these techniques have practical and industrial application within specific tools.

The format will be slightly different to the majority of the seminars --- an initial session to find out what topics the assembled audience are most interested in, followed by two short discussion sessions to explore the identified topics in more depth.

Consequently, this seminar should be pretty light on slides and extremely accessible to the whole audience. Even people trained in formal methods may find it useful in bridging the gap between different communities, or in learning how to present good evidence for the industrial-scale use of the methodologies presented.

26th May: An Automatic Theorem Prover for Real-Valued Special Functions (Large 35MB file) - Larry Paulson (Cambridge) (*distinguished speaker*)

Logical formulas involving special functions such as ln, exp and sin can be proved automatically by MetiTarski: a resolution theorem prover modified to call a decision procedure for the theory of real closed fields. (This theory in particular includes the real numbers with addition, subtraction and multiplication.) Special functions are approximated by upper and lower bounds, which are typically ratios of polynomials. The decision procedure simplifies clauses by deleting literals that are inconsistent with other algebraic facts; the net effect is to split the problem into numerous cases, each covered by a different approximation, and to prove them individually. MetiTarski simplifies arithmetic expressions by conversion to a recursive representation, followed by flattening of nested quotients. It can solve many difficult problems found in mathematics reference works, and it can also solve problems that arise from hybrid and control systems.
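
To illustrate the style of reasoning with a toy example of my own (not one of Paulson's), consider proving that e^(-x) < 1 for all positive x. Replacing the special function by a rational upper bound leaves a goal in the decidable theory of real closed fields:

    % Goal involving a special function:
    \forall x > 0.\; e^{-x} < 1
    % Substitute an upper bound (valid for x \ge 0, since e^x \ge 1 + x):
    e^{-x} \le \frac{1}{1+x}
    % Remaining purely polynomial goal, settled by the RCF procedure:
    \forall x > 0.\; \frac{1}{1+x} < 1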

19th May: Verified POSIX flash file store (a summary) - Leo Freitas

An international Grand Challenge aimed at software verification was established in 2003. In this talk, we present a summary of results from the second pilot project, on a verified POSIX flash file store. It includes specific fault-tolerant features required for space flight.

The POSIX project, led by the University of York, has been a collaboration with other universities (e.g. Cambridge, Oxford, MIT, UT-Austin, TCD) as well as industry and research centres (e.g. Microsoft Research, SRI International).

This week's seminar was presented at Intel HQ during a visit in April. The talk was to an audience responsible for Intel's storage technology, where the ONFi (Open NAND Flash Interface) standards body is hosted. York has been an ONFi member since 2007, and the talk aimed to summarise most of the work related to flash memory we've been doing as part of the Grand Challenge on Software Verification.

12th May: Small Script on Failure Logic Modelling - Oleg Lisagor

Model-based safety assessment is great! Especially if you use the family of approaches we call Failure Logic Modelling (e.g. Papadopoulos's HiP-HOPS and Fenelon's FPTN). The FLM approach is based on characterising dependencies between components in terms of the deviations of their behaviour, in the presence of failures, from the ideally expected behaviour. The approach is compositional, reusable, intuitive and efficient. It is suitable for the complexity and scale of real industrial safety-critical systems. Too good to be true? Well...

In this talk I will discuss some fundamental problems with the FLM approach that we have identified over the last year. I will show that the approach doesn't necessarily yield compositional or reusable models. I will use two systems - the ARP Wheel Braking System and a simplified A320 Electrical Power Distribution System - to illustrate some of the problems. On a more positive note, I will also propose possible solutions that are faithful to the FLM philosophy. Whether what emerges at the end is intuitive, efficient or neither is subject to debate (which will be encouraged in the seminar).

5th May: Applying ACARP to Software Safety - Richard Hawkins

The ALARP principle requires that all risks should be reduced to a level which is as low as reasonably practicable. For a risk to be considered ALARP, that risk must be assessed to be tolerable, and the cost of taking measures to reduce that risk must be grossly disproportionate to the benefit gained from reducing the risk. The greater the risk is, the more, proportionately, the developers of the system are expected to spend to reduce it.

When considering the software aspects of systems, the ALARP principle is difficult to apply. Most of the activities undertaken to support the safety of the software, such as testing or static analysis, are used to generate confidence in the software. As such they do not directly reduce risk, but instead provide information about that risk. In response to this, the concept of ACARP (As Confident As Reasonably Practicable) has been suggested as a way of assessing the sufficiency of the measures taken to demonstrate the safety of software. Although appealing in principle due to its relationship to ALARP, the ACARP concept has never been fully developed.

In this talk I will describe how the ACARP concept may be applied in practice, and how ACARP can be used to help the developer of safety-related software make justifiable decisions about the most appropriate techniques and approaches to adopt. In discussing the concept, I will consider the relationship between risk and confidence.


Spring Term 2009


13th January: Formally Specifying Operating Systems: Problems for the Immediate Future - Iain Craig (University of Northampton)

In this talk, I will be discussing some of the problems that have been raised while specifying operating systems. Some of the work to which I will refer is published, while some remains in manuscript. The problems I want to concentrate on are:
  • Layering;
  • Pointers and object sizes (in specification and refinement); and
  • Specification of processes using shared memory.
If there is time, I also want to make a few observations on target languages and compilers.

20th January: A Timed Model of Circus with the Reactive Miracle - Kun Wei

In this work we propose a timed model of Circus, which is a combination of CSP and Z. Based on the semantics in [1], all processes in this model are described as reactive designs. In addition, we introduce the reactive miracle to obtain a complete lattice with respect to the implication ordering. The reactive miracle gives us more power and flexibility: it lets us write system specifications that cannot properly be represented using the failures-divergences model of CSP.
[1] M. V. M. Oliveira, A. L. C. Cavalcanti, and J. C. P. Woodcock. A UTP Semantics for Circus. Formal Aspects of Computing, 2007.
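
As a schematic aside, the complete lattice referred to above is the standard UTP ordering, sketched here in LaTeX; this is a textbook reading, not material from the talk.

    % Refinement is reverse implication over UTP predicates:
    %   P is refined by Q exactly when Q is everywhere stronger than P.
    P \sqsubseteq Q \;\triangleq\; [\, Q \Rightarrow P \,]
    % The (reactive) miracle is the top of this lattice: it refines every
    % specification but is unimplementable, having no observable behaviours.
    \top \;\triangleq\; \mathit{false}, \qquad P \sqsubseteq \top \ \text{for all}\ P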

27th January: Correct synthesis for Handel-C programs (work in progress) - Juan Perna

The widespread adoption of Field Programmable Gate Array (FPGA) technology has made the compilation of Hardware Description Languages (HDLs) a very attractive research topic. In particular, the correctness of the translation that generates a programming file for an FPGA device from a source HDL program has gained significant relevance in the context of safety- or mission-critical systems. Our PhD project in this area focuses on applying the algebraic approach to generate a compiler for Handel-C (a C-like language with hardware output) that is proved to preserve the exact semantics of the source language.

In this talk we will present the results we have obtained so far towards this goal. We will begin by outlining the subset of Handel-C that will be accepted as input for our compiler. We will then introduce the reasoning language: a superset of the input language with constructs allowing us to define and prove the compilation rules. Finally, we will present the target for our compilation (normal form) and show how each construct in the language can be reduced to it. We will conclude by briefly outlining a method for mapping our normal form into hardware together with the results we still need to produce.

29th January: Virtual Aircraft Project: Model Based Safety Assessment - Felipe Ferri (EMBRAER, Brazil)

The Virtual Aircraft is a project of the Technology Research division of EMBRAER, a Brazilian aircraft manufacturer, whose intention is to establish a Model-Based Design process for supporting aircraft design. System Safety Assessment is a set of analyses whose objective is to demonstrate that a system complies with the current safety regulations. The Model Based Safety Assessment (MBSA) team of the Virtual Aircraft project aims to develop methods, procedures and tools to aid the execution of such analyses using modelling and simulation techniques. In this seminar, an implementation of the HiP-HOPS method for automatic fault tree synthesis, developed by the MBSA team, will be presented.

30th January: Mergers & Acquisitions and Implications for the IT Management - C. Adam Albayrak (Harz University, Germany)

Mergers and acquisitions take place all over the world. What does this mean for information technology (IT), especially for the management of IT? Prof. Dr. Albayrak of Harz University of Applied Sciences, Germany, will give a short introduction to the field of IT management and will talk about his experiences of merging IT organizations.

3rd February: A Formal Information-Sharing Security Policy for Xenon, a High-assurance Separation Hypervisor - Leo Freitas (HISE), John McDermott (Naval Research Lab, US)

The up-front choice of security policy and the formalism used to model it is critical to the success of projects that seek to enforce information-flow security. This paper reports on the Xenon project's choice of policy and formalism. Xenon is a high-assurance separation hypervisor based on re-engineering the Xen open-source hypervisor. Xenon's formal policy both guides the re-engineering and serves as a basis for formal modelling. Definitions of information-flow security can be difficult to apply, because in general they are not preserved by refinement. Roscoe, Woodcock, and Wulf have defined an information-flow policy that is preserved by refinement, but it is defined in a purely event-based formalism that does not directly support refinement into state-rich implementations like hypervisor internals. Circus is a combination of Z, CSP, and Hoare and He's unifying theories of programming. Circus is suited for both event-based and state-based modelling. In this paper, we show how to define an information-flow policy in Circus that is also preserved by refinement. Because Circus retains the human-readability of Z, heuristic application of the policy to re-engineering is simplified and a larger open source community can be supported. Because Circus can easily model state-rich implementations of event-based security policies, the Xenon model can support complete policy-to-code modelling in a single language.

10th February: Mechanised Translation of Control Law Diagrams into Circus - Frank Zeyda

In this seminar I will talk about some theoretical and practical results on translating control law diagrams into Circus. In particular, I will present the generalisation of an existing translation strategy and report on a tool, the Circus Producer, that fully automates the generation of Circus models from Simulink diagrams. By using the Circus language as a vehicle for defining a semantics for control laws, we may not only characterise the functional aspects of a diagram but moreover capture the concurrency resulting from the independent execution of blocks and subsystems. This, unlike existing techniques, allows for the verification of parallel implementations of control systems. To facilitate the application of a verification strategy for control laws based on Circus, we construct models that are sufficiently aligned to a given implementation so that most steps in the refinement can potentially be automated. We also build upon the experience and functionality of existing tools such as the Community Z Tools (CZT) and ClawZ. I will outline how this influenced the development of the Circus Producer tool.

17th February: A Safety Case Architecture of CBTC - Chao Liu

Communication-based train control (CBTC) is an automated control system for railways that ensures the safe operation of trains using digital radio-based communication between the various control entities that make up the system. CBTC introduces key technologies (moving block, safety communication protocols and train location) to achieve Automatic Train Operation (ATO) and Automatic Train Protection (ATP) goals; in consequence, a CBTC system is a distributed real-time safety-critical system. This talk presents a goal-decomposition approach to generating a rational architecture for GSN-structured safety arguments, which supports developing and maintaining the safety case of a CBTC system.

24th February: Probabilistic analysis of failure propagation and transformation - Xiaocheng Ge

Understanding how a system behaves in the presence of failures is important in the safety domain. Very often, a system fails due to failures of an individual component or of a small number of its components. FPTC is a component-oriented technique that identifies the failures of a system starting from an analysis of its component failures. In the seminar, I will talk about how to make the analysis quantitative, based on the FPTC algorithm, and how to use a probabilistic model checker (i.e., PRISM) to support safety assessment in our approach.
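
To give a flavour of quantitative failure propagation, here is a toy Python sketch with invented components and probabilities; the analysis in the talk itself is carried out with PRISM rather than by hand.

    # Toy quantitative failure propagation in the spirit of FPTC.
    # Each component maps the probability of a failure on its input to the
    # probability of a failure on its output; all rates are hypothetical.

    def sensor(p_in: float) -> float:
        p_internal = 0.01     # fails internally, or passes on an input failure
        return 1 - (1 - p_in) * (1 - p_internal)

    def filter_component(p_in: float) -> float:
        return 0.1 * p_in + 0.001   # masks 90% of failures, adds a small rate

    def actuator(p_in: float) -> float:
        p_internal = 0.005
        return 1 - (1 - p_in) * (1 - p_internal)

    # Propagate along the dataflow: sensor -> filter -> actuator.
    p = 0.0
    for stage in (sensor, filter_component, actuator):
        p = stage(p)
    print(f"probability of failure at the system output: {p:.5f}")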

3rd March: Putting Formal Specifications under the Magnifying Glass : Model-Based Testing for Validation - Emine Aydal

A software development process is conceptually an abstract form of model transformation, starting from an end-user model of requirements, through to a system model for which code can be automatically generated. The success (or failure) of such a transformation depends substantially on obtaining a correct, well-formed initial model that captures user concerns. Model-based testing automates black box testing based on the model of the system under analysis. This paper proposes and evaluates a novel model-based testing technique that aims to reveal specification/requirement-related errors by generating test cases from a test model and exercising them on the design model. The case study outlined in the paper shows that a separate test model not only increases the level of objectivity of the requirements, but also supports the validation of the system under test through test case generation. The results obtained from the case study support the hypothesis that there may be discrepancies between the formal specification of the system modeled and the problem to be solved, and that using solely formal verification methods may not be sufficient to reveal these. The approach presented in this paper aims at providing means to obtain greater confidence in the design model that is used as the basis for code generation.

Autumn term 2008


28th October: Arguments for Automation - Andrew Rae (external speaker)

A frequently encountered effect of innovation is the replacement of a task or role previously performed by a human operator with an automated function. Automation of operator tasks is seen as particularly desirable in the case of the "D-Cube" - dirty, dangerous or dull tasks. Both literature and standards suggest that the correct approach for addressing risk associated with automation is to identify the hazards, and eliminate or mitigate those hazards to the point where risk is As Low as Reasonably Practicable (ALARP). However, this does not directly address the question of whether automation of a task or role is appropriate in the first place. Can risk be truly claimed as ALARP without considering the option of NOT automating as a potential mitigation? Simplistically, a new set of hazards should be deemed acceptable if they present a combined risk less than the sum of the risks they remove and the benefits that accrue from the change in risk. In practice, none of these elements is simple to calculate. This talk will address the three terms of the risk balance equation in turn - generic hazards of automation, generic safety benefits of automation, and the ethical and practical issues surrounding inclusion of benefits in a risk acceptability argument.
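
The risk balance described above can be written schematically as follows; the symbols are ours, not the speaker's.

    % Automation introducing new hazards is (simplistically) acceptable when
    R_{\mathrm{new}} \;<\; R_{\mathrm{removed}} + B
    % where R_new is the combined risk of the new hazards, R_removed the risk
    % of the hazards the automation eliminates, and B the benefit accruing
    % from the change in risk. None of the three terms is simple to calculate.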

4th November: Usage Control in Collaborative Environments - Alvaro Arenas
(Science and Technology Facilities Council - Rutherford Appleton Laboratory)

Collaborative systems such as Grids enable seamless resource sharing across multiple organisations, constituting the basic infrastructure for today's data-intensive scientific collaborations. This talk presents a usage control framework for collaborative systems. Usage control is a new access control paradigm proposed by Park and Sandhu that encompasses and extends several access control models. We introduce a process description language used as a policy specification language and show its suitability for modelling usage policies for Grids.

11th November: A Verified Design of a Fly-By-Wire Control System - Alexandre Motta (Universidade Federal de Pernambuco [UFPE] - Brazil)

As a critical system, an aircraft must be safely controllable and manoeuvrable. In a Fly-By-Wire Flight Control System, the flight control surfaces are controlled using electronic devices, for weight savings and better performance and handling qualities. Moreover, most of these systems replace the connected yokes of a mechanical system with side-sticks and a priority mechanism. However, their validation/verification is rather complex and expensive, usually based on design simulation and test rigs. This work presents a verified design of an Elevator Control System, based on the CSP process algebra. Our approach uses model checking to ensure the dependability of the system design over different architectures, and has proved effective in finding defects at an early stage of development, saving time and cost.

18th November: Graph Grammars: True Concurrency Semantics and Refinement - Leila Ribeiro (Universidade Federal do Rio Grande do Sul [UFRGS]- Brazil)

Graph grammars are a generalization of Chomsky grammars, rewriting graphs instead of strings. In Computer Science we typically deal with graphs at many levels of software development, from specification diagrams to implementation structures, and therefore investigating graph grammars is an interesting topic. Some of the main application areas of graph grammars are the specification of concurrent systems and of biological systems, and recently they have also been used successfully for model refactoring. In this talk we will concentrate on the first area: specification of concurrent systems. The idea is that a graph models a distributed complex state, and rules specify local changes. This approach is data driven: when an occurrence of the left-hand side of a rule is found in the graph that represents the state, this rule may be applied. Since many rules may be applied in parallel, concurrency arises naturally. After an introduction to graph grammars and application areas, we will present a true-concurrency semantic model based on non-deterministic processes, and show a notion of refinement of graph grammars that is suitable for the definition of abstract interfaces of modules. Finally, we will discuss ongoing and future work.
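
To illustrate the data-driven idea, here is a toy Python sketch of rule application; real graph grammars work with typed graphs and morphisms rather than this naive encoding.

    # A graph is encoded as a set of labelled edges (source, label, target).
    graph = {("client", "request", "server"), ("server", "idle", "server")}

    # Rule: an 'idle' server with a pending 'request' starts 'serving' it.
    def match(g):
        for (s, l, t) in g:                       # search for the left-hand side
            if l == "request" and (t, "idle", t) in g:
                return (s, t)
        return None

    def apply_rule(g):
        m = match(g)
        if m is None:
            return g                              # rule not applicable
        client, server = m
        g = g - {(client, "request", server), (server, "idle", server)}
        return g | {(server, "serving", client)}  # local change to the state

    print(apply_rule(graph))                      # {('server', 'serving', 'client')}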

25th November: Specification and Analysis of Distributed Systems using Graph Grammars - Fernando Ribeiro (UFRGS - Brazil)

The design of distributed systems, assuring required functional properties, can be a complex task. Formal methods are frequently employed; however, they are often non-trivial to use. Due to its graphical notation and simple concepts, we have experimented with the use of Graph Grammars as a specification formalism for distributed systems. A restricted form of Graph Grammar focused on object-based systems, called Object-Based Graph Grammars (OBGG), has been used for the specification of several distributed algorithms. Analysis is performed using model checking. An extension of the approach to model distributed systems in the presence of faults is also discussed.

2nd December: Model Checking in the Early Lifecycle - Zoe Stephenson

The MOSAIC project is looking at ways of reducing test effort; one such way is to improve the fidelity of analysis in the early lifecycle so that errors of logic and timing are caught as early as possible, and one way in which this is being addressed is with model checking. The main focus of the work is not the technology itself, but finding the best way for industrial partners to make good use of the technology. This presentation describes an overall approach for early-lifecycle model checking, with some case-study results evaluated with two very different model-checking tools - SPIN and the Simulink Design Verifier.

9th December: Quantitative risk analysis using stochastic model checking - Peter Lindsay (University of Queensland) Distinguished Speaker!

This short informal talk will describe the application of stochastic model checking (more specifically, the Continuous Time Markov Chain instantiation of Prism) to support quantitative Failure Modes & Effects Analysis. The application is to the UQ version of the Industrial Press case study, so it is likely to be familiar to the HISE group. It will touch on some of the limitations of the modelling approach, but also on some of the useful capabilities of Prism.


Spring term 2008

15th January: Frank Zeyda, Formal Development of Reversible Computations

5th February: Sergio Mena, Specifying Atomic Broadcast in the Crash-Recovery Model

12th February: Italo Oliveira, Risk analysis of the airborne time-based spacing operation through a stochastically and dynamically coloured Petri net model

19th February: Katrina Attwood, Semi-Automated Domain Modelling for a Product Family of Engine Control Systems: A Feasibility Study

26th February: Jim Woodcock, Circus, Timebands, and the Indeed Component Model

4th March: Jan Tobias Muehlberg, Model Checking Linux's Virtual File System


Abstracts
Model Checking Linux's Virtual File System, Jan Tobias Muehlberg

We present our experiences with model checking part of a Linux file system. The work is set in the context of Hoare's verification grand challenge, and, in particular, Joshi and Holzmann's mini-challenge to build a verifiable file system. The primary aim of the work was to construct a larger scale case study upon which to measure our own research into model-checking technology. However, the choice of case study material was influenced by the aforementioned mini-challenge. The secondary aim of the work was to add to the existing confidence in the Linux code, for example by "testing" improbable situations through exhaustive model checking. Two models were produced: a Promela model, which was analysed using SPIN, and a Petri-Net-based model, which was analysed using versions of the SMART model-checker. The approach adopted was incremental, initially focussing on the basic functions of Linux's Virtual File System layer. The talk presents intermediate results.


Circus, Timebands, and the Indeed Component Model, Jim Woodcock

The behaviour of a complex computer-based system varies depending on the time-scale of the observer. For example, a socio-technical system may have activities that last for months or even years, but also have activities that take place within a microsecond. The behavioural description of such systems usually involves the representation of time as a single, flat physical phenomenon, but this tends to conceal a system's natural structure by confusing different notions of time. This violates the most important principle of software engineering: separation of concerns. Alan Burns' timebands architecture provides a way of revealing the structure of a system through the granularity and precision of the timing of its events and activities. In this talk, we examine the notion of consistency within and between different timebands, using CircusTime, a timed, state-rich process algebra. The work is contributing to the InDeED project, which is developing knowledge, methods, and tools that contribute to the understanding of socio-technical system dependability, and that support developers of dependable systems.


Semi-Automated Domain Modelling for a Product Family of Engine Control Systems: A Feasibility Study, Katrina Attwood

Requirements Engineering literature identifies two major applications of domain modelling: the use of domain information to provide a basis for the refinement and validation of requirements and/or specifications, and the use of domain modelling to provide a basis for the definition of common and variable features across a Product Family. However, industry is not typically skilled in the systematic recording of domain information, which is often embedded deeply in historical documentation, or in the brains of 'resident experts'. This seminar will report on a recent study to assess the feasibility of using lightweight natural language parsing techniques to identify potential domain concepts and entities, relationships between them, and basic information about them in requirements documentation. The use of an ontology to record and structure this information will also be discussed.


Risk analysis of the airborne time-based spacing operation through a stochastically and dynamically coloured Petri net model, Italo Oliveira

Safety in the airspace can increase considerably with the use of airborne spacing and separation operations. Under this paradigm, the task of maintaining a safe distance between aircraft is delegated to the pilots, who are supported by the Airborne Separation Assistance System (ASAS). With this system, which is still in an experimental phase, pilots become aware of the surrounding air traffic risks up to 15 minutes in advance, without the help of air traffic controllers on the ground. This lead time is much greater than that provided by the current Traffic Collision Avoidance System (TCAS). ASAS uses a more advanced communication technology than the Mode-C transponder broadly used in current civil aviation for collision avoidance purposes. The development of ASAS is being carried out intensively at Eurocontrol and in other initiatives in the US, and this novel system is intended to work in parallel with the current collision avoidance systems, which act as safety nets. The present study approaches the ASAS application to improve the precision of spacing between aircraft that sequentially arrive at an airport, using the mathematical formalism of Stochastically and Dynamically Coloured Petri Nets to evaluate quantitative data about accident risk. These data indicate that the accident risk is significantly smaller when aircraft pairs use ASAS Spacing than when they do not.


Specifying Atomic Broadcast in the Crash-Recovery Model, Sergio Mena

Group communication protocols are used at the middleware level to simplify the design of distributed fault-tolerant applications. One of the best-known group communication protocols is atomic broadcast, which enforces total order of message delivery by all processes, despite partial failures. Traditionally, atomic broadcast has mainly been specified and implemented in a system model where processes do not recover after a crash. The model is called crash-stop. The drawback of this model is its inability to express algorithms that tolerate the crash of a majority of processes. This has led to the extension of the crash-stop model to the so-called crash-recovery model, in which processes have access to stable storage, to log their state periodically. This allows them to roll back to a previous state after a crash.

However, despite some attempts at specifying atomic broadcast in the crash-recovery model, none of these specifications is entirely satisfactory. In this talk, I will present a different way to specify atomic broadcast in the crash-recovery model that addresses the issues present in previous specifications. Specifically, the specification presented allows us to distinguish between a uniform and a non-uniform version of atomic broadcast. The non-uniform version logs less information, and is thus more efficient. Finally, I will present the results of a performance comparison with a published crash-recovery atomic broadcast algorithm.
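
To illustrate the crash-recovery model itself, here is a toy Python sketch of logging to stable storage and rolling back after a crash; it is not the algorithm presented in the talk.

    import json, os, tempfile

    LOG = os.path.join(tempfile.gettempdir(), "proc1.log")   # stable storage

    class Process:
        def __init__(self):
            self.delivered = []          # totally ordered deliveries

        def deliver(self, msg):
            self.delivered.append(msg)

        def log_state(self):             # periodically persist the state
            with open(LOG, "w") as f:
                json.dump(self.delivered, f)

        def recover(self):               # after a crash, roll back to the log
            if os.path.exists(LOG):
                with open(LOG) as f:
                    self.delivered = json.load(f)

    p = Process()
    p.deliver("m1"); p.deliver("m2"); p.log_state()
    p.deliver("m3")                      # delivered but never logged
    p = Process(); p.recover()           # crash and restart
    print(p.delivered)                   # ['m1', 'm2']: m3 is lost; a uniform
                                         # variant must log more to prevent this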


Formal Development of Reversible Computations, Frank Zeyda

This seminar presents some of the work undertaken in my PhD thesis, in particular how the B formalism can be used to develop applications that exploit execution environments in which each computational step is assumed to be reversible. One motivation for a reversible architecture is to overcome fundamental limits on energy dissipation imposed by thermodynamic considerations; however, there are other interesting applications, such as supporting efficient garbage collection through reversibility, supporting a `prospective value' operator which allows us to evaluate the value of an expression E after running a program S without incurring potential side-effects, or extending the set of permissible implementation constructs, i.e. moving from B0 to RB0 (Reversible B0). By revoking Dijkstra's Law of the Excluded Miracle (H4 in UTP terms) we obtain a language with in-built backtracking, in which non-deterministic choice plays the dual role of implementor's and `provisional' choice. The two are similar in semantics, but the latter has to be treated differently in the development process; intuitively, provisional choice provides the alternative behaviours we might require, for example, to solve some algorithmic search problem, and thus the reduction of non-determinism through refinement becomes an issue.

During the presentation I will try to give an overview of some of the problems, limitations and opportunities of the suggested approach of using B-AMN as a formalism for developing reversible computations, in particular using the example of the Knight's Tour to illustrate how, in principle, the B Method and tool support can be (re)used to conduct the correctness proof of the respective implementation.
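
For readers unfamiliar with the example, a plain backtracking solution to the Knight's Tour looks as follows; this is an ordinary Python sketch of the search problem only, not of the B development.

    # Backtracking search for a Knight's Tour: each candidate move is a
    # 'provisional' choice that is undone when it leads to a dead end.
    N = 5                                # a 5x5 board keeps the search fast
    MOVES = [(1, 2), (2, 1), (2, -1), (1, -2),
             (-1, -2), (-2, -1), (-2, 1), (-1, 2)]

    def tour(x, y, step, board):
        board[x][y] = step
        if step == N * N:
            return True                  # every square visited exactly once
        for dx, dy in MOVES:             # try the alternatives in turn
            nx, ny = x + dx, y + dy
            if 0 <= nx < N and 0 <= ny < N and board[nx][ny] == 0:
                if tour(nx, ny, step + 1, board):
                    return True
        board[x][y] = 0                  # reverse the step and backtrack
        return False

    board = [[0] * N for _ in range(N)]
    if tour(0, 0, 1, board):
        for row in board:
            print(row)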


Summer term 2007


24th April: Ibrahim Habli: DO-178C

01st May: Diyaa Addein: Extending SPARK toolset for reasoning about floating-point arithmetic

08th May: John McDermid: Industrial Development of Safety Critical Software: Facts and Fiction

15th May: Paul Emberson: Producing Flexible Real-Time Architectures With Automated Design

22nd May: Andrew Butterfield, Trinity College, University of Dublin: Formalising Flash Memory: First Steps

29th May: Radu Siminiceanu, National Institute of Aerospace, Virginia: Spacecraft Autonomy and Planning

05th June: Tariq Mahmood, University of Melbourne, Australia: Dynamic Analysis of System Interactions for Safety-critical Software Systems

12th June: Gianfranco Ciardo, University of California at Riverside: Static variable ordering and partitioning: heuristics for symbolic state-space generation

19th June: Kun Wei: A Time-Based Framework for Modeling Complex System

26th June: Lindsay Groves, Victoria University of Wellington, New Zealand: Reasoning about Lock-free Algorithms using Reduction


Reasoning about Lock-free Algorithms using Reduction, Lindsay Groves, Victoria University of Wellington, New Zealand

Lock-free algorithms have been developed to avoid various problems associated with using locks to control access to shared data structures. Instead of preventing interference between processes using mutual exclusion, lock-free algorithms must ensure correct behaviour in the presence of interference, which is typically detected using hardware primitives such as Compare and Swap (CAS) or Load Linked/Store Conditional (LL/SC). The resulting algorithms are typically more intricate than lock-based algorithms, and many published algorithms have turned out to contain errors. There is thus a pressing need for practical techniques for reasoning about lock-free algorithms and the programs that use them.

In this seminar, we will describe an approach to reasoning about lock-free algorithms based on the reduction method introduced by Lipton, and further developed by Lamport, Cohen and others. The basic idea is to show that a concurrent system has some desired property by showing that every concurrent execution is equivalent to one in which certain sequences of actions are executed without interruption and then showing that this execution has the desired property. We show how this approach, originally designed for lock-based concurrency, can be adapted to apply to lock-free concurrency, and describe some further extensions required to obtain a more practical method. The approach will be illustrated using simple examples for lock-free counters and some well-known algorithms for lock-free stacks and queues. We will also contrast the reduction approach with the simulation approach we have used in earlier work.
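
To illustrate the kind of algorithm being reasoned about, here is a Python sketch of the classic lock-free counter built on a compare-and-swap (CAS) retry loop; since Python exposes no hardware CAS, the primitive is emulated with a lock purely for illustration.

    import threading

    class AtomicCell:
        def __init__(self, value=0):
            self._value = value
            self._guard = threading.Lock()   # stands in for the hardware primitive

        def load(self):
            return self._value

        def cas(self, expected, new):
            with self._guard:                # atomic compare-and-swap
                if self._value == expected:
                    self._value = new
                    return True
                return False

    def increment(cell):
        while True:                          # retry until no interference occurred
            old = cell.load()
            if cell.cas(old, old + 1):
                return

    counter = AtomicCell()
    threads = [threading.Thread(target=lambda: [increment(counter) for _ in range(1000)])
               for _ in range(4)]
    for t in threads: t.start()
    for t in threads: t.join()
    print(counter.load())                    # 4000: no increment is lost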


A Time-Based Framework for Modeling Complex System, Kun Wei

This work is concerned with the role time plays within any structural description of a complex computer-based system. Indeed, this work explores the benefits of placing time at the centre of any description of system structure. To exploit the unique properties of time, with the aim of producing more dependable computer-based systems, it is desirable to explicitly identify distinct time bands in which the system is situated. Such a framework enables the temporal properties and associated dynamic behaviour of existing systems to be described, and the requirements for new or modified systems to be specified. A system model based on a finite set of distinct time bands is developed in the work.


Static variable ordering and partitioning: heuristics for symbolic state-space generation, Gianfranco Ciardo, University of California at Riverside

Symbolic algorithms based on decision diagrams are routinely employed to verify complex designs having finite but enormous state spaces. The performance of such methods, however, is highly dependent on the mapping of the state variables onto the decision diagram levels. Already in the case of BDDs, where each binary variable is mapped to a different decision diagram level, the problem of an optimal variable ordering is known to be NP-hard. With MDDs, one can in addition map multiple state variables to the same decision diagram level, thus the space of possible mappings becomes even more complex. In this talk, we focus on the use of structural information, in particular linear invariants about the state variables of an asynchronous system under study, for example a Petri net. From these invariants, we heuristically derive a good static mapping of the state variables onto the decision diagram levels, and a good static order of these decision diagram levels.


Dynamic Analysis of System Interactions for Safety-critical Software Systems, Tariq Mahmood, The University of Melbourne, Australia

An increasing number of accidents are being attributed to interactions taking place within a system, with and without the occurrence of failures. This is primarily due to an increase in both number and complexity of System Interactions.

In this talk, we describe a set of techniques for conducting safety analysis during the early stages of development with the primary focus on System Interactions. First, we describe a technique for extracting and stochastically analyzing the interactions taking place within a system model. The approach, applied to Probabilistic Statecharts, provides safety analysts with a unique insight without loss of information. The results are then fitted to a Hidden Markov Model (HMM) of System Interactions to analyze the transitive and long-term contribution of interactions towards potential safety concerns.

We briefly describe a method to use past accident/incident knowledge in training the HMM. Finally, we describe how the model is used to (a) uncover safety concerns where interacting components may or may not have failed, (b) identify critical interactions contributing towards potential accidents and (c) identify sequences of interactions that may collectively and over time contribute to potential accidents.
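
For orientation, the standard HMM forward recursion that underlies such an analysis is sketched below in Python; the probabilities are invented and do not reproduce the talk's interaction model.

    # Hidden states: interaction modes; observations: monitored events.
    A  = [[0.9, 0.1],        # transition probabilities between two hidden states
          [0.3, 0.7]]
    B  = [[0.8, 0.2],        # emission probabilities over two observable symbols
          [0.1, 0.9]]
    pi = [0.5, 0.5]          # initial state distribution

    def forward(obs):
        """Probability of an observation sequence (textbook forward algorithm)."""
        alpha = [pi[s] * B[s][obs[0]] for s in range(2)]
        for o in obs[1:]:
            alpha = [sum(alpha[p] * A[p][s] for p in range(2)) * B[s][o]
                     for s in range(2)]
        return sum(alpha)

    print(forward([0, 0, 1, 1]))   # likelihood of this event sequence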


Spacecraft Autonomy and Planning, Radu Siminiceanu, National Institute of Aerospace, Virginia

Part of NASA's Constellation program strategy to send crewed missions to the Moon by 2020 is an initiative to increase the spectrum of autonomous operations, for both robotic and manned activities. SAVH (Spacecraft Autonomy for Vehicles and Habitats) is a research project led by NASA Ames to introduce planning software as part of mission planning and control.

Although AI-based autonomous software can provide increased capability and flexibility, the power comes at a price: the verification and validation problem is extremely challenging. Traditional simulation-based testing methods fall short of providing the required level of confidence. In other application domains, formal methods have been developed to explore the entire state space of the application, providing rigorous guarantees about critical safety properties. Unfortunately, little progress has been made to date in applying formal methods to AI-based autonomous software systems.

In this talk, I will present a few lessons learned from our initial attempts to apply model checking techniques to planning languages (NDDL, APPL, ANML) and planning software (EUROPA 2).


Formalising Flash Memory: First Steps, Andrew Butterfield, Trinity College, University of Dublin

We present first steps in the construction of formal models of NAND Flash memory, based on a recently emerged open standard for such devices. The model is at a level of abstraction that captures the internal architecture of such a device, as well as the commands that are used to operate it. The model is intended as a key step in a plan to develop a verified filestore system, by providing a description of the hardware devices that would be used in its implementation.


Producing Flexible Real-Time Architectures With Automated Design, Paul Emberson

Real-world real-time dependable embedded systems feature a number of trade-offs, sensitivities and risks that make their design complicated and expensive. Changes arise from maintenance, enhancements and upgrades, so it is essential for a system's architecture to be flexible with respect to possible changes. This means that changes to requirements should cause as small a change in the design solution as possible, in order to reduce the costs resulting from the need for new development and recertification.

The work draws on traditional real-time architecture trade-off analysis principles including scenario-based assessment and timing analysis with a focus on the problem of task allocation. A flexible design will allow tasks to be added without impinging on other tasks, causing them to miss deadlines. Heuristic search is a well recognised strategy for solving allocation and scheduling problems but most research is limited to finding any valid solution for a current set of requirements. The technique proposed here incorporates scenario based analysis into heuristic search strategies where the ability of a solution to meet a scenario is included as another heuristic for the changeability of a system. This allows future requirements to be taken into account when choosing a solution so that future changes can be accommodated with minimal alterations to the existing system.
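
To sketch how scenario feasibility can enter the search as an extra heuristic, consider the illustrative Python fragment below; the tasks, weights and checks are all hypothetical.

    import random

    TASKS = {"t1": 30, "t2": 20, "t3": 40}   # task -> utilisation (%)
    SCENARIO = {"t4": 25}                    # a future task we may need to add
    PROCESSORS = ["p1", "p2"]
    CAPACITY = 100

    def utilisation(alloc, proc):
        return sum(u for t, u in TASKS.items() if alloc[t] == proc)

    def cost(alloc):
        # Primary heuristic: schedulability of the current task set.
        overload = sum(max(0, utilisation(alloc, p) - CAPACITY) for p in PROCESSORS)
        # Flexibility heuristic: could the scenario task still be accommodated?
        fits = any(utilisation(alloc, p) + SCENARIO["t4"] <= CAPACITY
                   for p in PROCESSORS)
        return overload + (0 if fits else 10)   # penalise inflexible designs

    # A crude random search over candidate allocations.
    best = min(({t: random.choice(PROCESSORS) for t in TASKS} for _ in range(200)),
               key=cost)
    print(best, cost(best))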


Industrial Development of Safety Critical Software: Facts and Fiction, John McDermid

John will present a review of the realities of the development of safety critical software, in two parts. Part 1 will focus on the pragmatics and economics of safety critical software development, and the issues in deriving requirements for software as part of a broader embedded system. Part 2 (in November) will consider issues such as the role of abstraction and conclude with some observations about research strategy.


Extending SPARK toolset for reasoning about floating-point arithmetic, Diyaa Addein

The semantics of the SPARK language are such that it is possible to prove that a SPARK program is free from (Ada run-time) exceptions. Currently the SPARK toolset provides support for this process by generating verification conditions (VCs) and attempting to discharge them, both in the SPARK Examiner/Simplifier. This proof process is currently limited to integer arithmetic; floating-point expressions are treated as mathematical numbers, with no account taken of approximations! This seminar presents a UTC work package on extending the SPARK toolset for reasoning about floating-point expressions. So far, this work has motivated the introduction of new features in the SPARK Simplifier, and the development of a new in-house tool (at Rolls-Royce) to support the current proof process in SPARK. Though the work may be limited to Ada floating-point expressions, the results might not be.
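
A small Python illustration of why the real-number treatment is unsound; the error bound used is illustrative only and is not Ada's floating-point model.

    # A VC such as "0.1 + 0.2 = 0.3" is provable over the reals
    # but false over IEEE doubles.
    print(0.1 + 0.2 == 0.3)                 # False
    print(f"{0.1 + 0.2:.17f}")              # 0.30000000000000004
    # A sound proof process must instead reason about bounded rounding error:
    eps = 1e-15                             # hypothetical error bound
    print(abs((0.1 + 0.2) - 0.3) <= eps)    # True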


DO-178C, Ibrahim Habli

DO-178B/ED-12B provides guidelines for the production of software for airborne systems and equipment. These guidelines are associated with a set of software lifecycle process objectives. The satisfaction of these objectives offers a level of confidence in the 'safety' of the airborne software. DO-178B/ED-12B is currently being updated by the joint EUROCAE WG71 and RTCA SC205 committees in order to address emerging software trends and technologies and to implement an approach that can change with the technology. This seminar presents an overview of the approach taken by the committee, focusing primarily on issues regarding tool qualification, formal methods and model-based development and verification.


Spring term 2007

13th February: Jim Woodcock: Trading assertions in the verified software repository

20th February: Ana Cavalcanti: Test-data generation using proof

27th February: Zoe Stephenson, exceptionally at 14.15: Using Model Checking to Validate Style-Specific Architectural Refactoring Patterns

6th March: Dimitrios Kolovos and Richard Paige: The Epsilon Model Management Platform

13th March: Steve Wright, University of Bristol: Will Virtual Machines Ever Take Off? A High Integrity VM for Safety-Critical Applications


Will Virtual Machines Ever Take Off? A High Integrity VM for Safety-Critical Applications, Steve Wright, University of Bristol

A Virtual Machine (VM) is a program running on a conventional microprocessor that emulates the binary instruction set, registers, and memory space of an idealized computing machine. At the cost of run-time performance, VMs provide a route for planned hardware obsolescence and early application development and test. Therefore, VMs have been suggested as an evolution of current Integrated Modular Avionics architectures.

MIDAS (Microprocessor Instruction & Data Abstraction System) is a VM capable of running binary executables compiled from languages such as C and Ada, and of providing run-time executable verification. The VM has also been designed to be suitable for implementation using Formal Methods.

This presentation covers the features of the MIDAS VM, its place in IMA architectures, and its implementation using Formal Methods.
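
As an illustration of the general idea of a VM, here is a miniature Python interpreter loop over an idealised instruction set with run-time checks at each step; this is not the MIDAS instruction set.

    def run(program, data):
        stack, pc = [], 0
        while pc < len(program):
            op, *args = program[pc]
            if op == "push":
                stack.append(args[0])
            elif op == "load":
                stack.append(data[args[0]])
            elif op == "add":
                if len(stack) < 2:               # run-time executable verification
                    raise RuntimeError(f"stack underflow at pc={pc}")
                b, a = stack.pop(), stack.pop()
                stack.append(a + b)
            else:
                raise RuntimeError(f"illegal opcode {op!r} at pc={pc}")
            pc += 1
        return stack

    print(run([("push", 2), ("load", "x"), ("add",)], {"x": 40}))   # [42]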


The Epsilon Model Management Platform, Dimitrios Kolovos and Richard Paige

Model-driven development (MDD) requires powerful, cohesive tool support in order to automate mundane tasks and provide the means to manipulate models in useful ways. We present the Epsilon platform, which provides the necessary infrastructure for developing task-specific languages to support model management tasks such as transformation, merging, comparison and validation. Epsilon has been developed in the department in the context of the MODELWARE and MODELPLEX EU Integrated Projects, and it is gaining increasing interest, as it is now an official component of the Eclipse Generative Modelling Technologies (GMT) project (www.eclipse.org/gmt/epsilon). Through our presentation we aim to motivate HISE researchers who haven’t yet done so to look into standard modelling technologies (MOF, EMF) as a viable alternative for expressing their custom notations, instead of using XML, text files or generic drawing tools such as Visio, and to demonstrate the tools and languages they can then use to manage their models in a consistent and structured manner.


Using Model Checking to Validate Style-Specific Architectural Refactoring Patterns, Zoe Stephenson

When developing a new domain-specific architectural style, there can be uncertainty about the feasibility of using that style. In particular, the HADES architectural style contains refactoring patterns intended to remove undesirable scheduling features such as deadlock and livelock, but these patterns have not yet been fully validated. We report on the translation between the HADES structure and the input languages for two popular model checkers (SPIN and NuSMV) to help validate these patterns. We found model checking to be a valuable asset in confirming the presence of undesirable features.


Test-data generation using proof, Ana Cavalcanti

Many tools can check if a test set provides control coverage; they are, however, of little or no help when coverage is not achieved and the test set needs to be completed. In this seminar, we describe how a formal characterisation of a coverage criterion can be used to generate test cases; we present a procedure based on traditional programming techniques like normalisation and weakest-precondition calculation, which form a basis for automation using an algebraic theorem prover. In the worst case, we are left with a specification of the compliant test sets.
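
Schematically, the role of the weakest precondition in selecting test data can be pictured as follows; this is our schematic reading, not the talk's calculus.

    % To drive execution down the then-branch of   S ; if c then T else U,
    % a test input t must satisfy the weakest precondition of the prefix S:
    t \in \mathit{wp}(S,\, c)
    % and a test set achieving branch coverage must intersect both
    % wp(S, c) and wp(S, \neg c).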


Trading assertions in the verified software repository, Jim Woodcock

The recent re-examination of Mondex generated over 2,000 refinement verification conditions discharged amongst four theorem provers. These assertions are to be curated in the verified software repository, and this raises the question of whether such assertions can be traded between different theorem provers. One of the immediate problems to be addressed is in resolving the different treatments of undefinedness. We discuss how classical logic can be used to prove and to refute conjectures in monotonic partial logics, such as those used in the Mondex experiment.


Autumn term 2007

16th October: Tim Kelly: Challenges of Establishing a Software Product Line for an Aerospace Engine Monitoring System

23rd October: Phil Brooke, University of Teesside: A lazy CSP simulator for a concurrent OO model

30th October: Huibing Zhao, HISE visitor: System Safety Practice in China

6th November: Juan Perna, A denotational semantics for Handel-C hardware compilation

13th November: John McDermid, Industrial Development of Safety Critical Software: Facts and Fiction

27th November: Nick Tudor and Colin O'Halloran, QinetiQ

4th December: Louis Rose, The Design and Evolution of Languages for Software Engineering (practice for literature review seminar)


A denotational semantics for Handel-C hardware compilation, Juan Perna

Handel-C is a hybrid language based on C and CSP, aiming at the high-level description of hardware components. Several semantic models for the language, and a non-rigorous compilation mechanism, have been proposed for it. The compilation has been empirically validated and used in commercial tools, but never formally verified. In this talk I will present a semantic model of the generated hardware and the foundations for the formal verification of the correctness of the transformation approach (i.e., semantic equivalence between the source Handel-C code and the compiled hardware).


System Safety Practice in China, Huibing Zhao, HISE visitor

China has been undergoing rapid socio-economic development in recent years, and extraordinary challenges are currently faced in the system safety area. A lot of effort has been put into legislation, organization, education and industrial practice, although many problems still remain.

A brief introduction is given to the administration and research organizations, as well as to the current education programmes. Practices in the railway, civil aviation and mining industries are reviewed, with a focus on railway cab signalling, a kind of safety-critical on-board equipment. The safety criteria, system design and safety evaluation for cab signals are explained in detail. A grand national research plan issued in October 2007 in China, concerning the 'Foundational Research of Trustworthy Software', is then presented. Observations and future expectations are given finally.


A lazy CSP simulator for a concurrent OO model, Phil Brooke, University of Teesside

Eiffel is an object-oriented programming language and method; it provides constructs typical of the object-oriented paradigm, including classes, objects, inheritance, associations, composite (`expanded') types, generic (parameterised) types, polymorphism and dynamic binding, and automatic memory management. A concurrent extension to Eiffel is called Simple Concurrent Object-Oriented Programming (SCOOP).

A CSP model was constructed during the process of analysing SCOOP. Existing tools, notably FDR2, were unable to give useful results. A new tool, CSPsim, was constructed for the specific purpose of simulating this CSP model. CSPsim is a lazy simulator, and can also (inefficiently) search state spaces. It is particularly useful for systems with a small state space that result from the composition of very large components.

This talk will describe both SCOOP and its problems, and the CSPsim tool.


Challenges of Establishing a Software Product Line for an Aerospace Engine Monitoring System, Tim Kelly

The introduction of a software product line may pose a great organisational challenge in the domain of high-integrity systems. Project and technical managers within an organization need to be assured that the reusable assets of a product line are reliable and trustworthy, particularly when project teams do not have full control over the development of these assets. In this presentation we'll report on Rolls-Royce's and the York UTC's experience with the establishment of a software product line for an aerospace Engine Monitoring Unit (EMU). Specifically, we report on challenges encountered with the configuration management and certification of EMU products derived from the product line. These two areas are still to be addressed adequately by the product line community, even though they are central to the management of product line assets across different projects within an organization.