Department of Computer Science


  • Managing software variability for dynamic reconfiguration of robot control systems

    Davide Brugali, University of Bergamo

    Robot control systems need to be dynamically reconfigured in response to runtime variability, such as changing environmental conditions, workload fluctuations, and resource availability. Dynamic reconfiguration must preserve desired system properties, such as performance and safety, which are typically called non-functional requirements. This talk discusses issues and challenges in managing software variability in robotic control systems and approaches for dealing with non-functional requirements.


    [1] Davide Brugali, Rafael Capilla, Raffaela Mirandola, Catia Trubiani: Model-Based Development of QoS-Aware Reconfigurable Autonomous Robotic Systems. IRC 2018: 129-136

    [2] Sergio García, Daniel Strüber, Davide Brugali, Alessandro Di Fava, Philipp Schillinger, Patrizio Pelliccione, Thorsten Berger: Variability Modeling of Service Robots: Experiences and Challenges. VaMoS 2019: 8:1-8:6

  • The RAFCON Task Control Framework

    Sebastian Brunner, DLR German Aerospace Center

    Robotic tasks are becoming increasingly complex, and so are robotic systems. New tools are required to manage this complexity and to orchestrate the systems so that they can fulfil demanding autonomous tasks.

    For this purpose, we developed RAFCON [1], a new graphical tool targeting the creation and execution of robotic tasks. Tasks are described as hierarchical state machines supporting concurrency, with their own formal notation. Error detection and handling are deeply integrated into the core design of the framework. Powerful action concepts can be implemented by combining operational and descriptive action models [2]. Next to the logical layout of actions (the logic flow), RAFCON also allows programming the data flow between actions: it models where data comes from, where it is processed, and where it is finally used. In combination with the graphical layout, this leads to a profound explainability of robotic behavior. The tool provides many debugging mechanisms and a GUI with a graphical editor, allowing for fast prototyping, intuitive visual programming, and sustainable, collaborative development of complex robotic behavior.
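    As an illustration of the idea of combining logic flow with an explicit data flow (this is a minimal sketch, not RAFCON's actual API; all names are invented):

```python
# A tiny state machine where transitions model the logic flow and
# explicitly declared port connections model the data flow, so it is
# inspectable where each piece of data is produced and consumed.

class State:
    def __init__(self, name, action):
        self.name = name
        self.action = action            # callable: inputs dict -> outputs dict

class StateMachine:
    def __init__(self):
        self.states = {}
        self.transitions = {}           # state name -> next state name (logic flow)
        self.data_flows = []            # (src_state, src_port, dst_state, dst_port)

    def add_state(self, state, next_state=None):
        self.states[state.name] = state
        self.transitions[state.name] = next_state

    def connect(self, src, src_port, dst, dst_port):
        self.data_flows.append((src, src_port, dst, dst_port))

    def run(self, start):
        produced = {}                   # (state, port) -> value
        current = start
        while current is not None:
            # Gather inputs for the current state from upstream outputs.
            inputs = {dp: produced[(s, sp)]
                      for (s, sp, d, dp) in self.data_flows
                      if d == current and (s, sp) in produced}
            for port, value in self.states[current].action(inputs).items():
                produced[(current, port)] = value
            current = self.transitions[current]
        return produced

sm = StateMachine()
sm.add_state(State("scan", lambda i: {"pose": (1.0, 2.0)}), "grasp")
sm.add_state(State("grasp", lambda i: {"ok": i["target"] == (1.0, 2.0)}))
sm.connect("scan", "pose", "grasp", "target")
result = sm.run("scan")
```

Tracing `result` shows both where the pose was produced and where it was consumed, which is the kind of explainability the graphical data-flow layout provides.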

    RAFCON’s powerful logging, monitoring and analysis tools allow for tracking the achievement of task objectives and for identifying performance and robustness bottlenecks [3]. By providing a holistic view of robotic behavior development, the task analysis architecture satisfies the requirements of various stakeholders and enables the validation of the implemented robotic behavior. As the behavior is based on a formal state-machine concept, verification of system properties can be performed with the DIVINE model checker.

    So far, our framework has been used on multiple robots in several domains, such as space, industry, and service robotics. One of the most prominent examples of RAFCON’s usage was the ROBEX moon-analogue mission on Mt. Etna, Sicily, in which our framework controlled a mobile manipulator deploying a network of seismometers on the surface of the volcano [4]. An overview of all projects in which RAFCON has been employed so far is given in [5]. Finally, the software architecture of the LRU robot (one of our most complex robots software-wise), in which RAFCON plays a central role, is presented in [6].


    [1] S. G. Brunner, F. Steinmetz, R. Belder, and A. Dömel. RAFCON: A Graphical Tool for Engineering Complex, Robotic Tasks. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Daejeon, Korea, October 2016

    [2] S. G. Brunner, A. Dömel, P. Lehner, M. Beetz, and F. Stulp. Autonomous Parallelization of Resource-Aware Robotic Task Nodes. IEEE Robotics and Automation Letters, April 2019

    [3] S. G. Brunner, P. Lehner, M. J. Schuster, S. Riedel, R. Belder, A. Wedler, D. Leidner, M. Beetz, and F. Stulp. Design, Execution, and Postmortem Analysis of Prolonged Autonomous Robot Operations. IEEE Robotics and Automation Letters, 3(2):1056–1063, April 2018

    [4] P. Lehner, S. G. Brunner, A. Dömel, H. Gmeiner, S. Riedel, B. Vodermayer, and A. Wedler. Mobile Manipulation for Planetary Exploration. In IEEE Aerospace Conference, Big Sky, Montana, USA, March 2018

    [5] S. G. Brunner, F. Steinmetz, R. Belder. RAFCON - RMC advanced flow control

    [6] M. J. Schuster, S. G. Brunner, K. Bussmann, S. Büttner et al. Towards Autonomous Planetary Exploration: The Lightweight Rover Unit (LRU), its Success in the SpaceBotCamp Challenge, and Beyond. Journal of Intelligent & Robotic Systems (JINT), November 2017

  • RoboStar technology - a roboticist’s toolbox for combined proof and sound simulation

    Ana Cavalcanti, University of York

    Simulation is a favoured technique adopted by roboticists to evaluate controller design and software. Often, state machines are drawn to convey overall ideas and are then used as a basis for programming tool-specific simulations. The simulation code, written in general-purpose or proprietary programming languages, is, however, the only account of the robotic system. In this talk, we present a modern approach to design that supports the automatic generation of simulation code guaranteed to be correct, and that complements simulation with model checking and theorem proving.

    This approach, under development by the RoboStar group, uses two domain-specific languages: RoboChart and RoboSim. RoboChart [1] includes a controlled subset of UML-like state machines, a customised component model, and primitives to specify timed and probabilistic properties. RoboChart is an event-based notation for design; RoboSim [2] is a matching cycle-based diagrammatic notation for simulation. RoboSim also includes block diagrams enriched to specify the physical and dynamic behaviours of robotic platforms.

    Both RoboChart and RoboSim can be used to automatically generate mathematical models for verification [3]. In the RoboStar approach, these mathematical models are hidden from practitioners, but can be used to prove properties of models, and consistency between designs and simulations. We have experience with FDR, PRISM, and Isabelle.

    RoboChart and RoboSim complement approaches that cater for a global view of the system architecture by supporting modelling and verification of functional component behaviour, covering interaction, time, and probabilistic properties. They also complement work on the deployment of verified code.


    [1] A. Miyazawa, P. Ribeiro, L. Wei, A. L. C. Cavalcanti, J. Timmis, J. C. P. Woodcock: RoboChart: modelling and verification of the functional behaviour of robotic applications. Software & Systems Modeling. (2019).

    [2] A. L. C. Cavalcanti, A. C. A. Sampaio, A. Miyazawa, P. Ribeiro, M. S. Conserva Filho, A. Didier, W. Li, J. Timmis: Verified simulation for robotics. Science of Computer Programming. 174, 1–37 (2019).

    [3] A. Miyazawa, P. Ribeiro, W. Li, A. L. C. Cavalcanti, J. Timmis: Automatic Property Checking of Robotic Applications. In: The International Conference on Intelligent Robots and Systems. pp. 3869–3876 (2017).

  • Gaining Confidence in the Correctness of Robotic and Autonomous Systems

    Kerstin Eder, University of Bristol, Trustworthy Systems Laboratory, and Bristol Robotics Laboratory

    Because no single technique is adequate to cover a whole system in practice, a variety of complementary verification techniques will be needed to gain confidence in the correctness of robotic and autonomous systems [1]. In this talk I will concentrate on three aspects.

    Coding: While much research has been invested in proving robotic navigation algorithms correct, little has been done to demonstrate that their implementations are bug-free. Coding in SPARK, an imperative, strongly typed language used mainly in the safety-critical systems domain, enables formal proof to be used: 97% of the proof obligations were discharged fully automatically, with performance comparable to that of the original (buggy) implementations [2].

    Designing: Being able to distinguish between design and implementation errors is critically important in practice. In the case of control systems, designs are typically written in Simulink before being implemented in a programming language. Using stability as an example, I will show how assertions can be used to capture the designer’s intent (i.e. the system requirements) in Simulink, and how simulation combined with automated formal proof can be used to gain confidence that the control system meets these requirements [3].
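    The flavour of encoding requirements as executable assertions can be conveyed with a toy example (a hypothetical sketch, not the talk's Simulink models; the plant, controller, and thresholds are invented):

```python
# Requirements such as bounded overshoot and settling are expressed as
# assertions over a simulated step response of a simple closed loop.

def simulate_step_response(kp, steps=200, dt=0.05):
    """Discrete-time closed loop: plant x' = -x + u,
    controller u = r + kp * (r - x) (proportional with feedforward)."""
    x, r, trace = 0.0, 1.0, []
    for _ in range(steps):
        u = r + kp * (r - x)
        x += dt * (-x + u)
        trace.append(x)
    return trace

def assert_requirements(trace, overshoot_limit=1.05, settle_tol=0.02):
    """The designer's intent, captured as checks on the response."""
    assert max(trace) <= overshoot_limit, "overshoot requirement violated"
    assert all(abs(v - 1.0) <= settle_tol for v in trace[-20:]), "did not settle"

trace = simulate_step_response(kp=5.0)
assert_requirements(trace)
```

In Simulink the same intent would sit in assertion blocks monitored during simulation; the formal-proof step then generalises the check beyond the simulated runs.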

    Testing: The efficient generation of effective tests is essential for simulation-based testing [4]. In the context of a Human-Robot Interaction scenario, I will present our approach of exploiting the planning native to multi-agent systems to generate tests that achieve a high level of coverage as part of a coverage-driven verification method [5].
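    The core loop of coverage-driven generation can be sketched as follows (a simplified illustration, not the agent-based tooling of [5]; the event names and stub system under test are invented):

```python
import random

# Keep generating stimulus sequences for a human-robot interaction until
# every interaction event of interest has been exercised, biasing the
# generator towards events that are not yet covered.

EVENTS = {"human_approaches", "human_hands_over", "human_withdraws",
          "sensor_glitch"}

def run_scenario(seq):
    """Stub system under test: reports which events a sequence triggered."""
    return set(seq) & EVENTS

def coverage_driven(goal=EVENTS, max_tests=1000, seed=0):
    rng = random.Random(seed)
    covered, tests = set(), []
    while covered < goal and len(tests) < max_tests:
        # Uncovered events appear twice in the pool, so they are favoured.
        pool = list(goal - covered) + list(goal)
        seq = [rng.choice(pool) for _ in range(3)]
        tests.append(seq)
        covered |= run_scenario(seq)
    return covered, tests

covered, tests = coverage_driven()
```

The approach in the talk replaces the random generator with planning agents, which reach hard-to-cover interaction states far more efficiently than the bias shown here.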

    Finally, I will briefly outline the challenges of verifying robotic and autonomous systems, including specification, (more) automation, creatively combining techniques and, as addressed also by Arnaud Gotlieb, exploiting AI for verification. The work directly complements the contribution by Ana Cavalcanti on combining formal with simulation-based techniques, and the research presented by Arnaud Gotlieb and Rob Hierons on testing and model-based techniques.


    [1] M. Webster, D. Western, D. Araiza-Illan, C. Dixon, K. Eder, M. Fisher and A. G. Pipe (2018). A Corroborative Approach to Verification and Validation of Human-Robot Teams.

    [2] P. Trojanek and K. Eder. Verification and testing of mobile robot navigation algorithms: A case study in SPARK. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).

    pp. 1489-1494. Sep 2014.

    [3] D. Araiza Illan, K. Eder, A. Richards. Verification of Control Systems Implemented in Simulink with Assertion Checks and Theorem Proving: A Case Study. European Control Conference (ECC), pp. 2670 - 2675. Jul 2015.

    [4] D. Araiza-Illan, D. Western, A. Pipe and K. Eder. Systematic and Realistic Testing in Simulation of Control Code for Robots in Collaborative Human-Robot Interactions. 17th Annual Conference Towards Autonomous Robotic Systems (TAROS 2016), pp. 20-32. Lecture Notes in Artificial Intelligence 9716. Springer, June 2016.

    [5] D. Araiza-Illan, A.G. Pipe, K. Eder. Intelligent Agent-Based Stimulation for Testing Robotic Software in Human-Robot Interactions. Proceedings of MORSE 2016, ACM, July 2016.

  • Evolving Robot Software and Hardware

    Gusz Eiben, Vrije Universiteit Amsterdam

    This talk discusses a radically unconventional approach to developing robots: using evolution. Artificial evolution has been successful in optimization and design in the field of Evolutionary Computing; applying it to the development of robots poses huge challenges and holds great promise [1], [2], [3]. The long-term vision foresees robots that reproduce and evolve in real time and real space. Possible application scenarios can be divided into two categories depending on the extent of human involvement. First, the “breeding farm scenario”, where humans steer and accelerate evolution by influencing selection (and possibly also reproduction) until a good robot design emerges in the environment of the “breeding farm”. This design can then be validated, produced, and employed on a large scale in the real application environment, e.g., cave systems, deep seas, or rainforests. Second, the “other planet scenario”, where the robot population evolves and adapts autonomously to an unknown environment without the need for direct human oversight. In this case, evolution is not merely a design method that stops when the outcome is satisfactory, but a permanent force that continually improves and adapts the population to the given circumstances.
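    Stripped of the hardware, the evolutionary loop underlying both scenarios fits in a few lines (an illustration only, not the project's code; the fitness function and all parameters are invented):

```python
import random

# Evolve a robot "genotype" (here just a parameter vector) through
# repeated selection, reproduction with mutation, and survival.

def fitness(genome):
    # Hypothetical task score: closer to some target gait parameters is better.
    target = [0.5, 0.2, 0.8]
    return -sum((g - t) ** 2 for g, t in zip(genome, target))

def evolve(pop_size=20, generations=60, seed=1):
    rng = random.Random(seed)
    population = [[rng.random() for _ in range(3)] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]            # truncation selection
        offspring = [[g + rng.gauss(0, 0.05) for g in rng.choice(parents)]
                     for _ in range(pop_size - len(parents))]
        population = parents + offspring                 # reproduction + survival
    return max(population, key=fitness)

best = evolve()
```

The hard part, of course, is that in embodied evolution the fitness evaluation is a physical robot operating in the real world, and reproduction involves fabricating a new body.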

    While at the current level of technology this may sound more like fiction than science, rapid advances in 3D printing and autonomous assembly could make it a feasible option in the near future. As of today, the overall system architecture has been designed [4], and the first proofs of concept with automated robot (re)production have been published [5], [6]. To illustrate specific aspects, I will reflect on an ongoing EPSRC project that is developing the first evolutionary robot system in which robots evolve in a breeding-farm scenario [7].


    [1] A.E. Eiben, S. Kernbach, and Evert Haasdijk, Embodied artificial evolution: Artificial evolutionary systems in the 21st Century, Evolutionary Intelligence, 5(4):261-272, 2012.

    [2] A.E. Eiben and J. Smith, From evolutionary computation to the evolution of things, Nature, 521:476-482, 2015.

    [3] D. Howard, A.E. Eiben, D.F. Kennedy, J.-B. Mouret, P. Valencia, and D. Winkler, Evolving embodied intelligence from materials to machines, Nature Machine Intelligence 1(1):12-19, January 2019, doi: 10.1038/s42256-018-0009-9

    [4] A.E. Eiben, N. Bredeche, M. Hoogendoorn, J. Stradner, J. Timmis, A.M. Tyrrell and A. Winfield, The Triangle of Life: Evolving Robots in Real-time and Real-space, Proceedings ECAL 2013, MIT Press, 2013, pp. 1056-1063.

    [5] Brodbeck, L., Hauser, S., & Iida, F. (2015). Morphological evolution of physical robots through model-free phenotype development. PLoS One, 10(6), e0128444.

    [6] M.J. Jelisavcic, M. De Carlo, E. Hupkes, P. Eustratiadis, J. Orlowski, E. Haasdijk, J. Auerbach, and A.E. Eiben, Real-World Evolution of Robot Morphologies: A Proof of Concept, Artificial life. 23(2):206-235, 2017.

    [7] M.F. Hale, Edgar Buchanan, A.F. Winfield, J. Timmis, E. Hart, A.E. Eiben, M. Angus, F. Veenstra, W. Li, R. Woolley, M. De Carlo, A.M. Tyrrell, The ARE Robot Fabricator: How to (Re)produce Robots that Can Evolve in the Real World, Proceedings ALIFE 2019, MIT Press, 2019, to appear

  • Verifiable Autonomy and Responsible Robotics

    Michael Fisher, University of Liverpool

    Robots will increasingly make their own decisions about what to do and when to do it without necessarily consulting humans. How can we trust these systems if we do not know what they will choose to do and, crucially, why they make these choices?

    In this talk, we describe how this move to “autonomy” involves distinct approaches to software engineering and formal methods. In particular, our use of rational agents at the core of autonomous systems [1] forms the basis for not only transparency and explainability, but also verifiability of behaviour and decision-making [2]. By engineering systems based on such hybrid agent architectures, we can analyse them rigorously and move towards not only trustworthiness and predictability, but also safe, ethical and responsible behaviour [3].

    From these principles, we provide some examples examining the deployment of hybrid agent architectures in both robots (domestic and industrial) and autonomous vehicles (road and air), covering the verification of safety, ethics, fault tolerance, etc.


    [1] Michael Fisher, Louise A. Dennis, Matthew P. Webster: Verifying Autonomous Systems. Communications of the ACM 56(9):84-93 (2013).

    [2] Louise A. Dennis, Michael Fisher, Nicholas Lincoln, Alexei Lisitsa, Sandor M. Veres: Practical Verification of Decision-making in Agent-based Autonomous Systems. Automated Software Engineering 23(3):305-359 (2016).

    [3] Paul Bremner, Louise A. Dennis, Michael Fisher, Alan F. T. Winfield: On Proactive, Transparent, and Verifiable Ethical Reasoning for Robots. Proceedings of the IEEE 107(3):541-561 (2019)

  • Testing Robotic Systems: A New Battlefield!

    Arnaud Gotlieb, Simula Research Laboratory

    Industrial robotics is a field that evolves very fast, with ever-growing needs in terms of safety, performance, robustness, and reliability. Nowadays, industrial robots are communicating cyber-physical systems that embed complex distributed multi-core software systems involving intelligent motion control, anti-collision, and advanced force/torque control [1]. This increased complexity makes these robots more fragile and more error-prone than they were previously. Failures can originate from many sources, including system and software bugs, communication downtime, CPU overload, robot wear, etc. Fortunately, advanced verification techniques such as constraint-based testing and validation intelligence can be employed to cope with specification and development errors and to ensure better quality of the delivered robots.

    My talk will address the challenges of testing industrial serial robots and will review examples where Artificial Intelligence techniques have been used to ease the automation of parts of the testing process. In particular, the use of Constraint Programming for the automatic generation of test scenarios for an integrated painting system will be presented [2], as well as the deployment of this technology in the real-world continuous integration process of a large robot manufacturing company [3]. Test case selection, scheduling, and prioritization is another example where an intelligent method based on Reinforcement Learning has been deployed [4]. Finally, the talk will present recent work in which constraint optimization and learning are combined to stress-test industrial robots [5]. These test optimization methods are meant to be complementary to strong formal verification techniques such as model checking and theorem proving, and only the combination of these methods will lead to an industrial manufacturing world where robots are safer and more reliable.
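    A toy version of constraint-based test generation conveys the idea (an illustration only, not the tooling of [2]; the painting-process constraints and time slots are invented):

```python
from itertools import product

# Enumerate candidate paint-gun activation windows and keep only those
# that satisfy the timing constraints of a hypothetical painting process.
# A real constraint solver prunes this search instead of enumerating.

def generate_tests(slots=range(0, 10)):
    tests = []
    for start, stop in product(slots, repeat=2):
        # Constraints: activation lasts 2-4 time slots and ends before slot 9.
        if 2 <= stop - start <= 4 and stop < 9:
            tests.append((start, stop))
    return tests

tests = generate_tests()
```

Every generated pair is, by construction, a valid test scenario; the constraint model thus replaces hand-written test scripts.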


    [1] A. Grau, M. Indri, L. L. Bello, and T. Sauter. Industrial robotics in factory automation: From the early stage to the Internet of Things. In Proc. of the 43rd Annual Conference of the IEEE Industrial Electronics Society (IECON 2017), Beijing, China, 2017

    [2] Morten Mossige, Arnaud Gotlieb, and Hein Meling. Generating tests for robotized painting using constraint programming. In Int. Joint Conf. on Artificial Intelligence (IJCAI-16) - Sister Conference Best Paper Track, New York City, Jul. 2016.

    [3] Morten Mossige, Arnaud Gotlieb, and Hein Meling. Testing robot controllers using constraint programming and continuous integration. Information and Software Technology, Vol. 57, Jan. 2015.

    [4] Helge Spieker, Arnaud Gotlieb, Dusica Marijan, and Morten Mossige. Reinforcement learning for automatic test case prioritization and selection in continuous integration. In Proc. of 26th Int. Symp. on Software Testing and Analysis (ISSTA’17), Santa Barbara, CA, USA, Jul. 2017.

    [5] Mathieu Collet, Arnaud Gotlieb, Nadjib Lazaar, Morten Mossige: Stress Testing of Single-Arm Robots Through Constraint-Based Generation of Continuous Trajectories. 1st IEEE Conference on Artificial Intelligence Testing (AITest 2019), San Francisco, USA, Apr. 2019, pp 121-128

  • Systematic automated testing of Robotic Systems based on Formal Models

    Rob Hierons, University of Sheffield

    Robotic systems form the basis for advances in areas such as manufacturing, healthcare, and transport. A number of areas in which robotic systems are being used are safety-critical and so there is a need for software development processes that lead to robotic systems that are safe, reliable and trusted. Testing will inevitably be an important component.

    This talk will describe recent work on automated testing of robotic systems. The work is model-based: it takes as input a state-based model that describes the required behaviour of the system under test. Models are written either in RoboChart [1], a state-based language for robotics, or in RoboSim, a simulation language for robotics. These languages have been given a formal semantics, making it possible to reason about models in a sound manner. This talk will describe how the existence of such a formal semantics makes it possible to automate the generation of sound test cases for use in simulation and possibly in deployment testing. Testing is systematic, since test cases target potential faults. This line of research builds on work on RoboChart [1] and RoboSim [2] within RoboStar, and has the potential to contribute to certification by supporting automated, systematic testing within a simulation.
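    The essence of deriving tests from a state-based model can be shown on a miniature example (a sketch of the general idea, not the RoboChart toolchain; the machine and input names are invented):

```python
from collections import deque

# Derive input sequences that systematically fire every transition of a
# small state machine describing required robot behaviour.

FSM = {  # (state, input) -> next state
    ("idle", "start"): "moving",
    ("moving", "obstacle"): "stopped",
    ("moving", "arrive"): "idle",
    ("stopped", "clear"): "moving",
}

def path_to(target_transition, initial="idle"):
    """BFS for a shortest input sequence that reaches and fires a transition."""
    src, inp = target_transition
    queue, seen = deque([(initial, [])]), {initial}
    while queue:
        state, seq = queue.popleft()
        if state == src:
            return seq + [inp]
        for (s, i), nxt in FSM.items():
            if s == state and nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, seq + [i]))
    return None

test_suite = [path_to(t) for t in FSM]   # one test per transition
```

Fault-targeting methods go further, adding distinguishing sequences so that a wrong target state is also detected, but transition coverage already makes the suite systematic rather than ad hoc.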


    [1] A. Miyazawa, P. Ribeiro, L. Wei, A. L. C. Cavalcanti, J. Timmis, J. C. P. Woodcock: RoboChart: modelling and verification of the functional behaviour of robotic applications. Software & Systems Modeling. (2019).

    [2] A. L. C. Cavalcanti, A. C. A. Sampaio, A. Miyazawa, P. Ribeiro, M. S. Conserva Filho, A. Didier, W. Li, J. Timmis: Verified simulation for robotics. Science of Computer Programming. 174, 1–37 (2019).

  • Verification of Autonomous Robots: A Roboticist's Bottom-Up Approach

    Félix Ingrand, LAAS/CNRS

    Complete validation and verification of the software of an autonomous robot (AR) is out of reach for now. Still, this should not prevent us from trying to verify some properties of components and of their integration. There are many approaches to consider for the V&V of AR software, e.g., writing high-level specifications and deriving correct implementations from them, or deploying new or modified V&V formalisms to program robotic components. Learned models aside, most models used in deliberation functions [2] are amenable to formal V&V [1], so we prefer to focus on the functional-level V&V challenge. We propose an approach that relies on an existing robotics specification/implementation framework (GenoM) to deploy robotic functional components, to which we harness existing, well-known formal V&V frameworks (UPPAAL, BIP, FIACRE/TINA). GenoM was originally developed by roboticists and software engineers who wanted to specify clearly and precisely how a reusable, portable, middleware-independent functional component should be written and implemented. Many complex robotic experiments have been developed and deployed over 20 years using GenoM, and only recently did its designers realize that its rigorous specifications, the clear semantics of its implementation, and its template mechanism for synthesizing code open the door to automatic formal model synthesis and formal V&V (offline and online). This bottom-up approach, which starts from the component implementations, may be more modest than top-down ones that aim at a larger and more global view of the problem. Yet it gives encouraging results on real implementations, on which one can build more complex high-level properties to be verified and validated offline, but also with online monitors.


    [1] Félix Ingrand. Recent Trends in Formal Validation and Verification of Autonomous Robots Software. IEEE International Conference on Robotic Computing, Feb 2019, Naples, Italy. ⟨hal-01968265⟩

    [2] Félix Ingrand, Malik Ghallab. Deliberation for autonomous robots: A survey. Artificial Intelligence, Elsevier, 2017, 247, pp. 10-44. ⟨10.1016/j.artint.2014.11.003⟩.

    [3] Tesnim Abdellatif, Saddek Bensalem, Jacques Combaz, Lavindra de Silva, Félix Ingrand. Rigorous design of robot software: A formal component-based approach. Robotics and Autonomous Systems, Elsevier, 2012, 60 (12), pp.1563-1578. ⟨10.1016/j.robot.2012.09.005⟩.

    [4] Mohammed Foughali, Silvano Dal Zilio, Félix Ingrand. On the Semantics of the GenoM3 Framework. Rapport LAAS n° 19036. 2019.

  • Making robots usable in everyday life

    Patrizio Pelliccione, University of L'Aquila

    Mobile robots are increasingly used in everyday life to autonomously realize missions such as exploring rooms, delivering goods, or following certain paths for surveillance. The current robotic market is asking for a radical shift in the development of robotic applications, where mission specification is performed by users who are not highly qualified and specialized in robotics or ICT.

    To this aim, we first present a catalog of 22 mission specification patterns for mobile robots, together with tooling for instantiating, composing, and compiling the patterns to create mission specifications [1,2]. The patterns provide solutions for recurrent specification problems, each detailing the usage intent, known uses, relationships to other patterns, and, most importantly, a template solution in temporal logic. The patterns originate from 245 realistic textual mission requirements extracted from the robotics literature and have been evaluated on a total of 449 real-world mission requirements.
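    Instantiating and composing such patterns amounts to filling temporal-logic templates (a loose sketch; the template strings and pattern names here are illustrative, not the catalog's exact formulations):

```python
# Each pattern maps a list of locations to an LTL formula; missions are
# built by conjoining instantiated patterns.

def sequenced(locs):
    """Visit the locations in the given order."""
    if len(locs) == 1:
        return f"F {locs[0]}"
    return f"F ({locs[0]} & {sequenced(locs[1:])})"

PATTERNS = {
    # Visit: the robot eventually reaches every listed location.
    "visit": lambda locs: " & ".join(f"F {l}" for l in locs),
    # Sequenced visit: locations are reached in the given order.
    "sequenced_visit": sequenced,
    # Global avoidance: the listed locations are never entered.
    "avoid": lambda locs: " & ".join(f"G !{l}" for l in locs),
}

mission = PATTERNS["visit"](["l1", "l2"]) + " & " + PATTERNS["avoid"](["l3"])
```

The user composes patterns by name and locations; the temporal-logic output, which would be hard to write by hand, is produced mechanically and can be handed to a planner or model checker.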

    Then, we propose a Domain-Specific Language (DSL) that enables non-technical users to specify missions for a team of autonomous robots in a user-friendly and effective way [3,4]. The DSL contains a set of atomic tasks that can be executed by robots and a set of operators that allow the composition of these tasks into complex missions. Mission specification can be performed through a textual and a graphical interface. While the DSL support is provided by a standalone tool and can be integrated within a variety of frameworks, the current implementation has been integrated with a software platform that provides a set of functionalities, including motion control, collision avoidance, image recognition, SLAM, planning, etc. Our DSL has been successfully validated with both simulated and real robots.

    The talk is related to the talk of Sebastian Brunner “The RAFCON Task Control Framework”, which presents a graphical tool for the creation and execution of robotic tasks.


    [1] Claudio Menghi, Christos Tsigkanos, Patrizio Pelliccione, Carlo Ghezzi, Thorsten Berger (2019) Specification Patterns for Robotic Missions. IEEE Transactions on Software Engineering (TSE).

    [2] Mission Specification Patterns website (2019)

    [3] Garcia, S., Pelliccione, P., Menghi, C., Berger, T., Bures, T.: High-level mission specification for multiple robots. In: 12th ACM SIGPLAN International Conference on Software Language Engineering (SLE) (2019).

    [4] Promise website (2019)

  • Extending automotive certification processes to handle autonomous vehicles

    Zeyn Saigol, Connected Places Catapult

    Autonomous vehicles (AVs) sit at the higher end of both the complexity and safety-criticality spectrums of robotics, which means the regulators who must certify the vehicles for public use face a significant challenge. This talk will initially review current certification methods, which leverage decades of knowledge in processes that are both highly effective and highly entrenched. However, these processes are not suitable for certifying AVs, due to the complexity of the software systems needed to control an AV and the range of conditions it must handle safely (road layouts, road surroundings, other road users and their actions, weather conditions, etc.) [1].

    Despite the challenges, there is a feasible certification path that builds incrementally on existing test tools and methodologies. The critical pillars of this route are simulation, to increase test efficiency, and scenarios, to define relevant and human-understandable test cases [2,3]. Once a relatively simple scenario-based test methodology is established, it can evolve to include intelligent testing, likely based on some of the methods presented in other talks at this event. Further cultural shifts towards transparency could allow component-based or model-based test methods, opening the door to applying more of the presented research. However, given the increasing presence of safety-critical AI technology in production vehicles, initial steps are vital to preserve the safety standards of regulatory approval.


    [1] N. Kalra, S. M. Paddock: Driving to safety: How many miles of driving would it take to demonstrate autonomous vehicle reliability? RAND Corporation Technical Report (2016).

    [2] Z. Saigol, A. Peters: Verifying Automated Driving Systems in Simulation: Framework and Challenges. In: 25th ITS World Congress, Copenhagen (2018).

    [3] S. Hallerbach, Y. Xia, U. Eberle, F. Koester: Simulation-Based Identification of Critical Scenarios for Cooperative and Automated Vehicles. SAE Journal of Connected and Automated Vehicles (2018).

  • Composition, Separation of Roles and Model-Driven Approaches as Enabler of a Robotics Software Ecosystem

    Christian Schlegel, Technische Hochschule Ulm

    Successful engineering principles for building complex software systems rely on separation of concerns for mastering complexity. However, just working on different concerns of a system in a collaborative way is not good enough for economically feasible tailored solutions. A successful approach for this is building systems by composing commodity building blocks. These come “as is” and can be represented as blocks with ports via data sheets. Data sheets are models and allow a proper selection and configuration as well as the prediction of the behavior of a building block in a specific context.

    In that setting, separation of concerns takes on a different flavor, as different stakeholders (e.g. component supplier, system architect, system builder, safety engineer) must be able to work independently according to their roles. Separation of roles, composability, and compositionality become the drivers for bundling different concerns in a role-specific way. Basically, the challenge is to identify the sweet spot between freedom-of-choice and freedom-from-choice such that different roles can do their specific part of the work independently of the others, and can consume and provide different building blocks without direct interaction with other roles.

    At its core, this is about the link between component level and system level and thinking about complex robotic systems before we build them. It is about becoming able to perform what-if analyses, to predict system level properties and to know that the required ones are matched by the selected components and the selected configuration.

    In this talk, I will explain how model-driven approaches can be used to support separation of roles, composition and compositionality for robotics software systems. This is based on and underpinned by the practical work and body-of-knowledge of the EU H2020 RobMoSys project, the German national project SeRoNet and its real-world use e.g. within the Festo Robotino robots. It comes with models, open source tooling and robotic systems.

    Model-driven approaches are considered to have the potential to interlink all the various topics brought up by the talks of this meeting.


    [1] Matthias Lutz, Juan F. Inglés-Romero, Dennis Stampfer, Alex Lotz, Cristina Vicente-Chicote and Christian Schlegel. “Managing Variability as a Means to Promote Composability: A Robotics Perspective.” In New Perspectives on Information Systems Modeling and Design, ed. António Miguel Rosado da Cruz and Maria Estrela Ferreira da Cruz, 274-295 (2019), doi:10.4018/978-1-5225-7271-8.ch012

    [2] Lotz, A., Hamann, A., Lange, R., Heinzemann, C., Staschulat, J., Kesel, V., Stampfer, D., Lutz, M., and Schlegel, C. Combining Robotics Component-Based Model-Driven Development with a Model-Based Performance Analysis. In IEEE International Conference on Simulation, Modeling, and Programming for Autonomous Robots (SIMPAR), 2016, San Francisco, CA, USA.

    [3] Dennis Stampfer, Alex Lotz, Matthias Lutz and Christian Schlegel. “The SmartMDSD Toolchain: An Integrated MDSD Workflow and Integrated Development Environment (IDE) for Robotics Software”. Special Issue on Domain-Specific Languages and Models in Robotics, Journal of Software Engineering for Robotics (JOSER), 7(1), 3-19 ISSN: 2035-3928, July 2016.


  • A software framework for interoperable, plug-and-play distributed robotic systems of systems

    Rob Skilton, Remote Applications in Challenging Environments (RACE), UK

    In the worlds of nuclear energy, mining, petrochemical processing, and subsea operations, robots are being used for an increasing number and range of tasks. This results in ever more complex robotics installations being deployed, maintained, and extended over long periods of time. Additionally, unstructured, experimental, or unknown operational conditions frequently result in new or changing system requirements, meaning extension and adaptation are necessary. Whilst existing frameworks allow for robust integration of complex robotic systems, they are not compatible with highly efficient maintenance and extension in the face of changing requirements and obsolescence over decades-long periods. We present ongoing activities at RACE that attempt to solve the long-term maintainability and extensibility issues encountered in such scenarios through the use of standardised, self-describing data representations and associated communications protocols. Progress in developing and testing software frameworks to implement this, as well as an overview of current and planned deployments, will be presented. This talk relates to and is highly complementary to Christian Schlegel’s talk on Composition, Separation of Roles and Model-Driven Approaches as Enabler of a Robotics Software Ecosystem. In addition, the proposed framework presents interesting opportunities for modular approaches to verification.
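    The idea of a self-describing data representation can be illustrated as follows (purely a sketch; the abstract does not disclose RACE's actual wire format, and all field names here are invented):

```python
import json

# A message that carries its own schema can be validated and interpreted
# by a consumer that has no compile-time knowledge of the producer,
# which is what enables plug-and-play extension over long lifetimes.

TYPES = {"float": float, "int": int, "string": str}

def make_message(schema, values):
    return json.dumps({"schema": schema, "data": values})

def interpret(raw):
    msg = json.loads(raw)
    schema, data = msg["schema"], msg["data"]
    assert set(schema) == set(data), "message does not match its schema"
    return {name: TYPES[schema[name]](data[name]) for name in schema}

joint_state = make_message(
    schema={"joint": "string", "position": "float"},
    values={"joint": "elbow", "position": 1.57},
)
decoded = interpret(joint_state)
```

Because the schema travels with the data, a component added decades later can still discover and check the structure of messages from legacy components.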

  • Ethical Standards in robotics and AI: what are they and why they matter

    Alan Winfield, Bristol Robotics Lab, University of the West of England, Bristol

    In response to concerns over the ethical and societal impact of robotics and AI, a new generation of ethical standards is emerging [1]. In this talk I will introduce these standards, including both British Standard BS8611 Guide to the ethical design of robots and robotic systems and the IEEE P700X series of ‘human standards’ currently in development. As a case study I will focus on one of these emerging standards: P7001 on Transparency of Autonomous Systems. I will set these ethical standards within the wider context of Ethical Governance [2], and ask the question: what is Responsible Robotics?

    The talk will introduce one hitherto overlooked aspect of responsible robotics: the question of how to properly and systematically investigate robot accidents, and the data that will be needed to support such investigations [3]. The talk will complement others that focus on verification, transparency, assurance and certification within responsible robotics (by Bloomfield, Fisher and Saigol) and ethical governance.

    Standards, like open science, are a trust technology. Without ethical standards, it is hard to see how robots will be trusted and widely accepted, and without that acceptance their great promise will not be realized.


    [1] Winfield, A. F. (2019) Ethical standards in Robotics and AI. Nature Electronics, 2 (2). pp. 46-48.

    [2] Winfield, A. F. and Jirotka, M. (2018) Ethical governance is essential to building trust in robotics and AI systems. Philosophical Transactions A: Mathematical, Physical and Engineering Sciences, 376 (2133).

    [3] Winfield AF and Jirotka M. (2017) The case for an ethical black box. In Towards autonomous robotic systems (eds Y Gao, S Fallah, Y Jin, C Lekakou). Lecture Notes in Artificial Intelligence, 10454, pp. 262–273, Springer.

  • Modelling uncertainty in RoboChart using probability

    Jim Woodcock, University of York

    A challenging research direction in robotics is dealing with uncertainty, which arises when a robot lacks vital and accurate information that it needs to operate satisfactorily. Thrun et al. [1] identify five sources of uncertainty:

    1. The physical world is unpredictable.

    2. Sensors are subject to physical laws (they have limited range and cannot see through walls) and are sensitive to noise.

    3. Motor actuators suffer from control noise and wear-and-tear.

    4. Models abstract underlying physical processes of a robot and its environment, and model errors are a source of uncertainty.

    5. Control algorithms sacrifice accuracy in order to compute in real time.

    All of these factors give rise to uncertainty and a modern approach to robotic development must address this phenomenon.

    RoboChart is a diagrammatic, domain-specific notation for modelling robotic controllers with their hardware platforms and operating environments [2]. It is designed to be usable by robotics practitioners and has first-class primitives for describing real-time and probabilistic behaviour.

    We outline the probabilistic denotational semantics for RoboChart and show how models are analysed using the Prism probabilistic model checker [3] and the Isabelle automated theorem prover [4]. We illustrate this using a RoboChart model of a robust pose-estimation algorithm, based on RANSAC (random sample consensus) [5], that gives a probabilistic guarantee of accurately determining a robot’s current pose. We conclude with a discussion of future research directions. The talk complements others on verification and certification of robotic systems and their development, simulation, and deployment.
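    To make the RANSAC idea concrete, here is a minimal, self-contained sketch of the classic scheme on a toy problem (fitting a 2D line in the presence of gross outliers). This is only an illustration of the general algorithm [5]; the pose-estimation model analysed in the talk is a far richer RoboChart model, and all names and parameters below are invented for the example.

    ```python
    import random

    def ransac_line(points, n_iters=200, inlier_tol=0.1, seed=0):
        """Minimal RANSAC sketch: fit y = a*x + b to 2D points despite outliers.

        Repeatedly samples a minimal set (two points), fits a candidate line,
        and keeps the candidate supported by the most inliers. The probability
        of missing a good model shrinks geometrically with n_iters, which is
        the kind of probabilistic guarantee the talk formalises.
        """
        rng = random.Random(seed)
        best_model, best_inliers = None, []
        for _ in range(n_iters):
            (x1, y1), (x2, y2) = rng.sample(points, 2)
            if x1 == x2:
                continue  # degenerate sample: vertical line, skip
            a = (y2 - y1) / (x2 - x1)
            b = y1 - a * x1
            inliers = [(x, y) for (x, y) in points
                       if abs(y - (a * x + b)) <= inlier_tol]
            if len(inliers) > len(best_inliers):
                best_model, best_inliers = (a, b), inliers
        return best_model, best_inliers

    # 20 points exactly on y = 2x + 1, plus two gross outliers.
    pts = [(x / 10, 2 * (x / 10) + 1) for x in range(20)] + [(0.5, 9.0), (1.2, -4.0)]
    (a, b), inliers = ransac_line(pts)
    ```

    Run on this data, the recovered line ignores the two outliers that would corrupt an ordinary least-squares fit; a Prism-style analysis instead asks, for given sampling parameters, with what probability the returned model is correct.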


    [1] Sebastian Thrun, Wolfram Burgard, and Dieter Fox. Probabilistic robotics: Intelligent robotics and autonomous agents, MIT Press, (2005)

    [2] Alvaro Miyazawa, Pedro Ribeiro, Wei Li, Ana Cavalcanti, Jon Timmis, and Jim Woodcock. RoboChart: modelling and verification of the functional behaviour of robotic applications, Software and Systems Modeling 18(5):3097-3149, 2019

    [3] Marta Z. Kwiatkowska, Gethin Norman, and David Parker. PRISM 4.0: Verification of Probabilistic Real-Time Systems, Ganesh Gopalakrishnan and Shaz Qadeer (eds), Computer Aided Verification - 23rd International Conference, CAV 2011, Snowbird, July 14-20, 2011. Springer, Lecture Notes in Computer Science 6806, 585-591, 2011

    [4] Simon Foster, Frank Zeyda, and Jim Woodcock. Isabelle/UTP: A Mechanised Theory Engineering Framework, David Naumann (ed.), Unifying Theories of Programming - 5th International Symposium, UTP 2014, Singapore, May 13, 2014. Springer, Lecture Notes in Computer Science 8963, 21–41, 2015

    [5] Martin A. Fischler and Robert C. Bolles. Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography. Comm. ACM 24(6):381-395, 1981
