Abstracts 2014–15

Autumn 2014

8th October: Ernesto Estrada (Strathclyde).

Title: Communicability distance and graph embedding

Abstract: I will motivate and define the communicability distance among the nodes of an undirected graph. I will then prove that it corresponds to a Euclidean distance in the graph, and that it induces an embedding of the graph in a hyper-dimensional sphere. Some applications of these measures for studying real-world networks will be illustrated. Finally, future directions and open problems are presented.
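For the curious, here is a minimal sketch of the definition (my illustration, not the speaker's code), assuming numpy and scipy: with A the adjacency matrix and G = exp(A) the communicability matrix, the quantity G_pp + G_qq - 2G_pq is the squared communicability distance between nodes p and q.

# Minimal sketch of the communicability distance on a toy graph
# (an arbitrary 4-node path, not an example from the talk).
import numpy as np
from scipy.linalg import expm

A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

G = expm(A)  # communicability matrix: G[p, q] sums weighted walks p -> q

def communicability_distance(p, q):
    # G_pp + G_qq - 2*G_pq is a squared Euclidean distance in the
    # embedding induced by exp(A); return the distance itself
    return np.sqrt(G[p, p] + G[q, q] - 2 * G[p, q])

print(communicability_distance(0, 1), communicability_distance(0, 3))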

 

29th October: Avi Rosenfeld (Jerusalem College of Technology).

Title: Bounded Rationality and Agent Systems

Abstract: In this talk, I will propose that creating computer agents based on Bounded Rationality is critical for the success of many intelligent agent and multi-agent systems. Bounded Rationality, the idea that people intentionally make sub-optimal decisions due to their limited ability to consider and process information, is now well established in other fields, including psychology and experimental economics. I will argue that boundedly rational computer agents are critical in modeling complex coordination problems, especially in mixed agent-human systems. To support this claim, I will discuss how such agents have been used in several real-world applications, including work towards developing a new generation of negotiation agents and adaptive cruise control within cars at General Motors.

 

5th November: Alan Winfield (UWE Bristol)

Title: Towards an Ethical Robot

Abstract: If robots are to be trusted, especially when interacting with humans, then they will need to be more than just safe. In this talk I will outline the potential of robots capable of modelling and therefore predicting the consequences of both their own actions and the actions of other dynamic actors in their environment. I will show that with the addition of an ‘ethical’ action selection mechanism a robot can sometimes choose actions that compromise its own safety in order to prevent a second robot from coming to harm. An implementation with e-puck mobile robots provides a proof of principle by showing that a simple robot can, in real time, model and act upon the consequences of both its own and another robot’s actions. I argue that this work moves us towards robots that are ethical, as well as safe.
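The action-selection idea can be caricatured in a few lines (a toy sketch under my own assumptions, not the authors' implementation): simulate each candidate action, score the predicted outcome for harm to the other agent and to the robot itself, and prefer actions that protect the other agent first.

# Toy consequence-based action selection; the actions and predicted
# outcomes below are invented for illustration.

def select_action(actions, simulate):
    # lexicographic cost: harm to the other agent first, own risk second
    def cost(action):
        outcome = simulate(action)
        return (outcome["harm_to_other"], outcome["harm_to_self"])
    return min(actions, key=cost)

predictions = {
    "stand_by":  {"harm_to_other": 1.0, "harm_to_self": 0.0},
    "intercept": {"harm_to_other": 0.0, "harm_to_self": 0.3},
}
print(select_action(predictions, predictions.__getitem__))  # -> 'intercept'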

 

12th November: Peter Hines (York) 

Title: How Complex is Category Theory? 

Abstract: This talk was motivated by a proposal to use core structures from the foundations of category theory as a basis for cryptographic protocols. We investigate whether this is a good, or a bad, idea!

The proposal was in fact unintentional, since the extremely close connection with categorical foundations is a recent discovery. Based on this connection, we show that the structures in question are already familiar from well-known theoretical computer science. Usefully, it is theoretical computer science from an area where questions of complexity are well studied and understood, and where ready-made algorithms are available. This allows simple answers to seemingly difficult questions.

This talk was originally presented at Category Theory 2014; however, a full introduction to the necessary category theory will be given, and no prior familiarity will be assumed. The first half of the talk will be an introduction to category theory generally, and the required foundational structures. The work presented at CT2014 forms the second half of the talk.

 

19th November: Jade Alglave (UCL)

Title: Herding cats: modelling, simulation, testing and verification for weak memory.

Abstract: There is a joke where a physicist and a mathematician are asked to herd cats. The physicist starts with an infinitely large pen, which he reduces until it is of reasonable diameter yet contains all the cats. The mathematician builds a fence around himself and declares the outside to be the inside. Defining memory models is akin to herding cats: both the physicist's and the mathematician's attitudes are tempting, but we cannot rely on one more than on the other.

In this talk, I will present an overview of my work. I have mostly been involved in studying weak memory (i.e. the execution models of multiprocessors). Understanding precisely what our machines guarantee is crucial for writing correct programs, especially since weak memory forces us to revise the programming model that we have been taught at school, namely Lamport's Sequential Consistency.

I will show how to define formal models for weak memory, within a generic framework in which one can represent Sequential Consistency (SC), Intel x86 (i.e. Sparc Total Store Order, TSO), IBM Power, ARM, C++ and GPUs. I will then show how to exploit these models for devising tools to verify concurrent programs.
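To make the flavour of this concrete, here is a small sketch of my own (not from the associated paper): the classic store-buffering litmus test, whose outcome r1 = r2 = 0 is forbidden under SC but observable on x86-TSO, because stores can linger in per-core store buffers. Enumerating all SC interleavings shows the forbidden outcome never arises.

# Enumerate every sequentially consistent interleaving of the classic
# store-buffering (SB) litmus test; (0, 0) never appears in the output,
# although TSO hardware can produce it.
from itertools import combinations

T0 = [("store", "x", 1), ("load", "y", "r1")]
T1 = [("store", "y", 1), ("load", "x", "r2")]

def interleavings(a, b):
    n = len(a) + len(b)
    for picks in combinations(range(n), len(a)):  # positions taken from a
        ia, ib = iter(a), iter(b)
        yield [next(ia) if i in picks else next(ib) for i in range(n)]

outcomes = set()
for trace in interleavings(T0, T1):
    mem, regs = {"x": 0, "y": 0}, {}
    for op, loc, arg in trace:
        if op == "store":
            mem[loc] = arg
        else:
            regs[arg] = mem[loc]
    outcomes.add((regs["r1"], regs["r2"]))
print(sorted(outcomes))  # [(0, 1), (1, 0), (1, 1)]: no (0, 0) under SC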

Associated paper: http://www.cs.ucl.ac.uk/staff/j.alglave/papers/toplas14.pdf

 

26th November: Karl Tuyls (University of Liverpool)

Title: Analysing multi-agent learning dynamics

Abstract: Many real-world scenarios can be modelled as multi-agent systems, in which multiple autonomous decision makers interact in a single environment. The complex and dynamic nature of such interactions prevents hand-crafting solutions for all possible scenarios, hence learning is crucial. Studying the dynamics of multi-agent learning is imperative in selecting and tuning the right learning algorithm for the task at hand. So far, analysis of these dynamics has been mainly limited to normal form games, or unstructured populations. However, many multi-agent systems are highly structured, complex networks, with agents only interacting locally. In this talk I will discuss the link between the replicator equations of evolutionary game theory and reinforcement learning and I will survey the dynamics of a variety of multi-agent learning algorithms. Furthermore, I will discuss the dynamics of networked interactions, using the replicator dynamics as a model for learning.
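For background (my sketch, not the speaker's): the single-population replicator dynamics state that a strategy's share grows in proportion to how far its payoff exceeds the population average, x_i' = x_i((Ax)_i - x·Ax), and the same equations arise as the mean-field limit of simple reinforcement learners. A few lines suffice to simulate them; the payoff matrix is an arbitrary anti-coordination game.

# Euler integration of the replicator dynamics for a 2-strategy game.
import numpy as np

A = np.array([[0.0, 3.0],   # payoff of row strategy against column strategy
              [1.0, 2.0]])

x = np.array([0.9, 0.1])    # initial population shares
dt = 0.01
for _ in range(5000):
    fitness = A @ x                      # expected payoff of each strategy
    average = x @ fitness                # population-average payoff
    x = x + dt * x * (fitness - average) # replicator update
print(x)  # approaches the mixed equilibrium (0.5, 0.5) of this game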

 

Spring 2015

21st January: Cliff Jones (Newcastle) 

Title: Tackling Separation via Abstraction

Abstract: Various Separation Logics have been shown to be effective in reasoning about the issues of separation and ownership in program design. If, however, one takes a top-down view of the design of programs that rely on separation of memory, well-known techniques of data abstraction and reification can both help structure arguments about separation and layer the design decisions so that the key reasoning about the algorithm is handled separately from that about representation details. The talk illustrates “Separation via Abstraction” with two examples. This view is in no way an objection to separation logics; rather, it is part of a programme that has already resulted in a rather basic restructuring of the way in which rely/guarantee specifications and proofs are presented.

 

28th January: Marc de Kamps (Leeds)

Title: Compositional Representations in the Brain

Abstract: Thirty years ago, Fodor and Pylyshyn criticized connectionism because the notions of productivity and systematicity were absent. Although some of their arguments now seem naïve, and apply to a rather narrow interpretation of connectionism, there is still no good account of how the human brain represents compositional relationships. The question is all the more pressing because all ‘truly’ intelligent cognitive capabilities, such as reasoning, planning and the use of language, appear to depend on the ability to form compositional relationships. Neuronal representations appear to be grounded by their position in sensory pathways. While this helps in understanding the ‘meaning’ of firing rate patterns, it presents a problem for symbol representation: symbols cannot simply be copied. Based on evidence from neuroscience, I will present a so-called blackboard architecture that offers a potential solution to this problem. Transient links, present just long enough to represent a visual scene or a sentence, connect neural representations that are physically far removed. The architecture predicts the existence of neural assemblies that represent abstract neural relationships rather than concrete objects, and is able to account for the relative difficulty that humans have with center-embedded sentences compared to right-branching ones. The architecture requires precise timing to communicate the outcome of computational processing to other brain areas. An interesting unsolved question is whether such coordination is centrally organized or emergent.

 

4th February: Conor McBride (Strathclyde)

Title: Dependent-Up your Data (Baby Steps)

Abstract: I’ve recently begun to think about how to use dependent types to model the kinds of data we encounter in the world. Where functional languages have often been concerned with closed inductive datatypes, it seems more pertinent to consider open and extensible enumeration and record types. Crucially, dependent types allow us to access an extra dimension of indexing, allowing us to say which data other data are about. Booleans (bits uninformative) thus give way to relations, which allow us a notion of refinement.

I got here by thinking about types for spreadsheets, but then I realised I couldn’t give a sensible characterisation for a view of some data without a model of the structure of the data. Given such a model, we should be able to explain how a spreadsheet (for example) lets us access some data in a meaningful way. We might also acquire a more semantic basis for typed interoperation with a variety of data sources.

 

11th February: Manfred Kerber (Birmingham)

Title: Verifying Auctions

Abstract: In the ForMaRE project we explore "Formal Mathematical Reasoning in Economics". Concretely, we look at auctions and verify their mathematical properties with the generic proof assistant Isabelle. Most notably we look at second price auctions (for a single good and for multiple goods) and prove important properties of these auctions. We use Isabelle's code extraction mechanism to extract Scala code from the definitions.
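For readers outside economics, the single-good case is easy to state in code (a sketch of the auction rule itself, not the ForMaRE Isabelle development): the highest bidder wins but pays the second-highest bid, which is what makes truthful bidding a dominant strategy.

# Sketch of a single-good second-price (Vickrey) auction.
def second_price_auction(bids):
    # bids: dict mapping bidder -> bid; returns (winner, price paid)
    ranked = sorted(bids, key=bids.get, reverse=True)
    return ranked[0], bids[ranked[1]]  # winner pays the runner-up's bid

print(second_price_auction({"alice": 10, "bob": 7, "carol": 4}))
# -> ('alice', 7)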

For further information, see http://www.cs.bham.ac.uk/research/projects/formare/

 

18th February: Dimitris Kolovos (Early-career Researcher Seminar)

Title: Towards Scalable Model-Driven Engineering

Abstract: Model-Driven Engineering (MDE) is a software engineering approach that promotes domain-specific models as first-class artefacts of the software development and maintenance lifecycle. As MDE is increasingly used for the development of larger and more complex software systems, the current generation of modelling and model management technologies is being pushed to its limits.

In this talk I will provide an overview of some of the most important scalability challenges that manifest when working with large (collections of) domain-specific models. I will then go through ongoing work in the Enterprise Systems Group that attempts to address these challenges by providing support for incremental code generation, partial model loading, and model indexing.

 

25th February: Scott Owens (Kent) 

Title: The CakeML verified compiler

Abstract: CakeML is a new ML dialect aimed at supporting formally verified programs.

The CakeML project has several aspects including formal semantics and metatheory, a verified compiler, a formal connection between its semantics and higher-order logic (in the HOL4 interactive theorem prover), and example verified applications written in CakeML and HOL4. The project is an active collaboration between Ramana Kumar at Cambridge, Magnus Myreen at Chalmers, Michael Norrish at NICTA, and myself.

In this talk, I will explain what it means to verify a compiler, and explain the architecture of CakeML's verified compiler (including bootstrapping the compiler itself). Lastly, I will explain how these tools will allow us to support -- with a very small trusted computing base -- a verified version of the HOL Light theorem prover.

CakeML's web site is https://cakeml.org, and development is hosted on GitHub at https://github.com/CakeML/cakeml.

 

4th March: Yvonne Rogers (UCL) 

Title: Where are the People in the Internet of Things?

Abstract: The current vision that is driving much future technology development is the internet of things (IoT). Many companies are investing hugely in it, hoping to become world leaders. There is much talk about the market being worth many billions. It is envisioned that in the next few years millions of objects and devices will be wirelessly connected to each other via the Internet. Each one of these ‘nodes’ will be able to send or receive information, without any need for human intervention. Dishwashers, washing machines and fridges will be equipped with an array of sensors that can decide when best to turn themselves on without us ever having to worry ourselves. People will not have to concern themselves with a whole slew of mundane decisions, like when to buy more milk or turn the heating up. Such a vision of domestic bliss is largely driven by values of cost-effectiveness, efficiency, automation and optimization. While there is much merit to making our lives easier and less stressful, I argue that there is much scope for thinking about other ways in which ecosystems of things can be put to good use, with people at the centre of them. In my talk, I will examine the current state of the prevailing paradigm of IoT and propose alternative ways of connecting people and communities through interconnected technologies.

 

11th March: Jane Hillston (Edinburgh) 

Title: Fluid Approximation for the Analysis of Collective Systems

Abstract: Stochastic process algebras have been used for more than 20 years for the formal description and analysis of quantitative aspects of system behaviour. However, like all discrete state representations, they suffer from problems of state space explosion, meaning that analysis becomes intractable as the size of the system grows. Nevertheless, a process algebra has many attractive features for modelling collective systems, which consist of multiple instances of simple components interacting as populations.

In this talk I will briefly introduce the stochastic process algebra PEPA and review its Markovian semantics, which can be used to study aspects of dynamic behaviour. I will then explain how these semantics may be rigorously approximated, in the case of collective systems, by a set of ordinary differential equations. These equations are more efficient to analyse and are scale-free, in the sense that they remain the same as the populations grow.
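As a generic illustration of the idea (my sketch, not PEPA syntax): for a population of clients cycling between thinking and waiting for a bounded pool of servers, the ODEs track continuous population averages in place of the discrete-state Markov chain, and their number is independent of the population size. All rates and sizes below are invented.

# Fluid approximation of a toy client/server population model.
import numpy as np
from scipy.integrate import odeint

r_think, r_serve, n_servers = 1.0, 2.0, 30.0

def fluid(state, t):
    c_think, c_wait = state
    requests = r_think * c_think               # clients issuing requests
    served = r_serve * min(c_wait, n_servers)  # capacity-bounded service
    return [served - requests, requests - served]

t = np.linspace(0.0, 20.0, 200)
trajectory = odeint(fluid, [100.0, 0.0], t)    # 100 clients, all thinking
print(trajectory[-1])  # steady state near (60, 40); the two ODEs stay
                       # the same however large the client population grows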

 

Summer 2015 

22 April: Abhijeet Ghosh (Imperial)

Title: Acquisition and Modeling of Facial and Material Appearance

Abstract: The talk will cover state-of-the-art techniques for the acquisition of facial and material appearance, including geometry and reflectance properties. Central to the acquisition will be measurement under controlled illumination using a Light Stage. The first part of the talk will present efficient acquisition of facial geometry and reflectance using polarized spherical gradient illumination. The talk will cover practical ways of measuring layered skin reflectance, including surface and subsurface scattering, using a small set of measurements. The talk will also present some recent results of measuring skin micro-geometry at the resolution of a few microns for very high resolution (16K) rendering of skin for increased realism. The second half of the talk will focus on measurement and modeling of material reflectance properties. Once again, controlled spherical illumination from various lighting setups will be employed to estimate spatially varying diffuse and specular reflectance properties including albedo, surface normals, specular roughness and anisotropy.


Bio: Dr. Abhijeet Ghosh is a Lecturer in the Department of Computing at Imperial College London. His main research interests are in appearance modeling, realistic rendering, and computational photography. Previously, he was a senior researcher and research assistant professor at the University of Southern California Institute for Creative Technologies, where he worked on Light Stage-based acquisition. Abhijeet received his PhD in computer science from the University of British Columbia. His doctoral dissertation, "Realistic Materials and Illumination Environments", received the Alain Fournier Award. He currently holds a Royal Society Wolfson Research Merit Award at Imperial College London.

 

23 April: Steve Easterbrook (Toronto)

Title: Computing the Climate: The software engineering of global climate models

[Seminar will take place in CSE/082 at 13.30 on 23rd April] 

Abstract: Discussions of climate science and climate policy often include projections for future climate change over the rest of this century and beyond. But these discussions rarely explore what’s in the simulation models that generate the projections, nor how the models are developed and tested. In this talk, I will present some of the results of a seven-year research study of the engineering of global climate models. The talk will begin with a brief overview of what a climate model is, and how it is used, including a little of the history of the science on which the models are based. I will then present some of the key findings from our field studies at major climate modeling labs in Europe and North America. The results show that climate modellers have evolved a sophisticated software development process that weaves together ideas from continuous integration with a scientific approach to hypothesis testing and peer review. Our analysis of the defect density of climate models shows that this process provides some of the highest quality software ever measured. I’ll end the talk with some new results from a comparative study of the software architecture of these models, which illustrates some of the diversity of these models across different research labs.

Biography: Steve Easterbrook received a BSc in Computer Science from the University of York in 1986, and a PhD in Computing from Imperial College in 1991. He joined the faculty at the School of Cognitive and Computing Sciences, University of Sussex in 1990, where he served as inaugural director of the Masters programme in Human-Centred Computer Systems. In 1995 he moved to the US to lead the research team at NASA's Independent Verification and Validation (IV&V) Facility in West Virginia, where he worked on verification of the flight software for the Space Shuttle and the International Space Station. In 1999, he moved to the University of Toronto, where he is a professor of Computer Science. His research interests range from modelling and analysis of complex adaptive systems to the socio-cognitive aspects of team interaction. His current research is in climate informatics, where he studies how climate scientists develop computational models to improve their understanding of earth systems and climate change, and the broader question of how that knowledge is shared with other communities. In the summer of 2008, he was a visiting scientist at the UK Met Office Hadley Centre, and in 2010 a visiting scientist at the National Center for Atmospheric Research in Boulder, Colorado; the Max Planck Institute for Meteorology in Hamburg; and the Institut Pierre-Simon Laplace in Paris.

 

29th April: Prof Georg Struth (Sheffield) 

Title: Verification Tool DIY

Abstract: I present a principled approach to the design of program construction and verification tools which seeks a clean separation between the control flow and the data flow of programs. The aim is to find simple algebraic semantics for the control flow and combine them with detailed set-theoretic models for data and memory domains. The approach is implemented in Isabelle/HOL, and its principles are illustrated through three example tools, all of which are correct by construction. The first one is a program verification and refinement tool for simple while-programs, which uses Kleene algebra with tests for the control flow and a standard relational semantics for the data flow. The second tool implements separation logic using a novel algebraic semantics for the control flow and an extended relational model for the data flow over the store and heap. The third one is based on the rely-guarantee approach to shared-variable concurrency, using concurrent Kleene algebra for the control flow and trace-based models for the data flow, once again with novel axioms for interference conditions. I will discuss two of these three tools in detail, depending on the audience's interest, and show them at work on a number of algorithmic verification and refinement examples.
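For orientation (a standard summary from the KAT literature, not the speaker's slides): in Kleene algebra with tests, programs p, q live in a Kleene algebra and assertions b, c in an embedded Boolean algebra, so structured control flow and even Hoare triples become equations:

\begin{align*}
  \mathbf{if}\ b\ \mathbf{then}\ p\ \mathbf{else}\ q &= b\,p + \bar{b}\,q \\
  \mathbf{while}\ b\ \mathbf{do}\ p &= (b\,p)^{*}\,\bar{b} \\
  \{b\}\;p\;\{c\} &\iff b\,p\,\bar{c} = 0
\end{align*}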

 

6th May: Paul Cairns 

Title: Teaching-led research

Abstract: Modern academic life has two important activities: teaching and research. The entire discourse around teaching and research, though, as embodied in institutional structures, works to separate them into entirely different domains. The only nod towards a deeper integration is the occasional and un-monitored use of the term "research-led teaching." However, this flies in the face of the tradition of what a university is. I therefore reflect on the possibility of a tighter integration of teaching and research, espousing the idea that perhaps we should only do research that we can teach about, that is, teaching-led research. I base my reflections on my own experience of teaching my research, from first-year undergraduates to MSc students. I point out the benefits to me and my students, but also the drawbacks in gaining recognition for the value of this sort of work. It should be of relevance to anyone who is interested in (my idiosyncratic view of) academic life or has suffered being one of my students.

Bio: Paul Cairns is Reader in HCI here in the Department of Computer Science. He was formerly a senior lecturer at UCL in the Dept of Psychology. His DPhil is in Mathematics from Oxford University. He has been heavily involved in the development of several degree programmes and modules within those programmes, including the very successful MSc in Human-Centred Interactive Technologies here at York, the MSc in HCI with Ergonomics at UCL, the BSc in Business Information Systems at Middlesex and, most recently, the training programme for the four-year PhD in Intelligent Games and Game Intelligence. He leads and teaches modules across the range of students from first years to PhD students.

 

13th May: Karl Tuyls

Title: Analysing multi-agent learning dynamics

Abstract: Many real-world scenarios can be modelled as multi-agent systems, in which multiple autonomous decision makers interact in a single environment. The complex and dynamic nature of such interactions prevents hand-crafting solutions for all possible scenarios, hence learning is crucial. Studying the dynamics of multi-agent learning is imperative in selecting and tuning the right learning algorithm for the task at hand. So far, analysis of these dynamics has been mainly limited to normal form games, or unstructured populations. However, many multi-agent systems are highly structured, complex networks, with agents only interacting locally. In this talk I will discuss the link between the replicator equations of evolutionary game theory and reinforcement learning and I will survey the dynamics of a variety of multi-agent learning algorithms. Furthermore, I will discuss the dynamics of networked interactions, using the replicator dynamics as a model for learning.

 

27th May: Louis Rose

Title: Incrementality: Do More with Less

Abstract: Incrementality is about reducing redundant rework, and is normally achieved by reusing prior results. In computer science, incrementality is important if a computation must be performed repeatedly on many similar inputs. As it happens, this situation arises often during software engineering (and elsewhere too).

In this talk, I’ll describe how thinking about incrementality has led the Enterprise Systems group to solve problems in software maintenance and in model-driven engineering. I’ll also introduce new work on applying incrementality to solve a key problem in software testing: improving the scalability of mutation testing (a technique for answering questions such as “should I stop testing yet?”).
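The flavour of the trick can be sketched in a few lines (my illustration under invented names, not the group's tooling): memoise each mutant/test verdict under a fingerprint of the code it depends on, so re-analysis after a small change only re-executes pairs whose inputs actually changed.

# Toy incremental mutation testing: cache verdicts keyed by content.
import hashlib

cache = {}  # (mutant_id, test_id, fingerprint) -> "killed" | "survived"

def fingerprint(*texts):
    return hashlib.sha256("\x00".join(texts).encode()).hexdigest()

def run_pair(mutant_id, mutant_src, test_id, test_src, execute):
    key = (mutant_id, test_id, fingerprint(mutant_src, test_src))
    if key not in cache:           # pay only for new or changed pairs
        cache[key] = execute(mutant_src, test_src)
    return cache[key]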

 

3 June: Paul Walton

Title: Equality for Women in Science: Now, Sometime, Never?

Abstract: Despite the fact that the fractions of male and female undergraduates in science in the UK are about equal, by the time you reach the senior grades of the profession the fraction of women has dwindled. The data show that women appear to face barriers at every stage of moving up the ranks, regardless of subject and regardless of sector, and that we face a systemic and highly damaging situation in which large swathes of the scientific workforce are being prevented from reaching the top positions or are being forced out of the profession. This includes universities, where the problem appears to be particularly acute in Chemistry and in Medicine. This talk explores the issue. It takes a critical look at the modern social-science literature which is now informing the reasons behind gender inequality. It will also discuss the possible interventions that everyone can make to help address the issue.

 

10th June: Prof Lynne Baillie (Glasgow Caledonian University)

Title: Envisaging the Future of Rehabilitation Technology

Abstract: The talk presents our exploration, during a large-scale joint research council project, of the potential of exergame and visualisation technologies to assist older adults with their rehabilitation. Rehabilitation has proven effective in significantly reducing the risk of falling in older adults and improving outcomes after knee surgery. However, low adherence to rehabilitation programmes in the home means that older adults often do not receive the amount of exercise therapy necessary. We proposed that exergames and dynamic visualisations of user movement could aid older adults by encouraging greater participation in home exercise, thereby improving the quality of therapy that these users receive in the home. An investigation, utilising user-centred and collaborative design methods, was first carried out to gather the requirements necessary to design the technologies, to maximise the effectiveness of these tools with regard to therapy and engagement. This investigation uniquely involved older adults in the collaborative design of rehabilitation technologies for home-based rehabilitation.

Once the development and usability testing work had been completed, an empirical investigation, based on a randomised controlled trial (RCT) design, was carried out in the home over 12 weeks (falls) and 6 weeks (knee). The main goal of these studies was to explore the potential of the exergame and visualisation technologies to improve adherence to exercise, functional outcomes and confidence. The results indicate that the technologies can potentially improve functional outcomes versus standard rehabilitation care. However, the home studies discussed in the talk were pilot RCTs; they were limited mainly by sample size, and as such the results did not achieve statistical significance. Our current work is on a new project which will undertake a full RCT of the exergame system. At the end of the talk, attendees will be invited to play the exergames for fallers.

Biography: Professor Lynne Baillie has a PhD (2002) and an MSc in Computing (1999) and is currently the Director of the Interactive and Trustworthy Technologies Research Group at Glasgow Caledonian University. She was a Senior Researcher at the Telecommunications Research Centre in Vienna (FTW) from December 2002 to December 2006. She has led 12 research projects and been a co-investigator on a further 5. Her research focus is on the design and evaluation of novel interactions for non-work contexts, with a special focus on healthcare technologies. For further information see her research group page: http://www.ittgroup.org

 

2nd July: Burkhart Wolff (Paris Sud)

Title: Can Testing be Liberated from the Automata-style?

Abstract: Sequence Testing is an important sub-domain of formal model-based testing. It addresses test scenarios where the tester controls the state of the System Under Test (SUT) only at initialization time, and then indirectly via a sequence of inputs. The latter may stimulate observable outputs, on which the test verdict must solely be based.

A number of automata-based test theories have been suggested that work fairly well for traces of impressive length, provided that the state space of the SUT is small. Whenever large state spaces have to be modelled, as is the case for operating systems, databases or web services, both theory and implementations obstinately resist practical applicability: theoretically, because symbolic representations of state spaces have to be treated; practically, because these difficulties result in a small number of tools addressing sparse and fairly limited application domains.

In this talk, I will present a novel approach to the problem based on Monads, with their theory developed in Isabelle/HOL. Notions like Test-Sequence and Test-Refinement can be rephrased in terms of Monads, which opens the way both to efficient symbolic execution of system models and to efficient compilation to test drivers. Theoretically, the monadic approach allows us (1) to resist the tendency to surrender to finitism and constructivism at the first opportunity, and (2) to make a sensible shift from syntax to semantics: instead of a first-order, intensional view of nodes and events in automata, the heart of the calculus is computations and their compositions.
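The shape of the idea can be hinted at outside Isabelle/HOL (a rough sketch of mine, not the talk's formalisation): a test step maps a system state to an observable output and a successor state, and sequential composition of steps is monadic bind in a state monad, so a whole test sequence is a single composite computation whose outputs drive the verdict.

# A state-monad sketch of test sequences: each step observes an output
# and advances the hidden state of the SUT; bind threads the state.

def unit(value):
    return lambda state: (value, state)

def bind(step, f):
    def composed(state):
        output, state2 = step(state)
        return f(output)(state2)
    return composed

# Toy SUT: a counter we can only stimulate and observe indirectly.
def press(state):               # input stimulus
    return ("ok", state + 1)

def read(state):                # observable output
    return (state, state)

sequence = bind(press, lambda _: bind(press, lambda _: read))
print(sequence(0))              # -> (2, 2): the verdict rests on outputs only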

Contact us

Dr Dimitar Kazakov

Seminars organiser

dimitar.kazakov@york.ac.uk
+44 (0)1904 325676