Abstracts 2013-14

Next seminars

18th July: Bertrand Meyer (ETH Zurich)

Title: Alias calculus, change calculus and more: four years later

Abstract: Many of the most difficult and important problems of program analysis boil down to alias analysis: finding out whether two reference (pointer) expressions can become attached to the same object. In 2010 I started looking for an effective approach to automatic alias analysis, and first presented the nascent “alias calculus” in a departmental seminar at York. Since then, colleagues (particularly Alexander Kogtenkov and Sergey Velder) and I have considerably improved the approach (partly a euphemism for correcting the initial mistakes), implemented it, and applied it to such goals as change analysis (also known as frame analysis) and deadlock detection. The department’s kind invitation to speak again provides an opportunity to present the current state of the research, showing what can be achieved today in automatic alias and change analysis as part of the Eiffel Verification Environment.
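
As a minimal illustration of the question the calculus answers (a Python sketch, not Meyer's notation), aliasing arises whenever two expressions can denote the same object:

```python
# May `a` and `b` be attached to the same object? That is the question an
# alias analysis must answer conservatively at every program point.

class Node:
    def __init__(self, value):
        self.value = value

a = Node(1)
b = a                 # `a` and `b` now alias: one object, two names
b.value = 2           # an update through `b` ...
assert a.value == 2   # ... is observable through `a` (hence the frame problem)

c = Node(3)           # freshly allocated, so `c` cannot alias `a` or `b`
assert c is not a
```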

Abstracts of the previous seminars

18th June: Bernhard Rumpe (RWTH Aachen, Germany)

Title: Software Language Engineering for Cyber-Physical Systems

Abstract: We examine the current state and problems of modeling software-intensive machines and how modeling languages impact these challenges. We then take a deeper look at the current state of the art in defining appropriate domain-specific modeling languages. We discuss how to engineer new modeling languages by adapting, extending and composing language fragments in the syntax, semantics, methodology and tooling dimensions. In particular, we discuss how to make use of models in large development projects, where a set of heterogeneous models in different languages is developed and needs to fit together. This is a particularly important challenge in integrated software and systems development projects, such as those for cyber-physical systems.


Biography: Bernhard Rumpe is Professor of Computer Science and Head of the Software Engineering Department at RWTH Aachen University in Germany. His research focuses on the technologies, methods and tools necessary to create quality software as efficiently and sustainably as possible. He has contributed to the semantics and use of modeling languages in software development. His group leads the development of the language workbench MontiCore, which is particularly focused on supporting compositional and component-based approaches to language definition.
He has a particular interest in projects in the automotive industry dealing with architecture, quality and requirements management in the vehicle, and with intelligent driver assistance for self-navigating cars. In 2001 he founded the Springer journal Software and Systems Modeling with Robert France; he continues as an editor-in-chief to this day.


4th June 2014: Paul Busch (Maths, York)

Title: The Heisenberg measurement uncertainty controversy and its resolution

Abstract: Reports on experiments recently performed in Vienna [Erhard et al, Nature Phys. 8, 185 (2012)] and Toronto [Rozema et al, Phys. Rev. Lett. 109, 100404 (2012)] include claims of a violation of Heisenberg’s error-disturbance relation. In contrast, we have presented and proven a Heisenberg-type relation for joint measurements of position and momentum [Phys. Rev. Lett. 111, 160405 (2013)]. To resolve the apparent conflict, we formulate here a new general trade-off relation for errors in qubit measurements, using the same concepts as we did in the position-momentum case. We show that the combined errors in an approximate joint measurement of a pair of ±1-valued observables are tightly bounded from below by a quantity that measures their degree of incompatibility. The claim of a violation of Heisenberg is shown to fail as it is based on unsuitable measures of error and disturbance. These measures are used in an inequality that was formulated by Ozawa as a correction of a wrong inequality that is incorrectly attributed to Heisenberg. We will see that Ozawa’s quantities overestimate the errors and are found, ironically, to obey a trade-off relation of the Heisenberg form in the qubit case. Finally we show how the experiments mentioned may directly be used to test our error inequality.

The talk is based on our recent paper “Heisenberg uncertainty for qubit measurements”, available as arXiv:1311.0837, published in Phys. Rev. A 89, 012129 (2014). 
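
For orientation, the two competing relations can be stated schematically as follows (a sketch in standard notation; the precise definitions of the error and disturbance measures are exactly what the controversy is about):

```latex
% Schematic forms only; the error measures differ between the two approaches.
% Heisenberg-type relation for approximate joint measurements of Q and P
% (Busch-Lahti-Werner, PRL 111, 160405 (2013)):
\varepsilon(Q)\,\varepsilon(P) \;\ge\; \frac{\hbar}{2}
% Ozawa's inequality, based on different state-dependent error and
% disturbance measures (\sigma denoting the usual standard deviations):
\varepsilon(Q)\,\eta(P) + \varepsilon(Q)\,\sigma(P) + \sigma(Q)\,\eta(P) \;\ge\; \frac{\hbar}{2}
```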


28th May: Colin Runciman (CS, York)

Title: Four decades of functional programming: personal thunks, reflections and futures

Abstract: Though Church's lambda calculus and Curry's combinatory logic were foundations laid in the 1930s, and the non-FORTRAN enclave of McCarthy's LISP dates from the late 1950s, the era of modern functional programming began in the 1970s. I trace my own first awareness of the brave new world to a suspected bomb in summer 1975 and a pungent outburst in spring 1977.  In the talk I shall review some high points, and occasional low points, from then until now.  It will be a selective and personal view, but I hope not parochial.  There will be a few cross-references to current and future issues in computing.


7th May: Gav Wood (Ethereum project)

Title: Ethereum: A secure decentralised generalised transaction ledger

Abstract: The blockchain paradigm, when coupled with cryptographically secured transactions, has demonstrated its utility through a number of projects, not least Bitcoin. Each such project can be seen as a simple application on a decentralised, but singleton, compute resource. We can call this paradigm a transactional singleton machine with shared state. Ethereum implements this paradigm in a generalised manner. Furthermore, it provides a plurality of such resources, each with a distinct state and operating code but able to interact with the others through a message-passing framework. We discuss its design, implementation issues, the opportunities it provides and the future hurdles we envisage.
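
As a toy sketch of the "transactional singleton machine with shared state" idea (illustrative Python only; Ethereum's actual virtual machine, accounts and consensus protocol are far richer), nodes that apply the same transaction log deterministically agree on the resulting state:

```python
# Toy replicated state machine: same transaction log, same order, same state.

class Ledger:
    def __init__(self):
        self.state = {}          # shared state: account -> balance

    def apply(self, tx):
        """Apply one transaction deterministically; every honest node
        replaying the same log reaches the same state."""
        sender, receiver, amount = tx
        if self.state.get(sender, 0) >= amount:
            self.state[sender] = self.state.get(sender, 0) - amount
            self.state[receiver] = self.state.get(receiver, 0) + amount

node_a, node_b = Ledger(), Ledger()
node_a.state["alice"] = node_b.state["alice"] = 10
log = [("alice", "bob", 3), ("bob", "carol", 1)]
for tx in log:                   # both nodes replay the same log ...
    node_a.apply(tx)
    node_b.apply(tx)
assert node_a.state == node_b.state  # ... and agree on the shared state
```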


30th April: Philippa Gardner (Imperial College London)

Co-authors: Arthur Chargueraud (INRIA Paris), Martin Bodin, Alan Schmitt (INRIA Rennes), Daniele Filaretti, Sergio Maffeis, Daiva Naudziuniene, Gareth Smith (Imperial College London)

Title: A Trusted Mechanised Specification of the JavaScript Standard

Abstract: JavaScript is by far the most widely used web language for client-side applications. Whilst the development of JavaScript was initially led by implementations, there is now increasing momentum behind the ECMA standardisation process. The time is ripe for a formal, mechanised specification of the language, to serve as a trusted basis for high-assurance proofs of language properties, the compilation of high-level languages, and JavaScript implementations. This talk is for a general audience, aiming to demonstrate that modern techniques of mechanised specification can handle the complexity of real-world languages such as JavaScript. We present JSCert, a mechanised specification of ECMAScript 5 in the Coq proof assistant, and JSRef, a reference interpreter for JavaScript extracted from Coq to OCaml. We establish trust in several ways: JSCert is designed to be 'eyeball close' to ECMAScript 5; JSRef is provably correct with respect to JSCert; and JSRef is tested to industrial standard. We believe that, over time, our methodology will lead to a highly trusted specification of the JavaScript standard. This work has recently been published in POPL'14. See http://jscert.org/ for more details.

Biography: Philippa Gardner is a professor in the Department of Computing at Imperial College London. Her current research focusses on program verification: in particular, reasoning about web programs (JavaScript and DOM) and reasoning about concurrent programs. She completed her PhD thesis, supervised by Professor Gordon Plotkin at Edinburgh in 1992. She moved to Cambridge in 1998 on an EPSRC Advanced Fellowship, hosted by Professor Robin Milner. She obtained a lectureship at Imperial in 2001, and became professor in 2009. She held a Microsoft Research Cambridge/Royal Academy of Engineering Senior Fellowship from 2005 to 2010 at Imperial. She is the Director of the UK Research Institute in Automatic Program Analysis and Verification, funded by GCHQ in association with EPSRC.


12 March: Marco Buratti (University of Perugia)

Title: Algebraic methods for the construction of some combinatorial designs

Abstract: It is widely recognized that the use of some more or less sophisticated algebra might be crucial in constructing finite structures with assigned combinatorial properties. A round-robin tournament is familiar even to kids, but not everybody knows that scheduling such a tournament is one of the very first examples of a combinatorial problem which can be quickly and elegantly solved with just a little bit of elementary algebra. In this talk I will illustrate nice algebraic solutions of other classic combinatorial constructions such as the scheduling of a whist tournament or the Hamilton decomposition of a complete graph. I will also present a series of famous unsolved problems and show some attempts using algebra which may possibly help in finding a solution.
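
For instance, the round-robin scheduling just mentioned has the classic "circle method" solution, which is pure modular arithmetic; a quick Python sketch:

```python
# The classic "circle method": a round-robin schedule from modular arithmetic
# (a sketch of the kind of elementary algebra the abstract alludes to).

def round_robin(num_players):
    """Return a list of rounds, each a list of pairings.
    Assumes num_players is even; players are 0 .. num_players-1."""
    m = num_players - 1                     # work in Z_m; player m stays fixed
    rounds = []
    for r in range(m):
        pairs = [(r, m)]                    # player r meets the fixed player
        for k in range(1, num_players // 2):
            pairs.append(((r + k) % m, (r - k) % m))  # i + j == 2r (mod m)
        rounds.append(pairs)
    return rounds

schedule = round_robin(6)
# Every pair of players meets exactly once across the 5 rounds:
met = {frozenset(p) for rnd in schedule for p in rnd}
assert len(met) == 6 * 5 // 2
```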


5 March 2014: Simon Dobson (University of St Andrews)

Title: Complex networks and complex processes

Abstract: There is increasing interest in using complex networks to model phenomena, and especially in the construction of systems of networks that interact and respond to each other -- the so-called complex adaptive coupled networks. They seem to offer a level of abstraction that is appropriate for capturing the large-scale dynamics of real-world processes without becoming lost in the detail. This talk introduces such networks for a systems audience, describes some recent work in urban traffic modelling, and speculates on the combination of complex networks with sensor data to study environmental incidents such as flooding.


26 Feb 2014: Mohsen Razavi (University of Leeds)

Title: Toward public quantum communication networks

Abstract: Quantum communications is perhaps the most advanced of the emerging technologies that rely on quantum information science. It offers secure communications immune to any possible computational advancement in the future. Having been successfully demonstrated over dark and commercial fibre, it is now the right time to step up our efforts and plan for extending this technology to the public level, where every home user can enjoy its benefits. In this talk, I will describe some of the technological challenges that hybrid quantum-classical networks are facing, and introduce some of the possible solutions. In particular, we look at several possible topologies for such networks, and discuss different generations of such hybrid networks relying on measurement-device-independent techniques and quantum repeaters.


19 Feb 2014: Simon Holland (Open University)

Title: The Haptic Bracelets:  multi-limb interaction from music to medicine

Abstract: The Haptic Bracelets are lightweight wireless devices, designed to be worn on each wrist and ankle. They contain accelerometers, processors and low-latency, powerful, precise vibrotactile actuators. The multi-limb Haptic Bracelets were originally devised for musical applications, but have been found to have a range of medical uses. Four motivating theoretical perspectives will be outlined: entrainment, motor memory, Dalcroze Eurhythmics and sensory-motor contingency theory. A range of musical applications are considered, and results from two preliminary musical studies are outlined. Results from a pilot study in gait rehabilitation with a stroke patient are presented. Potential applications for Parkinson's, cerebral palsy, the deaf and the deaf-blind are noted. The talk will include a participative demo of the Haptic Bracelets.


12 Feb 2014: Claire Maiza (Grenoble INP)

Title: From the software to the hardware, analysis of worst-case execution time bounds and delays for hard real-time systems

Abstract: In hard real-time systems, upper bounds on the execution time of programs are mandatory to guarantee the behavior of the system. In this talk, we will focus on static timing analysis for worst-case execution time and cache-related preemption delay. I will show how we have improved timing analysis by focusing on program analysis and hardware architecture.
On the software side, timing analysis may be improved by better taking into account information about the high-level language in which the application is designed (Lustre/SCADE). On the hardware side, a new cache replacement strategy that better suits preemptive systems (reducing cache-related preemption delay) is introduced to obtain a better global execution time bound.
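
As a simplified illustration of how such bounds compose (a textbook-style UCB/ECB sketch in Python, not the specific analysis of the talk; the constants and sets are invented):

```python
# Classic cache-related preemption delay (CRPD) bound: a preemption can only
# cost a reload for cache blocks that are useful to the preempted task AND
# evicted by the preempting one.

BLOCK_RELOAD_TIME = 10  # cycles to refill one cache block (assumed value)

def crpd_bound(useful_cache_blocks, evicting_cache_blocks):
    return BLOCK_RELOAD_TIME * len(useful_cache_blocks & evicting_cache_blocks)

def wcet_with_preemptions(wcet_alone, max_preemptions, ucb, ecb):
    """Add the per-preemption delay bound to the uninterrupted WCET bound."""
    return wcet_alone + max_preemptions * crpd_bound(ucb, ecb)

# Example: 4 useful blocks, the preempting task touches 2 of them, 3 preemptions.
print(wcet_with_preemptions(1000, 3, ucb={1, 2, 3, 4}, ecb={2, 4, 9}))  # 1060
```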

Short Bio: Since September 2010, Claire Maiza has been an associate professor (Maître de conférences) at Verimag, in the Synchrone team led by Florence Maraninchi. She works on timing analysis of real-time systems; her main topics are cache-related preemption delay, program analysis for worst-case execution time, and predictable architectures. Claire Maiza (Burguière) obtained her Ph.D. in Computer Science from Université Paul Sabatier (Toulouse, France) under the supervision of Christine Rochange (team: TRACES, lab: IRIT), on the timing analysis of branch predictors. From 2008 to 2010, she was a postdoctoral scholar at Saarland University (Saarbrücken, Germany) in the research group of Reinhard Wilhelm, where she began collaborations on cache-related preemption delay and predictability.


22 January 2014: Susan Stepney, Computer Science (York)

Title: Can a slime mould compute?

Abstract: If you have a PC, tablet, or smartphone, you have used a computer. But some people use billiard balls, beams of light, sticks of wood, chemicals, bacteria, slime moulds, spaghetti, even black holes, as computers (although some of these only in theory!).  How can these things be computers?  What can they do?  Can they do things your smartphone can't?  And why are these people using such peculiar things to compute with, anyway?


22 January 2014: Matteo G. A. Paris (University of Milan)

Title: An invitation to quantum estimation theory

Abstract: Several quantities of interest in physics are non-linear functions of the density matrix and cannot, even in principle, correspond to proper quantum observables. Any method aimed at determining the value of these quantities should resort to indirect measurements and thus corresponds to a parameter estimation problem whose solution, i.e. the determination of the most precise estimator, unavoidably involves an optimization procedure. In this lecture I review local quantum estimation theory, which allows one to quantify quantum noise in the measurements of non-observable quantities and provides tools for the characterization of signals and devices, e.g. in quantum technology. Explicit formulas for the symmetric logarithmic derivative and the quantum Fisher information of relevant families of quantum states are presented, and the connection between the optimization procedure and the geometry of quantum statistical models is discussed in some detail. Finally, a few applications, ranging from quantum optics to critical systems, are illustrated.
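
The central objects of the theory can be summarised as follows (standard textbook definitions; M denotes the number of independent repetitions):

```latex
% Symmetric logarithmic derivative L_\theta, defined implicitly by:
\partial_\theta \rho_\theta \;=\; \tfrac{1}{2}\left( L_\theta \rho_\theta + \rho_\theta L_\theta \right)
% Quantum Fisher information, and the quantum Cramer-Rao bound obeyed by any
% unbiased estimator \hat\theta built from M independent repetitions:
F_Q(\theta) \;=\; \mathrm{Tr}\!\left[ \rho_\theta L_\theta^2 \right],
\qquad
\mathrm{Var}(\hat\theta) \;\ge\; \frac{1}{M\, F_Q(\theta)}
```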


4 December 2013: John Hughes (Chalmers, University of Gothenburg)

Title: Property-based Testing with QuickCheck

Abstract: QuickCheck is a property-based testing tool which combines random test case generation with minimization of failing tests, to find bugs rapidly and present them in an easily diagnosed form. Properties are expressed in a functional language, often in the form of abstract state machine models. QuickCheck has been applied successfully to solve difficult industrial problems, including finding notorious race conditions in a database, finding eventual consistency problems in a distributed key-value store, and certifying AUTOSAR Basic Software for inclusion in cars. I will describe how QuickCheck works, and show some of the most interesting industrial successes.
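
QuickCheck itself is written in Haskell (with Quviq's version in Erlang); as an illustration of the property-based style, here is the same idea expressed with Python's hypothesis library, which likewise shrinks failing cases to minimal counterexamples:

```python
from hypothesis import given, strategies as st

def broken_sort(xs):
    return sorted(set(xs))        # deliberate bug: duplicates are dropped

@given(st.lists(st.integers()))   # property: sorting preserves length
def test_sort_preserves_length(xs):
    assert len(broken_sort(xs)) == len(xs)

# Calling the test generates random lists, finds a failure, and shrinks it to
# a minimal counterexample such as [0, 0] before reporting it.
test_sort_preserves_length()
```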

Bio: John Hughes has been a functional programming enthusiast for more than thirty years, at the Universities of Oxford, Glasgow, and since 1992 Chalmers University in Gothenburg, Sweden. He served on the Haskell design committee, co-chairing the committee for Haskell 98, and is the author of more than 75 papers, including "Why Functional Programming Matters", one of the classics of the area. With Koen Claessen, he created QuickCheck, the most popular testing tool among Haskell programmers, and in 2006 he founded Quviq to commercialise the technology using Erlang.


27 November 2013: Dimitar Kazakov (Department of Computer Science, Univ. of York)

Title: Evolutionary Paths to Compositional Language

Authors: D Kazakov and M Bartlett

Abstract: We present results from simulations studying the hypothesis that mechanisms for landmark-based navigation could have served as preadaptations for compositional language. It is argued that sharing directions would significantly have helped bridge the gap between general and language-specific cognitive faculties. The experiments in this study are built around two observations: firstly, that it is possible to formulate a range of landmark-based navigational behaviours of increasing complexity, and a corresponding range of communicative behaviours, where each pair of behaviours requires the same type of abstract computation to be carried out. Secondly, that the necessary step from possessing each navigational behaviour to using the corresponding communicative behaviour for the purposes of sharing directions is relatively simple, and the same in all cases, namely, to be able to name a landmark, and be understood (Kazakov & Bartlett, 2002). We have previously shown that environmental factors, such as the amount of food present and the speed with which it perishes, can have a great impact on the relative benefits of communication, as shown on a representative range of environments (Bartlett & Kazakov, 2005). Here a number of different levels of navigational and communicative abilities are considered, resulting in a range of possible evolutionary paths. The selective pressures for, resp. against, increased complexity in either faculty are then evaluated for selected environments. The study aims specifically to identify whether there is a viable evolutionary path leading to compositional language, and if so, under what circumstances.

The results show that environmental conditions can render a step towards more complex communication either desirable or harmful. For instance, in environments with volatile resources, there is a considerable benefit for individuals with already complex navigation in adopting a simple (proto)language allowing them to share a path towards a resource as a sequence of landmarks. At the same time, sharing directions in a way that leaves both speaker and hearer with very similar chances of exploiting the same resource appears harmful. The full set of results suggests that very specific initial conditions and sequences of changes in the environment (respectively, in the ecological niche occupied) would have been needed to select for compositional language. Subject to these conditions, however, there would be a significant evolutionary pressure in favour of a (proto)language using order, but no hierarchical structure.

D. Kazakov and M. Bartlett (2013). Evolutionary Pressures Promoting Complexity in Navigation and Communication. Interaction Studies 14(1), pp. 107-135. John Benjamins.


20 November 2013: Perdita Stevens (University of Edinburgh) 

Title: Bidirectionally tolerating inconsistency: partial transformations

Abstract: A bidirectional transformation's job is to manage consistency between information sources; in model-driven development, these are usually models of a software-intensive system. I will start by introducing bidirectional transformations and then go on to discuss some recent and ongoing work. A foundational property of bidirectional transformations is that they should be correct: that is, the transformation should succeed in restoring consistency between any models it is given. In practice, however, transformation engines sometimes fail to restore consistency, either because there is no perfectly consistent model to return or because the tool is (for reasons I'll discuss) unable to return one. I'll discuss examples, how to formalise properties that may nevertheless hold in such circumstances, and the implications.
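
In the lens-style view (an illustrative Python sketch of the correctness property, not the speaker's formalism), a bidirectional transformation over a consistency relation R provides two restoration functions, and correctness says each one re-establishes R:

```python
# Illustrative sketch: a consistency relation R between a source model and a
# view, with two restoration functions; "correctness" demands that each
# restoration re-establishes R for whatever inputs it is given.

def R(source, view):                  # consistency: the view mirrors one field
    return view == source["name"]

def restore_right(source, view):      # propagate source -> view
    return source["name"]

def restore_left(source, view):       # propagate view -> source
    return {**source, "name": view}

s, v = {"name": "Ada", "year": 1843}, "Grace"
assert R(s, restore_right(s, v))      # forward restoration succeeds
assert R(restore_left(s, v), v)       # backward restoration succeeds
```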


13 November 2013: Almut Beige (University of Leeds) 

Title: Coherent cavity networks with complete connectivity

Abstract: The standard standing-wave description of optical cavities can be used, for example, to calculate the total photon scattering rate of experiments with resonant and near-resonant laser driving. However, it cannot be used to calculate the photon scattering rates through the different sides of a two-sided optical cavity. To overcome this problem, we introduce a travelling-wave cavity Hamiltonian. When modelling a situation which can be analysed with either a fully quantum optical or a fully classical approach, this Hamiltonian is shown to yield the same predictions as Maxwell's equations. But it also applies to quantum optical experiments beyond the scope of Maxwell's equations. For example, it can be used to model the scattering of single photons through the fibre connections of coherent cavity networks. Here we use this approach to design coherent cavity networks with complete connectivity, with potential applications in quantum computing and the simulation of the complex interaction Hamiltonians of biological systems.
[1] T. M. Barlow and A. Beige, Scattering light through a two-sided optical cavity, arXiv:1307.3545 (2013).
[2] E. S. Kyoseva, A. Beige, and L. C. Kwek, New J. Phys. 14, 023023 (2012).
 
Biography:
 
Since October 2005:  Quantum Information Group,  School of Physics & Astronomy, University of Leeds
 
October 2003 - September 2004:  Department of Applied Mathematics and Theoretical Physics (DAMTP), University of Cambridge
 
October 2002 - September 2005:  Quantum Optics and Laser Science Group (QOLS), Imperial College London
 
August 2000 - September 2002:  Laser physics group, Max-Planck-Institut für Quantenoptik in Garching
 
June 1998 - July 2000:  Theoretical Quantum Optics Group, Imperial College London
 
March 1998 - May 1998:  Quantum Theory Group, University of Potsdam

December 1992 - January 1998:  Quantum Optics Group, Institute for Theoretical Physics, University of Göttingen

6 November 2013: Simone Severini (UCL) 

Title: The Graph Isomorphism Problem and Quantum Information

Abstract: I will review ideas to approach the Graph Isomorphism Problem with tools linked to Quantum Information.
 
Biography: Simone Severini is a Royal Society University Research Fellow and a member of the Department of Computer Science at UCL. He has contributed to creating new directions at the interplay between quantum theory, discrete mathematics, and complex systems (e.g., zero-error quantum information and background-independent models of gravity), with ca. 100 theoretical papers, editorial work, and the organization of major conferences on quantum information theory (e.g., TQC and AQC).

30 October 2013: Matthew Parkinson (Microsoft)

Title: Views: compositional reasoning for concurrent programs

Abstract: Compositional abstractions underlie many reasoning principles for concurrent programs: the concurrent environment is abstracted in order to reason about a thread in isolation; and these abstractions are composed to reason about a program consisting of many threads. For instance, separation logic uses formulae that describe part of the state, abstracting the rest; when two threads use disjoint state, their specifications can be composed with the separating conjunction. Type systems abstract the state to the types of variables; threads may be composed when they agree on the types of shared variables.
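
The disjoint-composition principle just mentioned is usually written as the parallel rule of concurrent separation logic (standard form, with the usual side conditions on variable interference elided):

```latex
% Threads whose specifications describe disjoint state compose via the
% separating conjunction *.
\frac{\{P_1\}\; C_1 \;\{Q_1\} \qquad \{P_2\}\; C_2 \;\{Q_2\}}
     {\{P_1 * P_2\}\; C_1 \parallel C_2 \;\{Q_1 * Q_2\}}
```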

In this talk, I will present the "Concurrent Views Framework", a metatheory of concurrent reasoning principles. The theory is parameterised by an abstraction of state with a notion of composition, which we call views. The metatheory is remarkably simple, but highly applicable: the rely-guarantee method, concurrent separation logic, concurrent abstract predicates, type systems for recursive references and for unique pointers, and even an adaptation of the Owicki-Gries method can all be seen as instances of the Concurrent Views Framework. Moreover, our metatheory proves each of these systems is sound without requiring induction on the operational semantics.


16 October 2013: Mirco Musolesi (University of Birmingham)

Title: Mining and Understanding Big (and Small) Mobile Data

Abstract: Mobile phones are increasingly equipped with sensors, such as accelerometers, GPS receivers, proximity sensors and cameras, which can be used to sense and interpret people's behaviour in real time. Novel user-centered sensing applications can be built by exploiting the availability of these technologies. Moreover, data extracted from the sensors can also be used to model and predict people's behaviour and movement patterns, providing a very rich set of multi-dimensional and linked data which can be extremely useful, for instance, for marketing applications, real-time support for policy-makers, and health interventions. In this talk I will present our recent work in this area, discussing some recent projects on large-scale data mining (and modelling) of mobile datasets, with applications to human mobility prediction and epidemic spreading containment. I will also show how a (complex) network approach can be extremely powerful for the analysis of large-scale and heterogeneous datasets.
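
As a flavour of the network view (a toy Python sketch of SIR-style epidemic spreading over a contact graph; illustrative only, not the speaker's models or data):

```python
# Build a small random contact graph, then spread an SIR-style infection.
import random

random.seed(1)
n, p_edge, p_infect = 50, 0.08, 0.3
neighbours = {v: set() for v in range(n)}
for u in range(n):
    for v in range(u + 1, n):
        if random.random() < p_edge:
            neighbours[u].add(v)
            neighbours[v].add(u)

infected, recovered = {0}, set()
while infected:
    newly = {v for u in infected for v in neighbours[u]
             if v not in recovered and v not in infected
             and random.random() < p_infect}
    recovered |= infected        # infected nodes recover after one step
    infected = newly
print(f"outbreak reached {len(recovered)} of {n} nodes")
```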

Biography: Dr. Mirco Musolesi is a Senior Lecturer at the School of Computer Science at the University of Birmingham. He received a PhD in Computer Science from University College London in 2007 and a Master's degree in Electronic Engineering from the University of Bologna in 2002. From October 2005 to August 2007 he was a Research Fellow at the Department of Computer Science, University College London. Then, from September 2007 to August 2008 he was an ISTS Postdoctoral Research Fellow at Dartmouth College, NH, USA, and from September 2008 to October 2009 a Postdoctoral Research Associate at the Computer Laboratory, University of Cambridge. Before joining Birmingham, he was a Lecturer at the University of St Andrews. His research interests lie at the interface of different areas, namely ubiquitous computing, large-scale data mining, machine learning, networked systems, and network science (in particular, social and spatial networks).


09 October 2013: James Steele (University College London)

Title: Neanderthals and the origins of language: some computer modeling approaches

Abstract: The origins of spoken language are still shrouded in mystery, because the fossil and archaeological evidence is so sparse and indirect. In this talk I will summarise our recent anatomical and simulation-based study of the reconstruction of the Neanderthal vocal tract and its articulatory potential [1]. I will also allude more briefly to recent work by others on the evolutionary relationship between manual gesture and speech, and to the relationship between simple linguistic syntax and action recognition, since this may also be of interest to the computer science community.

[1] Barney, A., Martelli, S., Serrurier, A., & Steele, J. (2012). Articulatory capacity of Neanderthals, a very recent and human-like fossil hominin. Philosophical Transactions of the Royal Society B: Biological Sciences, 367(1585), 88-102.

Contact us

Dr Dimitar Kazakov

Dr Dimitar Kazakov

Seminars organiser

dimitar.kazakov@york.ac.uk
+44 (0)1904 325676