Social networks and swarm behaviour
Daniel Franks (University of York)
24 October 2012
In both computer science and biology there has been much interest in swarm behaviour and animal collective motion, and computer models have been central to developing an understanding of these systems. In current models, however, agents are assumed to interact in an egalitarian manner, with no preferences or biases for particular individuals. Yet studies of sociality in a wide range of real animals have demonstrated that social interactions are structured, with biased preferences between certain individuals. Collective motion and animal social networks have each, individually, received much attention, but they have not previously been brought together in a combined framework. I will discuss a) how social networks can affect animal collective motion and b) how we can derive and analyse social structure from observations of animal collective motion. Along the way I will discuss killer whale social networks, research-inspired games, ant networks, and social robots, and complain about incorrect statistical analysis.
Daniel Franks has a background in computer science, but applied computer science to the study of biology for his PhD at the University of Leeds. He then moved to the University of Sussex to work as a research fellow in a social science institute, using computational techniques to study human social networks. In 2006 he moved to York as an RCUK fellow in the York Centre for Complex Systems Analysis (YCCSA), in a joint appointment between computer science and biology. He is now a lecturer in both departments.
Teaching Semantics with a Proof Assistant or No more LSD trip proofs
Tobias Nipkow (Technical University Munich, Germany)
31 October 2012, hosted by Dr Detlef Plump
The gulf between many computer science students and rigorous proofs is well known and much lamented. Teachers are frequently confronted with student "proofs" that look more like LSD trips than coherent chains of logic. In this talk I will present a new Programming Language Semantics course that bridges the gulf with the help of a proof assistant, Isabelle. During the first quarter of the semester the students are introduced to machine-checked proofs. The rest of the course covers a wide spectrum of topics centred around a simple imperative language: operational semantics, compilation, types, program analysis and Hoare logic. At the end of my talk I will give a (predictably positive) evaluation of the approach.
Tobias Nipkow received his Diplom (MSc) in Informatics from the Technical University Darmstadt in 1982 and his PhD from the University of Manchester in 1987. He was a post-doc at MIT and Cambridge University before he became a professor at the Technical University Munich in 1992 where he has the chair for Logic and Verification. He is best known for his work in term rewriting (he published the standard reference 'Term Rewriting and All That' together with Franz Baader) and his work on the theorem prover Isabelle (leading to the book 'Isabelle/HOL. A Proof Assistant for Higher-Order Logic', co-authored by Paulson and Wenzel).
A Fresh Look at Parkinson’s Disease
Peter Wellstead (Hamilton Institute, NUIM, Maynooth, Ireland)
12 November 2012, hosted by Jim Austin
Parkinson’s disease (PD) presents enormous research challenges: 200 years after Parkinson’s defining essay, we still do not know its causes, nor can we cure it. PD is particularly difficult because it is not a disease in the classical sense - it is a systems disorder associated with a range of multi-factorial mechanisms and disease pathways. As a result, vulnerability to PD depends upon a number of factors that can occur in many combinations and with various levels of genetic predisposition. Conventional disease research was not designed to deal with such diversity - a new perspective is required.
This seminar describes a fresh look at PD in the form of a systems approach involving mathematical modelling, computer-based simulation and analysis of disease dynamics. The central idea is to: (i) develop an integrative framework for consolidating and quantifying knowledge; (ii) use simulation and analysis to accelerate and focus experimental research.
The seminar describes how this can be done for the two key phases of PD:
By providing a unifying framework for PD, we increase our understanding of a condition where many isolated facts are known but few links have been made. In particular, an integrative look brings two novel ideas: an energy deficit theory for neural vulnerability to PD, and the theoretical prediction of a neurochemical trigger – the Parkinson’s switch – responsible for pathogenesis.
A Decade of Research on Constraint Modelling and Reformulation: The Quest for Abstraction and Automation
Alan M Frisch (Artificial Intelligence Group, York)
21 November 2012
This talk reviews research in the field of constraint modelling and reformulation, focusing on the key themes of abstraction and automation. Looking to the future, the talk identifies key issues that must be confronted to further the quest for abstraction and automation. This is a high-level talk that assumes no background in constraint modelling.
What makes things similar?
Ulrike Hahn (University of Cardiff)
12 December 2012
The notion of similarity is central to theorizing in a wide range of cognitive domains, from perception through memory to reasoning and problem-solving, and it figures prominently in computational approaches to tasks from these domains. This makes it imperative to gain an understanding of similarity that is computationally explicit enough to support detailed modelling. The talk will introduce the main approaches to similarity that have been put forward within psychology, highlight strengths and weaknesses of each, and detail what general insights each of these approaches has delivered.
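By way of illustration (not material from the talk), two canonical families of similarity model from the psychology literature can be sketched in a few lines: geometric models, where similarity decays with distance in a feature space, and Tversky's set-theoretic contrast model, where shared features raise similarity and distinctive features lower it. The function names below are illustrative only:

```python
from math import sqrt

def euclidean_similarity(a, b):
    """Geometric/spatial model: similarity decays with distance in feature space."""
    d = sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return 1.0 / (1.0 + d)

def tversky_contrast(a, b, theta=1.0, alpha=0.5, beta=0.5):
    """Feature-based model (Tversky's contrast model): common features raise
    similarity, distinctive features of either object lower it."""
    a, b = set(a), set(b)
    return theta * len(a & b) - alpha * len(a - b) - beta * len(b - a)
```

The contrast between the two is exactly the kind of issue the talk addresses: the geometric model obeys metric axioms such as symmetry, while the contrast model can be asymmetric when alpha and beta differ.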
Opportunities for Data-driven Optimisation
Barry O'Sullivan (University College Cork, Ireland)
16 January 2013
Constraint programming (CP) is a technology for solving combinatorial optimisation problems. A major generic challenge facing CP is scalability, largely because the problems to which it is usually applied are computationally intractable (NP-hard). While CP has been successfully applied in domains such as scheduling, timetabling, planning, inventory management and configuration, many instances of these problems are extremely challenging for traditional CP methods because of their hardness. However, an emerging dimension of scale relates to problem size and the volume of data available that is relevant to solving a particular instance, e.g. extremely large domain sizes, or very large extensionally defined constraints of high arity. In 2009 information on the web was doubling every 18 months; it is now believed that this doubling occurs in less than 12 months. This exponential growth in data, often referred to as the "big data" challenge, presents CP with major opportunities.
In this talk we make the case that optimisation is moving from being model-driven to being data-driven. We will discuss a number of challenging application domains for data-driven optimisation and the technical challenges that these present to the research community.
Barry O’Sullivan holds the Chair of Constraint Programming at the Department of Computer Science at University College Cork. He is the current Head of Department and is Director of the Cork Constraint Computation Centre. Professor O'Sullivan has been a Science Foundation Ireland (SFI) Principal Investigator since 2006. He is a Fellow of ECCAI (the European Coordinating Committee for Artificial Intelligence) and a Senior Member of AAAI (the Association for the Advancement of Artificial Intelligence). He is the Past President of the Association for Constraint Programming and Chairman of the Artificial Intelligence Association of Ireland. More details can be found at http://osullivan.ucc.ie
Quantum walks and algorithms
Viv Kendon (University of Leeds)
30 January 2013
Quantum walks are now a standard part of the quantum programmer's toolbox. I will give an introduction to quantum walks and their algorithmic uses, with some asides on how they can be used to model physical phenomena and how they relate to current experiments.
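For readers new to the topic, the simplest example is the discrete-time coined walk on the integer line. A minimal sketch of the standard Hadamard-coin construction (illustrative only, not material from the talk):

```python
from math import sqrt

# Discrete-time coined quantum walk on the integer line with a Hadamard coin.
# Unlike a classical random walk, amplitudes (not probabilities) propagate,
# so paths interfere and the position distribution spreads ballistically.

def hadamard_walk(steps):
    # amp[x] = (amplitude on coin state |L>, amplitude on coin state |R>)
    amp = {0: (1 / sqrt(2), 1j / sqrt(2))}  # symmetric initial coin state
    for _ in range(steps):
        new = {}
        for x, (l, r) in amp.items():
            # Hadamard coin: |L> -> (|L>+|R>)/sqrt(2), |R> -> (|L>-|R>)/sqrt(2)
            nl, nr = (l + r) / sqrt(2), (l - r) / sqrt(2)
            # Conditional shift: left component moves left, right moves right.
            a, b = new.get(x - 1, (0.0, 0.0))
            new[x - 1] = (a + nl, b)
            a, b = new.get(x + 1, (0.0, 0.0))
            new[x + 1] = (a, b + nr)
        amp = new
    # Position distribution: squared norm of the amplitudes at each site.
    return {x: abs(l) ** 2 + abs(r) ** 2 for x, (l, r) in amp.items()}
```

After one step the walker is at -1 or +1 with equal probability, just like a classical walk; the quantum behaviour only shows up from the second step onwards, when amplitudes begin to interfere.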
A route to quantumness in mesoscopic systems: Through the (quantum) looking glass, and what Alice found there
Mauro Paternostro (Queen's University of Belfast)
6 February 2013
After getting her Master's in Theoretical Physics, while dozing on the couch of her living room on a lazy summer afternoon, Alice wonders what she can actually do with the optomechanical cavities that her parents gave her as presents. She finds herself amazed by the possibility of enforcing nonclassical correlations and creating quantum states of optomechanical systems affected by strong noise and decoherence. The girl thus starts wondering about the possibility of building surreal optomechanical networks, challenging the boring "quantum repeaters" paradigm, and of building quantum interfaces between mechanical oscillators and other systems (ultracold atoms, BECs, massive molecules), but only if you ask them gently!
In this seminar, we will try to find out what Alice discovered when she woke up...
Teleporting to the Future
Samuel Braunstein (Computer Science, University of York)
13 February 2013
Teleportation is what we usually associate with the fuzzy disappearance and re-appearance of space voyagers such as Captain Kirk after the familiar command "beam me up, Scotty". Since its early use in science fiction, the term teleportation has been used to refer to the process by which objects are transferred from one location to another without actually making the journey along the way. The "disembodied" nature of teleportation raises some baffling questions. What is actually sent? Is it the original system that is reconstructed at the remote site, or merely a copy?
So long as teleportation remains within the remit of science fiction, these questions may seem rather philosophical. But quantum teleportation, unlike its science fiction inspiration, is a fact. It has been achieved in laboratories the world over for the transfer of single photons, atoms and even beams of light. How might this new technology begin to affect our world and our lives? Is it possible to scale teleportation up so that one day we could teleport people? What might we learn from that, and to what use might teleportation be put by generations to come?
In this lecture, Samuel Braunstein will explain what teleportation is all about, and explore how the science of teleportation might take us all to places where no man has gone before.
Entry to the lecture is by free ticket only - register for yours online.
Immersion in digital games
Paul Cairns (Computer Science, University of York)
27 February 2013
Immersion is a term used widely to describe the experience of playing digital games. However, it is not clear what exactly people mean by it, what it is, and what it is in games that influences it. In this talk, I'll describe studies I have been involved with to define and measure immersion and to position it in relation to other concepts relevant to gaming experiences. I'll then go on to show how a lot of what influences immersion is players' perception of the gaming situation. Along the way, I'll address some of the methodological issues related to understanding user experiences.
The Leeds digital pathology workstation: designing, developing and evaluating for diagnostic use
Darren Treanor (Leeds Teaching Hospitals NHS Trust)
6 March 2013
Digital pathology is a new medical imaging technique which uses so-called virtual slide scanners to produce high resolution (200,000 dpi) images of tissue. I will describe our experience in the Leeds virtual pathology project of using digital pathology in medical research and education, as well as describing work undertaken with collaborators at the School of Computing at Leeds in image analysis, 3D imaging, and human-computer interaction and systems design.
More information is available at http://www.virtualpathology.leeds.ac.uk/research/
Continuous-Variable Quantum Cryptography with Entanglement in the Middle
Christian Weedbrook (University of Toronto, Canada)
13 March 2013
We analyze the performance of continuous-variable quantum key distribution protocols where the entangled source originates not from one of the trusted parties, Alice or Bob, but from the malicious eavesdropper in the middle. This is in contrast to the typical simulations where Alice creates the entangled source and sends it over an insecure quantum channel to Bob. By using previous techniques and identifying certain error correction protocol equivalences, we show that Alice and Bob do not need to trust their source, and can still generate a positive key rate. Such a situation can occur in a quantum network where the untrusted source originated in between the two users.
Novice interaction designer behaviour: a comparison of UK and Botswana participants
Helen Sharp (Open University)
1 May 2013, hosted by Paul Cairns
Designing interactive products for different cultures is challenging. The need to design products for the target end user group is well-understood in HCI and Interaction Design, but it is clear that cultural factors influence not just the design of the product itself but also the definition of usability and the process of design. The UNITE project, funded by the Leverhulme Trust, is a partnership between The Open University and Botho College in Botswana. The project is investigating novice interaction design behaviour in two cohorts of students studying the same Interaction Design module, one in the UK and one in Botswana. We aim to identify simple behaviours that can be taught to novice interaction designers from both cultural settings to improve their designs, and to identify socio-cultural factors affecting design.
To achieve this aim we have conducted protocol and diary studies in both countries, and for analysis we have augmented Schön's design reflection cycle with analogy. Analysis of the protocol studies has yielded some interesting insights, which I will share in this talk.
Graph-Based State Spaces
Arend Rensink (University of Twente, Netherlands)
8 May 2013, hosted by Richard Paige
We claim that behavioural modelling on the basis of graph transformation offers several advantages over the more traditional, state vector-based approach. The basic idea is to use graphs to represent individual system states, and graph transformation rules to specify steps in the system's evolution. The resulting models can then be used as a basis for verification, for instance through model checking.
The power and visual nature of the underlying graph formalism make for rapid prototyping and compact yet easily understandable models; moreover, the principle of graph transformation is as widely applicable as the principle of representing states as graphs, which is very wide indeed. On the other hand, it is not to be denied that there are also many challenges, especially in reaching a satisfactory performance.
In this talk I will discuss the pros and cons of graph-based state spaces, touching upon diverse aspects such as algorithmic complexity, reduction techniques, logics for expressing properties, and abstraction. The approach has been implemented in the state space generation tool GROOVE (Graphs for Object-Oriented Verification), which I will also demonstrate.
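The basic recipe of the approach, graphs as states and rewrite rules as transitions, can be sketched in a few lines. This is an illustrative toy (a hypothetical token-passing rule over labelled edges), not GROOVE's actual machinery:

```python
from collections import deque

# A state is a graph: a frozenset of labelled edges (source, label, target).
# A rule is a function from a graph to its successor graphs.

def rule_pass_token(graph):
    """Hypothetical rewrite rule: move the 'token' edge along a 'next' edge."""
    nxt = {s: t for (s, lbl, t) in graph if lbl == "next"}
    for (s, lbl, t) in graph:
        if lbl == "token" and t in nxt:
            yield frozenset((graph - {(s, lbl, t)}) | {(s, lbl, nxt[t])})

def explore(initial, rules):
    """Breadth-first generation of the graph-based state space."""
    seen, frontier = {initial}, deque([initial])
    while frontier:
        g = frontier.popleft()
        for rule in rules:
            for h in rule(g):
                if h not in seen:
                    seen.add(h)
                    frontier.append(h)
    return seen
```

On a three-node ring with a single token, exploration yields exactly three states, one per token position; in a real tool the hard part, as the abstract notes, is doing this efficiently, with graph isomorphism checking and reduction techniques.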
Towards Bridging the Gap between Structural and Statistical Pattern Recognition
Horst Bunke (University of Bern, Switzerland - RAEng distinguished visitor at York)
15 May 2013, hosted by Edwin Hancock
The discipline of pattern recognition is traditionally divided into the statistical and the structural approach. Statistical pattern recognition is characterized by representing objects by means of feature vectors, while the structural approach uses symbolic data structures, such as strings, trees, and graphs. This talk will focus on graphs for object representation. When comparing graphs with feature vectors, one notices an increased flexibility and representational power provided by graphs. On the other hand, the domain of graphs lacks mathematical operations needed to build pattern recognition and machine learning algorithms. Consequently, there is a shortage of suitable tools for graph classification, clustering, and related tasks.
In the first part of the talk we briefly review advances in the field of graph-based pattern recognition that aim at making algorithmic tools originally developed in statistical pattern recognition available for graphs. We will focus on graph embedding and graph kernels. The second part of the talk will give two concrete application examples, demonstrating the usefulness of graph embedding and graph kernels in the fields of handwriting recognition and brain state decoding.
Neural control of behaviour: Lessons from a worm
Netta Cohen (University of Leeds)
22 May 2013
Biological systems solve computational problems in remarkable ways. Animals, in particular, have nervous systems that allow them to sense and act on the environment in ways that bewilder the imagination. Amongst the simplest animal nervous systems are those of nematodes or roundworms. Here I will present some recent progress in modelling the neural control of behaviour in one nematode: the model system C. elegans. I will begin by breaking the problem down into sub-problems, limited both in the behaviours and neural circuits being studied, with examples from locomotion, navigation, sensory integration and behavioural "choice". For each example, I will outline the engineering challenges, and discuss the respective roles of the neuronal dynamics, the animal's embodiment and its situatedness in the environment. In addition I will highlight the sometimes counterintuitive nature of some of the engineering solutions proposed, and the implications, both for our understanding of the neural control of behaviour in this animal, and for translating these models into biologically inspired engineering solutions in mobile robots and beyond.
Population-based microbial computing
Martyn Amos (Manchester Metropolitan University)
29 May 2013, hosted by Susan Stepney
Synthetic biology is an emerging research field, in which engineering principles are applied to natural, living systems. A major goal of synthetic biology is to harness the inherent "biological nanotechnology" of living cells for the purposes of computation, production, or diagnosis. As the field evolves, it is gradually moving away from a single-cell approach (akin to using standalone computers) to a distributed, population-based approach (rather like using networks of connected machines). In this talk we present several recent results from our group, describing various aspects of this new form of biological engineering. Specifically, we show, using computational studies, how reconfigurable logic devices may be constructed using bacteria, and how these may be used as the basis for a "client-server" model of microbial computing.
Martyn Amos is a Professor of Novel Computation in the School of Computing, Mathematics and Digital Technology at Manchester Metropolitan University, and an expert on natural computation and DNA computing. He was born in Hexham, Northumberland in 1971. He graduated with a degree in Computer Science from Coventry University in 1993, before earning a Ph.D. in DNA computing in 1997, from the University of Warwick. He then held a Leverhulme Trust Special Research Fellowship at the University of Liverpool, before taking up permanent academic appointments, first at the University of Liverpool (2000–2002) and then the University of Exeter (2002–2006).
Taming Concurrency by Thinking Locally
Mike Dodds (University of York)
5 June 2013
Concurrent behaviour is ubiquitous in modern computing, in multicore CPUs, in large-scale data-processing, and in the distributed systems that make up the internet. For efficiency reasons, such systems are based on algorithms where many processes can share access to the same data-structure. These fine-grained algorithms are simultaneously extraordinarily subtle, and key to system reliability. In this talk, I will discuss using formal mathematics to guarantee that these algorithms behave correctly. To do this, I will use an approach called local reasoning, which allows irrelevant complexity to be hidden. I will show how local reasoning can capture straightforward programmer intuitions such as data locality, resource sharing, and thread protocols. I will also explore some of the theory underlying concurrent local reasoning, and discuss its applications to real-world systems.
Monte Carlo Tree Search
Peter Cowling (University of York)
12 June 2013
Monte Carlo Tree Search (MCTS) has advanced the state-of-the-art in Artificial Intelligence (AI) decision making for several challenging adversarial domains, most famously the game of Go. MCTS combines the power of game tree search, the generality of statistical machine learning and the simplicity of random simulation. MCTS produces high quality decisions even in the absence of expert human knowledge, making it ideal for domains where such knowledge is difficult or impossible to formulate (although when knowledge is available, MCTS can easily be enhanced to exploit it). MCTS is an anytime algorithm, able to effectively use as much or as little computational time as is available.
This talk will begin with an overview of the MCTS algorithm itself, describing its operation and surveying the highlights of published work to date. I will then present our work on applying MCTS to games with uncertainty and hidden information, and to video-game-like domains with real-time constraints. These approaches have won international competitions and, in collaboration with an industrial partner, have been deployed in a successful mobile game with more than 2.5 million downloads. I will conclude with some prospects for future work, including potential applications to complex decision problems where there is an adversary or a hostile environment, in domains such as commerce, biology and security.
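To make the algorithm's skeleton concrete (this sketch is illustrative and not material from the talk), here is a minimal UCT-style MCTS for a toy one-pile Nim game, showing the four standard phases: selection, expansion, simulation and backpropagation:

```python
import math
import random

# Toy domain: one-pile Nim. Players alternately remove 1 or 2 stones;
# whoever takes the last stone wins. (Optimal play: leave a multiple of 3.)

def moves(pile):
    return [m for m in (1, 2) if m <= pile]

class Node:
    def __init__(self, pile, player, parent=None, move=None):
        self.pile, self.player = pile, player      # 'player' is to move here
        self.parent, self.move = parent, move
        self.children, self.untried = [], moves(pile)
        self.visits, self.wins = 0, 0.0            # wins for parent's player

    def ucb1_child(self, c=1.4):
        # UCT selection rule: exploit win rate, explore rarely tried moves.
        return max(self.children,
                   key=lambda n: n.wins / n.visits
                   + c * math.sqrt(math.log(self.visits) / n.visits))

def mcts(pile, player, iterations=2000):
    root = Node(pile, player)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend through fully expanded nodes via UCB1.
        while not node.untried and node.children:
            node = node.ucb1_child()
        # 2. Expansion: add one unexplored child.
        if node.untried:
            m = node.untried.pop(random.randrange(len(node.untried)))
            child = Node(node.pile - m, 1 - node.player, node, m)
            node.children.append(child)
            node = child
        # 3. Simulation: play out the rest of the game at random.
        pile_, p = node.pile, node.player
        winner = 1 - node.player if pile_ == 0 else None
        while winner is None:
            pile_ -= random.choice(moves(pile_))
            if pile_ == 0:
                winner = p
            p = 1 - p
        # 4. Backpropagation: credit wins from each parent's perspective.
        while node:
            node.visits += 1
            if node.parent and winner == node.parent.player:
                node.wins += 1
            node = node.parent
    return max(root.children, key=lambda n: n.visits).move
```

Note how the random playout stands in for expert knowledge: with a few thousand iterations this recovers the optimal "leave a multiple of 3" strategy on small piles, without ever being told the rule.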