Abstracts 2015-16

Autumn 2015

14th October: Detecting the Threat from Inside

Sadie Creese (Oxford)

For the past five or six years my group has been increasingly concerned about, and interested in, the threat we face from insiders in our information infrastructures. This research was seeded by our investigations into cloud computing and the risks we would be likely to face there. We concluded that insiders would be an extremely difficult threat to combat, since the very nature of an insider in that context is itself problematic. This, coupled with the economies of scale that might reward attackers, led us to conduct more focused research into detection in cloud-service environments. More recently, in the past three years, we have expanded our investigations to consider the potential for anomaly detection in corporate infrastructures, taking into account how we might incorporate an understanding of the human element and cyber indicators of behavioural trends that may correlate with risk. This seminar will explain that journey and the resulting detection architecture, and reflect on what we have learnt so far.

Biography: Sadie Creese is Professor of Cybersecurity in the Department of Computer Science at the University of Oxford. She is Director of the Global Cyber Security Capacity Centre at the Oxford Martin School, and a co-Director of the Institute for the Future of Computing at the Oxford Martin School. Her research experience spans time in academia, industry and government. She is engaged in a broad portfolio of cyber security research spanning situational awareness, visual analytics, risk propagation and communication, threat modelling and detection, network defence, dependability and resilience, and formal analysis. She has numerous research collaborations with other disciplines and has been leading inter-disciplinary research projects since 2003. Prior to joining Oxford in October 2011 Creese was Professor and Director of e-Security at the University of Warwick’s International Digital Laboratory. Creese joined Warwick in 2007 from QinetiQ, where she most recently served as Director of Strategic Programmes for QinetiQ’s Trusted Information Management Division. Recent publications include papers on insider threat detection, visual analytics for cyber attack, cyber risk propagation prediction, identity attribution across physical and cyber spaces, personal privacy in the face of big data, vulnerability of identities in social networking contexts, and trustworthiness metrics for openly sourced data.

 

21st October: Lightweight verification in industry: AMD's remote-scope-promotion GPU design

Mark Batty (Kent) 

At the start of the design of a processor, bugs are cheap to fix, but testing resources are very expensive; in the later design phases, testing becomes easier, but changes become incredibly expensive. At the same time, processor vendors increasingly provide a relaxed interface to memory, exposing intricate details of the memory system to the programmer for performance reasons, and sometimes introducing elusive bugs in the process. To counter this problem, we advocate lightweight formal verification of processor designs very early in the design process.

In this talk I will present a case study (to appear in OOPSLA '15): a team of hardware architects at AMD recently published a prospective GPU design that incorporates a novel feature: "remote-scope promotion". Remote-scope promotion extends the GPU programming language, OpenCL, with the ability to combine two popular heterogeneous programming features: work-stealing and memory scopes. We worked with the AMD team to formalise their design, both at the level of the OpenCL memory model, and at the level of the underlying hardware architecture. We uncovered and fixed several significant bugs in their implementation (including a violation that breaks the message-passing programming idiom), and we propose an alternative implementation that is definitely simpler and probably more efficient. Our work demonstrates the value, and viability, of applying formal methods early in the hardware design process.
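The message-passing idiom mentioned above is the standard litmus test for this kind of violation. As a rough illustration (a sketch of litmus-test checking in general, not the paper's formalisation), the following Python fragment enumerates all interleavings of the test under sequential consistency and confirms that the outcome "flag seen as 1 but data seen as 0" never occurs; the implementation bug described in the talk made an analogous outcome observable.

    # Message-passing (MP) litmus test: the producer writes data then flag;
    # the consumer reads flag then data. Under sequential consistency the
    # outcome (flag_seen, data_seen) == (1, 0) is forbidden.

    PRODUCER = [("write", "data", 1), ("write", "flag", 1)]
    CONSUMER = [("read", "flag"), ("read", "data")]

    def interleavings(a, b):
        """All order-preserving merges of two instruction streams."""
        if not a:
            yield list(b); return
        if not b:
            yield list(a); return
        for rest in interleavings(a[1:], b):
            yield [a[0]] + rest
        for rest in interleavings(a, b[1:]):
            yield [b[0]] + rest

    def run(schedule):
        """Execute one interleaving; return what the consumer saw."""
        mem = {"data": 0, "flag": 0}
        seen = {}
        for op in schedule:
            if op[0] == "write":
                mem[op[1]] = op[2]
            else:
                seen[op[1]] = mem[op[1]]
        return seen["flag"], seen["data"]

    outcomes = {run(s) for s in interleavings(PRODUCER, CONSUMER)}
    assert (1, 0) not in outcomes          # forbidden under sequential consistency
    print(sorted(outcomes))                # [(0, 0), (0, 1), (1, 1)]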

 

28th October: Situating Computational Thinking as a Discipline in Schools

Christopher Power (York)

Living in a world surrounded by devices that flash, chirp and tweet at us all day long, it is easy for the general public to see the impact that computing technology has had on our society. However, what is not apparent is the decades of work that have gone into understanding how computation works and the means by which some of the hardest problems have been approached.

As we have matured as a field, there is an increasing recognition that computational thinking, as a discipline, is an essential collection of concepts and skills for understanding the world around us that complements approaches in empirical sciences, social sciences and humanities.

This has given rise to the Computing At School initiative, which has successfully integrated Computing as a core component of the national curriculum. What was previously the primary domain of universities, namely the teaching of concepts around computational thinking and related skills in programming, is now being delivered across the country from primary school onwards.

In this talk, I will present the history of CAS and York's involvement in this initiative. I will discuss the new curriculum and situate it in the practice now emerging in primary and secondary schools. Finally, I will focus on how the teaching of computing in schools differs from our typical approaches, and how this will impact the knowledge students have coming into our programmes. I will end by leaving time for discussion of what this means for our future engagement with undergraduates, our admissions criteria, and even how we perceive our place in society.

Biography: Christopher Power is a member of the HCI Research Group and has been the coordinator of our CAS activities for the past 5 years. He has recently secured funding for the University of York to become a Regional Centre for Yorkshire and the Humber, through which we are coordinating CPD in 11 local authorities.

 

4th November: Separating the concerns of rely and guarantee in reasoning about concurrent programs

Ian Hayes (The University of Queensland - visiting Newcastle University until the end of the year)

The rely-guarantee approach to reasoning about concurrent programs pioneered by Cliff Jones makes use of a rely condition r (a binary relation) to record the expected interference from the environment: the interference steps must satisfy r between their before and after states. Complementing the rely assumption is a guarantee condition g (a binary relation) that the process ensures holds for all its program steps.

This talk considers rely and guarantee conditions as separate concerns. Separate laws are developed for guarantees and relies in isolation, and these are then combined to build general laws for refining concurrent programs. The generalisation of relies and guarantees from binary relations to processes is then considered. Taking an algebraic perspective allows one to apply the laws to a range of concurrency models, including shared-memory, event-based and hybrid systems. The algebra of parallel processes and rely quotients is conceptually similar to that of multiplication and division, with parallel composition playing the role of multiplication and the rely quotient that of division; only familiarity with the former is assumed.
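For readers who have not met the approach, the standard Jones-style judgement and parallel composition rule (background to the talk, not its new laws) can be summarised as follows: a specification {p, r} c {g, q} says that from initial states satisfying p, if every environment step satisfies the rely r, then every program step of c satisfies the guarantee g and the final state satisfies q. Two such processes compose in parallel by each tolerating the other's guarantee as additional rely:

    % Jones-style parallel composition rule: each thread's guarantee is
    % absorbed into the other thread's rely condition.
    \[
    \frac{\{p,\; r \lor g_2\}\; c_1\; \{g_1,\; q_1\}
          \qquad
          \{p,\; r \lor g_1\}\; c_2\; \{g_2,\; q_2\}}
         {\{p,\; r\}\; c_1 \parallel c_2\; \{g_1 \lor g_2,\; q_1 \land q_2\}}
    \]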

Biography: Ian Hayes is a professor of computing science at the University of Queensland, Brisbane, Australia. His research interests include methods for specifying and reasoning about concurrent and real-time systems based on the rely-guarantee approach and time bands.

 

25th November: Folding Domain-Specific Languages: Deep and Shallow Embeddings

Jeremy Gibbons (University of Oxford)

A domain-specific language can be implemented by embedding within a general-purpose host language. This embedding may be deep or shallow, depending on whether terms in the language construct syntactic or semantic representations. The deep and shallow styles are closely related, and intimately connected to folds; we explore that connection.
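As a small illustration of the distinction (a hypothetical mini-DSL of arithmetic expressions, not an example taken from the talk), the sketch below gives the same language in both styles. Note that the deep evaluator is literally a fold over the syntax tree, while the shallow embedding bakes one particular fold algebra directly into the constructors:

    # Deep embedding: terms build a syntax tree; each interpretation is a
    # separate fold over that tree.
    class Lit:
        def __init__(self, n): self.n = n

    class Add:
        def __init__(self, l, r): self.l, self.r = l, r

    def fold(lit, add, t):
        """Structural recursion (a fold) over the deep representation."""
        if isinstance(t, Lit):
            return lit(t.n)
        return add(fold(lit, add, t.l), fold(lit, add, t.r))

    eval_deep = lambda t: fold(lambda n: n, lambda x, y: x + y, t)
    show_deep = lambda t: fold(str, lambda x, y: f"({x} + {y})", t)

    # Shallow embedding: terms *are* their semantics; the constructors bake
    # in one fold algebra, so only one interpretation is directly available.
    def lit(n):    return n
    def add(x, y): return x + y

    deep    = Add(Lit(1), Add(Lit(2), Lit(3)))
    shallow = add(lit(1), add(lit(2), lit(3)))

    assert eval_deep(deep) == shallow == 6
    print(show_deep(deep))   # (1 + (2 + 3))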

Biography: Jeremy Gibbons is Professor of Computing at the University of Oxford, and Director of the part-time professional Software Engineering Programme there. His research area is programming languages, especially functional programming, patterns in programs, and programming methodology. He is Editor-in-Chief of the Journal of Functional Programming, and has recently stood down as Vice-Chair of ACM SIGPLAN and as Chair of IFIP Working Group 2.1.

 

Spring 2016

13th January: Black Box Algebra

Alexandre Borovik (University of Manchester) 

A black box (BB) is a device or an algorithm that hides inside itself an algebraic object, say a finite group or finite ring; on request from an attacker (A), it picks random elements from this object and manipulates them according to instructions it receives from the attacker. A typical dialogue looks like this:

A: pick random x
BB: done
A: compute a square root of x, call it y.
BB: done
A: compute a square root of x, call it z.
BB: done
A: compute t = yz^{-1}
BB: done
A: is t = 1?
BB: no
... etc.
The aim of the attacker is to figure out which particular algebraic object is hidden inside the BB.
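To make the set-up concrete, here is a toy black box in Python (purely illustrative: the hidden object is the group of units modulo a secret n, and the operations are simplified to multiplication and inversion rather than the square roots used in the dialogue above). The attacker sees only register names and yes/no answers, never n itself:

    import random
    from math import gcd

    class BlackBox:
        """Hides the group of units modulo a secret n; the attacker only
        names registers and requests operations and equality tests."""
        def __init__(self, secret_n):
            self._n = secret_n          # hidden from the attacker
            self._regs = {}             # named registers holding elements

        def pick_random(self, name):
            while True:
                x = random.randrange(1, self._n)
                if gcd(x, self._n) == 1:
                    self._regs[name] = x
                    return "done"

        def multiply(self, out, a, b):
            self._regs[out] = self._regs[a] * self._regs[b] % self._n
            return "done"

        def invert(self, out, a):
            self._regs[out] = pow(self._regs[a], -1, self._n)  # modular inverse
            return "done"

        def is_identity(self, a):
            return self._regs[a] % self._n == 1

    bb = BlackBox(secret_n=101)     # the attacker never sees 101
    bb.pick_random("x")
    bb.multiply("y", "x", "x")      # y = x^2
    bb.invert("z", "y")             # z = y^{-1}
    bb.multiply("t", "y", "z")      # t = y * y^{-1}
    print(bb.is_identity("t"))      # True, whatever the hidden group is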

The BB set-up naturally arises in homomorphic encryption and in finite computational algebra.

My talk will focus on the unusual data structures and ways of organising calculations that arise in BB algebra. I will illustrate them using some elementary-looking problems that remained inaccessible for decades and were solved only very recently.

This is joint work with Sukru Yalcinkaya (Istanbul University).

 

27th January: SibylFS: formal specification and oracle-based testing for POSIX and real-world file systems

Tom Ridge (University of Leicester) 

Systems depend critically on the behaviour of file systems, but that behaviour differs in many details, both between implementations and between each implementation and the POSIX (and other) prose specifications. Building robust and portable software requires understanding these details and differences, but there is currently no good way to systematically describe, investigate, or test file system behaviour across this complex multi-platform interface.

In this talk we show how to characterise the envelope of allowed behaviour of file systems in a form that enables practical and highly discriminating testing.

SibylFS is part of a long line of work over the last 15 years (NetSem, relaxed memory models, processor models) on giving specifications for complicated real-world systems. I will also discuss some of the things we learned along the way.
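As a heavily simplified illustration of oracle-based testing (an assumed toy interface, not SibylFS's actual specification language), the key idea is that the specification acts as an oracle that accepts or rejects an observed trace of calls and results captured from a real file system:

    def spec_allows(trace):
        """Toy behavioural envelope: a read must succeed exactly when the
        path has been opened and not yet closed. Real specifications admit
        nondeterminism, so they accept a *set* of behaviours per trace."""
        open_paths = set()
        for call, path, result in trace:
            if call == "open":
                ok = path not in open_paths
                if (result == "ok") != ok:
                    return False
                if ok:
                    open_paths.add(path)
            elif call == "read":
                if (result == "ok") != (path in open_paths):
                    return False
            elif call == "close":
                open_paths.discard(path)
        return True

    # A trace captured from a real file system is replayed against the oracle:
    observed = [("open", "/a", "ok"), ("read", "/a", "ok"),
                ("close", "/a", "ok"), ("read", "/a", "err")]
    assert spec_allows(observed)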

 

3rd February: Argumentation, Trust and Interaction

Simon Parsons (King's College London) 

In the past few years I have been interested in the use of argumentation in decision-making. The basic model is that arguments are made for and against decision options, and the final decision is made by weighing up the arguments on both sides. My work has concentrated on the situation in which the decision depends upon information that is not all entirely trustworthy, and so there is value in including information about the (not all entirely trustworthy) sources. I will describe a model that captures this situation, an implementation of the model, and a user study that evaluated the implementation.
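As a minimal sketch of the weighing step (an illustrative toy, not the model from the talk), one can score each option by the trust-weighted sum of the arguments for it minus those against it:

    # Each argument records the option it bears on, its stance, and the
    # trustworthiness of its source (hypothetical values in [0, 1]).
    arguments = [
        ("treat", "pro", 0.9),
        ("treat", "con", 0.4),
        ("wait",  "pro", 0.6),
        ("wait",  "con", 0.7),
    ]

    def score(option):
        """Trust-weighted balance of pro and con arguments for an option."""
        return sum(t if stance == "pro" else -t
                   for opt, stance, t in arguments if opt == option)

    best = max({opt for opt, _, _ in arguments}, key=score)
    print(best, {o: score(o) for o in ("treat", "wait")})
    # treat wins: score 0.5 versus about -0.1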

 

10th February: From Digital Forensics to Regular Expressions

Howard Chivers (York) 

Research challenges in Digital Forensics are concerned with making defensible inferences about the actions of individuals from low-level events. However, existing tooling supports the extraction of files rather than the discovery of event-based artefacts such as messages or system actions, and the regular-expression evaluation performance of most current mainstream language libraries proved to be a major barrier to efficient artefact discovery. The result was a research agenda to create a new regular expression library suited to this application.

The seminar will outline the Digital Forensics research problem, why the performance of existing language libraries is problematical, and then describe the design of a new regular expression library for Python. The new library makes use of a well-established implementation approach to the evaluation of non-deterministic automata, and also introduces new implementation ideas. The most important of these is a Regular Profile, which provides a new language interpretation for regular expressions. Profiles allow the compiler to establish important properties of the regular expression, and can be implemented as compact DFAs; these properties allow a range of performance optimisations, including achieving a practical time performance which grows only logarithmically with the size of the expression to be evaluated.
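The classic set-of-states simulation of a non-deterministic automaton is in the same spirit as the well-established approach mentioned above (the library's actual design, including Regular Profiles, goes well beyond this sketch). In rough Python terms, with a hand-built NFA for the regex a(b|c)*d:

    # transitions: state -> list of (character predicate, next state)
    NFA = {
        0: [(lambda ch: ch == "a", 1)],
        1: [(lambda ch: ch in "bc", 1), (lambda ch: ch == "d", 2)],
    }
    START, ACCEPT = 0, 2

    def matches(text):
        """Breadth-first simulation: track the *set* of live states, so the
        run is linear in len(text) with no backtracking blow-up."""
        states = {START}
        for ch in text:
            states = {nxt for s in states
                          for pred, nxt in NFA.get(s, [])
                          if pred(ch)}
            if not states:          # no live state: fail early
                return False
        return ACCEPT in states

    assert matches("abcbd") and matches("ad")
    assert not matches("abdx")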

 

17th February: Data Stream Processing Management in the Cloud

Eva Kalyvianaki (City University London)

As users of “big data” applications expect fresh results, we witness a new breed of stream processing systems (SPS) that are designed to scale to large numbers of cloud-hosted machines. Such systems face new challenges: (i) to benefit from the “pay-as-you-go” model of cloud computing, they must scale out on demand, acquiring additional virtual machines (VMs) and parallelising operators when the workload increases; (ii) failures are common with deployments on hundreds of VMs—systems must be fault-tolerant with fast recovery times, yet low per-machine overheads. An open question is how to achieve these two goals when stream queries include stateful operators, which must be scaled out and recovered without affecting query results.

In this talk I will describe a novel approach that externalises operator state explicitly to the SPS through a set of state management primitives. State externalisation enables us to handle both scale-out and recovery from operator failures using the same primitives. Our system periodically checkpoints operator state and saves it to upstream VMs. Failed operators are recovered by restoring checkpointed state on a new VM.
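A minimal sketch of the state-externalisation idea (an assumed interface, not the system's actual primitives): once operator state is made explicit, one checkpoint/restore pair can serve both scale-out and failure recovery:

    import copy

    class CountingOperator:
        """A stateful streaming operator: counts occurrences of each key."""
        def __init__(self, state=None):
            self.state = state or {}

        def process(self, key):
            self.state[key] = self.state.get(key, 0) + 1

        def checkpoint(self):
            """Externalise state so the SPS can ship it to an upstream VM."""
            return copy.deepcopy(self.state)

        @classmethod
        def restore(cls, snapshot):
            """Recover the operator on a fresh VM from a checkpoint."""
            return cls(copy.deepcopy(snapshot))

    op = CountingOperator()
    for k in "aabc":
        op.process(k)
    saved = op.checkpoint()          # periodically shipped upstream
    op.process("a")                  # ... then the hosting VM fails ...
    recovered = CountingOperator.restore(saved)
    assert recovered.state == {"a": 2, "b": 1, "c": 1}
    # (A real SPS also replays post-checkpoint tuples buffered upstream,
    # so the lost "a" would be reprocessed.)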

Bio: Eva is a Lecturer (Assistant Professor) in the Department of Computer Science at City University London. Before this, she was a post-doctoral researcher in the Department of Computing, Imperial College London. She holds a PhD from the Computer Laboratory (SRG/NetOS group) at the University of Cambridge. Her interests span the areas of Cloud Computing, Data Stream Processing, Autonomic Computing, Distributed Systems and Systems Research in general.

 

24th February: General Game Playing AI

Sam Devlin (York DC hub) 

The history of Artificial Intelligence (AI) is intertwined with achievements in playing specific games: for example, Deep Blue's defeat of chess champion Garry Kasparov, the solution to Checkers, and the believable behaviour of the Xenomorph in Alien: Isolation. Motivated by the latest success in this area (DeepMind's recent defeat of a European champion Go player), I will discuss open grand challenges in game-playing AI, with a focus on a proposed framework for developing agents that can autonomously learn to play any digital game. The talk will also touch upon the mutually beneficial relationship we have established with the games industry, and how it could be used to enable high-impact research across the whole department.

Biography: Sam Devlin received an MEng degree in Computer Systems and Software Engineering from the University of York in 2009. In 2013, he completed his PhD on multi-agent reinforcement learning at the University of York and visited Oregon State University funded by a Santander International Connections Award. His research interests are focussed on machine learning, data mining and artificial intelligence. He was a Research Associate on the NEMOG project from 2013-2015, working on data mining for collective game intelligence. He is now a transitional fellow in the Digital Creativity Hub and Department of Computer Science.

 

Summer 2016

27th April: Whole Systems Energy Transparency (or: More Power to Software Developers!)

Kerstin Eder (Bristol) 

Energy efficiency is now a major (if not the major) concern in electronic systems engineering. While hardware can be designed to save a modest amount of energy, the potential for savings is far greater at the higher levels of abstraction in the system stack. The greatest savings are expected from energy consumption-aware software. This seminar emphasizes the importance of energy transparency from hardware to software as a foundation for energy-aware system design. Energy transparency enables a deeper understanding of how algorithms and coding choices affect the energy consumption of a computation when executed on hardware. It is a key prerequisite for informed design space exploration and helps system designers to find the optimal tradeoff between performance, accuracy and energy consumption of a computation. Promoting energy efficiency to a first-class software design goal is therefore an urgent research challenge. In this seminar I will outline our approach, techniques and recent results towards giving "more power" to software developers. We will cover energy monitoring of software, energy modelling at different abstraction levels, including insights into how data affects the energy consumption of a computation, and static analysis techniques for energy consumption prediction.
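To make energy transparency concrete, a toy instruction-level model (illustrative costs, not numbers from the project) estimates the energy of a computation as the sum, over instruction classes, of execution count times per-instruction cost, letting a developer compare two codings of the same computation:

    COST_NJ = {        # hypothetical costs in nanojoules per instruction
        "alu":    0.5,
        "mul":    1.8,
        "load":   3.2,
        "store":  3.0,
        "branch": 0.9,
    }

    def energy_nj(profile):
        """profile: instruction class -> dynamic execution count."""
        return sum(COST_NJ[cls] * n for cls, n in profile.items())

    # Two functionally equivalent codings can have very different estimated
    # energy, which is exactly what transparency is meant to expose:
    naive     = {"alu": 9000, "load": 4000, "branch": 3000}
    optimised = {"alu": 6000, "load": 1500, "branch": 1000}
    print(energy_nj(naive), energy_nj(optimised))   # 20000.0 8700.0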

Biography: Kerstin Eder is a Reader in Design Automation and Verification at the Department of Computer Science of the University of Bristol. She set up the Energy Aware COmputing (EACO) initiative (http://www.cs.bris.ac.uk/Research/eaco/) and leads the Verification and Validation for Safety in Robots research theme at the Bristol Robotics Laboratory (http://www.brl.ac.uk/vv). Kerstin has gained extensive experience of verifying complex microelectronic designs while working with leading semiconductor design and Electronic Design Automation companies. She is a Principal Investigator of the EC FP7 FET MINECC (Minimizing Energy Consumption of Computing to the Limit) collaborative research project ENTRA (Whole Systems Energy Transparency) which promotes energy efficiency to a first class software design goal utilizing advanced energy modelling and static analysis techniques. At the BRL she is the Principal Investigator of two EPSRC projects: RIVERAS (Robust Integrated Verification of Autonomous Systems) and ROBOSAFE (Trustworthy Robotic Assistants).

 

4th May: Multiagent Learning, Planning & Influences

Frans Oliehoek (Liverpool) 

Multiagent systems (MASs) hold great promise, as they potentially offer more efficient, robust and reliable solutions to a great variety of real-world problems. Consequently, multiagent planning and learning have been important research topics in artificial intelligence for nearly two decades. On the learning side, however, relatively little research has addressed stochastic, partially observable environments, where agents need to act based on their individual observations only; and planning for such settings remains burdened by the curse of dimensionality.

In this talk, I will give an overview of some approaches to multiagent learning and planning that I have pursued in recent years. A common thread in these is the attempt to capture the locality in the way that agents may influence one another. Formalizations of such influences have led to vast improvements in planning tractability, and I will argue that they will be critical to the advancement of multiagent learning too.

 

11th May: Generating Situations to Break Autonomous Robots

Rob Alexander (York) 

Autonomous cars, planes and drones are the focus of enormous effort and a great deal of hype. It’s clear, though, that we don’t have the verification and validation technology to make them a safe reality. Over the past five years, I’ve been working on ways to address this by putting such systems in simulated situations that present them with diverse challenges. In this talk I’ll describe the key work I’ve led in this area, starting with my initial work on human-piloted aircraft, then moving on to autonomous air vehicles and autonomous cars. I’ll explain the concept of situation coverage, and why it’s crucial for autonomous vehicle testing.
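The situation-coverage idea can be sketched as follows (with hypothetical situation parameters): discretise the space of test situations, then measure what fraction of that space a test campaign has actually exercised:

    import itertools, random

    # A toy discretised situation space for an autonomous road vehicle.
    WEATHER  = ["clear", "rain", "fog"]
    ONCOMING = ["none", "car", "cyclist"]
    JUNCTION = ["T", "cross", "roundabout"]
    SPACE = list(itertools.product(WEATHER, ONCOMING, JUNCTION))

    def coverage(executed):
        """Fraction of distinct situations the campaign has exercised."""
        return len(set(executed) & set(SPACE)) / len(SPACE)

    # A purely random generator plateaus; the coverage measure tells us
    # when the campaign has stopped challenging the vehicle with anything new.
    executed = [random.choice(SPACE) for _ in range(20)]
    print(f"situation coverage: {coverage(executed):.0%}")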

 

18th May: From Deep Blue To Watson - A Brief History Of IBM's Grand Challenges

James Luke (IBM) 

In this session, I will present the stories of two of the most fascinating grand challenges in AI Research. This will include a brief history of each challenge and a technical summary of IBM's approach and the resulting solutions. The lecture will include a discussion on how the technology developed in these projects was then applied in business applications.

Bio: James Luke is the Chief Architect for Watson Tools and has over 20 years’ experience in the delivery of machine learning and cognitive systems. In his IBM career, James has held several key leadership roles including being the Chief Architect for i2 intelligence products and the lead architect for the Text Analytics Group. Other roles have included leading the development of advanced analytics solutions for UK Government Customers in the Defence & Security Practice, directing the technical solutions of the EMEA Knowledge & Content Management (K&CM) Practice as the lead Architect for the Practice and managing the Applied Science & Technology team based at IBM Hursley.

Prior to joining IBM, James worked as an Artificial Intelligence (AI) consultant with Data Sciences (Data Sciences was taken over by IBM in 1996). Whilst at Data Sciences, James managed a number of AI projects for both military and commercial clients. Examples of this work range from the development of adaptive systems for EW classification to the identification of wavering customers for a major superstore (Safeway). James completed his first degree, in Electronic Engineering Systems, at the Royal Naval Engineering College whilst serving as an Officer in the Weapons Engineering branch. In 2003 he completed a PhD with the Image, Speech and Intelligent Systems (ISIS) group of the University of Southampton, researching the application of intelligent agents in information systems protection. James is an experienced conference speaker and a keen inventor with many patent applications filed and 8 patents awarded.

 

25th May: Employing the tool USE (UML-based Specification Environment) for Model-Based Engineering

Martin Gogolla (Bremen) 

MBE (Model-Based Engineering) proposes to develop software by taking advantage of models, in contrast to traditional code-centric development approaches. If models play a central role in development, model properties must be formulated and checked early on the modeling level, not late on the implementation level. We discuss how to validate and verify model properties in the context of modeling languages like the UML (Unified Modeling Language) combined with textual restrictions formulated in the OCL (Object Constraint Language).
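In Python terms, the essence of such a check is small (USE itself works on UML class models with OCL invariants, such as "context Employee inv: self.salary >= 0", evaluated over object populations; the sketch below is only an analogy, not the tool's interface):

    class Employee:
        """A stand-in for a UML class with attributes."""
        def __init__(self, name, salary, boss=None):
            self.name, self.salary, self.boss = name, salary, boss

    def check(invariant, objects):
        """Evaluate an OCL-style invariant over every instance; return
        the names of the violating objects."""
        return [o.name for o in objects if not invariant(o)]

    staff = [Employee("ada", 5000), Employee("bob", -1)]
    print(check(lambda e: e.salary >= 0, staff))
    # ['bob'] -- the property fails early, on the modelling level,
    # rather than late, on the implementation level.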

Bio: Martin Gogolla is a professor of Computer Science at the University of Bremen, Germany, and head of the Research Group Database Systems. His research interests include software development with object-oriented approaches, formal methods in system design, semantics of languages, and formal specification. Before joining the University of Bremen he worked for the University of Dortmund and the Technical University of Braunschweig. His professional activities include publications in journals and conference proceedings; publication of books; talks at university and industrial colloquia; refereeing for journals and conferences; organising workshops and conferences; membership of international and national programme committees; and contributions to international computer science standards (OCL as part of UML). Martin Gogolla actively participates in the MODELS conference (as PB member, PC member, PC chair, SC member, SC chair, workshop organizer, Educators' Symposium organizer, Doctoral Symposium organizer), and is involved in the organisation of the OCL workshops and the ICMT and TAP conferences. In his group, foundational work on the semantics of, and tooling for, UML, OCL and general modeling languages has been carried out. The group has been developing the OCL and UML tool USE (UML-based Specification Environment) for about 15 years. The tool is widely accepted internationally and nationally, and is employed for research, teaching and software production.

 

1st June: My 40-year Quest for Intelligence. And More.

Alan Frisch (York) 

This talk takes a trip through the history of AI as seen through my eyes over the past 40 years. The trip will make a few stops to recall some of my own involvement in this quest for intelligence.

At the end of the trip, I will take a contrarian look at the present and future of AI. As the recent, often dramatic successes of AI are already well chronicled in the popular press, I will instead tell the less-known story of the apparently simple things that AI can't do ... and probably won't do in my lifetime.

 

8th June: Human-centered video analysis for media production

Ioannis Pitas (Bristol)

Recent advances in technological equipment have dramatically increased the amount of data captured for professional media production (e.g., movies, special effects, etc.) and diversified its nature, using multiple sensor types (e.g., 3D scanners, multi-view cameras, HD/UHD cameras, RGBD cameras, HDR cameras, motion capture, etc.). Therefore, digital media analysis is fully justified as a big data analysis problem. As expected, most such media production data are acquired in order to describe human presence and activity, and are exploited during movie and games post-production. Basic problems in human-centered media analysis are face detection/tracking, face clustering/recognition, facial expression recognition and human activity recognition. In this lecture, a short overview of recent research efforts in digital media analysis and description using machine learning methods will be given, primarily for action recognition. Such methods are very powerful in analyzing, representing and classifying single-view and multi-view video content. Supervised, semi-supervised and unsupervised algorithms will be presented.

Bio: Prof. Ioannis Pitas (IEEE fellow, IEEE Distinguished Lecturer, EURASIP fellow) received the Diploma and PhD degree in Electrical Engineering, both from the Aristotle University of Thessaloniki (AUTH), Greece. He is Professor in the Department of Electrical and Electronic Engineering, University of Bristol, UK. He served as a Visiting Professor at several Universities.

His current interests are in the areas of image/video processing, intelligent digital media, machine learning, human centered interfaces, affective computing, computer vision, 3D imaging and biomedical imaging. He has published over 830 papers, contributed to 44 books in his areas of interest, and edited or (co-)authored another 10 books. He has also been a member of the programme committees of many scientific conferences and workshops. In the past he served as Associate Editor or co-Editor of eight international journals and General or Technical Chair of four international conferences. He participated in 68 R&D projects, primarily funded by the European Union, and is/was principal investigator/researcher in 40 such projects. He has 22,350+ citations and an h-index of 71+ (Google Scholar).

 

15th June: The Software Engineering Team Project

Richard Paige & Fiona Polack (York)

The Software Engineering Team Project (SEPR) is a substantial component of the second year curriculum. It is also the introduction to software engineering principles and practices for our undergraduate students. The SEPR designers and instructors will explain what they do in SEPR, why they do it in this way, why they don’t do certain things, and what is novel about the teaching methods they use. The talk will feature glamorous pictures and ducks.

Contact us

Dr Dimitar Kazakov

Seminars organiser

dimitar.kazakov@york.ac.uk
+44 (0)1904 325676