Prospective Research Students

Are you interested in pursuing a research degree (MSc, MPhil or PhD) in Programming Languages and Systems? Then we would like to hear from you. Let us know what interests you most, and we'll be happy to discuss possible topics of research. Some suggested topics follow, with contact details. But you are not restricted to these. If you can't see a topic that matches your interests, mail a potential supervisor who works in a related area, or mail Colin Runciman and he will try to put you in touch with someone suitable.

Suggested Topics

Subject Area: Functional Programming (Colin Runciman)

Topic: A verified implementation of Core Haskell

Functional programs are concise, high-level declarations amenable to substitutive reasoning, so at source level they are well suited to the development of verified software. But what about the implementation level? Widely used optimising compilers, such as GHC for Haskell, achieve excellent performance. But they are large and complex, as are their associated run-time systems. So it seems infeasible to verify the correctness of such implementations. Smaller and simpler implementations, using minimalist abstract machines such as the one described in the Reduceron paper, offer performance that is good enough for many applications. Verifying the correctness of such an implementation is a feasible but challenging goal.
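
The flavour of such a minimalist implementation can be sketched in a few lines of Haskell. The following toy call-by-name evaluator is illustrative only: it is not the Reduceron's machine, and all names here are hypothetical.

```haskell
-- A toy core language and call-by-name evaluator. Illustrative only:
-- substitution assumes globally unique bound-variable names, so no
-- capture avoidance is attempted.
data Expr = Var String
          | Lam String Expr
          | App Expr Expr
          | Lit Int
          | Add Expr Expr
  deriving (Show, Eq)

-- Substitute e for free occurrences of x.
subst :: String -> Expr -> Expr -> Expr
subst x e (Var y)   | x == y    = e
                    | otherwise = Var y
subst x e (Lam y b) | x == y    = Lam y b
                    | otherwise = Lam y (subst x e b)
subst x e (App f a) = App (subst x e f) (subst x e a)
subst _ _ (Lit n)   = Lit n
subst x e (Add a b) = Add (subst x e a) (subst x e b)

-- Reduce an expression to a value (a Lam or a Lit).
eval :: Expr -> Expr
eval (App f a) = case eval f of
                   Lam x b -> eval (subst x a b)
                   _       -> error "applied a non-function"
eval (Add a b) = case (eval a, eval b) of
                   (Lit m, Lit n) -> Lit (m + n)
                   _              -> error "added a non-number"
eval e = e
```

For example, `eval (App (Lam "x" (Add (Var "x") (Var "x"))) (Lit 21))` reduces to `Lit 42`. Proving that such an evaluator agrees with a formal semantics is the small-scale analogue of the verification task proposed above.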

Matthew Naylor and Colin Runciman, The Reduceron: Widening the von Neumann Bottleneck for Graph Reduction using an FPGA, In Proc. 19th Intl. Symposium on Implementation and Application of Functional Languages (IFL 2007), Revised Selected Papers, to appear in Springer LNCS, 2008.

Topic: Profile-directed Optimisation and Tracing

Even programs that look simple and elegant may have complex computational behaviour. Functional programmers may write neat and concise recursion equations, but evaluation based on these equations may require a surprising amount of time and memory. Past work at York has produced new techniques such as heap profiling, now widely used, but interpreting and exploiting profile information is still quite an art. Possible advances include the automatic use of profiling information to reduce execution costs, and linking low-level resource-based profiles with source-level traces of computation.
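
A standard folklore example (not York-specific) of the gap between neat recursion equations and actual resource behaviour:

```haskell
import Data.List (foldl')

-- Both functions compute the same result by the same recursion
-- scheme, but the lazy left fold builds a chain of unevaluated (+)
-- thunks as long as the list before any addition happens. Heap
-- profiling makes exactly this kind of hidden cost visible.
leakySum :: [Int] -> Int
leakySum = foldl (+) 0    -- lazy accumulator: O(n) heap of thunks

strictSum :: [Int] -> Int
strictSum = foldl' (+) 0  -- strict accumulator: constant heap
```

Both return the same value; only a resource profile distinguishes them, which is why profile information is needed to direct this kind of optimisation.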

Colin Runciman and Niklas Röjemo, Heap profiling for space efficiency, In J. Launchbury, E. Meijer, and T. Sheard, eds., Advanced Functional Programming, 2nd Intl. School, Springer LNCS 1129, pages 159-183, Olympia, WA, August 1996.

Topic: Finding Simple Sources for Target Failures

When a software fault occurs it may be hard to diagnose until a simple and repeatable instance of failure is obtained. Recent work at York has provided tools to find the smallest examples of function applications that lead to failures at specified program points. But often computations are staged: the result of one stage defines computation in the next, as when a compiler generates object code from source code. Finding simple sources that compile successfully but lead to a run-time failure is a challenging problem. In practice the search might often be informed by known examples of such failure for non-simple sources.
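
A deliberately naive version of this search can be sketched as enumeration of candidate inputs in order of increasing size, far simpler than the tools developed at York; the names `smallestFailure` and `boolLists` are illustrative.

```haskell
-- Search for a smallest input that makes a property fail: enumerate
-- candidates size by size and return the first counterexample.
-- Diverges if the candidate stream is infinite and no failure exists.
smallestFailure :: [[a]] -> (a -> Bool) -> Maybe a
smallestFailure candidatesBySize prop =
  case [ x | xs <- candidatesBySize, x <- xs, not (prop x) ] of
    []      -> Nothing
    (x : _) -> Just x

-- Example input space: lists of Bools, enumerated by length.
boolLists :: [[[Bool]]]
boolLists = [ sequence (replicate n [False, True]) | n <- [0 ..] ]
```

For instance, `smallestFailure boolLists (\xs -> length xs < 2)` yields `Just [False,False]`, a smallest list falsifying the property. The research problem is the staged version: the "property" is a whole compile-then-run pipeline, and the search must be guided rather than exhaustive.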

Matthew Naylor and Colin Runciman, Finding inputs that reach a target expression, In Proc. Seventh IEEE International Working Conference on Source Code Analysis and Manipulation, pages 133-142, Washington, DC, USA, 2007.

Topic: Type classes with verified algebraic laws

The functional language Haskell has type classes as one means of avoiding unduly type-specific programming. Each class defines the names and type signatures for a collection of methods defined for each type in the class. For example, <= is among the methods defined for each type in the Ord class, and == for each type in the Eq class. All Haskell requires, however, is that methods have the right names and types. There is no guarantee, for example, that <= is an ordering and == an equivalence in the usual mathematical sense. Yet these properties may be implicitly assumed in generic routines. The goal is to allow such properties to be stated, and to verify them automatically at compile-time.
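
The laws in question can at least be stated today as ordinary Boolean properties; the research goal is to discharge them at compile time rather than by sampling. A sketch for `Ord`:

```haskell
-- The partial-order laws that Haskell's type system cannot enforce
-- for (<=), stated as testable properties. A law-aware compiler
-- would verify these once per instance instead of testing them.
propReflexive :: Ord a => a -> Bool
propReflexive x = x <= x

propAntisymmetric :: Ord a => a -> a -> Bool
propAntisymmetric x y = not (x <= y && y <= x) || x == y

propTransitive :: Ord a => a -> a -> a -> Bool
propTransitive x y z = not (x <= y && y <= z) || x <= z
```

A generic sorting routine silently assumes all three; an instance violating them produces wrong results with no type error, which is precisely the gap this topic addresses.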

Mark P. Jones, Functional Programming with Overloading and Higher-Order Polymorphism, Advanced Functional Programming, First Intl. School, Springer LNCS 925, Bastad, Sweden, 1995.

Klaus Didrich, Compiler Support for Specification and Justification -- Description of a Case Study, Tech. Report 99-18, Technical University of Berlin, 1999.

Topic: Domain-Specific Computing over Traces of Functional Programs

The most informative traces of lazy functional computation are not just linear sequences of events but dependency graphs between intermediate expressions. The York Hat tool builds traces of this kind. Traces are rich sources of information for the programmer, but they are too large and complex to be explored unaided. Some special-purpose tools have been written to help programmers use traces to locate program faults. But a more general way to exploit the available information would be to design and implement a domain-specific language for computing with traces.
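
To fix intuitions, a trace node might be modelled as a rendered expression together with the nodes it was derived from; a trace DSL would supply such structures and queries over them as primitives. The types and names here are hypothetical and much simpler than Hat's actual trace format.

```haskell
-- A hypothetical miniature of a dependency-graph trace: each node
-- records a rendered expression and the nodes it depends on.
data TraceNode = TraceNode { label :: String, parents :: [TraceNode] }

-- One query a trace language might offer: all ancestors of a node,
-- nearest first (preorder over the dependency graph).
ancestry :: TraceNode -> [String]
ancestry n = label n : concatMap ancestry (parents n)
```

For a node `TraceNode "3" [TraceNode "1+2" [TraceNode "sum [1,2]" []]]`, the query `ancestry` returns `["3","1+2","sum [1,2]"]`, tracing a result back to its source expression.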

Koen Claessen, Colin Runciman, Olaf Chitil, John Hughes and Malcolm Wallace, Testing and tracing lazy functional programs using QuickCheck and Hat, in Advanced Functional Programming, Fourth Intl. School (AFP'02), Springer LNCS 2638, pages 59-99, 2003.

Subject Area: Programming by Graph Transformation (Detlef Plump)

GP is a rule-based, nondeterministic programming language for solving graph problems at a high level of abstraction, freeing programmers from dealing with low-level data structures. GP's design aims at syntactic and semantic simplicity, to facilitate formal reasoning about programs. The language contains just four control constructs (rule application, sequential composition, branching and looping), has a simple formal semantics, and is computationally complete.
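
As a rough picture of how small the language is, the four constructs fit in a four-constructor datatype. This is a hypothetical Haskell rendering for illustration, not GP's real syntax or semantics.

```haskell
-- A hypothetical abstract syntax mirroring GP's four control
-- constructs. `Rule` is a placeholder for a graph transformation
-- rule, whose details are elided here.
newtype Rule = Rule String

data Prog
  = Apply Rule            -- apply a rule (fails if it has no match)
  | Seq Prog Prog         -- sequential composition
  | Branch Prog Prog Prog -- if condition succeeds then ... else ...
  | Loop Prog             -- apply a program as long as possible

-- Size of a program, counting control constructs: one illustration
-- of the structural induction the small syntax makes easy.
size :: Prog -> Int
size (Apply _)      = 1
size (Seq p q)      = 1 + size p + size q
size (Branch c t e) = 1 + size c + size t + size e
size (Loop p)       = 1 + size p
```

Formal reasoning about GP programs proceeds by exactly this kind of induction over the four constructs, which is what the design aims to make tractable.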

Topic: Speeding up GP

This project will attempt to speed up GP's current implementation, which is based on a compiler and the York abstract machine, to rival the fastest graph transformation tools currently available. Activities to make GP faster include the introduction of a powerful type system for graphs to support the analysis of graphs at run time, the implementation of dynamic search plans for graph pattern matching, and the use of so-called rooted graph transformation rules which can be matched in constant time.

G. Manning and D. Plump, The York Abstract Machine, To appear in Proc. Graph Transformation and Visual Modelling Techniques (GT-VMT 2006), Electronic Notes in Theoretical Computer Science, Elsevier.

G. Manning and D. Plump, The GP Programming System, To appear in Proc. Graph Transformation and Visual Modelling Techniques (GT-VMT 2008), Electronic Communications of the EASST.

G. Taentzer et al., Generation of Sierpinski Triangles: A Case Study for Graph Transformation Tools, To appear in Applications of Graph Transformation with Industrial Relevance (AGTIVE 2007), Selected Papers, Springer LNCS.

Topic: Static analysis of graph programs

This project will develop an automatic program analysis for detecting confluence and termination in graph programs. A nondeterministic program is confluent (or 'don't care' nondeterministic) if all its executions on a given input will produce the same result. Confluence is essential for run-time efficiency because GP's backtracking mechanism can be turned off for confluent subprograms without compromising the semantics. Sufficient conditions for confluence will be developed by generalizing so-called critical pair techniques from sets of graph transformation rules to programs. A simple static analysis of termination will complement the analysis of confluence.

G. Manning and D. Plump, The GP Programming System, To appear in Proc. Graph Transformation and Visual Modelling Techniques (GT-VMT 2008), Electronic Communications of the EASST.

D. Plump, Confluence of Graph Transformation Revisited, In Processes, Terms and Cycles: Steps on the Road to Infinity: Essays Dedicated to Jan Willem Klop on the Occasion of His 60th Birthday, Springer LNCS 3838, 280-308, 2005.

Subject Area: Safe Pointer Programs (Detlef Plump)

The type systems of current imperative programming languages are too weak to detect non-trivial pointer errors which violate the shapes of linked data structures such as cyclic lists, balanced trees, etc. One goal is to provide powerful 'shape types' by which programmers can specify and manipulate pointer structures, together with a static analysis which guarantees that pointer programs preserve shapes.

Topic: Shape types

The experimental language C-GRS extends C with shape types; it comes with a translation to standard C and an abstraction algorithm that allows shape types to be checked for shape safety. This project will extend the scope of the shape-checking algorithm and also implement tools for visualising and testing shape types. On the more theoretical side, the project should attempt to clarify the relationship between the C-GRS approach and logic-based approaches to shape safety.

A. Bakewell, D. Plump and C. Runciman, Specifying Pointer Structures by Graph Reduction, In Applications of Graph Transformations with Industrial Relevance (AGTIVE 2003), Selected Papers, Springer LNCS 3062, 30-44, 2004.

A. Bakewell, D. Plump and C. Runciman, Checking the Shape Safety of Pointer Manipulations, In Relational Methods in Computer Science (RelMiCS 7), Selected Papers, Springer LNCS 3051, 48-61, 2004.

M. Dodds and D. Plump, Extending C for Checking Shape Safety, In Proc. Graph Transformation for Verification and Concurrency (GT-VC 2005), Electronic Notes in Theoretical Computer Science 154(2), 95-112, Elsevier, 2006.

Subject Area: Formal Methods and Modelling (Jeremy Jacob)

Topic: Refining security protocol designs

Security protocols are important in communicating identity and trust around the Internet. They are a vital component of any secure transaction. While typically only a few messages long, and devoid of looping, they have proved hard to get right.

A previous doctoral student at York, Chen Hao, built a system (described in his PhD thesis) that takes a formal description of the assumptions and goals of an interchange and automatically builds Java classes to implement a correct protocol. Chen's tool is a pipeline. The pipeline's front end turns the statement of what is wanted into a high-level design (using heuristic search techniques). The back end embeds some simple refinement rules to turn the high-level design into a low-level one and then into Java.

While the refinement rules for the back end work, they are very simple, and the code produced could be more efficient. This project would start with Chen's solution and improve it by designing and implementing more sophisticated techniques for refining high-level designs to low-level designs (and thence to compilable code).

Topic: Searching for systems with a categorical semantics

Genetic programming is a technique whereby programs are found by heuristically searching the space of all programs for one that solves the task in hand. This is quite a tricky technique that has had some successes. Ordinary genetic algorithms represent the `genome' of the solution as a sequence; genetic programming uses trees instead, as they are a natural fit with the parse tree of the program.
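
The tree-genome idea can be made concrete in a few lines. Here crossover splices one parent's subtree into the other at given preorder positions; deterministic indices stand in for the random choices a real genetic programming system would make, and all names are illustrative.

```haskell
-- Tree-shaped genomes and subtree crossover. Subtrees are addressed
-- by preorder index, with 0 denoting the whole tree.
data Tree = Leaf Int | Node Tree Tree deriving (Show, Eq)

-- All subtrees in preorder.
subtrees :: Tree -> [Tree]
subtrees t@(Leaf _)   = [t]
subtrees t@(Node l r) = t : subtrees l ++ subtrees r

-- Replace the subtree at preorder index i with s. The helper threads
-- the number of positions still to skip; when it reaches 0 it splices.
replaceAt :: Int -> Tree -> Tree -> Tree
replaceAt i s t = snd (go i t)
  where
    go 0 _          = (-1, s)
    go n (Leaf k)   = (n - 1, Leaf k)
    go n (Node l r) = let (n1, l') = go (n - 1) l
                          (n2, r') = go n1 r
                      in (n2, Node l' r')

-- Crossover: graft subtree i of parent a at position j of parent b.
crossover :: Int -> Int -> Tree -> Tree -> Tree
crossover i j a b = replaceAt j (subtrees a !! i) b
```

The open question for this topic is what the analogous splice operation should be when the genome is a directed graph with a categorical semantics rather than a tree.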

This project would look at systems described using categorical semantics (for example, José Fiadeiro's CommUnity). Such descriptions typically use directed graphs to represent systems. This poses whole new challenges for heuristic search.

General Enquiries

General enquiries about the graduate programmes of the Department of Computer Science should be made to the Postgraduate Secretary:

This page was last modified on 23 February 2012, at 23:54.
