Dr. Chris Bailey

Advanced Computer Architecture Group Research and Studentship Places


Introduction

My main research interests are:

  • stack-based processors and novel CPU design
  • novel forms of instruction-level parallelism
  • code optimization and translation
  • Dark Silicon
  • Large-Scale Neural Network Hardware Platforms

 

Possible Projects

Note that these are possible projects; there is also the opportunity to suggest your own topic and develop a PhD project based on ideas or interests you may already have.

ILP Stack Machines

Stack machines have traditionally been limited to a narrow form of processing: serial program execution. However, my recent work has shown that stack machines in fact have great potential for executing programs with instruction-level parallelism, both as super-pipelined and superscalar machines. The aim of this project would be to develop simulators and code optimization techniques to enhance the performance of stack ILP machines.
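To illustrate the serial baseline that ILP techniques would improve on, a minimal stack-machine interpreter might look like the following sketch. The instruction set here is invented for illustration and is not any particular research machine:

```python
# Minimal stack-machine interpreter: each instruction implicitly operates
# on the top of an operand stack, so execution is naturally serial.

def run(program):
    """Execute a list of (opcode, operand) tuples on an operand stack."""
    stack = []
    for op, arg in program:
        if op == "push":                 # push a literal value
            stack.append(arg)
        elif op == "add":                # pop two operands, push their sum
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "mul":                # pop two operands, push their product
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack

# (2 + 3) * 4
prog = [("push", 2), ("push", 3), ("add", None), ("push", 4), ("mul", None)]
```

Because every instruction depends on the stack top left by its predecessor, finding instructions that can safely issue in parallel is the central challenge such a project would address.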

Binary Meta-Translation with Stack Machines

The use of binary meta-translation to convert binary code from a source architecture (e.g. an Intel CPU) to that of a target machine (such as a stack machine) allows code to be executed transparently on new platforms, with code translation performed in real time. Investigating the viability of such techniques when applied to translating from register-based to stack-based machine architectures is an important area for future study.
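The core translation step can be sketched as follows. Both the toy register ISA and the toy stack ISA here are invented for illustration, not taken from any real binary-translation system:

```python
# Illustrative register-to-stack translation: a three-address register
# instruction becomes a short sequence of stack operations. Registers are
# modelled as named slots: 'load' pushes a slot's value onto the stack,
# 'store' pops the stack top back into a slot.

def translate(insn):
    """Translate an ('op', dest, src1, src2) tuple into stack code."""
    op, dest, src1, src2 = insn
    return [
        ("load", src1),   # push first source operand
        ("load", src2),   # push second source operand
        (op, None),       # operate on the two stack-top values
        ("store", dest),  # pop the result into the destination slot
    ]

# 'add r1, r2, r3' becomes: load r2; load r3; add; store r1
stack_code = translate(("add", "r1", "r2", "r3"))
```

A real translator would additionally have to optimize away redundant load/store traffic, which is one reason the viability question is interesting.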

BHT for Scalable CPU Arrays

Binary Hyper-Translation (as proposed by Chris Bailey) suggests using BMT techniques (see above) to translate code from a source machine into a series of simultaneous micro-threads or program fragments to be executed on a CPU array. Translation would be done in real time (perhaps by additional processing array elements), while adapting to any array size without the need to recompile code.

Conservation Cores - Dark Silicon

Work in the area of Dark Silicon exploitation has recently become a hot topic in computer architecture. The design of CPUs that use a large number of small, highly specialised co-processors to perform tasks at very low power (the GreenDroid project being one example) is of increasing interest. We would like to develop research themes around this topic, and already have one PhD student researching this area.

Large-Scale Neural Network Accelerators

The emulation of neural arrays has often been limited to hundreds of neurons, for simple AI applications such as recognising shapes in images. However, the complexity of real neural systems, even those of simple animals such as insects, demands that millions of neurons be simulated. Ultimately, systems of one billion neurons operating in real time (in biological terms) would be valuable for a range of research applications. This project would explore methods of achieving this aim without resorting to supercomputers.
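To give a sense of the per-neuron work involved, a toy leaky integrate-and-fire update step is sketched below. The model and its parameters (leak factor, threshold) are illustrative assumptions only; an accelerator would have to perform something like this for millions or billions of neurons every biological timestep:

```python
# Toy leaky integrate-and-fire update: each neuron leaks, integrates its
# input current, and fires (then resets) when it crosses a threshold.

def lif_step(v, i_in, leak=0.9, threshold=1.0):
    """Advance a population of membrane potentials by one timestep.

    v    : list of membrane potentials
    i_in : list of input currents (same length as v)
    Returns (new_potentials, spike_flags).
    """
    new_v, spikes = [], []
    for vm, i in zip(v, i_in):
        vm = vm * leak + i                    # leaky integration of input
        fired = vm >= threshold
        spikes.append(fired)
        new_v.append(0.0 if fired else vm)    # reset membrane on spike
    return new_v, spikes

# Two neurons: the first stays sub-threshold, the second fires and resets.
v, s = lif_step([0.5, 0.95], [0.1, 0.2])
```

Scaling this inner loop to a billion neurons, plus the synaptic communication between them, is precisely where specialised hardware platforms become attractive.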

 

 
