Unconventional Computation 2006

5th International Conference

Keynote Speakers

Gerard Dreyfus

ESPCI, Paris, France

Graph Machines and Their Applications to Computer-Aided Drug Design: A New Approach to Learning from Structured Data

Recent developments in statistical learning have focused on vector machines, i.e. machines that learn from examples described by vectors of features. However, many fields must handle structured data, so it would be desirable to learn from examples described by graphs. One very active field where learning from graphs is highly desirable is Quantitative Structure-Activity Relationships (QSAR) and Quantitative Structure-Property Relationships (QSPR), which aim to predict the activities and properties of molecules from their structures; those structures are naturally in the form of graphs.

The presentation will describe graph machines, which learn real numbers from graphs. Basically, for each input graph, a separate learning machine is built whose structure contains the same information as the graph. If the graph is a tree, the machine is made of identical elementary machines, connected in the same way as the nodes of the graph are connected by its edges; the output of the machine is the quantity to be modeled, and its input is a constant. The basic difference between conventional learning machines and graph machines is therefore the following: instead of training a single machine (e.g. a neural network) on several different input-output pairs, one trains several different machines on one output each. Because all graph machines are combinations of the same elementary machine, the number of parameters is kept low. The usual cross-validation and leave-one-out techniques extend simply to graph machines, as do bootstrapping and bagging.
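
To make the construction concrete, here is a minimal Python/NumPy sketch of the idea for trees. The nested-dict tree encoding and the tiny one-layer ElementaryMachine are illustrative assumptions, not the speaker's implementation; the point is only that a single shared parameter set is reused at every node of every graph.

```python
import numpy as np

# A minimal sketch, assuming trees encoded as nested dicts with a
# "children" list. The one-layer ElementaryMachine is a hypothetical
# stand-in for whatever shared network is actually used.

class ElementaryMachine:
    """Shared node function: combines the children's outputs into one value."""

    def __init__(self, max_children, hidden=4, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(scale=0.1, size=(hidden, max_children))
        self.v = rng.normal(scale=0.1, size=hidden)

    def __call__(self, child_outputs):
        x = np.zeros(self.W.shape[1])           # constant (zero) input at leaves,
        x[:len(child_outputs)] = child_outputs  # padded where children are missing
        return float(self.v @ np.tanh(self.W @ x))

def graph_machine_output(node, shared):
    """Recursively apply the same elementary machine along the tree, so the
    machine's structure mirrors the graph's structure."""
    child_values = [graph_machine_output(c, shared) for c in node["children"]]
    return shared(child_values)

# One machine is built per example, but all share the same parameters:
shared = ElementaryMachine(max_children=3)
tree = {"children": [{"children": []},
                     {"children": [{"children": []}]}]}
print(graph_machine_output(tree, shared))   # the prediction for this one graph
```

Training would then fit the shared parameters so that each per-graph machine reproduces its single target value, which is why the total parameter count stays low no matter how many graphs are in the training set.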

Recently developed methods handle cyclic graphs as well as trees, and accommodate features that may differ from node to node. Typically, for QSAR applications, they allow handling molecules of arbitrary complexity, with cycles, heteroatoms, various types of bonds, and structural asymmetries.

The presentation will first describe the principles of graph machines and how they relate to the general problem of structured data representation. The second part of the talk will describe applications to real QSAR-QSPR problems.

Abstract (pdf)

Presentation slides (pdf)


Michael C. Mozer

Department of Computer Science and Institute of Cognitive Science, University of Colorado, Boulder, CO, USA

Rational Models of Cognitive Control

Human behavior is remarkably flexible. An individual who drives the same route each day easily adjusts for a traffic jam or to pick up groceries. Any theory of human cognition must explain not only routine behavior, but how behavior is flexibly modulated by the current environment and goals. In this talk, we discuss this ability, often referred to as cognitive control. We focus on rational models, which argue that cognitive control optimizes performance with respect to the statistical structure of the environment, subject to limitations on current knowledge or processing hardware. We describe how characteristics of the environment and task domain can be estimated from experience, and how these characteristics can then be exploited to make behavior more efficient. We validate our models by explaining human data from tasks involving visual search (locating a visual target in a cluttered display) and simple choice (making a rapid decision in response to a visual stimulus).
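
As a toy illustration of this estimate-then-exploit logic (a sketch, not the speaker's actual model), the following simulation tracks the local probability that a binary stimulus repeats, using an exponential running average, and lets that estimate speed responses to expected trials. This is one simple way sequential dependencies can arise; the linear response-time link and all parameter values are assumptions.

```python
import numpy as np

# Toy sketch: learn a statistic of the environment (P(repeat)) from
# experience, then exploit it to respond faster to expected trials.

def simulate(n_trials, alpha=0.2, seed=1):
    rng = np.random.default_rng(seed)
    p_repeat = 0.5                 # running estimate of P(next trial repeats)
    prev = None
    reaction_times = []
    for _ in range(n_trials):
        stim = int(rng.integers(2))                 # binary stimulus stream
        repeat = (stim == prev)
        expectation = p_repeat if repeat else 1.0 - p_repeat
        reaction_times.append(400.0 - 100.0 * expectation)  # expected -> faster
        p_repeat = (1 - alpha) * p_repeat + alpha * repeat  # learn from experience
        prev = stim
    return float(np.mean(reaction_times))

print(f"mean simulated RT: {simulate(1000):.1f} ms")
```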

Presentation slides (pdf)

Related material:

Sequential dependencies in human behavior offer insights into cognitive control
Top-Down Control of Visual Attention: A Rational Account
Control of Response Initiation: Mechanisms of Adaptation to Recent Experience


Reidun Twarock

Department of Mathematics and Department of Biology, University of York, UK

Structure and Self-assembly in Viruses

Based on group theory and tiling theory, we have developed a new series of polyhedra, the triacontahedral series, that corresponds to the particles observed during the self-assembly of the major capsid proteins of viruses in the family Papovaviridae. This allows a classification of the malformations that may occur during self-assembly. The new theory has opened up various areas of application.

In this talk, we focus in particular on our models for the self-assembly of viral capsids and the classification of crosslinking structures, and report on our recent results concerning the RNA organisation within viral capsids.

Full abstract (pdf)


Erik Winfree

Computer Science and Computation & Neural Systems, California Institute of Technology

Fault-Tolerance in Biochemical Systems

Biochemistry is messy. It's a miracle any of it works. And yet it does. The wonderful diversity and amazing talents of living things derive from the biochemical processes that copy genetic information and use that information as a program to construct a sophisticated organization of matter and behavior, reliably and robustly overcoming insult after insult from the environment. In this talk I will first discuss how known techniques for fault-tolerant computing, such as von Neumann's multiplexing technique for digital circuits, can be translated to the biochemical context. I will then discuss fault-tolerance in molecular self-assembly, which requires new techniques. Using a model of algorithmic self-assembly, a generalization of crystal growth processes, I will present techniques for controlling the nucleation of self-assembly, for reducing errors during growth, and for recovering after gross damage or fragmentation.
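
For a flavor of the multiplexing technique mentioned above, here is a small NumPy simulation in the spirit of von Neumann's construction: each logical wire becomes a bundle of N noisy wires, a NAND is computed across randomly paired bundles, and two further randomly permuted NAND stages restore the bundle toward a clean consensus. The bundle size, corruption level, and gate error rate are arbitrary illustrative choices.

```python
import numpy as np

# Von Neumann-style multiplexing: every NAND gate fails independently
# with probability eps; restoration pushes a noisy bundle back toward
# a clean all-0 or all-1 encoding of its logical value.

rng = np.random.default_rng(0)

def noisy_nand(a, b, eps):
    out = ~(a & b)
    flips = rng.random(a.shape) < eps      # independent gate failures
    return out ^ flips

def executive_stage(a, b, eps):
    # compute the logical NAND across randomly paired bundle wires
    return noisy_nand(a, rng.permutation(b), eps)

def restoring_stage(x, eps):
    # two randomly permuted NAND-with-itself stages restore the bundle
    # toward consensus without inverting its logical value
    y = noisy_nand(x, rng.permutation(x), eps)
    return noisy_nand(y, rng.permutation(y), eps)

N, eps = 10_000, 0.005
a = rng.random(N) > 0.1     # bundle encoding logical 1, ~10% wires corrupted
b = rng.random(N) > 0.1
raw = executive_stage(a, b, eps)            # should encode NAND(1,1) = 0
restored = restoring_stage(raw, eps)
print(f"error fraction before restoration: {raw.mean():.3f}")
print(f"error fraction after restoration:  {restored.mean():.3f}")
```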


Damien Woods

University College Cork

Optical computing and computational complexity

Over the years, optical computers have been designed and built to emulate conventional microprocessors (digital optical computing), and for image processing over continuous wavefronts (analog optical computing). Here we are interested in the latter class: optical computers that store data as images. Numerous physical implementations exist and example uses include fast pattern recognition and matrix-vector algebra.

We investigate the computational complexity of a model of computation inspired by such optical computers. The model is called the continuous space machine and operates in discrete timesteps over a number of two-dimensional images of fixed size and arbitrary spatial resolution. The (constant time) operations on images include Fourier transformation, multiplication, addition, thresholding, zoom in and zoom out.
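
For intuition, these primitives have natural finite-resolution analogues. The NumPy sketch below fixes one concrete (and assumed) discretisation: images are 2-D arrays, thresholding clips amplitudes, and zoom out/zoom in are taken to pack four images into, and select one quadrant from, a single image.

```python
import numpy as np

# Finite-resolution stand-ins for the continuous space machine's
# constant-time image operations (the quadrant-based zoom semantics
# are an assumption made for this sketch).

def fourier(img):
    return np.fft.fft2(img)                  # Fourier transformation

def multiply(img1, img2):
    return img1 * img2                       # pointwise multiplication

def add(img1, img2):
    return img1 + img2                       # pointwise addition

def threshold(img, lo=0.0, hi=1.0):
    return np.clip(img.real, lo, hi)         # clip amplitudes to [lo, hi]

def zoom_out(img1, img2, img3, img4):
    # pack four images into the four quadrants of one image
    return np.vstack([np.hstack([img1, img2]),
                      np.hstack([img3, img4])])

def zoom_in(img, quadrant):
    # select one quadrant (0..3, row-major) and treat it as a full image
    h, w = img.shape[0] // 2, img.shape[1] // 2
    r, c = divmod(quadrant, 2)
    return img[r*h:(r+1)*h, c*w:(c+1)*w]

a = np.random.default_rng(0).random((4, 4))
assert np.allclose(zoom_in(zoom_out(a, a, a, a), 0), a)
```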

We have characterised the power of an important discrete restriction of the continuous space machine. Parallel time on this model corresponds, within a polynomial, to sequential space on Turing machines, thus verifying the parallel computation thesis. We have also given a characterisation of the class NC. The model therefore has computational power polynomially equivalent to that of many well-known parallel models. Such characterisations give a method for translating parallel algorithms into optical algorithms, and they facilitate the application of the complexity theory toolbox to optical computers. In the present work we improve on these results.



EPSRC logo
