Non-Standard Computation Group
The Non-Standard Computation Group researches reality-based computing approaches
that draw their inspiration from the natural world (mainly biology and physics).
It explores new computational paradigms that break the classical computational assumptions.
How might we use real physical
and biological systems for computation?
The real world has already provided the inspiration for:
novel algorithms, including genetic algorithms, swarm algorithms,
and artificial immune systems;
novel views of what constitutes a computation, such as complex adaptive systems,
and self-organising networks; novel computational foundations,
such as quantum computing.
The Group is participating in the quest to produce a fully mature science of
all forms of computation, one that unifies the classical and non-classical paradigms.
Why are there so few quantum algorithms? How can we find new ones?
How can we use existing ones to solve new problems?
Quantum computing presents one of the most exciting developments for computer science
in recent times. Based on quantum physics, it can perform computations that cannot
be efficiently simulated on a classical Turing machine.
It exploits interference, many worlds, entanglement and non-locality.
More recent work is breaking further out of the binary mind-set, with multiple-valued “qudits”,
and continuous variables.
The subject covers computation, information theory, and communication protocols.
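The interference that quantum computation exploits can be illustrated with a minimal state-vector simulation. This is a hypothetical sketch using NumPy, not the group's own code; the Hadamard gate and basis-state conventions are standard:

```python
import numpy as np

# Single-qubit computational basis state |0>.
ket0 = np.array([1.0, 0.0])

# Hadamard gate: maps a basis state to an equal superposition.
H = np.array([[1.0,  1.0],
              [1.0, -1.0]]) / np.sqrt(2.0)

superposed = H @ ket0        # amplitudes (1/sqrt2, 1/sqrt2)
interfered = H @ superposed  # applying H again makes the two paths interfere

# Measurement probabilities are squared amplitude magnitudes:
# the |1> amplitude cancels exactly -- destructive interference.
probs = np.abs(interfered) ** 2
print(probs)  # [1. 0.]
```

The second Hadamard does not "undo" the first by retracing steps: both computational paths are followed simultaneously, and the unwanted outcome is erased by amplitude cancellation.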
To what extent is the working of
biological systems dictated by the physical substrate?
Which parts of the biology are necessary for functioning,
which are necessary only because of the particular physical realisation,
and which are merely contingent evolutionary aspects?
The natural world has inspired many important techniques in computer science,
drawing inspiration from physics (simulated annealing), evolution
(genetic algorithms, genetic programming), neurology (artificial neural networks),
immunology (artificial immune systems), plant and animal growth (L-systems),
social insects (ant colony optimisation), and others.
In the virtual worlds inside the computer, we are no longer constrained by the laws of nature,
and can go beyond the precise way the real world works.
For example, we can introduce novel evolutionary operators to our genetic algorithms,
novel kinds of neurons to our neural nets,
and even, as we come to understand the embracing concepts,
novel kinds of complex adaptive systems themselves.
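As a concrete sketch of where such novel operators would slot in, here is a minimal genetic algorithm for the standard OneMax toy problem (maximise the number of 1-bits). All parameter choices here are illustrative assumptions; the `mutate` and `crossover` functions are exactly the components one could replace with operators that have no natural counterpart:

```python
import random

random.seed(0)
TARGET_LEN = 20

def fitness(bits):
    # "OneMax": count of 1-bits; a standard toy objective.
    return sum(bits)

def mutate(bits, rate=0.05):
    # Independently flip each bit with small probability.
    return [b ^ (random.random() < rate) for b in bits]

def crossover(a, b):
    # Single-point crossover of two parents.
    point = random.randrange(1, TARGET_LEN)
    return a[:point] + b[point:]

pop = [[random.randint(0, 1) for _ in range(TARGET_LEN)] for _ in range(30)]
for gen in range(100):
    pop.sort(key=fitness, reverse=True)
    if fitness(pop[0]) == TARGET_LEN:
        break
    elite = pop[:10]  # truncation selection keeps the fittest third
    pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                   for _ in range(20)]

print(max(fitness(ind) for ind in pop))
```

Because nothing constrains `mutate` or `crossover` to mimic biological recombination, a "virtual world" operator (say, a crossover taking three parents) is a one-line substitution.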
Can we find a general theory of reality-based approaches?
Can we use this theory to develop more effective systems inspired by,
but not based on, any known real world processes?
Post-classical refinement is needed,
to permit quantitative reasoning about all reality-inspired algorithms.
We need to understand and predict the global properties that emerge from
a collection of local non-specific agents,
so that we can design (refine) systems that have desired emergent properties,
and do not have undesired emergent properties.
We need to be able to design and implement appropriate algorithms for particular applications,
in a rigorous (but possibly non-incremental) way,
and to give quantitative description methods that enable rigorous reasoning
about the behaviour of the algorithms,
such that they can be used reliably in critical applications.
What are the computational limits to what we can simulate?
Massive parallelism, as seen in cellular automata and agent systems,
and as realised in FPGAs, is needed
to implement many of these reality-based algorithms effectively.
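A one-dimensional cellular automaton makes the parallelism explicit: every cell updates simultaneously from purely local information, the same pattern an FPGA realises directly in hardware. A minimal sketch (Wolfram's Rule 110, vectorised with NumPy; the encoding is a common convention, not the group's):

```python
import numpy as np

RULE = 110
# Rule table: for each 3-cell neighbourhood (encoded as 0..7),
# the next state of the centre cell.
table = np.array([(RULE >> i) & 1 for i in range(8)], dtype=np.uint8)

def step(cells):
    # Every cell updates at once from its local neighbourhood
    # (periodic boundary via np.roll).
    left = np.roll(cells, 1)
    right = np.roll(cells, -1)
    idx = (left << 2) | (cells << 1) | right
    return table[idx]

cells = np.zeros(64, dtype=np.uint8)
cells[32] = 1  # single seed cell
for _ in range(20):
    cells = step(cells)
print(cells.sum())  # number of live cells after 20 synchronous steps
```

The whole-array update in `step` is the software analogue of a lattice of identical hardware cells clocked together; nothing in it is inherently sequential.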
How can trajectory observations be used to give useful information?
What observations are feasible, and useful?
Computational trajectories, measured as a program is executing,
are a computational resource in their own right.
Logical trajectories, tracing the path through the logical state space,
and physical trajectories, measuring
physical changes during execution, are both valuable.
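As a toy illustration of a logical trajectory, consider recording every state a simple iterative computation passes through; the Collatz iteration is used here only as a convenient example, not as anything specific to the group's work:

```python
def collatz_trajectory(n):
    # Record the logical trajectory: every state the computation visits,
    # not just the final answer.
    states = [n]
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        states.append(n)
    return states

traj = collatz_trajectory(27)
# The trajectory is a resource in its own right: its length and its peak
# value characterise the dynamics of this particular run.
print(len(traj), max(traj))  # 112 9232
```

Observing the trajectory reveals structure (how long, how high, which regions of state space were visited) that is invisible in the single output value 1.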
What are the various attractors of a dynamical computation?
How can we encourage the system to move to a “better” attractor?
Dynamic reaction networks can exhibit the emergent complexity, complex dynamics,
and self-organising properties of many far-from-equilibrium systems.
These systems, and others, can self-organise into regions “at the edge of chaos”,
neither too ordered nor too random, where they can perform interesting computations.
There are many dynamic network models that occur in biological and social systems:
autocatalytic networks, genomic control networks, dynamical neural networks
and cytokine immune networks, ecological food webs, etc.
Realistic models of such networks need a pragmatic theory
of dynamic, heterogeneous, unstructured, open networks.
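For a small deterministic network, the attractors can be found by exhaustive simulation: every trajectory through a finite state space must eventually revisit a state and fall into a cycle. A minimal sketch for a three-node Boolean network, with update rules invented purely for illustration:

```python
from itertools import product

# A tiny 3-node Boolean network: each node's next state is a function
# of the current global state (hypothetical update rules).
def update(state):
    a, b, c = state
    return (b and c, a or c, not a)

# Follow every trajectory until it revisits a state; the repeated
# segment is an attractor of the dynamics.
attractors = set()
for start in product([False, True], repeat=3):
    seen = []
    s = start
    while s not in seen:
        seen.append(s)
        s = update(s)
    cycle = tuple(seen[seen.index(s):])
    # Canonicalise each cycle by rotating its minimal state to the front,
    # so the same attractor reached from different starts counts once.
    i = cycle.index(min(cycle))
    attractors.add(cycle[i:] + cycle[:i])

print(len(attractors), "attractor(s)")
```

For genuinely large or open networks this brute-force enumeration is infeasible, which is precisely why a pragmatic theory of such networks is needed.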
How can we hold a system at the edge of chaos to perform useful computations?
How can we make it self-organise to the edge?
The study of open dynamical systems includes the full consideration of computation
as a dynamical process, computation at the edge of chaos,
including its fundamental capabilities, and designed emergence.
We need to know the fundamental properties of such systems.
We need to understand the events that can open up new kinds of regions
of phase space to a computation.
And we want to design, and predict the effect of, interventions
(adding new things, or removing things) to the system.
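The ordered/chaotic distinction can be made quantitative with a Lyapunov-exponent estimate. As a sketch, the logistic map is used below purely as a standard model system (the edge-of-chaos regime sits near the accumulation point r ≈ 3.5699, where the exponent approaches zero):

```python
import math

def lyapunov(r, x=0.4, n=2000, burn=200):
    # Estimate the Lyapunov exponent of the logistic map x -> r*x*(1-x)
    # by averaging the log of the map's derivative along the orbit:
    # negative means ordered (contracting), positive means chaotic.
    total = 0.0
    for i in range(n):
        x = r * x * (1.0 - x)
        if i >= burn:  # discard the transient before averaging
            total += math.log(abs(r * (1.0 - 2.0 * x)) + 1e-12)
    return total / (n - burn)

ordered = lyapunov(3.2)     # periodic regime
chaotic = lyapunov(4.0)     # fully developed chaos
edge = lyapunov(3.5699)     # near the edge of chaos: close to zero
print(ordered < 0 < chaotic)  # True
```

An intervention that changes the effective parameter r is exactly the kind of designed perturbation whose effect (moving toward order, chaos, or the edge) one would want to predict.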
How can we predict and design emergent properties?
Agent systems are an example of dynamical networks:
they comprise masses of simple agents, interacting with each other and with the environment,
resulting in various emergent properties.
The ultimate agents are physical nanotech assemblers,
interacting to construct macroscopic artefacts.
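A minimal sketch of emergence in such an agent system: agents following a purely local majority rule, from which a global drift toward consensus emerges. This is a hypothetical toy model, with all parameters chosen for illustration:

```python
import random

random.seed(1)

# 100 agents each hold a binary opinion; repeatedly, a random agent
# adopts the majority opinion of three randomly chosen others.
agents = [random.randint(0, 1) for _ in range(100)]

for _ in range(2000):
    i = random.randrange(100)
    sample = random.sample(range(100), 3)
    votes = sum(agents[j] for j in sample)
    agents[i] = 1 if votes >= 2 else 0

# The global property -- drift toward consensus -- is emergent:
# no agent ever observes more than three others.
print(sum(agents), "agents now hold opinion 1")
```

The design problem the Group highlights is the reverse direction: given a desired global outcome (here, consensus), derive local rules that reliably produce it and nothing else.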