Friday 26 January, 2001, 12:30, CS202J

We are full members of AgentLink II... so what?

Eduardo Alonso
Department of Computer Science
University of York

Abstract:

We are full members of AgentLink II, Europe's IST-funded Network of Excellence for agent-based computing. AgentLink is a coordinating organisation for research and development activities in the area of agent-based computer systems funded by the European Commission. As such, AgentLink supports a range of activities aimed at raising the profile, quality, and industrial relevance of agent systems in Europe.

In this talk I'll give a general description of AgentLink II focusing on its structure, workplan, and potential benefits in terms of research and funding.


Wednesday 31 January, 2001, 12:15, CS202J

Reformulation of Models of Constraint Satisfaction Problems

Brahim Hnich
Department of Information Science
Uppsala University

Abstract:

Effective constraint programming (or: modelling) is very difficult, even
for application domain experts, and hence time-consuming. Moreover, most
constraint satisfaction problems are NP-hard in general, and hence
require an exponential amount of solving time in the worst case.
Furthermore, for a given solver and a given instance of a problem, it is
very hard to figure out which model is the best.
To tackle the programming-time problem, ever more expressive and
declarative constraint programming languages, such as our modelling
language ESRA, are being designed. To tackle the solving-time problem,
the default behaviour of the solver can be modified and the models of
the constraint satisfaction problem can be reformulated.
Reformulation can be achieved by adding implied constraints, or
switching from a pure constraint program to an integer linear program,
or from a node-based model to an edge-based model, etc.
Three different approaches to reformulation will be introduced, as well
as our initial insights on two of these approaches.
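
As a rough illustration of reformulation by implied constraints (a toy example of mine, not taken from the talk), the following Python sketch compares a plain model of a small problem with one whose domains have been tightened by an implied bound; the reformulated model reaches the same solutions while visiting far fewer search nodes.

# Toy illustration of reformulation by adding an implied constraint.
# Hypothetical problem: find four distinct digits that sum to 6.  Every
# digit is non-negative and the total is 6, so x_i <= 6 is implied by the
# model; adding it shrinks the domains from 0..9 to 0..6 before search.

def solve(domains, total):
    """Very naive backtracking (consistency checked late) that counts nodes."""
    nodes = 0
    solutions = []

    def extend(partial):
        nonlocal nodes
        nodes += 1
        if len(partial) == len(domains):
            if sum(partial) == total:
                solutions.append(tuple(partial))
            return
        for v in domains[len(partial)]:
            if v not in partial:          # all-different
                extend(partial + [v])

    extend([])
    return solutions, nodes

original = [range(10)] * 4                # plain model
reformed = [range(7)] * 4                 # plus implied constraint x_i <= 6

for name, doms in [("original model", original), ("reformulated model", reformed)]:
    sols, n = solve(doms, total=6)
    print(f"{name}: {len(sols)} solutions, {n} search nodes")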



Wednesday 31 January, 2001, 14:00, CS202J

Permutation Problems and Channelling Constraints
Toby Walsh
Department of Computer Science
University of York

Abstract:

Many assignment, scheduling and routing problems are permutation
problems.

For example, sports tournament scheduling can be modelled as finding a
permutation of the games to fit into the available slots, whilst the
travelling salesperson problem can be modelled as finding a permutation of
the cities which gives a tour of minimal length. There are a number of different
ways of modelling permutation problems (for example, do we assign teams to
time slots, or do we assign time slots to teams?). I describe a methodology,
as well as a measure of constraint tightness, that can be used to
compare these different models. My results will help users of constraint
satisfaction toolkits to choose a model for a permutation problem, and a local
consistency property to enforce upon it.
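
The following Python sketch (my illustration, not the methodology of the talk) shows the two models side by side for a hypothetical four-team problem, together with the channelling constraints x[t] = s <=> y[s] = t that keep them synchronised.

# Two models of a permutation problem plus channelling constraints.
# Primal model: x[t] = s assigns team t to slot s.
# Dual model:   y[s] = t assigns slot s to team t.

from itertools import permutations

N = 4  # four teams, four slots (hypothetical sizes)

def primal_ok(x):
    # example problem constraint in the primal model: team 0 plays before team 1
    return x[0] < x[1]

def dual_ok(y):
    # the same constraint expressed in the dual model
    return y.index(0) < y.index(1)

def channelled(x, y):
    # channelling constraints: x[t] = s  <=>  y[s] = t
    return all(y[s] == t for t, s in enumerate(x))

# Every primal solution corresponds to exactly one dual solution via channelling.
for x in permutations(range(N)):
    if primal_ok(x):
        y = tuple(x.index(s) for s in range(N))   # dual assignment induced by x
        assert channelled(x, y) and dual_ok(y)
        print("x =", x, " <-> y =", y)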



Friday 2 February 2001, 12:30, CS202J

Simulation of learning and evolution in multi-agent systems

Stephen Routledge
Department of Computer Science
University of York

Evolution and kinship-driven altruism

John Barton
Department of Computer Science
University of York

Intelligent cookbook or do it on-line with Prolog

Owen Williams
Department of Computer Science
University of York



Tuesday 6 February, 2001, 12:30, CS202J

A Meta-heuristic for Subset Problems

Brahim Hnich,
Computer Science Division,
Department of Information Science,
Uppsala University.

Abstract:

In constraint solvers, variable and value ordering heuristics are used
to fine-tune the performance of the underlying search and propagation
algorithms. However, few guidelines have been proposed for choosing
among the wealth of existing heuristics. Empirical studies have
established that such a choice is very hard, as none of these heuristics
outperforms all the others on all instances of all problems (for an
otherwise fixed solver). The best heuristic varies not only between
problems, but even between different instances of the same problem.
Taking heed of the popular dictum "If you can't beat them, join them!"
we devise a meta-heuristic that chooses, at run-time, the best heuristic
for the instance at hand. This meta-heuristic is applicable to an entire
class of NP-complete subset problems.
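
One plausible reading of the run-time selection idea (a sketch of mine, not the meta-heuristic of the talk) is to probe each candidate heuristic on the instance at hand with a small node budget and commit to the cheapest; the toy subset problem and the two ordering "heuristics" below are purely illustrative.

import random

def subset_sum(items, target, order, node_limit):
    """Backtrack over include/exclude decisions; return (found, nodes)."""
    items = sorted(items, **order)          # heuristic = decision ordering
    nodes = 0

    def search(i, total):
        nonlocal nodes
        nodes += 1
        if nodes > node_limit:
            raise TimeoutError
        if total == target:
            return True
        if i == len(items) or total > target:
            return False
        return search(i + 1, total + items[i]) or search(i + 1, total)

    try:
        return search(0, 0), nodes
    except TimeoutError:
        return None, nodes                  # budget exhausted

heuristics = {
    "largest-first":  dict(key=lambda v: -v),
    "smallest-first": dict(key=lambda v: v),
}

items, target = [3, 34, 4, 12, 5, 2, 27, 19, 40, 8], 51

# Meta-heuristic: probe every heuristic with a small budget, pick the cheapest.
probes = {name: subset_sum(items, target, h, node_limit=200)
          for name, h in heuristics.items()}
best = min((n for n, (found, nodes) in probes.items() if found),
           key=lambda n: probes[n][1], default=None)
print("probe results:", probes)
print("chosen heuristic:", best or "none finished within budget")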



Wednesday 7 February 2001, 14:00, Room 103

 Bidirectional Contextual Resolution

Stephen Pulman
Somerville College, Oxford University,
   and
SRI International Cambridge Computer Science Research Centre

Abstract:

This talk describes a formalism and implementation for the
interpretation and generation of sentences containing context-
dependent constructs like determiners, pronouns, focus, and ellipsis.
A variant of 'quasi-logical form' is used as an underspecified meaning
representation, related to 'resolved logical forms' via 'conditional
equivalences'. These equivalences define the interpretation of
contextually dependent constructs with respect to a given context.
Higher order unification and abduction are used in relating
expressions to contexts. The conditional equivalences can be used
unchanged in both the interpretation and the generation direction.

I will give a demonstration of a small-scale implementation of the
system. I will also give a demonstration of a simple application where
bidirectionality is an advantage: solving Lewis Carroll-type logic
puzzles, where the problem is stated in English and the solution is
also generated as an English paraphrase.

Please note that this is a departmental seminar.
For further details please see the departmental seminar web pages.



Friday 16 February 2001, 12:30, CS202J

A Nonogram Solver

Lindsay Collins
Department of Computer Science
University of York

Gin Rummy Agent

John Dickinson
Department of Computer Science
University of York

Exploiting Randomisation in Stochastic Local Search: Predicting Optimal Levels of Noise

Anthony J Doggett
Department of Computer Science
University of York



Friday 23 February 2001, 12:30, CS202J

An Introduction to the HR Program

Simon Colton
Department of Computer Science
University of York

Abstract:

I'll be introducing the HR Program by discussing (i) the aims of the project, (ii) the theory behind HR, (iii) the applications of HR, in particular mathematical discovery, learning tasks and puzzle generation, (iv) some results we've had with HR, and (v) the current implementation of HR in Java. I'll be demonstrating the program and hoping that some of you will want to try HR out. In particular, I'm writing a Progol to HR translator so that HR can form theories about the kinds of concepts Progol learns. Most of this talk will be different to the one I gave in 1998.



Friday, 2 March 2001, 12:30, CS202J

Translating the Annotation of the Penn Treebank

Stephen Watkinson
Department of Computer Science
University of York

Abstract:

Annotated corpora have become a vital tool in the evaluation of
natural language systems, as they provide a standard against which to
compare results.  Hence, it has been necessary to provide the time and
money to annotate a variety of corpora with a variety of different
information.  Unfortunately, the problem often remains that there is
not a suitably annotated corpus for a given problem and it would be
useful to be able to translate between annotation formalisms.  I will
discuss translating the Penn Treebank from phrase structure grammar
annotation to a Categorial Grammar annotation.



Friday,  16 March 2001, 12:30, CS202J

Combining semantic and statistical processing:
are TRUCKS the way to travel in terminology?

Diana Maynard
Department of Computer Science
University of Sheffield

Abstract:

Huge volumes of scientific databases and text collections have become
available in recent years, but their usefulness is hampered by their lack
of uniformity and structure, as well as the constantly changing array of
scientific terms in use. The TRUCKS system addresses the need for tools to
facilitate the processing and discovery of such terms.

TRUCKS combines linguistic and statistical techniques for term
recognition and disambiguation, by making use of an idea called "contextual
acquaintance". It focuses on the relationship between technical terms and
their surrounding contexts, considering in particular the semantic
relations entailed. Evaluation shows an improvement over purely
statistical techniques, achieving a score of 85% precision on a
million word corpus of ophthalmology reports.

While TRUCKS was developed for use in the medical domain, it is applicable
to other domains, and its technology is not limited to term recognition,
but is extensible to areas such as information extraction, web-based
knowledge mining and domain-specific ontology creation.



Friday, 30 March 2001, 12:30, CS202J

Stochastic search in Progol

Stephen Muggleton
Department of Computer Science
University of York

Abstract:

Most search techniques within ILP are based on the theory of clause
refinement. Such searches are typically time-consuming, requiring the
testing of a large number of clauses. Acceptable clauses typically
need to be consistent, and are only found at the "fringe" of the
search conducted. However, the majority of time in the search is usually
consumed in testing clauses which are not consistent.  A novel search approach
will be presented, based on a novel algorithm called QG (Quick Generalisation).
QG is non-deterministic, and efficiently generates a random
consistent clause C on the fringe of the refinement graph search without
needing to explore the refinement graph in detail. An experimental
version of QG has been implemented in Progol4.6.  Initial experiments
with QG indicate that when the proportion P of consistent clauses
is small, QG is far more efficient than standard refinement-graph
searches, while still generating similar solutions. Limitations of
the present QG algorithm will be discussed.
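
To convey the flavour of QG (this is a loose propositional analogue of mine, not Progol's algorithm), the sketch below starts from a most-specific rule for a seed example and randomly drops conditions, undoing any drop that makes the rule cover a negative example; each run returns a random consistent rule on the fringe of the generalisation lattice, without enumerating the lattice.

import random

# Hypothetical data: examples are dicts of boolean attributes.
positives = [{"a": 1, "b": 1, "c": 0, "d": 1}]
negatives = [{"a": 1, "b": 0, "c": 0, "d": 1},
             {"a": 0, "b": 1, "c": 1, "d": 1}]

def covers(rule, example):
    return all(example[attr] == val for attr, val in rule.items())

def consistent(rule):
    return not any(covers(rule, neg) for neg in negatives)

def quick_generalise(seed_example, rng):
    rule = dict(seed_example)                 # most specific rule for the seed
    attrs = list(rule)
    rng.shuffle(attrs)                        # random order => random fringe rule
    for attr in attrs:
        val = rule.pop(attr)                  # try to generalise by dropping attr
        if not consistent(rule):
            rule[attr] = val                  # dropping it over-generalises; undo
    return rule

rng = random.Random(0)
for _ in range(3):
    print(quick_generalise(positives[0], rng))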



Friday, 27 Apr 2001, 12:30, CS202J

A Maximum-Entropy Partial Parser for Unrestricted Text

Wojciech Skut
German Research Centre for Artificial Intelligence (DFKI)
Language Technology Lab
Saarbrücken

Abstract:

Chunking/shallow parsing is one of the most frequently
used processing techniques in NLP systems. It can
be described as parsing restricted to those structures
which can be recognised with a high degree of accuracy,
e.g., simple, non-recursive NPs and PPs, verb groups, etc.

In my talk, I will describe Chunkie, a statistical
partial parser based on Hidden Markov Models (HMMs),
which I implemented as part of my PhD research at the
University of Saarland, Saarbrücken. Chunkie can
be trained on treebank data and achieves between
85% and 92% accuracy depending on language, corpus size,
application and domain. The additional increase
in accuracy due to the use of a maximum entropy model
will also be discussed.
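
As a reminder of the underlying machinery (not Chunkie itself, and with invented probabilities), the sketch below casts chunking as HMM tagging over part-of-speech tags and decodes with Viterbi.

import math

states = ["B-NP", "I-NP", "O"]
start  = {"B-NP": 0.5, "I-NP": 0.05, "O": 0.45}
trans  = {"B-NP": {"B-NP": 0.1, "I-NP": 0.6, "O": 0.3},
          "I-NP": {"B-NP": 0.2, "I-NP": 0.5, "O": 0.3},
          "O":    {"B-NP": 0.5, "I-NP": 0.05, "O": 0.45}}
emit   = {"B-NP": {"DT": 0.5, "NN": 0.3, "JJ": 0.15, "VB": 0.05},
          "I-NP": {"DT": 0.05, "NN": 0.6, "JJ": 0.3, "VB": 0.05},
          "O":    {"DT": 0.1, "NN": 0.1, "JJ": 0.1, "VB": 0.7}}

def viterbi(obs):
    """Most probable chunk-tag sequence for a sequence of POS tags."""
    lp = math.log
    best = [{s: (lp(start[s]) + lp(emit[s][obs[0]]), [s]) for s in states}]
    for o in obs[1:]:
        col = {}
        for s in states:
            score, path = max(
                (best[-1][p][0] + lp(trans[p][s]) + lp(emit[s][o]), best[-1][p][1])
                for p in states)
            col[s] = (score, path + [s])
        best.append(col)
    return max(best[-1].values())[1]

print(viterbi(["DT", "JJ", "NN", "VB", "DT", "NN"]))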

Reference:

Wojciech Skut and Thorsten Brants, "A Maximum-Entropy
Partial Parser for Unrestricted Text", Proceedings
of the 5th Workshop on Very Large Corpora, Montreal,
1998.



Friday,  4 May 2001, 12:30, CS202J

Modelling a Steel Mill Slab Design Problem

Ian Miguel
Department of Computer Science
University of York

Abstract:

This talk considers an industrial steel mill slab design problem. This
problem is an instance of a class of difficult problems where the
problem structure (in this case, the number and size of slabs) is not
fixed initially, but determined as part of the solution process. Since
a natural encoding into a constraint satisfaction problem does not
immediately suggest itself, the choice of model for this type of
problem may be crucial in successfully solving it.

Three different models to tackle the slab design problem are therefore
considered. Model A uses a conservatively large number of variables to
represent the slabs, some of which will typically be redundant in an
optimal solution. Model B operates in two phases, first solving an
abstraction of the original problem before solving further
sub-problems to provide a solution to the original problem. The third
model is a dual model combining models A and B, allowing search and
propagation on both sets of variables.
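
The sketch below is my toy reading of Model A (the real problem has further constraints, such as a limit on the number of colours per slab, omitted here): one potential slab per order, orders assigned to slabs, each used slab given the smallest available size covering its load, and total waste minimised by brute force.

from itertools import product

slab_sizes = [0, 4, 6, 9]          # 0 means "slab not used"
orders     = [2, 3, 3, 5]          # order weights (hypothetical instance)

def cost(assignment):
    """Total slab capacity needed for an order->slab assignment, or None."""
    loads = [0] * len(orders)                      # one potential slab per order
    for order_weight, slab in zip(orders, assignment):
        loads[slab] += order_weight
    total = 0
    for load in loads:
        size = min((s for s in slab_sizes if s >= load), default=None)
        if size is None:                           # load exceeds every slab size
            return None
        total += size
    return total

best = min((a for a in product(range(len(orders)), repeat=len(orders))
            if cost(a) is not None), key=cost)
print("assignment:", best, "waste:", cost(best) - sum(orders))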



Tuesday 15 May 2001, 11:15, CS/103

Knowledge and Data in Computational Biological Discovery

Pat Langley
Center for the Study of Language and Information, Stanford University
and
Institute for the Study of Learning and Expertise, Palo Alto, California

Abstract:

The growing amount of biological data has led to the increased use of
computational discovery methods to understand and interpret these
data.  However, most work has relied on knowledge-lean techniques like
clustering and classification learning, and it has relied on
formalisms developed in AI or statistics, so that the results seldom
make direct contact with current theories of biological processes. In
this talk, I describe an approach to computational discovery that
incorporates knowledge in the form of an existing process model,
utilizes data to refine this model, and casts the result in terms
familiar to biologists. I illustrate this approach to biological
knowledge discovery in two domains. One involves improving a
quantitative model of the Earth's ecosystem using environmental data
from satellites and ground stations. The other effort focuses on
constructing metabolic models for simple organisms using temporal data
from DNA microarrays, but it also holds potential for revealing
processes of gene regulation. Initial results suggest that this method
of combining data with knowledge can improve predictive ability while
ensuring that the revised models remain communicable to human
biologists.

This talk describes joint work with Vanessa Brooks, Jennifer Cross,
Trond Grenager, Steve Klooster, Andrew Pohorille, Chris Potter, Kazumi
Saito, Mark Schwabacher, Jeff Shrager, and Alicia Torregrosa.

Please note that this is a departmental seminar. For further details please see the departmental seminar web pages.



Friday,  1 June 2001, 12:30, CS202J

Camera Planning 2

Jonathan Pickering
Department of Computer Science
University of York

Abstract:

I describe the problem of defining a virtual camera, in a 3D graphics
system, based on a description of desired properties of the resultant
image. A number of systems developed by other workers are outlined and
contrasted. The problem is shown to be one of constrained optimization
in a space of non-homogeneous coordinates. The problem is further
complicated by the existence of dependencies between the coordinates.

I will show that there exist a number of useful constraints that are
purely properties of position, and can be modelled using conventional
solid modelling techniques. An image description language can be written
that parses into objective functions and constraints. If the constraints
are modelled, a feasible volume can be found which may be searched using
techniques such as genetic algorithms or simulated annealing.
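
The sketch below illustrates only that final step (it is not Pickering's system; the objective, feasibility test, and parameters are invented): the camera is reduced to a 3D position and the feasible volume is searched with plain simulated annealing.

import math, random

target = (0.0, 0.0, 0.0)                       # object the camera should view

def feasible(p):
    x, y, z = p
    return z > 1.0 and x * x + y * y + z * z < 100.0   # in front of, and near, the scene

def objective(p):
    # Prefer a viewing distance of about 5 units from the target (made up).
    return (math.dist(p, target) - 5.0) ** 2

def anneal(start, steps=5000, temp=1.0, cooling=0.999, rng=random.Random(1)):
    current, best = start, start
    for _ in range(steps):
        cand = tuple(c + rng.gauss(0, 0.3) for c in current)
        if not feasible(cand):
            continue                            # stay inside the feasible volume
        delta = objective(cand) - objective(current)
        if delta < 0 or rng.random() < math.exp(-delta / temp):
            current = cand
            if objective(current) < objective(best):
                best = current
        temp *= cooling
    return best

best = anneal(start=(0.0, 0.0, 2.0))
print("camera position:", [round(c, 2) for c in best],
      "distance to target:", round(math.dist(best, target), 2))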


Friday, 8 June 2001, 12:30, CS202J

Using Genetic Algorithms for Learning Clauses in First-Order Logic

Alireza Tamaddoni-Nezhad
Department of Computer Science
University of York

Abstract:

This talk aims to introduce a framework for combining first-order
concept learning with Genetic Algorithms. This framework includes: (1) a
novel binary representation for clauses, (2) task-specific genetic
operators, and (3) a fast evaluation mechanism. The proposed binary
representation encodes the refinement space of clauses in a natural and
compact way. It is shown that essential operations on clauses such as
unification and anti-unification can be done by simple bitwise
operations (e.g. and/or) on the binary encoding of clauses. These
properties are used for designing task-specific genetic operators. It is
also shown that by using these properties individuals can be evaluated
at genotype level without mapping them into corresponding clauses. This
replaces the complex task of evaluating clauses, which usually needs
repeated theorem proving, by simple bitwise operations. A preliminary
implementation of the proposed framework is used to combine Inverse
Entailment with a genetic search.
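
A simplified reading of the representation (my sketch, with hypothetical literals) views each candidate clause as the subset of body literals it keeps from a most-specific "bottom" clause, encoded as a bit mask, so that lattice operations and subset tests reduce to bitwise AND, OR, and comparisons:

bottom_body = ["parent(X,Y)", "parent(Y,Z)", "male(X)", "female(X)", "older(X,Z)"]

def encode(literals):
    return sum(1 << i for i, lit in enumerate(bottom_body) if lit in literals)

def decode(mask):
    return [lit for i, lit in enumerate(bottom_body) if mask >> i & 1]

c1 = encode(["parent(X,Y)", "parent(Y,Z)", "male(X)"])
c2 = encode(["parent(X,Y)", "parent(Y,Z)", "female(X)", "older(X,Z)"])

common = c1 & c2          # literals shared by both clauses (a generalisation)
merged = c1 | c2          # all literals of either clause (a specialisation)
is_subset = lambda a, b: a & b == a   # clause a keeps only literals of clause b

print("common generalisation:", decode(common))
print("combined specialisation:", decode(merged))
print("generalisation is part of both:", is_subset(common, c1) and is_subset(common, c2))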

Please note that this talk is a dry-run for the GAs conference and some
parts overlap with my presentation last year at AIG and ILP2000.



Friday, 15 Jun 2001, 12:30, CS202J

 Games for Agent Dialogues

Simon Parsons
Department of Computer Science
University of Liverpool

Abstract:

By treating dialogues as abstract games, we are able to
develop a logic-based formalism for the modelling of dialogues between
intelligent and autonomous software agents. Complex dialogues,
including dialogues embedded in one another, can be represented in
the formalism as sequences of moves in a combination of dialogue
games.  We show that our formalism can represent the different
types of dialogue in a standard typology, and we also provide
these dialogue-types with a game-theoretic semantics.



Friday, 22 Jun 2001, 12:30, CS202J

On compilation of large-scale feature constraint grammars

Liviu Ciortuz
Department of Computer Science
University of York Abstract:

This presentation concerns the conception of the feature constraint
programming language Light and its underlying implementation, the ABC
Light compiler.

The Light language itself is defined in the framework of the CLP(OSF)
constraint logic programming scheme. Its implementation is done around an
abstract machine for OSF-theory unification. This abstract machine,
called Light AM, largely extends the abstract machine for unification of
OSF-terms designed at the DEC Paris Research Laboratory.

Light was intended to do parsing with feature constraint grammars. It was
tested and tuned on a large HPSG grammar for English --- the so-called
LinGO grammar --- developed at the CSLI Center at Stanford University,
USA. This is why the control level (above unification) that we
implemented in the ABC Light compiler performs deductive head-corner
bottom-up chart-based parsing. However, the ABC Light system is not
limited to HPSG grammars.

Most of the work on Light --- the language and the compiler --- was
done while the author was employed at the Language Technology Laboratory
of the German Research Center for Artificial Intelligence (DFKI) in
Saarbruecken, Germany.



 Friday, 29 June 2001, 12:30, CS202J

Implementing parameter and model learning in SLPs

Nicos Angelopoulos
Department of Computer Science
University of York

Abstract:

I plan to present some recent work done jointly with
James Cussens. The introduction will cover the syntax of Stochastic
Logic Programs (SLPs) and three probability distributions, based on
log-linear principles, that can be ascribed to them.
The main part of the talk presents two algorithms: (1) for
parameter estimation from data, and (2) a generally applicable
MCMC method for Bayesian learning of model structures. Due to its
importance, the latter will be examined in greater detail.
Implementations in Prolog will be sketched and some preliminary
results will be put forward for consideration.



Friday 20 July, 2001, 12:30, CS202J

A Knowledge Model for Automatic Configuration of Traffic Messages

Monica Robledo
Universidad Rey Juan Carlos I

Abstract:

This talk describes a knowledge model for a configuration problem in the domain of traffic control. The goal of this model is to help traffic engineers in the dynamic selection of a set of messages to be presented to drivers on variable message signals. This selection is done in a real-time context using data recorded by traffic detectors on motorways. The system follows an advanced knowledge-based solution that implements two abstract problem solving methods according to a model-based approach recently proposed in the knowledge engineering field.

Adopted solutions in the SAIDA System: Society of Intelligent Agents for decision support in river floods

Raquel Fuentetaja
Universidad Rey Juan Carlos I

Abstract:

In this talk we will present the architecture of the SAIDA system. The main goal of the system is to help operators at the hydrologic control center to manage the reservoirs. Three steps are necessary to achieve this goal: evaluation of the current situation, prediction of the future behaviour of the basin, and recommendation of control actions to minimize the possible damage in the problem areas. Due to the distribution of the elements in the basin, an agent-based architecture is the most suitable for this purpose.



Friday 7th September, 2001, 12:30, CS202J

Detecting Symmetries in Relational Constraint Models

Zeynep Kiziltan
Department of Information Science
Uppsala University
Sweden

Abstract:

ESRA is a relational constraint modelling language that supports very expressive and declarative modelling. Moreover, ESRA models allow easy compile-time inference of symmetry-breaking constraints. I will first motivate the need for inferring symmetry-breaking constraints, and then show first ideas on how this can be achieved with ESRA models.
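
As a small illustration of the kind of constraint one would like to infer (my example, not ESRA output): when the rows of a 0/1 matrix model are interchangeable, requiring successive rows to be in lexicographic order keeps exactly one representative of each symmetry class.

from itertools import product

n_rows, n_cols = 3, 2

def row_sum_ok(matrix):
    # hypothetical problem constraint: every row has exactly one 1
    return all(sum(row) == 1 for row in matrix)

rows = list(product((0, 1), repeat=n_cols))
all_solutions = [m for m in product(rows, repeat=n_rows) if row_sum_ok(m)]
lex_broken    = [m for m in all_solutions
                 if all(m[i] <= m[i + 1] for i in range(n_rows - 1))]

print("solutions without symmetry breaking:", len(all_solutions))
print("solutions with row-lex constraints: ", len(lex_broken))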



Friday 12th October, 2001, 12:30, CS202J

Generalised Named Entity Identification for Extending Concept Ontologies with Domain-dependent Information

Enrique Alfonseca
Department of Computer Science
University of York

Abstract:

Ontologies and semantic networks are now widely used for storing lexical and semantic information. Some of these lexical networks, such as WordNet, LDOCE or Comlex, have made possible many advances in Natural Language Processing. However, they were all built by hand in a labour-intensive way. I present here a procedure to extend WordNet with domain-dependent information that requires minimal human supervision.



Wednesday 17th October, 2001, 14:00, CS103

Two Small Steps from Procedural Programming to Modelling and Solving Complex Problems

Mark Wallace
IC-Parc
Imperial College

Abstract:

Constraint programming has established itself in research and industry as a very interesting and useful paradigm. In this talk we will examine one key idea that has made this possible: the guarded clause. As a first step, the guarded clause supports the move from reading a value in a traditional program state to testing the entailment of a guard from a constraint store. We examine this step using Wensley's algorithm for calculating square roots. As a second step, the guarded clause supports a mapping from a set of constraints expressed as a logic program to a set of numerical inequalities that are solved by integer linear programming. This unlocks a huge body of powerful algorithms which have, historically, proven difficult to reuse. Our conclusion is that abstruse research isn't always needed to achieve practical breakthroughs. A simple concept can have a major impact.



Wednesday 24th October, 2001, 14:00, CS103

Learning Grammars by Failing to Parse

James Cussens
Artificial Intelligence Group
University of York

Abstract:

In this talk I aim to compare a wide range of approaches to learning grammars from sentences. These approaches differ in (i) the nature of the grammar to be learnt, (ii) whether the sentences are annotated, (iii) the extent to which linguistic knowledge is used, and (iv) the search and evaluation measures used by the grammar learning algorithm. Despite these differences, a central theme is that it is the failure to parse a sentence that acts as a trigger to grammar learning - we learn to parse from what we can't parse - the so-called parsing paradox. Interestingly, this theme is common to both the Chomskyan 'parameter-setting' paradigm _and_ machine learning approaches. I will explore recent work (including mine!) at the interface of these two paradigms asking, in the words of a recent paper by Pereira: "Formal Grammar and Information Theory: Together Again?"



Wednesday 31st October, 2001, 14:00, CS103

Confirmation-Guided Discovery of First-Order Rules from Structured Data

Peter Flach
Department of Computer Science
University of Bristol

Abstract:

This work concerns learning first-order logic rules from data lacking an explicit classification predicate. Consequently, the learned rules are not restricted to predicate definitions as in supervised inductive logic programming. First-order logic offers the ability to deal with structured, multi-relational knowledge. Possible applications include first-order knowledge discovery, induction of integrity constraints in databases, multiple predicate learning, and learning mixed theories of predicate definitions and integrity constraints. One of the contributions of our work is a heuristic measure of confirmation, trading off novelty and satisfaction of the rule. The approach has been implemented in the Tertius system. The system performs an optimal best-first search, finding the k most-confirmed hypotheses, and includes a non-redundant refinement operator to avoid duplicates in the search. Tertius can be adapted to many different domains by tuning its parameters, and it can deal either with individual-based representations by upgrading propositional representations to first-order, or with general logical rules.



Wednesday 7th November, 2001, 14:00, CS122

Stochastic Constraint Research at Essex

Edward Tsang
Department of Computer Science
University of Essex

Abstract:

This talk describes a branch of constraint research at Essex University, namely stochastic search methods, which started in the late 1980s. The talk will start with GENET, a neural network for constraint satisfaction. GENET was generalized to Guided Local Search (GLS), a penalty-based meta-heuristic method for constrained optimization. GLS, which is an on-going project, was embedded in a genetic algorithm, resulting in Guided Genetic Algorithm (GGA). GLS and GGA have been applied to real life problems.
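
A bare-bones sketch of the GLS idea (my own toy MAX-SAT example, not the Essex implementation): clauses act as features, and whenever hill-climbing on the penalty-augmented cost gets stuck, the violated clauses of maximal utility are penalised, which pushes the search out of the local optimum.

import random

clauses = [(1, -2), (-1, 2), (2, 3), (-3, -1), (1, 3)]   # toy CNF, variables 1..3
n_vars, lam = 3, 0.5
penalty = {c: 0 for c in clauses}

def violated(assign, clause):
    return not any(assign[abs(l)] == (l > 0) for l in clause)

def cost(assign):
    return sum(violated(assign, c) for c in clauses)

def augmented(assign):
    return cost(assign) + lam * sum(penalty[c] for c in clauses if violated(assign, c))

rng = random.Random(0)
assign = {v: rng.random() < 0.5 for v in range(1, n_vars + 1)}
best = dict(assign)

for _ in range(200):
    # best single flip under the augmented cost
    flips = []
    for v in assign:
        assign[v] = not assign[v]
        flips.append((augmented(assign), v))
        assign[v] = not assign[v]
    score, v = min(flips)
    if score < augmented(assign):
        assign[v] = not assign[v]                      # downhill move
    else:
        # local optimum: penalise violated clauses of maximal utility
        utils = {c: 1.0 / (1 + penalty[c]) for c in clauses if violated(assign, c)}
        if not utils:
            break                                      # all clauses satisfied
        top = max(utils.values())
        for c, u in utils.items():
            if u == top:
                penalty[c] += 1
    if cost(assign) < cost(best):
        best = dict(assign)

print("best assignment:", best, "unsatisfied clauses:", cost(best))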



Friday 16th November, 2001, 14:15, CS103

Graduate Study in Artificial Intelligence at York

The AI Group
Department of Computer Science
University of York

Abstract:

Ever wanted to know what AI research was all about? Now's your chance to find out - at least to find out what we do here in York. The York Computer Science AI group is holding an event to publicise its research. This includes areas such as:

Machine Learning, Automated Reasoning, Intelligent Agents (including RoboSoccer), Learning Natural Language.

We'll be giving a few _short_ presentations, and then we'll be presenting laptop demos and posters in parallel. Just look into what interests you. Refreshments will be provided. Don't worry if you can only come along for some of the time, just drop in.

Why are we doing this? Well, partly because we're hoping to attract graduate students into AI; so if you're considering graduate study this event might help you decide if AI is for you. But also we just think AI (the science, not the film!) is good stuff and we want more people to know about it. Come along and see what you think.



Thursday 22nd November, 2001, 12:30, CS202J

A Critical View on Local Search Cost: Preliminary Ideas.

Kris de Meyer
Department of Cybernetics
University of Reading

Abstract:

Stochastic Local Search algorithms for satisfiability testing (SAT) have outperformed complete algorithms on several problem classes, especially on hard random 3-SAT instances. However, even the most successful algorithms show a peak in search cost near the `satisfiability threshold'. Several descriptive cost models have been proposed (e.g. backbone size and backbone fragility). However, it is not clear whether they really explain problem hardness or just measure 'circumstantial' correlations.

As an alternative, it will be shown that it is the imbalance between the negated and unnegated occurrences of the variables that leads to hard search spaces. Some simple parameters measuring the effects of imbalance will be proposed. It will be shown that although they capture the 'reason' why local search mechanisms are likely to make wrong decisions during the search, a perfect correlation with search cost is still not to be expected.
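
One way such an imbalance parameter might be computed (an illustration, not necessarily the talk's exact definition) is to average, over the variables of a CNF formula, the normalised difference between negated and unnegated occurrences:

from collections import Counter

def imbalance(clauses):
    pos, neg = Counter(), Counter()
    for clause in clauses:
        for lit in clause:
            (pos if lit > 0 else neg)[abs(lit)] += 1
    variables = set(pos) | set(neg)
    return sum(abs(pos[v] - neg[v]) / (pos[v] + neg[v]) for v in variables) / len(variables)

balanced   = [(1, -2, 3), (-1, 2, -3), (1, 2, -3), (-1, -2, 3)]
unbalanced = [(1, 2, 3), (1, 2, -3), (1, -2, 3), (1, 2, 3)]

print("balanced formula:  ", imbalance(balanced))
print("unbalanced formula:", imbalance(unbalanced))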

Although the talk concentrates on the WSAT algorithm with random 3-SAT problems, other problem types will be briefly touched upon, and the implications of these findings for local search in general and for complete DPLL-like methods will be discussed.



Friday 14th December, 2001, 12:30, CS202J

Speeding up complete search for SAT problems

Lyndon Drake
Department of Computer Science
University of York

Abstract:

Solving SAT (propositional satisfiability) problems is in general intractable. However, because of both the theoretical importance of SAT and its many practical applications, it is important to find techniques for quickly solving SAT problem instances. One way of improving the performance of SAT solvers is to use automated inference to generate implied clauses. These implied clauses can speed up search, but only if the improved search time outweighs the cost of generating them. I have examined one method of automatically generating such clauses.
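
A minimal sketch of one classical way to generate implied clauses, binary resolution (an illustration of the general idea, not the specific method examined in the talk):

def resolve(c1, c2):
    """All non-tautological resolvents of two clauses (clauses = frozensets of ints)."""
    out = set()
    for lit in c1:
        if -lit in c2:
            resolvent = (c1 - {lit}) | (c2 - {-lit})
            if not any(-l in resolvent for l in resolvent):   # drop tautologies
                out.add(frozenset(resolvent))
    return out

cnf = [frozenset(c) for c in [(1, 2), (-2, 3), (-1, 3), (-3, 4)]]

implied = set()
for i, ci in enumerate(cnf):
    for cj in cnf[i + 1:]:
        implied |= resolve(ci, cj) - set(cnf)

for clause in sorted(implied, key=sorted):
    print("implied clause:", sorted(clause, key=abs))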

Using Fixpoints for Bounded Model Checking

Dan Sheridan
Department of Computer Science
University of York

Abstract:

Bounded model checking (BMC) is a comparatively new model checking technique which overcomes some of the problems of symbolic model checkers by limiting the depth of the search space and exploiting off-the-shelf Boolean satisfiability (SAT) checkers. While the technology is not highly developed compared to symbolic model checking, it has found widespread acceptance in industry. My talk will focus on the encoding into SAT that BMC uses, and how the encoding can be modified to make use of the fixpoint characterisations of temporal operators which form the basis of symbolic model checking.
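
The standard BMC encoding has the shape I(s0) and T(s0,s1) and ... and T(s_{k-1},s_k) and (bad(s0) or ... or bad(s_k)). The toy sketch below (my own, with a 2-bit counter as the system and brute-force enumeration standing in for a SAT solver) builds and checks that formula for increasing bounds k.

from itertools import product

def init(s):     return s == (0, 0)
def trans(s, t): return (s[0] * 2 + s[1] + 1) % 4 == t[0] * 2 + t[1]
def bad(s):      return s == (1, 1)

def bmc(k):
    """Is some bad state reachable within k steps?  (Satisfiability by enumeration.)"""
    states = list(product((0, 1), repeat=2))
    for path in product(states, repeat=k + 1):               # s0 .. sk
        if (init(path[0])
                and all(trans(path[i], path[i + 1]) for i in range(k))
                and any(bad(s) for s in path)):
            return True, path
    return False, None

for k in range(5):
    sat, witness = bmc(k)
    print(f"bound k={k}: formula {'satisfiable' if sat else 'unsatisfiable'}",
          f"witness: {witness}" if sat else "")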

Last updated on 10 March 2011