February 9th

Learnability in Inductive Logic Programming
C. David Page
Oxford University

Abstract

Inductive Logic Programming (ILP) is a rapidly growing research area at the intersection of logic programming and machine learning. ILP involves the development of algorithms that inductively learn logic programs. A natural approach to ILP research is to study the learnability, under models from Computational Learning Theory (COLT), of various classes of logic programs. This talk describes how COLT learnability models can be applied to ILP, and surveys the existing results on learnability in ILP.
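
As a concrete illustration of the learning task (a standard textbook-style example, not taken from the talk), given background knowledge about a parent relation together with positive and negative examples of an ancestor relation, an ILP system might induce the definite clause theory
\[
\mathit{ancestor}(X,Y) \leftarrow \mathit{parent}(X,Y)
\]
\[
\mathit{ancestor}(X,Y) \leftarrow \mathit{parent}(X,Z) \land \mathit{ancestor}(Z,Y)
\]
Learnability results then ask, for classes of such programs, whether a correct hypothesis can be found from polynomially many examples in polynomial time.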


March 8th

Synthesising Semantics for Extensions of Propositional Logic
Hans Juergen Ohlbach
Max Planck Institute for Informatics (Germany)

Abstract

Given a Hilbert-style specification of an extension of standard propositional logic, it is shown how the basic model-theoretic semantics can be obtained by syntactic transformations of the axioms. The driving force for the transformations is the intention to eliminate certain properties, i.e. to turn them into tautologies.

The following transformations are considered. Elimination of the reflexivity and transitivity of a binary consequence relation yields the basic possible worlds framework. Elimination of the congruence properties of the connectives yields weak neighbourhood semantics. Elimination of certain monotonicity properties yields a stronger neighbourhood semantics. Elimination of certain closure properties yields relational possible worlds semantics for the connectives.

Taking propositional logic as the basis of the specification makes it possible to turn those parts which are not eliminated into second-order predicate logic (PL2) formulae. In many cases these formulae can be simplified to an equivalent first-order predicate logic (PL1) formula which describes the corresponding frame property. All transformations work for arbitrary n-place connectives. The steps can be fully automated by means of PL1 theorem provers and quantifier elimination algorithms. The meta theory guarantees soundness and completeness of all transformation steps.
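
A standard instance of this kind of transformation, included here for illustration (it is not necessarily an example from the talk): the modal axiom \(\Box p \rightarrow p\) has the second-order translation
\[
\forall P\,\forall w\,\bigl[\,\bigl(\forall v\,(R(w,v) \rightarrow P(v))\bigr) \rightarrow P(w)\,\bigr],
\]
and eliminating the predicate quantifier (for instance with a quantifier elimination algorithm such as SCAN) simplifies this to the equivalent PL1 frame property \(\forall w\,R(w,w)\), i.e. reflexivity of the accessibility relation.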


March 11th

Solving Interval Constraint Problems
Peter Ladkin
University of Stirling

Abstract

There are more ways of doing temporal logic than using tense logic. Interval constraint problems are NP-complete, and generalise interval graph problems from Operations Research. They have been a focus of research in AI stemming from Allen's 1983 work on heuristic scheduling in planning, and have applications ranging from scheduling to DNA sequencing.

We built a fast solution algorithm, mixing pruning (path-consistency computations) with search. Solutions to smaller randomly-generated problems (20 variables) are obtained in 0.5 to 30 seconds. Larger problems (100 to 500 variables) are solved in roughly cubic time, which is the (serial) complexity of path-consistency. Resource bounds are spatial rather than temporal. Significant search occurred only in problems for which 8 <= n*p <= 12, where n is the number of variables and p is the proportion of non-trivial constraints.

Improving path-consistency computations is more important for interval constraints than improving search. The mathematics of constraint problems, especially path-consistency, is neatly described by Tarski's relation algebra. If time permits, I shall discuss an n^2 lower bound and an n^2/log n upper bound for parallel local iterative algorithms for path-consistency.
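
For readers unfamiliar with the pruning step, the following is a minimal PC-1-style sketch, in Python, of path-consistency for qualitative constraint networks such as Allen's interval algebra. It is illustrative only, not the authors' implementation; in particular, the function compose, standing for the algebra's composition table, is an assumed input.

    def path_consistency(C, n, compose):
        """Tighten the n x n constraint matrix C in place (PC-1 style).

        C[i][j] is a set of base relations between variables i and j;
        compose(r, s) returns the set of base relations in the composition
        of base relations r and s.  Returns False if some constraint
        becomes empty, i.e. the network is inconsistent.
        """
        changed = True
        while changed:                      # iterate to a fixed point
            changed = False
            for i in range(n):
                for k in range(n):
                    for j in range(n):
                        # C[i][j] := C[i][j] intersected with C[i][k] o C[k][j]
                        comp = set()
                        for r in C[i][k]:
                            for s in C[k][j]:
                                comp |= compose(r, s)
                        tightened = C[i][j] & comp
                        if tightened != C[i][j]:
                            C[i][j] = tightened
                            changed = True
                            if not tightened:
                                return False   # empty constraint: inconsistent
        return True

Each pass of the triple loop is cubic in the number of variables, consistent with the roughly cubic serial behaviour reported above; a full solution algorithm interleaves such pruning with backtracking search over the remaining disjunctions.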

This work is joint with Dr. Alexander Reinefeld of Universitaet Paderborn, with contributions by Professor Peter van Beek of the University of Alberta. Work on the mathematics of constraint problems is joint with Professor Roger Maddux of Iowa State University.


May 5th

Relating Complexity to Practical Performance in Parsing with Wide-Coverage Unification Grammars
John Carroll
University of Cambridge

Abstract

This talk demonstrates that exponential complexities with respect to grammar size and input length have little impact on the performance of three unification-based parsing algorithms, using a wide-coverage grammar. The results imply that the study and optimisation of unification-based parsing must rely on empirical data until complexity theory can more accurately predict the practical behaviour of such parsers.


May 19th - The First Annual Knowledge Representation and Reasoning Distinguished Lecturer

The TRAINS System: Integrating Natural Language and Reasoning
James Allen
University of Rochester (USA)

Abstract

The TRAINS project is aimed at building an intelligent planning assistant that is conversationally proficient in natural language dialog. The project represents a concerted effort to bring work in natural language understanding, dialog modeling, knowledge representation and planning into a single coherent system. The TRAINS domain involves scheduling problems in a simulated world consisting of cargo trains, cities and factories. A person, called the manager, is given particular tasks to achieve and must interact with the system in natural language to explore possible approaches and eventually decide on a plan. While a relatively simple domain, it is complex enough to push the state of the art in a wide range of areas from knowledge representation to spoken language processing. This talk will give an overview of problems faced and the results achieved over the first four years of the project.


May 20th - The First Annual Knowledge Representation and Reasoning Distinguished Lecturer

Reasoning about Time, Actions, Events, and Plans
James Allen
University of Rochester (USA)

Abstract

The representation of action is currently a very active area of research in AI, and there is widespread recognition of the need for temporally explicit world representations. Most current researchers, however, still make strong simplifying assumptions about time, which make their results unsuited for practical application. I will show how temporal logic provides a clean theoretical basis for solving many hard problems in reasoning about actions, especially in complex situations that involve simultaneous actions and external events. I will then discuss some of our most recent work on representing and reasoning about plans, which builds on the temporal logic.
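
To give a flavour of the representation (an illustrative formula in the general style of interval temporal logic; the notation here is an assumption of this summary, not necessarily Allen's), simultaneous actions can be expressed directly by relating the intervals over which they occur:
\[
\mathit{Occurs}(\mathit{lift\_left}, i_1) \land \mathit{Occurs}(\mathit{lift\_right}, i_2) \land i_1 = i_2 \;\rightarrow\; \exists i_3\,\bigl(\mathit{Meets}(i_1, i_3) \land \mathit{Holds}(\mathit{raised}, i_3)\bigr)
\]
Because intervals are first-class terms, simultaneity, overlap and external events need no special-purpose machinery.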


June 1st

Issues in Lexical Knowledge Representation
Gerald Gazdar
University of Sussex

Abstract

It is now widely agreed among natural language processing researchers that inheritance-based formalisms provide the appropriate framework for representing lexicons. But within this consensus, a number of important high-level issues remain unresolved.

In this talk, I shall discuss a number of these issues. The talk will be nontechnical and I shall attempt to make the issues accessible to people with little or no background in natural language processing.


June 16th

Solving Satisfaction Problems with XOR-Resolution
Alan Frisch
University of York

Abstract

Though resolution and constraint satisfaction problems are two of the most developed areas in artificial intelligence, almost no connections between the two have been made. This talk explores the relationship between resolution and constraint satisfaction problems. The exploration results in the development of the XOR-resolution rule of inference, a generalization of ordinary resolution that can be used to solve constraint satisfaction problems. This talk proves that a clausal-form constraint satisfaction problem is backtrack-free if its constraints are closed under XOR-resolution. This generalizes the usual completeness result for resolution by telling us not only what happens when an unsatisfiable set of clauses is closed under resolution but also what happens when a satisfiable set of clauses is closed under resolution.
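
For readers unfamiliar with the base rule being generalised, ordinary propositional resolution is
\[
\frac{A \lor p \qquad B \lor \lnot p}{A \lor B},
\]
and the usual completeness result referred to above states that an unsatisfiable set of clauses closed under this rule contains the empty clause.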


July 7th

A Practical Theory of Nonmonotonic Temporal Modelling
Craig MacNish
University of York


September 9th

Art Appreciation and Belief Revision
Mary-Anne Williams
University of Newcastle (Australia)


September 19th

Reasoning about Action in First-Order Logic
Charles Elkan
University of California at San Diego (USA)

Abstract

In this talk I will discuss how to axiomatize the ways in which actions change the world, using the situation calculus ontology and standard first-order logic. The proposed method is similar to methods using circumscription or logic programming, but uses only a monotonic logic. Nevertheless, the method is quite powerful. It can handle forward and backward reasoning, planning, and narrative reasoning. In addition, it can be extended to handle actions with nondeterministic effects, and actions that have indirect effects (the ramification problem).
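
To illustrate the style of axiomatization (a standard successor-state-style axiom in plain first-order logic; the talk's own formulation may differ in detail), the effect of actions on a single fluent can be captured monotonically with a biconditional:
\[
\forall a\,\forall s\;\bigl[\,\mathit{Holds}(\mathit{loaded}, \mathit{do}(a,s)) \leftrightarrow a = \mathit{load} \lor \bigl(\mathit{Holds}(\mathit{loaded}, s) \land a \neq \mathit{unload}\bigr)\,\bigr]
\]
The biconditional states the effects of load and unload and at the same time settles the frame problem for this fluent, all within monotonic logic.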


October 19th

The LOLITA System as an example of Natural Language Engineering
Roberto Garigliano
University of Durham

Abstract

Dr Garigliano will talk about the theory and practice of building Natural Language systems. LOLITA is the second largest Haskell program known to the Haskell development team. (For those of you who have done/are doing FUN: Haskell is a functional programming language from which Gofer is derived.)


November 10th

Non-Well-Founded Set Theory as a Semantics for Semantic Nets
Robin Hill
University of Maryland at Menwith Hill

Abstract

Knowledge representation and reasoning systems that are used for cognitive modelling must capture mental intensions. SNePS (Semantic Network Processing System), developed by Shapiro, Rapaport, and colleagues, is such a system. A semantic-network system used for this purpose needs a semantics in which the nodes of a network are terms in the language of thought of the modelled cognitive agent and represent entities in the agent's mental universe, reflecting all the quirks of real concepts, such as a high degree of interdependence, inconsistency and circularity.

Non-well-founded set theory, with its legitimisation of circular structures, provides an interpretation for SNePS. A certain category of SNePS node (the base node), representing a discrete concept, is given a semantics that both influences and is influenced by its dominating compound (molecular) nodes. A semantic function mu is defined that assigns a non-well-founded set, formed from the outermost (sensory) nodes of the cognitive agent, to the other nodes of that agent. Investigation shows that this hyperset semantics supports representational principles of SNePS, such as the conceptual uniqueness of every node, and allows no node to be circular to the point of vacuity.
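
The technical fact being exploited here (a standard property of Aczel-style non-well-founded set theory, stated for orientation) is that the Anti-Foundation Axiom (AFA) gives every system of set equations, circular ones included, a unique solution. For example, the equation
\[
x = \{x\}
\]
has no solution in well-founded set theory but exactly one, usually written \(\Omega\), under AFA. It is this uniqueness that lets a semantic function like mu assign denotations to mutually dependent nodes: the system of equations relating each node's denotation to those of the nodes it dominates is guaranteed a unique solution.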


November 16th

Bayesian Inductive Logic Programming
Stephen Muggleton
Oxford University

Abstract

Inductive Logic Programming (ILP) involves the construction of first-order definite clause theories from examples and background knowledge. Unlike both traditional Machine Learning and Computational Learning Theory, ILP is based on lock-step development of Theory, Implementations and Applications. ILP systems have successful applications in the learning of structure-activity rules for drug design, semantic grammar rules, finite element mesh design rules and rules for prediction of protein structure and mutagenic molecules. The strong applications in ILP can be contrasted with relatively weak PAC-learning results (even highly-restricted forms of logic programs are known to be prediction-hard). It has recently been argued that the mismatch is due to distributional assumptions made in application domains. These assumptions can be modelled as a Bayesian prior probability representing subjective degrees of belief. Other authors have argued for the use of Bayesian prior distributions for reasons different from those given here, though this has not led to a new model of polynomial-time learnability. Incorporation of Bayesian prior distributions over time-bounded hypotheses in PAC leads to a new model called U-learnability. It is argued that U-learnability is more appropriate than PAC for Universal (Turing computable) languages. Time-bounded logic programs have been shown to be polynomially U-learnable under certain distributions. The use of time-bounded hypotheses enforces decidability and allows a unified characterisation of speed-up learning and inductive learning. U-learnability has as special cases PAC and Natarajan's model of speed-up learning.
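
For comparison (the standard PAC criterion, stated here for orientation; the precise definition of U-learnability is the subject of the talk), PAC-learning requires that for every target concept in the class and every distribution D, a learner given m examples, with m polynomial in 1/epsilon, 1/delta and the size of the target, outputs in polynomial time a hypothesis h satisfying
\[
\Pr\bigl[\mathrm{error}_D(h) \leq \epsilon\bigr] \geq 1 - \delta .
\]
U-learnability, as described above, replaces this worst-case quantification over targets by a prior distribution over time-bounded hypotheses from which the target is assumed to be drawn.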


Last updated on 10 March 2011