Wednesday 27th January 1999, 2.15 pm, CS103 (Departmental Seminar)

David Gilbert
City University and the European Bioinformatics Institute

Searching the TOPS Protein Topology Database, Pattern Discovery and Topology-Based Structure Comparison


Abstract

The representation of protein three-dimensional structures at the simplified topological level is useful for the understanding and
manual comparison of protein folds. Such a representation is embodied in the database of two-dimensional TOPS cartoons
available on the WWW (http://tops.ebi.ac.uk/tops). Also available on this web site are a database query system, TopsQ, which
supports fast pattern searching over protein topology databases, and a topology-based structure comparison program. See the
website for more details: http://www.cs.york.ac.uk/seminars/99spring/


Thursday 4th February 1999, 4.30 pm, CS103

Stephen Watkinson
University of York

Lexical Learning with Categorial Grammars

Abstract

This talk will report on preliminary work carried out into automatically generating lexicons from unmarked corpora. The approach is to provide a learner with a set of available categories in categorial grammar notation. Using these categories and the principles of (pure) categorial grammar, sentences from the corpus are probabilistically parsed. These parses and the history of previously parsed sentences are used to build a lexicon and, as a side product, to mark up the corpus. Preliminary experiments will be reported that show that this approach may be of value, and a set of future developments will be suggested to test and expand the system.
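The two reduction rules of pure categorial grammar that drive such parsing can be made concrete in a small sketch. The lexicon, category encoding and sentence below are invented for illustration (a learner in the talk's setting would hypothesise the lexical entries rather than be given them):

```python
# Minimal sketch of pure (AB) categorial grammar reduction.
# Categories are atoms ("NP", "S") or functor triples (result, slash, argument).

def fapp(left, right):
    # Forward application: X/Y combined with Y yields X.
    if isinstance(left, tuple) and left[1] == "/" and left[2] == right:
        return left[0]
    return None

def bapp(left, right):
    # Backward application: Y combined with X\Y yields X.
    if isinstance(right, tuple) and right[1] == "\\" and right[2] == left:
        return right[0]
    return None

# Hypothetical lexicon: a transitive verb gets category (S\NP)/NP.
LEX = {"John": "NP", "Mary": "NP",
       "likes": (("S", "\\", "NP"), "/", "NP")}

def parse(words):
    # Naive repeated reduction; enough for this toy sentence.
    cats = [LEX[w] for w in words]
    while len(cats) > 1:
        for i in range(len(cats) - 1):
            c = fapp(cats[i], cats[i + 1]) or bapp(cats[i], cats[i + 1])
            if c is not None:
                cats[i:i + 2] = [c]
                break
        else:
            return None
    return cats[0]

print(parse(["John", "likes", "Mary"]))  # -> S
```

A parse succeeding with result category S is the evidence the learner uses to credit the hypothesised lexical entries.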


Thursday 11th February 1999, 4.30 pm, CS202J

Dimitar Kazakov
University of York

Learning Parsers from Annotated Corpora

Abstract

Background: A-V grammars, unification grammars, shift-reduce parsers, LR(k) parsers, and the logic representation of shift-reduce and LR parsers. Related work: learning shift-reduce parsers (Zelle & Mooney) and learning fast LR parsers (Christer Samuelsson).

LAPIS, or ILP learning of specialised LR parsers: extracting a backbone CFG from the corpus, generation of the LR parser, rewriting the parser in Prolog, construction of learning examples, learning of specialised parsers, and results.

Speech recognition and LAPIS: IBM VoiceType, the Spoken Language Understanding Prototype, and LAPIS for SRCL grammar induction. Ongoing and future work. Conclusions.


Wednesday 17th February 1999, 2.15 pm, CS103 (Departmental Seminar)

Simon Colton
University of Edinburgh


HR: Automatic Theory Formation in Finite Algebras

The following web page gives a detailed account of HR:
HR - Automatic Concept Formation in Pure Mathematics


Wednesday 24 February 1999, 2.15 pm, CS103 (Departmental Seminar)

David Balding
University of Reading

Statistical Analysis of Human Genome Data

Abstract

This is an exciting time for those interested in what can be "read from the genes" about the histories of humans and other species. Two key developments are now coming together: (1) appropriate stochastic models for the genealogical trees underlying samples of DNA sequences, and (2) computational statistics algorithms, particularly MCMC, specifically for parameter spaces involving trees. I will review and illustrate these developments. The bad news is that the underlying reality is so complex that strong modelling assumptions are required, and even then there is typically large uncertainty about inferences of interest. However, some interesting inferences can be made which seem to be reasonably robust to plausible modelling assumptions.


Thursday 11th March 1999, 4.30 pm, CS202J

Leo Caves
University of York

Physics-Based Biomolecular Simulation: Much Artifice, Little Intelligence

Abstract

Modelling of biomolecular systems (proteins, nucleic acids, their complexes and their environment of water, membranes, ions etc.) is proving to be an invaluable approach to understanding the structure-function relationships of these complex systems. For example, the microscopic mechanism of chemical catalysis by an enzyme, or the mechanical action of a molecular motor, can be represented in full atomic detail and macroscopic observables extracted from detailed atomistic results. The underlying computational models for physics-based atomistic simulation of biomolecules are introduced. The range of spatial and temporal scales, the heterogeneity of interactions, and their intrinsic stiffness pose significant computational challenges. These demands drive, as in many other computational physical sciences, the development of advanced numerical methods and engender an insatiable thirst for machine cycles. I will provide an overview of the kinds of problems we tackle and aspects of our software and hardware solutions. This physics-based approach to biomolecular systems behaviour is in contrast to the knowledge-based approaches that have been presented previously in this seminar series: I will do my best to identify aspects of my work which may present opportunities for AI or ML methods.




Wednesday, 28 April, 2.15, CS103 (Departmental Seminar)

Gerry Altmann
Department of Psychology
University of York


Rule Learning by Seven-Month-Old Infants and Neural Networks

Abstract

Marcus et al. (Science 283, 77-80, Jan 1st 1999) recently described a study in which 7-month-old infants were familiarized with sequences of syllables generated by an artificial grammar, and were then able to discriminate between sequences generated by that grammar and by another, even though sequences in the familiarization and test phases employed different syllables. They claimed that their infants were representing, extracting and generalizing abstract algebraic rules. This conclusion was motivated also by their claim that the infants' discrimination could not be performed by a popular class of simple neural network model. In this talk, I shall demonstrate how a simple recurrent network (SRN) can model the Marcus data, and I shall argue that, contrary to rumour, generalizations by neural networks are alive and well.
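The stimulus design at issue can be sketched briefly. The syllables below are invented placeholders; only the pattern structure (e.g. ABA vs ABB, with novel test syllables) follows the description above:

```python
import itertools

def sequences(pattern, syllables):
    # All sequences of two distinct syllables instantiating a pattern
    # such as "ABA" or "ABB".
    out = []
    for a, b in itertools.permutations(syllables, 2):
        slot = {"A": a, "B": b}
        out.append(tuple(slot[s] for s in pattern))
    return out

familiar          = sequences("ABA", ["ga", "ti", "na"])  # familiarization set
test_consistent   = sequences("ABA", ["wo", "fe"])  # novel syllables, same rule
test_inconsistent = sequences("ABB", ["wo", "fe"])  # novel syllables, other rule

print(familiar[0], test_consistent[0], test_inconsistent[0])
```

The point of contention is whether discriminating the last two sets requires algebraic rules or can emerge from a trained network's generalization.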



Thursday 29 April, 4.30 pm, CS202J

Joseph Melia
Department of Philosophy
University of York


Non-standard Models and Second-Order Logic (or, How Semantics Outruns Syntax Every Time)

Abstract

Whilst no amount of symbol manipulation will enable a computer to understand the meaning of 'Hamburger', symbol manipulation might at least enable a computer to understand central mathematics such as number theory and set theory. But I'll argue that the existence of non-standard models indicates that even this is something of a forlorn hope.



Thursday 6 May, 1:15 pm, CS103

Toby Walsh
Department of Computer Science
University of Strathclyde


Search in a Small World

Abstract

You start talking to the person sitting next to you on the aeroplane and within a few minutes discover an acquaintance in common. You therefore observe that we live in a "small world". In this talk, I will describe recent work to formalize this topological notion, to measure the frequency with which it occurs in a wide variety of graphs met in practice, and to study the impact of such a topology on search problems. The paper on which this talk is based can be found here.
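The usual formalization (following Watts and Strogatz) combines two graph statistics: a short average path length together with high clustering. A minimal sketch, with an invented example graph:

```python
from collections import deque

def avg_path_length(g):
    # Mean shortest-path distance over all ordered pairs, via BFS from
    # every node (assumes the graph is connected).
    total = pairs = 0
    for src in g:
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in g[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

def clustering(g):
    # Mean fraction of a node's neighbour pairs that are themselves
    # linked; nodes with fewer than two neighbours are skipped.
    coeffs = []
    for u in g:
        nb = sorted(g[u])
        k = len(nb)
        if k < 2:
            continue
        links = sum(1 for i in range(k) for j in range(i + 1, k)
                    if nb[j] in g[nb[i]])
        coeffs.append(2 * links / (k * (k - 1)))
    return sum(coeffs) / len(coeffs)

# Toy undirected graph: a triangle {0,1,2} with a pendant node 3.
g = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(avg_path_length(g), clustering(g))
```

A "small world" graph scores close to a random graph on the first statistic while scoring close to a regular lattice on the second.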



Thursday 6 May, 4.30 pm. CS202J

Alan Frisch
Department of Computer Science
University of York


Building Theories into Instantiation


Building Background Knowledge into Inductive Logic Programming --- Everything I know about ILP (Part 1 of 3)

Introduction

This series of three talks (6 May, 13 May, 3 June) presents everything I know about inductive logic programming. The talks address the question of how background knowledge can be incorporated into the main operations of inductive logic programming: instantiation, generalization, subsumption and refinement.

Though each talk is self contained, the second and third talks build upon the first.  To encourage attendance at all three talks, those who attend the first two will be given free admission to the third.

Results presented in the first two talks have been obtained jointly with David Page.
 

Abstract:

Instantiation orderings over formulas (the relation of one formula being an instance of another) have long been central to the study of automated deduction and logic programming. One of the most common and effective ways of building a background theory into a deductive system has been to build it into the system's instantiation ordering. We claim this will also prove to be the case for inductive learning systems.

Even a casual examination of the variety of instantiation orderings in use suggests that they are somehow related, but in exactly what way? This talk presents a general instantiation ordering of which all these orderings are special cases. The talk shows that this general ordering has the semantic properties we desire in an instantiation ordering, implying that the special cases have these properties as well.
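The ordinary, theory-free instantiation ordering that these generalise can be sketched as one-way term matching. The term encoding and predicate names below are invented for the example:

```python
# A term is a variable (an uppercase string), a constant (lowercase
# string), or a compound term encoded as a tuple (functor, arg1, ...).

def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def match(pat, term, subst=None):
    # One-way matching: term is an instance of pat iff some substitution
    # for pat's variables (returned as a dict) turns pat into term.
    subst = dict(subst or {})
    if is_var(pat):
        if pat in subst:
            return subst if subst[pat] == term else None
        subst[pat] = term
        return subst
    if isinstance(pat, tuple) and isinstance(term, tuple) \
            and pat[0] == term[0] and len(pat) == len(term):
        for p, t in zip(pat[1:], term[1:]):
            subst = match(p, t, subst)
            if subst is None:
                return None
        return subst
    return subst if pat == term else None

# p(X, f(X)) has p(a, f(a)) as an instance, but not p(a, f(b)).
print(match(("p", "X", ("f", "X")), ("p", "a", ("f", "a"))))  # {'X': 'a'}
print(match(("p", "X", ("f", "X")), ("p", "a", ("f", "b"))))  # None
```

Building a background theory into the ordering would amount to relaxing the equality tests here to equivalence modulo that theory.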



Thursday, 13 May, 4.30 pm, CS202J

Alan Frisch
Department of Computer Science
University of York


Building Theories into Generalization


Building Background Knowledge into Inductive Logic Programming --- Everything I know about ILP (Part 2 of 3)

Abstract

In deductive systems, instantiation is realized through the unification process, which computes maximal lower bounds in the instantiation ordering. Turning unification on its head yields an operation called generalization - or anti-unification - that computes minimal upper bounds. This talk examines two forms of generalization that incorporate background information. It also investigates the applications of these forms of generalization to inductive learning using two models from computational learning theory. As far as we know, these are the earliest results on computational learning theory for a first-order logical language.
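The theory-free special case, Plotkin-style anti-unification of two terms, can be sketched as follows (term encoding invented for the example, matching the toy Prolog convention of tuples for compound terms and uppercase strings for variables):

```python
import itertools

def lgg(t1, t2, table=None, counter=None):
    # Least general generalization: a minimal upper bound of t1 and t2
    # in the instantiation ordering.
    table = {} if table is None else table
    counter = counter if counter is not None else itertools.count()
    if t1 == t2:
        return t1
    if isinstance(t1, tuple) and isinstance(t2, tuple) \
            and t1[0] == t2[0] and len(t1) == len(t2):
        return (t1[0],) + tuple(lgg(a, b, table, counter)
                                for a, b in zip(t1[1:], t2[1:]))
    # Distinct subterm pairs generalize to a fresh variable; the table
    # makes repeated occurrences of the same pair share one variable.
    if (t1, t2) not in table:
        table[(t1, t2)] = "V%d" % next(counter)
    return table[(t1, t2)]

# lgg(f(a, a), f(b, b)) = f(V0, V0): the repeated mismatch shares a variable.
print(lgg(("f", "a", "a"), ("f", "b", "b")))  # ('f', 'V0', 'V0')
```

The shared-variable behaviour is what makes the result a *least* general generalization rather than just any common generalization.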
 



Tuesday, 18 May, 3:00, Room 7-30, School of Computer Studies, University of Leeds (Distinguished Lecturer Series)

John McCarthy
Department of Computer Science
Stanford University


Creative Solutions to Problems

Abstract

Making genuinely creative programs is likely to remain a distant goal of artificial intelligence until someone comes up with a suitable new idea. Therefore, it is worthwhile to chip pieces off the creativity problem and work on them separately. This talk chips a piece out of the problem of creativity by studying the notion of a creative solution to a problem apart from studying how a person or machine finds the solution. In particular, we say that a solution to a problem is creative if it involves concepts not present in the statement of the problem and the general knowledge surrounding it. As a small step toward programs that find creative solutions, we consider how to express the idea of a solution concisely. The adequacy of an idea of a solution is relative to the background of the person or program that will complete the solution. Conciseness isolates the idea from the background.



Wednesday, 19 May, 2:15, CS103 (Distinguished Lecturer Series)

John McCarthy
Department of Computer Science
Stanford University


Logical Theories of Approximate Concepts

Abstract

Approximate concepts are essential for computers to represent common sense knowledge and do common sense reasoning. Relations between approximate concepts and some of their refinements need to be represented. We use mathematical logic fortified with contexts as objects to represent facts involving approximate concepts. Further innovations in logic may be required to treat approximate concepts as flexibly in logic as people do in thought and language. A sentence involving an approximate concept may have a definite truth value even if the concept is ill-defined. For example, it is definite that Mount Everest was climbed in 1953 even though exactly what rock and ice is included in that mountain is ill-defined. Part of our goal is to discuss how solid intellectual structures can be built on swampy conceptual foundations.



Wednesday 26th May 1999, 2.15pm, CS103 (Departmental Seminar)

David Hogg
Computer Vision
University of Leeds


Visual Models of Interaction

Abstract

It is widely believed that computers will be easier to use if we can communicate with them in ways that are more similar to our interactions with other people. One way of doing this is through a simulated human head with which the user converses using familiar visual and auditory modes of communication (e.g. speech and facial expression). Amongst other things (e.g. cognitive aspects of interaction), this requires fine control of changes in visual appearance to coordinate with the perceived behaviour of the user and to maintain the flow of an interaction. We describe a method for achieving this fine control using models of the possible ways in which the appearance of individuals in an interaction can jointly develop. These models are learnt automatically by observing typical interactions between real people.



 
 

Thursday, 3 June, 4.30 pm, CS202J

Alan Frisch
Department of Computer Science
University of York


Building Theories into Refinement


Building Background Knowledge into Inductive Logic Programming --- Everything I know about ILP (Part 3 of 3)

Abstract

Since its inception, the field of inductive logic programming has been centrally concerned with the use of background knowledge in induction. Yet, surprisingly, no serious attempts have been made to account for background knowledge in refinement operators for clauses, even though such operators are one of the most important, prominent and widely-used devices in the field. This talk shows how a sort theory, which encodes taxonomic knowledge, can be built into a downward, subsumption-based refinement operator for clauses.
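One step of such a sort-aware downward refinement can be given as a hedged sketch. The taxonomy, clause encoding and names below are invented for illustration, not taken from the talk:

```python
# A toy sort theory: each sort maps to its immediate subsorts.
SUBSORTS = {"animal": ["bird", "mammal"], "bird": ["penguin"], "mammal": []}

def refine(clause):
    # A clause is abstracted here as a mapping from variable names to
    # their current sorts. A downward refinement step specializes the
    # sort of one variable to an immediate subsort, so every refinement
    # is subsumed by its parent under the sort theory.
    for var, sort in clause.items():
        for sub in SUBSORTS.get(sort, []):
            child = dict(clause)
            child[var] = sub
            yield child

clause = {"X": "animal"}
print(list(refine(clause)))  # [{'X': 'bird'}, {'X': 'mammal'}]
```

In a full ILP setting the clause would carry literals as well, and sort specialization would sit alongside the usual literal-adding refinements.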



Friday 4th June 1999, 2.15pm, CS103 (Departmental Seminar)

Donald Michie
Emeritus Professor of Machine Intelligence
University of Edinburgh


Alan Turing's `Child Machine' Project

Abstract:

A. M. Turing's name is associated with three main topics:
1. the solution of a problem in the foundations of mathematics via an abstraction known as the Universal Turing Machine (UTM),
2. engineering realizations of the UTM, and
3. potential uses of such computers to simulate human learning and cognition.

Under 3, his "Mind" paper of 1950 described what is today known as the "Turing Test". It is for evaluating claims that a given system can think. Debate around this first part of Turing's paper, which was aimed at discomfiting philosophers, distracted attention from its closing, and more substantive, part. The latter was aimed at implementation.

Turing's phased plan was:

(1) develop arts of mechanised learning of theories from data;
(2) integrate them in such a manner as to support the education and self-education of what Turing called the "child machine".

His "imitation game" carried an unstated assumption. Both the interrogator, and the human agent to be compared with the machine, were to be highly educated Englishmen, -- able to discuss, but possibly not
able to "chat". Chat mode, which can be termed "social communication" is sustainable by expert conversationalists without conscious thought.

Education cannot even start without some capability to communicate socially. To be educable at all, a child must have the rudiments, or if necessary must acquire them through special training.

Some first moves towards an educable "child machine" will be mentioned.



Thursday, 26th August 1999, 1:15pm


CS202J

Richard Scherl, New Jersey Institute of Technology, USA


Knowledge and Action: A Situation Calculus Approach

Abstract:

Intelligent agents acting in the world need to be able to accomplish their goals without having complete knowledge of their environment. Their plans must include steps to acquire the needed knowledge. This talk will focus on giving a formal account (based on the situation calculus) of the relationship between actions, perception, and knowledge. Such a theory should correctly specify which properties of the world change and which remain the same after performing any possible action. Therefore, it is necessary to address the frame problem, i.e., how to avoid specifying all of the properties that a particular action does not change. Additionally, an automated method is developed for answering queries about whether or not a property (possibly involving knowledge) will hold in the situation resulting from the execution of a particular sequence of actions.
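Reiter-style successor state axioms are one standard way of addressing the frame problem in the situation calculus: a fluent holds after an action iff the action made it true, or it already held and the action did not make it false. A toy propositional sketch (fluents and actions invented, not from the talk):

```python
# Effect axioms for a tiny domain: which (fluent, action) pairs an
# action makes true or false. Everything else persists by default.
MAKES_TRUE = {("open", "open_door")}
MAKES_FALSE = {("open", "close_door")}
FLUENTS = {"open", "locked"}

def holds_after(fluent, action, state):
    # Successor state axiom, propositional form.
    if (fluent, action) in MAKES_TRUE:
        return True
    if (fluent, action) in MAKES_FALSE:
        return False
    return fluent in state  # frame: unaffected fluents persist

def execute(actions, state):
    # Progress an initial state through a sequence of actions.
    for a in actions:
        state = {f for f in FLUENTS if holds_after(f, a, state)}
    return state

print(execute(["open_door"], {"locked"}))           # {'open', 'locked'}
print(execute(["open_door", "close_door"], set()))  # set()
```

The point is that one axiom per fluent replaces the quadratic blow-up of explicit frame axioms for every fluent-action pair.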



Thursday 30 September 1999, 1:15pm


CS202J

David Duffy, University of York


 A Review of the Inductive Proof Assistant Project

Abstract:

The Inductive Proof Assistant Project is concerned with extending CADiZ to perform inductive proofs via classical and rewrite-based strategies. This talk will review work done in the context of the project, which
includes work on mutually-recursive free types, partial functions, generalised induction principles, and implementation and application of unification. The presentation will consist primarily of examples rather than formal descriptions. The talk will also outline work to be done in the final few months of the project.



Thursday 7 October 1999, 1.15pm


CS202J

Toby Walsh, University of York


From NP to Undecidable

Abstract:

I'd like to use this slot to give people a survey of my research interests in automated reasoning, which cover a broad span from NP-complete problems like SAT and constraint satisfaction to undecidable
problems like inductive theorem proving. I'll discuss the sort of research questions which interest me, and then (according to a democratic vote) talk about one of these research areas in more technical detail.


Thursday 21 October 1999, 1.15pm


CS202J

Eduardo Alonso, University of York


Machine Learning Techniques for Logic-Based Multi-Agent Systems

Abstract:

It is difficult and often infeasible to specify Multi-Agent Systems completely in advance, because there are frequently unforeseen situations and interactions that the agents may encounter. One solution is the use of machine learning techniques to enable agents to learn from and adapt to the environment.

So far, most learning algorithms that have been applied to agents are not logic-based, and instead other techniques such as reinforcement learning are used. Even though these techniques are successful in restricted domains, they strip agents of the ability to adapt in a domain-dependent fashion, based on background knowledge of the respective situation. This ability is crucial in complex domains where background knowledge has a large impact on the quality of the agents' decision making.

We plan to test the hypothesis that logic-based systems will perform effectively because of their ability to directly incorporate domain knowledge in the reasoning and learning processes of the agents. We plan to focus on conflict simulations (models of military battles) as an ideal application domain.

In the ongoing research we apply Inductive Logic Programming and Explanation-Based Learning methods to logic-based Multi-Agent Systems, with the following expected outcomes: (1) increased ability of Multi-Agent Systems to adapt to new/unknown scenarios, (2) exploitation of background knowledge in reasoning processes of the Multi-Agent Systems, (3) overcoming the communication bottleneck between agents with hierarchical command structures, and (4) development of a tool for military training and strategy development.



Friday 29 October 1999, 1.15pm


CS202J

Daniel Kustrin


Connectionist Propositional Logic

Abstract:

A novel, purely connectionist implementation of propositional logic is constructed by combining neural Correlation Matrix Memory operations, tensor products and simple control circuits. The implementation is highly modular and expandable, and in its present form it not only allows forward rule chaining but also implements a simple neural is-a hierarchy. Additionally, some biological relevance can be inferred from the architecture, given its Hebbian learning mechanism, CMM construction and utilisation of inhibitory synapses for control.
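A binary correlation matrix memory of the kind referred to can be sketched briefly. The stored patterns below are invented; the mechanism (Hebbian outer-product storage with thresholded recall, in the style of Willshaw-type associative memories) is the standard one:

```python
def train(pairs, n, m):
    # W[i][j] is set (binary Hebbian weight) whenever input bit i and
    # output bit j co-occur in a stored pattern pair.
    W = [[0] * m for _ in range(n)]
    for x, y in pairs:
        for i in range(n):
            for j in range(m):
                if x[i] and y[j]:
                    W[i][j] = 1
    return W

def recall(W, x):
    # Sum the weight rows selected by the active input bits, then
    # threshold at the number of active input bits.
    sums = [sum(W[i][j] for i in range(len(x)) if x[i])
            for j in range(len(W[0]))]
    k = sum(x)
    return [1 if s >= k else 0 for s in sums]

pairs = [([1, 0, 1, 0], [0, 1, 0]),
         ([0, 1, 0, 1], [1, 0, 0])]
W = train(pairs, 4, 3)
print(recall(W, [1, 0, 1, 0]))  # [0, 1, 0]
```

Rule chaining then amounts to feeding a recalled output pattern back in as the next input.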



Thursday 4 November 1999, 1.15pm


CS202J

Patrick Olivier, University of York


 Camera planning: towards the automation of cinematography

Abstract:

Camera planning is the problem of placing a camera within a graphical world such that the resulting image satisfies a visual goal. We therefore need to be able both to specify the visual goal (in a manner that is appropriate to the task at hand) and to develop algorithms for computing the corresponding camera position, orientation and field of view (in a manner appropriate to the computational resources available).

This talk will introduce a number of approaches to camera planning that we are currently developing. These vary from computationally intensive, "off-line" methods, that can satisfy relatively complex visual goals, to real-time techniques, which aim to impose minimal additional resource demands on the graphics subsystem. In addition to outlining approaches, I'll also address problems in the specification and evaluation of camera planning techniques, and other issues that need to be addressed to automatically produce cinematically appealing graphics applications.

I will be assuming that the audience has NO familiarity with computer graphics or optimisation techniques, so please feel free to come along and give me a hard time!  I'm hoping that I might also get some pointers to other work that might be  appropriate to automating camera planning specifically, and cinematography in general.
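The "off-line" end of the spectrum can be caricatured as plain optimisation over candidate placements. Everything below (the scene, the scoring function, the grid) is invented for illustration, not taken from the talk:

```python
import math

SUBJECT = (0.0, 0.0)      # hypothetical object the shot should feature
PREFERRED_DIST = 5.0      # hypothetical visual goal: a medium shot

def score(cam):
    # Penalise deviation from the preferred viewing distance; a real
    # system would combine many such terms (occlusion, framing, angle).
    return -abs(math.dist(cam, SUBJECT) - PREFERRED_DIST)

def plan(candidates):
    # Exhaustive search: pick the best-scoring camera placement.
    return max(candidates, key=score)

grid = [(x, y) for x in range(-8, 9) for y in range(-8, 9)]
best = plan(grid)
print(best)
```

Real-time techniques replace the exhaustive search with cheap local or incremental methods, trading goal complexity for speed.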


Wednesday 10 November 1999, 2.00 pm (Departmental Seminar)


CS103
 
 

Ian Gent, University of St Andrews


Algorithms and Problems for Quantified SATisfiability

Abstract:

The problem of propositional satisfiability (SAT) is receiving great attention in artificial intelligence. A generalised version, Quantified Satisfiability (QSAT), is starting to be investigated, because its richer language offers more expressiveness for problems such as planning. I will talk about some algorithms that have been introduced recently for QSAT. I will also talk about the nature of QSAT problems themselves. Some benchmark problems from the literature turn out to be trivially soluble, and I will discuss how such pitfalls can be avoided.
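The semantics of QSAT can be made concrete with a naive evaluator that expands each quantifier over both truth values (exponential, so only a specification, not an algorithm of the kind the talk discusses; the example formula is invented):

```python
def eval_qbf(prefix, cnf, env=None):
    # prefix: list of ("forall"|"exists", var); cnf: list of clauses,
    # each a list of literals (var, negated). Evaluate recursively.
    env = env or {}
    if not prefix:
        return all(any(env[v] != neg for v, neg in clause)
                   for clause in cnf)
    (q, var), rest = prefix[0], prefix[1:]
    branches = (eval_qbf(rest, cnf, {**env, var: val})
                for val in (False, True))
    return any(branches) if q == "exists" else all(branches)

# Matrix (x or not y) and (not x or y), i.e. x <-> y.
cnf = [[("x", False), ("y", True)],
       [("x", True), ("y", False)]]

# forall x exists y. x <-> y : true (y can copy x).
print(eval_qbf([("forall", "x"), ("exists", "y")], cnf))  # True
# exists y forall x. x <-> y : false (no fixed y works for both x).
print(eval_qbf([("exists", "y"), ("forall", "x")], cnf))  # False
```

The example also shows why quantifier order matters, which is precisely what makes QSAT (PSPACE-complete) harder than SAT.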


Monday, 22 November, 1:15pm


CS202J

Tomonobu Ozaki, Keio University


 Integration of ILP system Progol with Relational Database

Abstract:

A problem with inductive logic programming (ILP) systems is that they need enormous computational time to obtain results from the huge amounts of data that arise in problems such as data mining.

One solution might be to utilize database technology. In fact, some ILP systems, as well as other data mining engines, have been integrated with relational database management systems. In this talk, we propose a version of the ILP system Progol that is integrated with a relational database through SQL. In particular, we show an improved induction algorithm of this integrated system for the multi-valued classification problem.

I am currently working on this problem, and the talk will be a short presentation of some ideas and my work so far.
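The general idea of such an integration can be sketched with an invented schema and clause (none of this is taken from the talk): the expensive ILP step of counting how many examples a candidate clause covers is pushed into the database as a single SQL query.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE example(id INTEGER, cls TEXT);
    CREATE TABLE has_part(id INTEGER, part TEXT);
    INSERT INTO example VALUES (1,'bird'),(2,'bird'),(3,'fish');
    INSERT INTO has_part VALUES (1,'wing'),(2,'wing'),(3,'fin');
""")

# A candidate clause  class(X, bird) :- has_part(X, wing)  becomes a
# join; its positive coverage is the number of examples that satisfy
# both the body and the head class.
covered = db.execute("""
    SELECT COUNT(*) FROM example e JOIN has_part p ON e.id = p.id
    WHERE p.part = 'wing' AND e.cls = 'bird'
""").fetchone()[0]
print(covered)  # 2
```

Evaluating many candidate clauses then becomes many set-oriented queries that the database engine can optimise, instead of tuple-at-a-time Prolog resolution.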



Thursday 25 November 1999, 1.15pm


CS103

AI Project Student Presentations:

Matthew Hammond


School Inspection Scheduling with Constraint Logic Programming

Kevin Jones


Emergency Doctor Rota Generation with Constraint Logic Programming

Peter Stock


Solving Non-Boolean Satisfiability Problems with the Davis-Putnam Procedure

Matthew Slack


Solving Planning Problems with Non-Boolean Satisfiability Solvers


Thursday 2 December 1999, 1.15pm


CS202J

Flaviu Marginean



Thursday 9 December 1999, 1.15pm


CS202J

Manfred Kerber, School of Computer Science, University of Birmingham


 Proof Planning - Case Studies and Future Directions

Abstract:

One can distinguish two major approaches to automated theorem proving, machine-oriented methods like the resolution method and human-oriented methods.  Proof planning as first introduced by Alan Bundy belongs to the second category. It tries to model the cognitive ability of humans to perform plausible reasoning, that is, to master huge search spaces by making good guesses.  The process of plausible reasoning is viewed as a planning process; its strength heavily relies on the availability of domain specific problem solving knowledge.

In the talk, I want to address mainly the following questions: what is proof planning, what are successful applications of it, what are major open problems, and how can they be addressed.

Last updated on 10 March 2011