Formal reasoning in the Z notation using the CADiZ toolset
Ian Toyn
University of York
The Z notation is used for formal specifications of systems. CADiZ is a set of tools for manipulating Z specifications. It extends an existing document preparation system (troff/latex/word) with a type-checker for the mathematical paragraphs of a specification and a browser based on a document previewing tool. Having established this effective browsing interface, some reasoning abilities have been added that are accessed through the same user interface. The result is a theorem prover whose user interface is superior to, but whose reasoning abilities are as yet inferior to, those of most good theorem provers.
This seminar will take the form of an on-line demonstration of CADiZ using a portable computer linked to a video projector.
A Topological Transition Based Logic for the Qualitative Motion of Objects
Andre Trudel
Jodrey School of Computer Science, Acadia University, Nova Scotia, Canada
We present a spatio-temporal ontology suitable for representing and reasoning about the qualitative motion of rigid bodies. This simple ontology provides a uniform treatment of motion in one, two, and three dimensional space. A succinct axiomatization is provided capturing the ontology. This first order logic is based on the transition of topological relations between objects.
A New Approach for Representing Interval-Based Temporal Information
Andre Trudel
Jodrey School of Computer Science, Acadia University, Nova Scotia, Canada
Every aspect of the world around us changes with time. Therefore, if we are to use a computer to represent and reason about the real world, we must take time into account. There are two types of temporal information that need to be represented: point-based and interval-based. An example of point-based information is ``the book is on the table at time point 3''. An example of interval-based information is ``John ran a while between times 5 and 9''.
A standard AI technique for representing information that is true or false at a point is to use a relation, say true, to associate information with a time point. For example, ``the book is on the table at time point 3'' can be represented as: true(3, on(book,table)). The above approach can be extended to represent interval-based information. A relation is used to directly associate interval-based information with a time interval. For example, ``John ran a while between times 5 and 9'' can be represented as: true(5,9, ran-a-while(John)). There are problems with this approach, which are discussed in the talk.
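The two encodings described above can be sketched concretely. The following is a minimal illustration in Python (the function names holds_at and holds_over are hypothetical, chosen only to mirror the true/2 and true/3 relations in the abstract):

```python
# Point-based facts: true(t, fluent), stored as (t, fluent) pairs.
point_facts = {(3, "on(book,table)")}           # true(3, on(book,table))

# Interval-based facts: true(t1, t2, fluent), stored as triples.
interval_facts = {(5, 9, "ran_a_while(John)")}  # true(5,9, ran-a-while(John))

def holds_at(t, fluent):
    """Point-based query: is the fluent recorded as true at time t?"""
    return (t, fluent) in point_facts

def holds_over(t1, t2, fluent):
    """Interval-based query: is the fluent recorded as true over [t1, t2]?"""
    return (t1, t2, fluent) in interval_facts
```

Note that in this naive encoding the two kinds of fact are entirely unrelated: nothing connects what holds over [5, 9] to what holds at the points inside it, which is one symptom of the problems the talk discusses.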
A new first order temporal logic is presented. The new logic is point based. To represent interval based information, we use the observation that what is true at every point in an interval completely determines what is true over the interval. We use the Riemann integral to relate an interval with its internal points. The approach is unique in AI. The talk concludes with a survey of research projects that use the new logic.
DCGs: Parsing as Deduction?
Chris Mellish
University of Edinburgh
Although the view of parsing as deduction has been very useful, many modern approaches in Computational Linguistics make use of other ideas, such as abduction and model construction. These different views seem to be in conflict and prompt one to ask what parsing really "is".
In this talk, I concentrate on Definite Clause Grammars (DCGs), the paradigm example of "parsing as deduction". I show that, under some plausible assumptions, the computation involved in using deduction to derive consequences of DCG clauses produces exactly the same results as would be produced by a process of model building using a set of axioms derived syntactically from the original clauses.
This supports the view that parsing (and generation) can be seen uniformly as processes of model construction -- even in the original paradigm case there is some doubt as to whether "parsing as deduction" is the best, or only, explanation of what is happening.
Automating Commonsense Physical Reasoning
Ernest Davis
Courant Institute, New York University
A videotape of this lecture is available from the library.
It is almost twenty years since Pat Hayes' "Naive Physics Manifesto" first proposed a research programme of formalizing and implementing physical reasoning at the commonsense level. This type of commonsense reasoning is essential if we are to build practical systems that are capable of understanding, and interacting with, the everyday world around them. Though Hayes' methodology is in some respects problematic, the programme has made steady, though very slow, progress.
This talk will discuss a number of important issues that arise in this research.
Automating Qualitative Kinematic Reasoning
Ernest Davis
Courant Institute, New York University
A kinematic physical theory consists purely of geometric constraints on the shapes and motions of objects, without reference to such concepts as mass, force and energy. The most familiar kinematic theory is that of rigid solid objects, though kinematic theories can also be applied to non-rigid objects, fluids and shape-modifying processes.
Almost all work in computer science on automating problem solving in the kinematic theory of rigid objects assumes a complete enumeration of the objects involved and a precise description of their shapes. In many practical situations, however, a reasoner must make do with partial knowledge. This is the problem of qualitative kinematic reasoning, about which much less is known.
This talk will discuss three particular areas of research in qualitative kinematics.
Clausal Resolution for Temporal Logics
Michael Fisher
Manchester Metropolitan University
Point-based temporal logics are widely used in computer science and artificial intelligence, particularly in the specification and verification of reactive and multi-agent systems. However, due to inductive interaction between temporal operators, the mechanisation of these logics is difficult. While this interaction adds complexity, it is also central to the use of the logics in a variety of applications.
In this talk, we outline a clausal resolution method for temporal logic that isolates this inductive element, effectively applying classical resolution elsewhere. Applications of the core temporal logic, together with examples of resolution proofs, will be presented. This simple approach will then be extended in a variety of ways, to temporal logics of knowledge and belief, used in the specification of multi-agent systems, and to branching-time temporal logics, used in the verification of reactive systems.
Combining Classical and Rewrite-Based Approaches to Proof by Induction
David Duffy
University of York
Several reasoning tools exist for the specification language Z but they all lack substantial facilities for performing inductive proofs. We propose the development of such a reasoning assistant, utilising aspects of both the classical and rewrite-based approaches to proof by induction. Our approach should overcome some of the limitations of present inductive proof systems, such as the severe restrictions on the class of theories to which they apply (e.g. canonical term rewriting systems), or the limited form of induction they allow (e.g. induction on variables). We propose to implement our reasoning mechanisms in the CADiZ system, an interactive reasoning tool for Z developed at the University of York.
This talk will outline our proposal, concentrating on the general principles underlying the inductive-reasoning component, and those aspects of the two approaches to induction that we wish to incorporate into the proof tool. The talk does not assume knowledge of Z.
Learning Constraints for Continuous Constraint Satisfaction
Marcello Pelillo
University of Venice
I will present two approaches to learning the compatibility model of relaxation labeling processes, a general class of parallel distributed algorithms developed to solve "continuous" constraint satisfaction problems arising in computer vision and pattern recognition.
Experiments on specific applications will be described that confirm the effectiveness of the proposed approaches.
A Constraint Logic Approach to Automated Deduction in Modal Logics
Richard Scherl
New Jersey Institute of Technology, USA
A general framework for the construction of deductive systems that use modal logic is developed. Sentences in modal logic are translated into a constraint logic in which the constraints represent the accessibility relation in the possible world semantics for these logics. The translation procedure and the constraint logic depend upon the specific modal logic being translated. The framework provides a mechanism for converting a wide variety of first order inference rules into inference rules for a constraint logic. The proofs of soundness and completeness of the constrained inference rules are produced by systematically transforming the proofs for the original inference rules. Special mechanisms are developed for reasoning about the constraints. The integration of these special purpose reasoners and the general deductive system is accomplished by drawing upon general results in the area of hybrid reasoning.
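The translation step described above can be illustrated by the standard relational translation, of which the framework is a generalisation (this sketch is not the author's system; the tuple-based formula encoding and the output syntax are assumptions made for illustration). Each modal operator is compiled away in favour of an explicit accessibility atom R over worlds:

```python
import itertools

def translate(formula, world="w0", fresh=None):
    """Translate a modal formula, given as nested tuples such as
    ("box", ("atom", "p")), into a first-order string in which the
    accessibility relation R is explicit."""
    if fresh is None:
        fresh = itertools.count(1)  # supply of fresh world variables
    op = formula[0]
    if op == "atom":
        return f"{formula[1]}({world})"
    if op == "not":
        return f"~({translate(formula[1], world, fresh)})"
    if op == "and":
        return (f"({translate(formula[1], world, fresh)}"
                f" & {translate(formula[2], world, fresh)})")
    if op == "box":   # [] phi : phi holds in every accessible world
        v = f"w{next(fresh)}"
        return f"forall {v}. (R({world},{v}) -> {translate(formula[1], v, fresh)})"
    if op == "dia":   # <> phi : phi holds in some accessible world
        v = f"w{next(fresh)}"
        return f"exists {v}. (R({world},{v}) & {translate(formula[1], v, fresh)})"
    raise ValueError(f"unknown operator: {op}")
```

For example, translating []p yields forall w1. (R(w0,w1) -> p(w1)); in the framework, the R-atoms produced here become the constraints handled by the special-purpose reasoners.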
It is argued that a number of existing modal deduction methods can be viewed as instances of this general framework. The advantages of the general approach are simple proofs of correctness for various instances of the framework, applicability to a wide variety of logics and proof methods, and ease in incorporating additional features not available in other automated modal deductive systems.
A Yardstick for the Evaluation of Case-Based Classifiers
Tony Griffiths
University of York
A Case Base Reasoning Framework
Hugh Osborne
University of York
Case based systems typically retrieve cases from the case base by applying similarity measures. The measures are usually constructed in an ad hoc manner. This talk will present a theoretical framework for the systematic construction of similarity measures. In addition to paving the way to a design methodology for similarity measures, this systematic approach facilitates the identification of opportunities for parallelisation in case base retrieval.
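To make the starting point concrete, the following is a minimal sketch of the kind of ad hoc similarity measure the abstract refers to, not the talk's framework: per-attribute similarities combined by a weighted sum, with retrieval returning the most similar stored case (all names and the particular similarity formula are illustrative assumptions):

```python
def attribute_sim(a, b):
    """Similarity of two numeric attribute values, in (0, 1]."""
    return 1.0 / (1.0 + abs(a - b))

def case_sim(query, case, weights):
    """Weighted combination of per-attribute similarities."""
    total = sum(weights)
    return sum(w * attribute_sim(q, c)
               for w, q, c in zip(weights, query, case)) / total

def retrieve(query, case_base, weights):
    """Return the stored case most similar to the query. Each case is
    scored independently, which is what makes this retrieval step a
    natural candidate for parallelisation."""
    return max(case_base, key=lambda case: case_sim(query, case, weights))
```

The choices of attribute_sim and of the weighted-sum combination are exactly the ad hoc decisions that a systematic construction framework would replace with principled ones.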
Presented at EWCBR, November 14th - 16th, 1996.
Completeness and Categoricity in Region-Based Theories of Space
Brandon Bennett
University of Leeds
The RCC theory of Randell, Cohn and Cui is a 1st-order theory of spatial regions based on the primitive relation of connectedness. The axioms of the theory enable a large class of intuitively correct inferences to be made. However, it is readily apparent that there is no single model of the axioms: for instance, the dimensionality of regions is not fixed --- one can interpret them as being of two, three or even higher dimension. Moreover, spatial configurations which are impossible in (say) 2D may become possible in 3 or more dimensions. This means that the entailments provable in the RCC theory are only those that hold in a very large class of possible models, most of which will have a very different structure from what is intended.

In this talk I give motivation and a methodology for the construction of a complete and categorical theory of spatial regions. Such a theory would characterise a unique intended model and would make every 1st-order formula in the vocabulary of the theory either provably true or provably false. Additional axioms for the RCC theory are suggested to restrict the class of its models. Specifically, I address the problems of fixing the dimension of regions and of providing an adequate set of existential axioms. We shall see that absolute completeness cannot be obtained in a purely 1st-order extension of RCC, and from this negative result I shall draw some conclusions regarding the relationship between formal representation and mechanical reasoning in AI systems.

On a more practical note, I shall talk briefly about ongoing work to incorporate spatial reasoning into applications in the areas of GIS, Virtual World Simulation and specifying the behaviour of intelligent agents. Several software demonstrators are under development.
Typed Feature Logics and HPSG
Suresh Manandhar
University of York
This is an informal introduction to HPSG and typed feature logics. If you have heard these terms before and would like to find out more then come to this talk. Read on for more details. HPSG (Head-driven Phrase Structure Grammar) is currently the most popular computationally oriented grammatical theory. HPSG grew out of work in unification grammars, particularly GPSG (Generalised Phrase Structure Grammar). The underlying logical machinery behind HPSG is an extended version of a typed feature logic. Typed feature logics are typed extensions to feature logics, which underlie the logic of feature structures. In this talk I will provide an introduction to typed feature logics using feature structures as a starting point. I will then provide a brief introduction to HPSG, highlighting how typed feature structures play a major role in the specification of HPSG grammars.
Similarity for Instance-Based Learning of Monomial Functions
Tony Griffiths
University of York
This talk will describe results, from my thesis, concerning instance-based learning of monomial target concepts.
Logical Representations of Negotiation Protocols in Multiagent Systems
Eduardo Alonso
University of York
One of the primary research issues in distributed artificial intelligence (DAI) is how to have agents interact coherently, that is, how agents can coordinate their knowledge, goals, and skills to share the work associated with carrying out a joint plan, or to resolve outright conflict arising from limited resources. There are two main points of view in DAI research, namely distributed problem solving (DPS) and multi-agent (MA) systems. In DPS, agents are centrally designed, and the system is built to be distributed so as to improve system performance, modularity, and/or reliability. In MA systems, agents are built without the luxury of designing their interaction opponents: (possibly pre-existing) agents are assumed to be autonomous utility maximizers.

The main DPS approach to cooperation is multi-agent planning (MAP). MAP takes the view that interactions between the separate activities of the nodes must be identified, and any conflicts should be identified and fixed before the plans are executed. Thus, rather than risking incoherent and inconsistent decisions, MAP insists that nodes plan out beforehand exactly how each will act and interact. Obviously, MAP systems are poorly suited to dynamically changing domains, where agents cannot wait for complete information about potential plan interactions before they begin acting. Up to now, logicians working in DAI have concentrated their efforts on formalizing MAP protocols.

The main MA systems approach to coordination, on the other hand, is negotiation. The negotiation process involves identifying potential interactions, either through communication or by reasoning about the current states and intentions of other agents, and modifying the intentions of these agents to avoid harmful interactions or create cooperative situations. We introduce a logical model of negotiation. First we formalize the notion of autonomous agent, extending the BDI model in order to represent agents' utilities. Then, an efficient and stable negotiation protocol is represented in a many-sorted first-order modal language with explicit reference to time points and intervals.
QLF for addressing quantifier scope ambiguities
Alistair Willis
University of York
The Core Language Engine (CLE), as described in Alshawi 1992, was notable for its use of Quasi Logical Form (QLF) as an intermediate representation for sentences of natural language, especially for ambiguous sentences. The QLF used in the original CLE did not have a clear semantics, a shortcoming that was addressed in later papers. I will look at the way the later system addresses quantifier scope ambiguities. I will also draw attention to an apparent problem in the updated version of the QLF and suggest a way such a difficulty might be approached.