Department of Computer Science
Relational Graph Matching

The relational structure of objects or symbols is a powerful tool in computer vision and pattern recognition. In the relational paradigm, patterns are formed not only by the nature of the objects that make up the pattern, but also by their relationships to each other. These relationships provide a flexible representation which can be used in many tasks, such as object recognition, scene matching, registration and tracking. For example, the work of Barrow and Popplestone first established the relational graph as a practical representation for scene matching.

Effective relational matching must be capable of accommodating two classes of error. The first is initial matching errors, which result from ambiguities in object appearance created by the acquisition process. The second is structural disturbance of the relational descriptions under match; these structural errors result from poor initial image segmentation or from the presence of noise and clutter.

Research at York has addressed this important topic by developing a Bayesian framework for inexact relational matching [Wilson and Hancock 1994b,1994d] [Evans et al 1995]. Novel to the approach is the idea of using the Hamming distance between graphs to gauge relational consistency [Wilson and Hancock 1994e], [Wilson et al 1994], [Wilson et al 1995].
The second phase of this research provides techniques for identifying and removing structural errors from relational graphs: the graphs under match adapt their connectivity structure to exclude perceptually meaningless information. Finally, it provides an experimental comparison with more traditional methods [Wilson and Hancock 1995].
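To give a flavour of the consistency measure, the Hamming distance between two graphs on a common vertex set can be computed by counting the vertex pairs whose adjacency relation disagrees. The sketch below is a minimal illustration of that count only, not the Bayesian framework of the cited papers; it assumes undirected graphs given as adjacency matrices under a known vertex correspondence.

```python
import numpy as np

def hamming_distance(A, B):
    """Hamming distance between two graphs on a common vertex set:
    the number of vertex pairs whose adjacency relation disagrees."""
    A = np.asarray(A, dtype=bool)
    B = np.asarray(B, dtype=bool)
    assert A.shape == B.shape, "graphs must share a vertex correspondence"
    # Count disagreeing entries above the diagonal (undirected graphs).
    return int(np.triu(A ^ B, k=1).sum())

# Example: a triangle versus a path on the same three vertices.
triangle = [[0, 1, 1],
            [1, 0, 1],
            [1, 1, 0]]
path     = [[0, 1, 0],
            [1, 0, 1],
            [0, 1, 0]]
print(hamming_distance(triangle, path))  # one edge differs -> 1
```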

Current research topics

Much recent work has looked at the edit distance approach to graph matching. The edit distance between two graphs is the cost of a sequence of elementary edit operations (such as edge or vertex deletion) that transforms one graph into the other. These costs are usually specified manually, but it should be possible to learn them from a sample set. These ideas could also be extended to attributed graphs.
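To make the edit-distance idea concrete, the following sketch computes an exact edit distance for very small undirected graphs by exhaustive search over node assignments. The function name and the unit costs are illustrative assumptions, not part of the published work; learning the costs from data, as suggested above, would replace the fixed values here.

```python
import itertools
import numpy as np

def edit_distance(A, B, c_node=1.0, c_edge=1.0):
    """Exact edit distance between two small undirected graphs
    (adjacency matrices), by exhaustive search over node assignments.
    Unit costs are placeholders; in practice they could be learned."""
    A, B = np.asarray(A, int), np.asarray(B, int)
    n, m = len(A), len(B)
    # Pad the smaller graph with isolated "dummy" nodes, so mapping a
    # real node to a dummy corresponds to a deletion or insertion.
    size = max(n, m)
    Ap = np.zeros((size, size), int); Ap[:n, :n] = A
    Bp = np.zeros((size, size), int); Bp[:m, :m] = B
    best = float("inf")
    for perm in itertools.permutations(range(size)):
        P = list(perm)
        # Node cost: a real node mapped to a dummy is deleted/inserted.
        node_cost = c_node * sum(1 for i in range(size)
                                 if (i < n) != (P[i] < m))
        # Edge cost: adjacency disagreements under this assignment.
        Bperm = Bp[np.ix_(P, P)]
        edge_cost = c_edge * np.triu(Ap != Bperm, k=1).sum()
        best = min(best, node_cost + edge_cost)
    return best
```

The exhaustive search is only feasible for a handful of nodes; practical edit-distance matchers approximate the minimising assignment rather than enumerating it.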

A generative model is one that allows new examples to be generated from the model. Such a model would be valuable for graphs, since it would allow new graphs to be generated for particular classes of data. These models could be formed as a probabilistic graph which combines the samples, or by projection into a feature space coupled with a standard generative model.
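One simple instance of the probabilistic-graph idea is to estimate a per-edge Bernoulli probability from a set of aligned sample graphs, then draw new graphs from those probabilities. The functions below are a hypothetical sketch under the assumption that the samples share a common vertex correspondence; they are not the model proposed in the research above.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_edge_probabilities(samples):
    """Estimate per-edge Bernoulli probabilities from aligned sample
    graphs (assumes a common vertex correspondence across samples)."""
    return np.mean([np.asarray(g, float) for g in samples], axis=0)

def sample_graph(p):
    """Draw a new undirected graph from an edge-probability matrix."""
    size = len(p)
    upper = rng.random((size, size)) < p  # independent Bernoulli draws
    upper = np.triu(upper, k=1)           # keep one draw per vertex pair
    return (upper | upper.T).astype(int)  # symmetrise

# Two aligned samples: a triangle and a path on the same vertices.
samples = [[[0, 1, 1], [1, 0, 1], [1, 1, 0]],
           [[0, 1, 0], [1, 0, 1], [0, 1, 0]]]
P = fit_edge_probabilities(samples)  # P[0, 1] = 1.0, P[0, 2] = 0.5
G = sample_graph(P)                  # a new graph drawn from the model
```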

Contacts: Edwin Hancock, Richard Wilson
