Visual Navigation

This page describes the visual navigation work done under an 18-month EPSRC fast-track grant, which completed in March 2002.

Visual navigation is a challenging application domain of Computer Vision because the robot must infer three dimensional structure using two dimensional images, and because the scene structure and lighting conditions can vary greatly as the robot moves around its environment.

Our initial approach has been to develop methods to detect and segment the ground plane from the rest of the scene using monocular, uncalibrated vision. Links to publications and results are given below.
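A common way to segment the ground plane from monocular, uncalibrated images is to note that points on a plane map between two views by a single homography: correspondences consistent with that homography are labelled ground, the rest obstacle. The sketch below illustrates this idea only; it is not the method from the papers above, and the point sets and threshold are assumptions for illustration.

```python
import numpy as np

def fit_homography(src, dst):
    """Estimate the 3x3 homography mapping src -> dst points via the
    direct linear transform (DLT). src, dst: (N, 2) arrays, N >= 4."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography vector is the null space of A (smallest singular vector).
    _, _, Vt = np.linalg.svd(np.asarray(A))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def ground_plane_inliers(H, src, dst, tol=1.0):
    """Label correspondences consistent with the ground-plane homography H:
    True where the reprojection error is below tol pixels (i.e. ground)."""
    pts = np.hstack([src, np.ones((len(src), 1))])
    proj = pts @ H.T
    proj = proj[:, :2] / proj[:, 2:3]
    err = np.linalg.norm(proj - dst, axis=1)
    return err < tol
```

In practice the homography would be fitted robustly (e.g. with RANSAC) to feature correspondences between consecutive frames, so that off-plane points do not corrupt the estimate.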

* MPEG video results.
* IGR report (40K PostScript).
* ICIG 2002 paper (1722K gzipped PostScript).
* ICRA 2002 paper (2305K PDF).
* IROS 2001 paper (659K PDF).
* IARP 2001 paper (950K PDF).

Currently, we are developing a framework for integrating multiple diverse visual cues to guide a mobile robot in an indoor environment. The aim is to get the ActivMedia Robotics Pioneer robot shown on the right to move around the lab automatically and explore its environment using an on-board camera as its "visual sense". The robot sends the video signal from its Sony pan/tilt/zoom camera via a unidirectional radio link to an off-board workstation (SGI O2), which is networked to a powerful 32-processor, 9-gigabyte-memory SGI Origin 2000. These computers process the images and send speed and steering commands back to the robot over a wireless RS232 serial link. This arrangement is shown in the diagram below.
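At the core of such an off-board processing loop is a control law that turns the segmented image into a steering command. The function below is a deliberately simple proportional controller: steer towards the centroid of the columns classified as ground. The gain, units, and clamping are illustrative assumptions, not the actual Pioneer command protocol.

```python
def steering_command(ground_columns, image_width, gain=0.5, max_turn=30.0):
    """Proportional steering (in degrees) towards the centroid of the
    image columns classified as ground. Illustrative only: the real
    system's control law and command units are not specified here.

    ground_columns: column indices labelled as ground in the current frame.
    Returns None when no ground is visible (the robot should stop)."""
    if not ground_columns:
        return None
    centre = sum(ground_columns) / len(ground_columns)
    offset = centre - image_width / 2.0        # positive: ground lies right
    turn = gain * offset * 90.0 / (image_width / 2.0)
    return max(-max_turn, min(max_turn, turn)) # clamp to the turn limit
```

The speed command could be derived similarly, for example slowing down as the visible ground region shrinks; the actual commands would then be written to the robot's serial link.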
