The Amadeus Videoware project aims to develop a video processing architecture that will support visually-driven human interaction with a wide range of ubiquitous devices.
We aim to implement a library of generic image processing, computer vision and pattern recognition algorithms in an FPGA-based architecture. The low-level, high-bandwidth processes, such as smoothing and feature extraction, will be implemented directly in hardware, whilst higher-level, lower-bandwidth processes, such as task-oriented combination of visual cues, will be implemented in a software architecture. Thus part of the FPGA will be configured as a relatively low-power CPU.
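To make the hardware/software split concrete, the following is a minimal Python sketch of 3x3 mean smoothing, one of the low-level, high-bandwidth operations of the kind the project intends to implement directly in FPGA hardware. It is illustrative only and not the project's implementation; the function name and integer (fixed-point style) arithmetic are our assumptions.

```python
def box_smooth_3x3(image):
    """Illustrative sketch: each interior pixel of a grayscale image
    (a list of lists of integers) becomes the integer mean of its 3x3
    neighbourhood; border pixels are copied through unchanged."""
    h = len(image)
    w = len(image[0])
    out = [row[:] for row in image]  # copy preserves the borders
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            total = sum(image[y + dy][x + dx]
                        for dy in (-1, 0, 1)
                        for dx in (-1, 0, 1))
            out[y][x] = total // 9  # integer mean, as fixed-point hardware might compute
    return out
```

A per-pixel kernel of this form maps naturally onto a hardware pipeline, which is why such operations are candidates for direct FPGA implementation rather than the software layer.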
The project comprises two two-year phases, overlapping by one year: the first phase will develop the generic videoware library and architecture; the second will develop an API to support a range of applications. Demonstrators such as an "intelligent cooker hood" and a people-tracking system will illustrate typical applications the architecture can support. Our ultimate aim is to develop a system-on-chip (SoC) in which the video processing architecture is coupled with a CMOS image sensor.
The project is a collaboration between the University of York and Alpha Mosaic Ltd.
©2003 Dept. of Computer Science, The University of York.