FPGA technology for implementation in visual sensor networks

Wai Chong Chia, Wing Hong Ngau, Li Minn Ang, Kah Phooi Seng, Li Wern Chew, Lee Seng Yeong

Research output: Chapter in Book or Report/Conference proceeding › Chapter › peer-review

Abstract

A typical Visual Sensor Network (VSN) configuration consists of a set of vision nodes, network motes, and a base station. The vision nodes capture image data and transmit it to the nearest network mote. The network motes then relay the data through the network until it reaches the base station. Since a vision node is usually small in size and battery-powered, the resources that can be incorporated into it are limited. In this chapter, a Field Programmable Gate Array (FPGA) implementation of a low-complexity, strip-based Microprocessor without Interlocked Pipeline Stages (MIPS) architecture is presented. The image data captured by the vision node is processed in a strip-by-strip manner to reduce the local memory requirement, allowing an image of higher resolution to be captured and processed with the limited resources. In addition, parallel access to neighbourhood image data is incorporated to improve access speed. Finally, the performance of visual saliency processing on the proposed architecture is evaluated.
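
The following is a minimal sketch, in plain C, of the strip-based idea the abstract describes: only a few rows of the image are held in local memory at a time while a neighbourhood operation is applied, rather than buffering the whole frame. It is not the authors' FPGA/MIPS implementation; the image dimensions, the strip height, and the read_rows()/write_rows() callbacks are hypothetical stand-ins for the vision node's capture and transmit stages.

```c
/* Illustrative sketch of strip-based processing (assumed parameters). */
#include <stdint.h>
#include <stdio.h>

#define IMG_W   64   /* image width (assumed)               */
#define IMG_H   48   /* image height (assumed)              */
#define STRIP_H  8   /* rows kept in local memory per strip */

/* Hypothetical source: fill 'rows' with pixel rows [start, start+count). */
static void read_rows(uint8_t rows[][IMG_W], int start, int count)
{
    for (int r = 0; r < count; r++)
        for (int c = 0; c < IMG_W; c++)
            rows[r][c] = (uint8_t)((start + r) * c);  /* synthetic data */
}

/* Hypothetical sink: forward processed rows (e.g. to the network mote). */
static void write_rows(uint8_t rows[][IMG_W], int start, int count)
{
    (void)rows;
    printf("strip at row %d: %d rows processed\n", start, count);
}

int main(void)
{
    /* Strip buffer with one halo row above and below for a 3x3 window. */
    uint8_t in[STRIP_H + 2][IMG_W];
    uint8_t out[STRIP_H][IMG_W];

    for (int top = 0; top < IMG_H; top += STRIP_H) {
        int h = (top + STRIP_H <= IMG_H) ? STRIP_H : IMG_H - top;

        /* Load the strip plus halo rows, clamped at the image borders. */
        int halo_top = (top > 0) ? top - 1 : 0;
        int halo_bot = (top + h < IMG_H) ? top + h + 1 : IMG_H;
        read_rows(in, halo_top, halo_bot - halo_top);

        int off = top - halo_top;  /* offset of the strip inside the buffer */

        /* 3x3 box filter over the strip; border pixels copied through. */
        for (int r = 0; r < h; r++) {
            for (int c = 0; c < IMG_W; c++) {
                if (c == 0 || c == IMG_W - 1 ||
                    top + r == 0 || top + r == IMG_H - 1) {
                    out[r][c] = in[off + r][c];
                    continue;
                }
                int sum = 0;
                for (int dr = -1; dr <= 1; dr++)
                    for (int dc = -1; dc <= 1; dc++)
                        sum += in[off + r + dr][c + dc];
                out[r][c] = (uint8_t)(sum / 9);
            }
        }

        write_rows(out, top, h);
    }
    return 0;
}
```

The peak local buffer here is (STRIP_H + 2) rows instead of the full IMG_H rows, which is the memory saving that lets a resource-constrained vision node handle a higher-resolution image; in hardware, the per-pixel neighbourhood reads inside the inner loop are the accesses the chapter's parallel neighbourhood-access scheme is meant to speed up.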

Original language: English
Title of host publication: Visual Information Processing in Wireless Sensor Networks
Subtitle of host publication: Technology, Trends and Applications
Publisher: IGI Global
Pages: 293-324
Number of pages: 32
ISBN (Print): 9781613501535
DOIs
Publication status: Published - 2011
Externally published: Yes
