Case ID: M20-255P

Published: 2021-05-05 10:57:23

Last Updated: 2023-02-23


Inventor(s)

Suren Jayasuriya
Odrika Iqbal
Andreas Spanias

Technology categories

Computing & Information Technology
Imaging
Intelligence & Security
Physical Science

Licensing Contacts

Shen Yan
Director of Intellectual Property - PS
[email protected]

Coupled Tracking and Motion Deblurring via Coded Exposure: Algorithm and FPGA Architecture

Background

Object tracking is one of the most common applications of computer vision; it is used in surveillance, autonomous vehicles, robotics, and human-computer interaction. A key challenge is the hardware limitation of tracking fast-moving objects: long exposure times cause fast-moving objects to induce motion blur at the image sensor, and blur prevents accurate extraction of visual features, making tracking difficult. This is a particular problem for embedded imaging and vision platforms, where object tracking must be computationally lightweight and energy-efficient to preserve battery life while still remaining resistant to motion blur.

 

An optical system for tracking objects must meet the following requirements: (1) high speed to image fast-moving objects, (2) low noise and no motion blur during exposure, and (3) energy-efficient and fast object-detection algorithms that can make decisions in real time. However, it is nearly impossible to satisfy these objectives simultaneously in a single imaging system; for instance, low noise requires long exposure times, which rules out high-speed tracking without motion blur. Co-designing the hardware and software is critical to achieving these goals, as most bottlenecks in energy, latency, and throughput occur at the interface between the sensor and the processing units. A computational imaging system that overcomes the traditional trade-offs (e.g., exposure vs. SNR, speed vs. motion blur) would therefore be an ideal candidate for these objectives.

 

Invention Description

Researchers at Arizona State University have developed a simple algorithm for tracking-based motion deblurring, with a design emphasis on keeping it lightweight and frugal in resource utilization. The algorithm builds on recent work in coded exposure, which was shown to be effective when coupled with an object tracker. An FPGA implementation was demonstrated, achieving real-time performance of 29 frames per second with an approximately 17x speed-up in clock cycles compared to a software implementation.
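
To illustrate the general principle behind coupling coded (flutter-shutter) exposure with a tracker, the minimal Python sketch below deblurs a tracked image patch given a motion estimate from the tracker. The flutter code, the horizontal-motion assumption, and the Wiener-style inverse filter are illustrative assumptions only; this is not the specific algorithm or FPGA architecture developed at ASU.

import numpy as np

# Illustrative binary flutter code (hypothetical; the code used in the ASU work
# is not specified in this description). 1 = shutter open, 0 = shutter closed.
FLUTTER_CODE = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1], dtype=float)

def coded_psf(blur_extent_px, code=FLUTTER_CODE):
    # Stretch the flutter code to the blur extent (in pixels) estimated by the
    # tracker, giving a 1-D point-spread function along the motion direction.
    x = np.linspace(0.0, len(code) - 1.0, blur_extent_px)
    psf = np.interp(x, np.arange(len(code)), code)
    return psf / psf.sum()

def deblur_patch(patch, blur_extent_px, eps=1e-2):
    # Deconvolve each row of the tracked patch with the coded PSF using a
    # simple regularized (Wiener-style) inverse filter in the frequency domain.
    # Assumes horizontal motion; a real system would first align the motion
    # direction with the rows.
    psf = coded_psf(blur_extent_px)
    n = patch.shape[1]
    H = np.fft.fft(psf, n)                        # spectrum of the coded blur kernel
    F = np.fft.fft(patch, axis=1)                 # spectra of the blurred rows
    G = F * np.conj(H) / (np.abs(H) ** 2 + eps)   # regularized inverse filter
    return np.real(np.fft.ifft(G, axis=1))

# Example: the tracker reports a 64x64 patch and ~20 px of motion during exposure.
patch = np.random.rand(64, 64)
sharp = deblur_patch(patch, blur_extent_px=20)

Because a well-chosen binary code keeps the blur kernel's spectrum away from zero, the deconvolution stays well conditioned, which is what allows a tracker-supplied blur estimate to be inverted with a lightweight filter.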

 

Potential Applications

• Surveillance
• Autonomous vehicles
• Robotics
• Human-computer interaction

 

 

Research Homepage of Professor Suren Jayasuriya

Homepage of Professor Andreas Spanias