Cascaded Precision Computing for Efficient Convolutional Neural Networks

Description

Convolutional Neural Networks (CNNs) have gained popularity in many computer vision applications, where they can be trained to classify images, video, and speech with high accuracy. However, the many layers of convolution and pooling operations that CNNs require contain substantial computational redundancy, which makes it difficult to perform real-time classification with low power consumption on today's computing systems. There is therefore a need for an efficient way to reduce the total amount of computation in CNNs without degrading classification accuracy.

Researchers at Arizona State University have developed a new computation scheme that exploits the combined nature of convolution and pooling operations. The approach divides input features into groups of precision values and cascades the convolution and pooling operations over those groups, so that the amount of data processed for convolution is dramatically reduced without changing the output features or the final classification accuracy. In effect, the scheme performs only computations close to the amount actually necessary, eliminating large redundancies; hardware footprint and processing power are therefore substantially reduced while producing the same output.
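The following NumPy sketch illustrates one way such a cascaded-precision scheme can work, assuming the "groups of precision values" correspond to high- and low-order bit groups of each 8-bit activation: the convolution feeding each max-pooling window is first evaluated with the high-order bits only, and the low-order bits are processed only for candidates whose partial sums could still win the pooling comparison. The function names (cascaded_conv_maxpool, conv_patch), the 4-bit split, the 3x3 kernel, and the 2x2 pooling are illustrative assumptions, not details taken from the invention itself.

    import numpy as np

    def conv_patch(patch, kernel):
        """Dot product of one receptive-field patch with the kernel."""
        return float(np.sum(patch * kernel))

    def cascaded_conv_maxpool(x, kernel, lsb_bits=4, pool=2):
        """Valid convolution of 8-bit activations followed by max pooling,
        evaluating the low-order bit group only when the high-order partial
        sums cannot decide the pooling winner on their own."""
        x = x.astype(np.int64)
        x_hi = (x >> lsb_bits) << lsb_bits    # high-order bit group
        x_lo = x & ((1 << lsb_bits) - 1)      # low-order bit group
        k = kernel.shape[0]
        out_h = (x.shape[0] - k + 1) // pool
        out_w = (x.shape[1] - k + 1) // pool
        out = np.zeros((out_h, out_w))
        # Worst-case magnitude the skipped low-order bits can add to any sum.
        lo_bound = (2 ** lsb_bits - 1) * float(np.sum(np.abs(kernel)))

        for i in range(out_h):
            for j in range(out_w):
                # Convolution positions that feed this pooling output.
                cands = [(i * pool + di, j * pool + dj)
                         for di in range(pool) for dj in range(pool)]
                hi = {c: conv_patch(x_hi[c[0]:c[0] + k, c[1]:c[1] + k], kernel)
                      for c in cands}
                best_hi = max(hi.values())
                # A candidate whose upper bound falls below the leader's lower
                # bound can never win the max comparison, so it is skipped.
                survivors = [c for c in cands
                             if hi[c] + lo_bound >= best_hi - lo_bound]
                out[i, j] = max(hi[c] + conv_patch(
                                    x_lo[c[0]:c[0] + k, c[1]:c[1] + k], kernel)
                                for c in survivors)
        return out

    # The result matches a conventional full-precision convolution + max pooling;
    # only the low-order work for the pruned candidates is saved.
    rng = np.random.default_rng(0)
    img = rng.integers(0, 256, size=(8, 8))   # toy 8-bit feature map
    ker = rng.standard_normal((3, 3))         # toy 3x3 kernel
    print(cascaded_conv_maxpool(img, ker))

Because a pruned candidate's best possible sum is already below the leader's worst possible sum, skipping its low-order computation cannot change the pooled output, which is how this style of scheme preserves accuracy while cutting work.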

Potential Applications

  • Internet of things
  • System identification
  • Self-driving cars
  • Process control
  • Pattern recognition

Benefits and Advantages

  • Efficient – Performs only the essential computations, eliminating redundancy
  • Agile – Capable of real-time, high-accuracy classification
  • Economical – Hardware footprint and processing power are minimized for the same output

For more information about the inventor(s) and their research, please see Dr. Jae-sun Seo's Directory Page.

Case ID: M16-180P
Published: 08-09-2017
Last Updated: 05-21-2018

Inventor(s):

Jae-sun Seo, Minkyu Kim
