Neural Networks for Large Scale Machine Learning

Description

The Internet of Things (IoT) is expected to contain over 26 billion devices (excluding PCs, tablets, and smartphones) by 2020 and to reach a market size in excess of $14 trillion by 2025. These devices include sensor-based medical devices, automobiles, manufacturing plants, power systems, and smart homes. In many IoT applications, a system needs to be in place to analyze patterns in the streaming data, detect certain types of events (e.g., impending failure or deteriorating performance), and take appropriate action. Machine learning systems would perform these tasks and thus become a critical element of a wide range of IoT applications. Neural networks are well positioned to address these large-scale machine learning challenges; unfortunately, high-dimensional data remains a problem for most machine learning methods.

Researchers at Arizona State University have invented a new neural network method that can be parallelized at different levels of granularity for speed. The technology addresses high-dimensional data through class-based feature selection, which allows the method to perform dimension reduction automatically: it identifies the important variables of a high-dimensional pattern classification problem and builds pattern classifiers from a small set of those variables. The method can learn from both streaming and stored data. A hardware implementation offers further advantages over current technologies, such as localized learning and distributed decision-making.
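
The following is a minimal, hypothetical sketch (in Python, using NumPy and scikit-learn) of the general idea described above: score each feature's relevance per class, keep a small set of the most informative features, and train a compact classifier on that reduced set. It is not the patented ASU method; the scoring rule, synthetic dataset, parameter values, and choice of classifier are illustrative assumptions.

    # Hypothetical sketch of class-based feature selection followed by a compact
    # classifier. NOT the patented ASU method; it only illustrates the general
    # idea: score features per class, keep a small set, and classify.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    def class_based_feature_selection(X, y, k_per_class=5):
        """Keep the k features per class whose class mean deviates most
        (in standardized units) from the overall mean -- a simple stand-in
        for class-based relevance scoring."""
        overall_mean = X.mean(axis=0)
        overall_std = X.std(axis=0) + 1e-12
        selected = set()
        for c in np.unique(y):
            class_mean = X[y == c].mean(axis=0)
            score = np.abs(class_mean - overall_mean) / overall_std
            selected.update(np.argsort(score)[-k_per_class:].tolist())
        return sorted(selected)

    # Synthetic high-dimensional data: 1,000 features, only 10 informative.
    X, y = make_classification(n_samples=2000, n_features=1000,
                               n_informative=10, n_classes=3, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    features = class_based_feature_selection(X_tr, y_tr, k_per_class=5)
    clf = LogisticRegression(max_iter=1000).fit(X_tr[:, features], y_tr)
    print(f"kept {len(features)} of {X.shape[1]} features, "
          f"test accuracy = {clf.score(X_te[:, features], y_te):.2f}")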

Potential Applications

  • Internet of Things
  • Parallel computation
  • Data storage and mining
  • Machine learning
  • Neural networks
  • Robotics

Benefits and Advantages

  • Adaptable –
    • The method can learn from both stored and streaming data (a streaming-learning sketch follows this list).
    • Exploits massively parallel computing hardware and handles high-dimensional data.
    • Highly scalable: can process terabytes of data without resorting to sampling techniques.
  • Speed – Can be parallelized on cluster computing platforms.
  • Low Cost – Reduces the volume of network traffic and the associated costs.
  • Hardware Implementation –
    • Localized learning and response reduce the volume of signal transmission through expensive networks.
    • Reduces reliance on a single control center for decision-making, allowing distributed control of machinery and equipment.
    • Makes learning machines widely deployable on an “anytime, anywhere” basis, even without access to a network or cloud facility.
    • Makes machine learning ubiquitous.
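
Below is a minimal sketch, assuming scikit-learn's incremental (partial_fit) interface, of what learning from streaming data can look like in practice: the model is updated batch by batch as data arrives, without storing the full stream. It is a generic illustration of the streaming-learning property claimed above, not the patented method; the stream generator, labeling rule, and batch size are hypothetical.

    # Minimal sketch of incremental learning from a data stream using
    # scikit-learn's partial_fit. Generic illustration only, not the ASU method;
    # the stream generator, labeling rule, and batch size are hypothetical.
    import numpy as np
    from sklearn.linear_model import SGDClassifier

    rng = np.random.default_rng(0)
    classes = np.array([0, 1])          # all classes must be declared up front
    clf = SGDClassifier(random_state=0)

    def next_batch(batch_size=64, n_features=20):
        """Stand-in for a sensor stream: each batch is processed, then discarded."""
        X = rng.normal(size=(batch_size, n_features))
        y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # hypothetical label rule
        return X, y

    for _ in range(200):                # consume the stream batch by batch
        X_batch, y_batch = next_batch()
        clf.partial_fit(X_batch, y_batch, classes=classes)

    X_eval, y_eval = next_batch(batch_size=1000)
    print(f"held-out accuracy after streaming updates: {clf.score(X_eval, y_eval):.2f}")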

For more information about the inventor(s) and their research, please see Dr. Asim Roy's directory webpage.

Case ID: M15-236P
Published: 02-26-2016
Last Updated: 05-21-2018

Inventor(s): Asim Roy
