Office of Technology Transfer – University of Michigan

Architecture and hardware for sparse neural networks

Technology #6367

The proposed technology is a hardware architecture for the efficient implementation of a neural network. Neural networks are a class of machine learning models loosely based on biological nervous systems. They have proven extremely useful in technology development, but they are difficult to implement because of their massive time and energy requirements. This technology reduces both energy and time costs compared with other leading hardware designs, across a wider range of applications.

Hybrid design features for efficient neural networks

Two hardware structures, the bus and the ring, each avoid the need for hardware that directly connects every neuron unit to every other neuron unit (complete connectivity) in an artificial neural network, and each offers distinct advantages and drawbacks. The proposed technology clusters neurons into buses, each containing a fraction of the neuron units, and then connects these clusters through a ring structure. This hybrid structure combines the major advantages of both designs in a chip that is space-efficient, time-efficient, and energy-efficient, opening the door to higher-performance computing with neural networks.
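To make the clustering idea concrete, the sketch below models communication cost in such a hybrid interconnect. The cluster sizes and the hop-count cost model (one bus hop within a cluster, plus ring hops between clusters) are illustrative assumptions for this sketch only, not figures from the actual chip design.

import random

# Minimal sketch of a bus-of-clusters-on-a-ring interconnect.
# All parameters and the cost model are assumptions for illustration.

def ring_distance(a: int, b: int, n_clusters: int) -> int:
    """Shortest number of ring hops between two clusters on a bidirectional ring."""
    d = abs(a - b)
    return min(d, n_clusters - d)

def hops(src: int, dst: int, neurons_per_cluster: int, n_clusters: int) -> int:
    """Hypothetical cost: 1 hop on the local bus, plus ring hops if the
    destination neuron lives in a different cluster."""
    src_cluster = src // neurons_per_cluster
    dst_cluster = dst // neurons_per_cluster
    if src_cluster == dst_cluster:
        return 1                                   # same local bus
    # bus out + ring traversal + bus in at the destination cluster
    return 2 + ring_distance(src_cluster, dst_cluster, n_clusters)

if __name__ == "__main__":
    N_CLUSTERS, NEURONS_PER_CLUSTER = 8, 64        # assumed sizes for illustration
    total = N_CLUSTERS * NEURONS_PER_CLUSTER

    random.seed(0)
    pairs = [(random.randrange(total), random.randrange(total)) for _ in range(1000)]
    avg = sum(hops(s, d, NEURONS_PER_CLUSTER, N_CLUSTERS) for s, d in pairs) / len(pairs)
    print(f"{total} neurons, average hops per message: {avg:.2f}")

Under these assumptions, the average message cost grows with the ring distance between clusters rather than with the total neuron count, which is the intuition behind avoiding complete connectivity while keeping local communication cheap.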

Applications

  • Image feature extraction
  • Voice recognition
  • Gesture recognition for robotics
  • Autonomous vehicles
  • Pattern recognition in large and complex data sets
  • Extraction of salient features from images and video
  • Development of smart robotics with specialized hardware

Advantages

  • Higher energy and time efficiency than previous hardware designs
  • Designed specifically for neural network algorithms