Optics.org: Toshiba has developed a Time Domain Neural Network (TDNN) said to have extremely low power consumption. Unlike conventional digital processors, the TDNN is composed of a massive number of tiny processing units that use Toshiba's original analog technique. The TDNN was reported on November 8 at A-SSCC 2016 (Asian Solid-State Circuits Conference 2016).
In a von Neumann-type computer, most energy is consumed moving data from on-chip or off-chip memory to the processing unit. The most effective way to reduce data movement is to have a massive number of processing units, each dedicated to handling a single datum located close by. These data points are given weights during the conversion of an input signal (e.g. an image of a cat) into an output signal (e.g. recognition of the image as a cat). The closer a data point is to the desired output, the higher the weight it is given. The weights are the parameters that guide the deep learning process.
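As an illustrative sketch only (the function name, weights, and inputs below are hypothetical and not from Toshiba's paper), each processing unit essentially computes a weighted sum of its local inputs, the multiply-accumulate operation whose per-operation energy the TDNN work aims to minimize:

```python
def weighted_unit(inputs, weights, bias=0.0):
    """Weighted sum of inputs: the basic multiply-accumulate
    operation at the heart of neural network inference."""
    assert len(inputs) == len(weights)
    return sum(x * w for x, w in zip(inputs, weights)) + bias

# Inputs paired with higher weights contribute more to the output,
# which is how the learned weights steer the network's decision.
out = weighted_unit([1.0, 0.5, -0.2], [0.8, 0.1, 0.05])
```

In the TDNN, this multiply-accumulate is carried out with analog time-domain circuitry rather than digital logic, which is where the reported energy savings come from.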
The reported energy consumption per operation is 20.6 fJ, six times lower than the figure previously reported at ISSCC 2016.
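A quick back-of-envelope check of the quoted figures, taking "six times lower" to mean the prior ISSCC 2016 result consumed six times as much energy per operation (the prior figure itself is implied, not stated in the article):

```python
# Energy figures quoted above, in femtojoules per operation.
tdnn_energy_fj = 20.6      # reported TDNN energy per operation
improvement = 6            # "6x lower" than the prior ISSCC 2016 result

# Implied energy of the prior ISSCC 2016 design, ~123.6 fJ per operation.
prior_energy_fj = tdnn_energy_fj * improvement
```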
Thanks to ER for the news!
Toshiba Presents Low Power Deep Learning Processor
Reviewed by MCH on November 08, 2016