The Next Generation of Deep Learning Hardware: Analog Computing (IEEE Journals & Magazine)
Initially developed for gaming and 3-D rendering, graphics processing units (GPUs) were soon recognized as a good fit for accelerating deep learning training. The simple mathematical structure of deep learning workloads parallelizes easily and can therefore take advantage of GPUs in a natural way. Further gains in compute efficiency for deep learning training can be made by exploiting the stochastic and approximate nature of deep learning workflows. In the digital space, that means trading off numerical precision against model accuracy for the benefit of compute efficiency. It also opens the possibility of revisiting analog computing, which is intrinsically noisy, to execute the matrix operations of deep learning in constant time on arrays of nonvolatile memories. Current nonvolatile memory materials, however, are of limited use for taking full advantage of this in-memory compute paradigm. A detailed analysis, with design guidelines for how these materials need to be reengineered for optimal performance in the deep learning space, shows a strong deviation from the materials used in memory applications.
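To get a feel for the precision-vs-efficiency trade-off the abstract describes, here is a minimal NumPy sketch (not from the paper; all function names, level counts, and noise magnitudes are illustrative assumptions) that emulates an analog crossbar matrix-vector product: weights are quantized to a small set of conductance levels and perturbed with Gaussian noise to mimic device variability, then compared against the exact floating-point result.

```python
import numpy as np

rng = np.random.default_rng(0)

def analog_matvec(W, x, levels=32, noise_std=0.05):
    """Toy model of an in-memory analog matrix-vector product.

    Hypothetical sketch: weights are quantized to a limited number of
    conductance levels (trading precision for efficiency) and perturbed
    with multiplicative Gaussian noise to mimic device variability.
    """
    w_max = np.max(np.abs(W))
    # Map weights onto a limited set of evenly spaced conductance levels.
    step = 2 * w_max / (levels - 1)
    W_q = np.round(W / step) * step
    # Device-to-device variability modeled as multiplicative noise.
    W_noisy = W_q * (1 + noise_std * rng.standard_normal(W.shape))
    # A real crossbar would produce the whole product in one
    # constant-time analog step; here we just emulate its result.
    return W_noisy @ x

W = rng.standard_normal((256, 128))
x = rng.standard_normal(128)

exact = W @ x
approx = analog_matvec(W, x)
rel_err = np.linalg.norm(approx - exact) / np.linalg.norm(exact)
print(f"relative error of noisy analog matvec: {rel_err:.3f}")
```

The point of the sketch is that deep learning tolerates exactly this kind of error: a few percent of relative noise in a matrix-vector product is often acceptable during training, which is what makes intrinsically noisy analog hardware plausible for this workload.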
Read more & PDF via BB. Also see “1951 – SNARC Maze Solver – Minsky / Edmonds.” Looking forward to analog wetware dongles for ML.
from Adafruit Industries – Makers, hackers, artists, designers and engineers! https://ift.tt/2Yhdz0Y