
Formal Modelling and Analysis of Embedded Systems

Energy-Efficient Hardware Accelerator for Embedded Deep Learning

In this joint project, we aim to decrease the power consumption and computational load of the current image processing platform by employing the concept of computation reuse.

Start

2019-01-01

Main funding

STINT - The Swedish Foundation for International Cooperation in Research and Higher Education

Project manager at MDH

Senior Lecturer

Masoud Daneshtalab

+4621103111

masoud.daneshtalab@mdh.se

In this joint project, we aim to decrease the power consumption and computational load of the current image processing platform by employing the concept of computation reuse. Computation reuse means temporarily storing the result of a recent arithmetic operation and reusing it when a subsequent operation arrives with the same operands. Our proposal is motivated by the high degree of redundancy we observed in the arithmetic operations of neural networks: we show that approximate computation reuse can eliminate up to 94% of the arithmetic operations of simple neural networks. This yields up to an 80% reduction in power consumption, which translates directly into a considerable increase in battery lifetime. In two UT-MDH joint works, we further presented a mechanism for building large neural networks by connecting basic units.
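The project description refers to approximate computation reuse without detailing the mechanism. The sketch below illustrates the general idea in Python: multiplications are memoized in a small table keyed on truncated operands, so a later multiplication whose operands match an earlier one (after truncation) reuses the stored product instead of being recomputed. The bit widths, the direct-lookup table, and the ReuseMultiplier class are illustrative assumptions for this sketch, not the accelerator design developed in the project.

```python
import random


def truncate(x: int, keep_bits: int = 4, total_bits: int = 8) -> int:
    """Keep only the most significant bits of an unsigned fixed-point operand.

    Truncation is what makes the reuse *approximate*: operands that differ
    only in their low-order bits map to the same table entry.
    """
    drop = total_bits - keep_bits
    return (x >> drop) << drop


class ReuseMultiplier:
    """Multiplier with a memoization table for computation reuse (illustrative)."""

    def __init__(self, keep_bits: int = 4) -> None:
        self.keep_bits = keep_bits
        self.table: dict[tuple[int, int], int] = {}  # (trunc a, trunc b) -> product
        self.hits = 0
        self.misses = 0

    def multiply(self, a: int, b: int) -> int:
        key = (truncate(a, self.keep_bits), truncate(b, self.keep_bits))
        if key in self.table:
            # Reuse: the arithmetic operation is skipped entirely.
            self.hits += 1
            return self.table[key]
        # Miss: perform the multiplication once and cache the result.
        self.misses += 1
        result = a * b
        self.table[key] = result
        return result


if __name__ == "__main__":
    random.seed(0)
    mul = ReuseMultiplier(keep_bits=4)
    # Quantized activations and weights repeat often, so many products can be reused.
    operand_pairs = [(random.randint(0, 255), random.randint(0, 255)) for _ in range(10_000)]
    for a, b in operand_pairs:
        mul.multiply(a, b)
    print(f"reused {mul.hits / len(operand_pairs):.1%} of multiplications")
```

Matching on truncated operands trades a small arithmetic error for a much higher reuse rate, which is the source of the reported reductions in operations and power; in hardware the table would be a small associative memory rather than a Python dictionary.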