Article
Quantization and Deployment of Deep Neural Networks on Microcontrollers
Pierre-Emmanuel Novac 1,*, Ghouthi Boukli Hacene 2,3, Alain Pegatoquet 1, Benoît Miramond 1, Vincent Gripon 2
1 Université Côte d’Azur, CNRS, LEAT, 06903 Sophia Antipolis, France; Alain.Pegatoquet@univ-cotedazur.fr (A.P.); Benoit.Miramond@univ-cotedazur.fr (B.M.)
Abstract: Embedding Artificial Intelligence into low-power devices is a challenge that has been partly overcome with recent advances in machine learning and hardware design. Presently, deep neural networks can be deployed on embedded targets to perform different tasks such as speech recognition, object detection or human activity recognition. However, there is still room for optimization of deep neural networks in embedded devices. These optimizations mainly address power consumption, memory and real-time constraints, but also an easier deployment at the edge. Moreover, there is still a need to better understand what can be achieved for different use cases. This work focuses on the quantization and deployment of deep neural networks on low-power 32-bit microcontrollers. First, we outline quantization methods relevant in the context of embedded execution on a microcontroller. Then we present a new framework for end-to-end deep neural network training, quantization and deployment. This open-source framework, called MicroAI, is designed as an alternative to existing inference engines (TensorFlow Lite for Microcontrollers and STM32Cube.AI). Our framework can easily be adjusted and/or extended for specific use cases. Executions using single-precision 32-bit floating-point as well as fixed-point on 8- and 16-bit integers are supported. The proposed quantization method is evaluated with three different datasets (UCI-HAR, Spoken MNIST and GTSRB). Finally, a comparison study between MicroAI and both existing embedded inference engines is provided in terms of memory and power efficiency. On-device evaluation was done using ARM Cortex-M4F-based microcontrollers (Ambiq Apollo3 and STM32L452RE).

Keywords: embedded systems; artificial intelligence; machine learning; quantization; power consumption; microcontrollers
1. Introduction
Deep Neural Networks (DNN) are widely used presently to solve a range of problems, including classification. DNN can classify all sorts of data such as audio, images or accelerometer samples for tasks such as speech recognition, object recognition or human activity recognition (HAR).
A well-known downside of DNN is their high energy consumption. In particular, the training phase is usually based on a large amount of data processed by costly algorithms. Although the inference phase requires less processing power, it is still a costly process. Therefore, GPUs and ASICs are often used to perform such computations in the cloud [1].
However, cloud computing requires transmitting the collected data to a network server
to process it and fetch the result, thus requiring permanent connectivity, causing privacy
concerns as well as non-deterministic latency. As an alternative, computations can be done at
the edge on the device itself. By doing so, data do not need to be sent by the device to the cloud
anymore. However, running DNN on resource-constrained devices such as a microcontroller
used in Internet of Things (IoT) devices or wearables is a challenging task [2–4].
These devices have only a very small amount of memory, often less than 1 MiB. They
also run DNN algorithms several orders of magnitude more slowly than GPUs or even CPUs
(see Appendix A). The reason is that microcontrollers generally rely on a general-purpose
processing core that does not implement parallelization techniques such as thread-level
parallelism or advanced vectorization. Moreover, microcontrollers typically run at a much
lower frequency than GPUs (8 MHz to 80 MHz compared to 1 GHz to 2 GHz). Microcontrollers
can also be coupled with tiny battery cells. In some cases, for example when data are collected
in remote areas, they cannot even be recharged in the field. Therefore, performing inference at
the edge faces major issues in terms of real-time constraints, power consumption and memory
footprint. To meet these constraints, the deployment of a DNN must respect an upper bound
for one inference response time as well as an upper bound for the number of parameters of
the network.
As a result, a DNN must be limited in width and depth to be deployable on a micro-
controller. As has been observed, deeper and/or wider networks are often able to solve
more complex tasks with better accuracy [5]. As such, there is always a trade-off between
memory footprint, response time, power consumption and accuracy of the model. In a previ-
ous work [6], we presented a trade-off between memory footprint, power consumption and
accuracy when performing HAR on smart glasses. This work showed that HAR is feasible in
real time on a low-power Cortex-M4F-based microcontroller. However, we also concluded
that there was room for improvement in the memory footprint and power consumption.
A technique that can provide a significant decrease in the memory footprint is based on
network quantization. Quantization consists of reducing the number of bits used to encode
each weight of the model, so that the total memory footprint is reduced by the same factor.
Quantization also enables the use of fixed-point rather than floating-point encoding. In other
words, operations can be performed using integer rather than floating-point data types. This
is of interest because integer operations require considerably fewer computations on most
processor cores, including microcontrollers. Without a floating-point unit (FPU), floating-
point instructions must be emulated in software, creating a large overhead, as was illustrated
in [7]. In that study, a comparison between software, hardware and custom hybrid FPU
implementations was provided.
In this paper, we present an open-source [8] framework, called MicroAI, to perform
end-to-end training, quantization and deployment of deep neural networks on microcon-
trollers. The training phase relies on the well-known TensorFlow and PyTorch deep learning
frameworks. Our objective is to provide a framework that is easy to adapt and extend, while
maintaining a good compromise between accuracy, energy efficiency and memory footprint.
As a second contribution, we provide some comparative results using two different
microcontrollers (STM32L452RE and Ambiq Apollo3) and three different inference engines
(TensorFlow Lite for Microcontrollers, STM32Cube.AI and our own MicroAI). Results are
compared in terms of memory footprint, inference time and power efficiency. Finally, we
propose to apply 8-bit and 16-bit quantization methods on three datasets dealing with different
modalities: acceleration and angular velocity from body-worn sensors for UCI-HAR, speech
for Spoken MNIST and images for GTSRB. These datasets are light enough to be handled by a
deep neural network running on a microcontroller, but still relevant for applications relying
on embedded artificial intelligence.
XpulpNN [33] proposes an extension to the RISC-V ISA, which is open, with instructions to handle sub-byte quantization.
Unfortunately, as microcontrollers implementing RISC-V are still scarce on the market, and
not readily available with the proposed extension, this approach cannot be reasonably used to
deploy IoT devices since it requires manufacturing a custom microcontroller. Manufacturing a
custom microcontroller is not feasible when the goal is to release an IoT product on the market,
due to large costs, time and the required level of expertise. As a result, only off-the-shelf
microcontrollers are considered in this work. Only 8-bit, 16-bit and 32-bit precision will
therefore be studied.
Deep neural networks have already been deployed on 8-bit microcontrollers. One of
the first methods was proposed in [34]. Although interesting, this method requires a lot of
work to implement pseudo-floating-point coding, a custom multiplication algorithm over 16
bits, as well as a hyperbolic tangent approximation for the activation function, all in assembly
language. Over the last few years, implementations have relied on 32-bit microcontrollers
with either a hardware FPU or fixed-point computations. In addition, the Rectified Linear
Unit (ReLU) [35] has become widely used as an activation function and has the benefit of
being easily computed as a max between 0 and the layer’s output, thus being much less
complex than a hyperbolic tangent. In the meantime, neural network architectures and
training methods have continued to evolve to provide more and more efficient models. As a
result, applications such as spoken keyword spotting [36] and human activity recognition [6]
can now be performed in real time on IoT devices relying on low-power microcontrollers.
Figure: IEEE 754 binary32 format (bit 31: sign; bits 30–23: exponent; bits 22–0: significand).
3.2. Fixed-Point
Fixed-point is another way to represent real numbers. In this representation, the integer
part and the fractional part have a fixed length. As a result, the dynamic range and the
resolution are directly limited by the length of the integer part and the length of the fractional
part, respectively. The resolution is constant across the whole dynamic range. In binary, the
Q notation is often used to specify the number of bits associated with each part. Qm.n is a
number where m bits are allocated to the integer part and n bits to the fractional part [39]. It
is important to note that we consider signed numbers in two’s complement representation,
the sign bit being included in the integer part. The number of bits for the integer part can be
increased to obtain a larger dynamic range, but it will conversely reduce the number of bits
allocated to the fractional part, thus reducing its precision.
Given a Qm.n signed number, its dynamic range is [−2^(m−1), 2^(m−1) − 2^(−n)] and its resolution is 2^(−n).
As an example, in Table 2, a signed Q16.16 number stored in a 32-bit register has 16 bits for the integer part, including 1 bit for the sign, and 16 bits for the fractional part. This translates to a dynamic range of [−32,768, 32,767.9999847], much smaller than the equivalent floating-point representation, and a constant resolution of 1.5259 × 10^(−5) across the whole range, less precise than the floating-point representation near 0.
Figure: Q16.16 fixed-point layout in a 32-bit register (bits 31–16: integer part, including the sign bit; bits 15–0: fractional part).
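To make the Q-format arithmetic concrete, the following C sketch (not taken from the MicroAI sources) shows conversion to and from Q16.16, and a multiplication that stays in Q16.16 by widening the product to 64 bits before shifting; rounding and saturation are deliberately omitted.

#include <stdint.h>

#define Q_FRAC_BITS 16                 /* Q16.16: 16 fractional bits, resolution 2^-16 */
typedef int32_t q16_16_t;

/* Convert a float to Q16.16 (truncation, no saturation in this sketch). */
static q16_16_t q_from_float(float x) {
    return (q16_16_t)(x * (1 << Q_FRAC_BITS));
}

static float q_to_float(q16_16_t q) {
    return (float)q / (float)(1 << Q_FRAC_BITS);
}

/* Multiply two Q16.16 numbers: the 64-bit product is in Q32.32, so an
 * arithmetic right shift by 16 brings it back to Q16.16. */
static q16_16_t q_mul(q16_16_t a, q16_16_t b) {
    return (q16_16_t)(((int64_t)a * (int64_t)b) >> Q_FRAC_BITS);
}

For example, q_mul(q_from_float(1.5f), q_from_float(-0.25f)) yields the Q16.16 encoding of −0.375.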
where x_i is an element of the floating-point vector x of length N. A positive value of m means that m bits are required to represent the absolute value of the integer part, while a negative value of m means that the fractional part has |m| leading unused bits. This enables a greater precision to be obtained for vectors with numbers smaller than 2^(−1), since the leading unused bits can be removed and replaced by more trailing bits for precision.
From this we can compute the number of remaining bits n for the fractional part:

n = w − m − 1    (2)

where w is the total width of the data type in bits. The associated scale factor is:

s = 2^(−n)    (4)
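The following C sketch illustrates how this format can be derived for a vector and applied to it. Since Equations (1) and (3) are not reproduced in this excerpt, computing m as the ceiling of log2 of the largest magnitude and quantizing as q_i = round(x_i / s) are assumptions consistent with the surrounding description, not the verbatim MicroAI formulas.

#include <math.h>
#include <stddef.h>
#include <stdint.h>

/* Derive the Qm.n format of a float vector for a w-bit signed word and quantize it. */
static void quantize_vector(const float *x, int32_t *q, size_t len, int w) {
    float max_abs = 0.0f;
    for (size_t i = 0; i < len; i++) {
        float a = fabsf(x[i]);
        if (a > max_abs) max_abs = a;
    }
    if (max_abs == 0.0f) max_abs = 1.0f;   /* avoid log2(0) for an all-zero vector */
    int m = (int)ceilf(log2f(max_abs));    /* assumed Equation (1); may be negative */
    int n = w - m - 1;                     /* Equation (2): 1 bit is kept for the sign */
    float s = powf(2.0f, (float)-n);       /* Equation (4): resolution of the format */
    for (size_t i = 0; i < len; i++)
        q[i] = (int32_t)lroundf(x[i] / s); /* saturation omitted for brevity */
}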
Two methods can be used to get the quantized weights of a deep neural network. These
methods are detailed in the following.
Figure: Quantization of a layer: the inputs (previous layer outputs), the weights and the bias are each quantized, and the layer produces quantized outputs (next layer inputs).
In the case of a convolutional neural network, the convolutional and fully connected layers require quantization-aware training for the weights. Please note that batch normalization layers also require quantization-aware training; however, as we do not use batch normalization in our experiments, it has not been implemented. For max-pooling layers, quantization-aware training is not required as they do not have weights. Moreover, as max-pooling only consists of an element-wise max, there is no need to quantize: inputs are already quantized from the previous layer and the dynamic range cannot be expanded. Therefore, no quantization is done on the max-pooling layers. The same holds for the ReLU activation, which is treated as a separate layer. Conversely, the element-wise addition layer requires quantization. It does not have trainable weights; however, the dynamic range of the output can increase after adding two large numbers. Therefore, the same quantization process is applied to compute the output scale factor.
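As a minimal illustration of these cases (a sketch, not the MicroAI inference kernels), ReLU, like max-pooling, operates directly on the quantized integers, whereas the element-wise addition widens to 32 bits and shifts the sum to the output scale factor; a single shared input format and the absence of saturation are simplifying assumptions.

#include <stddef.h>
#include <stdint.h>

/* ReLU on quantized values: max(0, x) does not depend on the scale factor. */
static void relu_q(int16_t *x, size_t len) {
    for (size_t i = 0; i < len; i++)
        if (x[i] < 0) x[i] = 0;
}

/* Element-wise addition of two inputs sharing n_in fractional bits, rescaled to
 * n_out fractional bits (n_out <= n_in) to absorb the growth of the dynamic range. */
static void add_q(const int16_t *a, const int16_t *b, int16_t *out,
                  size_t len, int n_in, int n_out) {
    int shift = n_in - n_out;
    for (size_t i = 0; i < len; i++) {
        int32_t sum = (int32_t)a[i] + (int32_t)b[i];   /* widened to 32 bits */
        out[i] = (int16_t)(sum >> shift);              /* back to the output format */
    }
}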
5.1.2. STM32Cube.AI
STM32Cube.AI is a software suite from STMicroelectronics that enables the deployment
of deep neural networks onto their STM32 family of microcontrollers. STM32Cube.AI supports
deployment of trained deep neural network models from several frameworks including Keras
and TensorFlow Lite. A wide range of operations are supported [45], allowing the deployment
of several deep neural network architectures, such as multi-layer perceptron and convolutional
neural networks, including residual neural networks.
STM32Cube.AI is a software suite fully integrated with other STMicroelectronics develop-
ment and deployment tools such as STM32CubeMX, STM32CubeIDE and
STM32CubeProgrammer. This provides a very straightforward and easy-to-use flow. Moreover, a test application is included to evaluate the model on the target with a real test dataset, providing metrics on inference time and ROM and RAM usage, without having to write a single line of code.
Like TFLite Micro, STM32Cube.AI supports computations in floating-point binary32
format and fixed point on 8-bit integers. In fact, the quantization on 8-bit integers comes from
TFLite. There is no support for fixed point on 16-bit integers.
STM32Cube.AI also has an optimized inference engine that seems to be partially based
on CMSIS-NN. However, as the source code of the inference engine is not freely available, it is
not clear what is optimized and how.
The inference library is entirely proprietary/closed-source, therefore it is not possible to
manipulate and extend this library. This represents a major drawback in a research environ-
ment. It is also not possible to use STM32Cube.AI on microcontrollers which are not part of
the STMicroelectronics portfolio. The inference process and optimizations are not detailed,
but unlike TFLite Micro, the network topology is compiled into a set of function calls to the
closed-source library rather than being interpreted at runtime.
Figure 3. MicroAI general flow for neural network quantization and evaluation on embedded target (float32 training, deployment on microcontroller, evaluation on microcontroller).
TensorFlow Lite for Microcontrollers is a portable library that can be included in any
project. Therefore, it could be used for any 32-bit microcontroller. However, only integration with the SparkFun Edge platform with an Ambiq Apollo3 microcontroller is included in our framework so far.
Similarly, KerasCNN2C produces a portable library that can be included in any project.
So far, only integration with the Nucleo-L452RE-P and the SparkFun Edge boards has been
performed. Support for other platforms can be added by providing project files that call
the inference code and a module that interfaces with the build and deployment tools for
that platform.
Please note that none of these tools can take a trained PyTorch model as an input to
deploy onto a microcontroller. The trained PyTorch model must therefore be converted to
a Keras model prior to the deployment. Our framework provides a module to perform
semi-automatic conversion from a PyTorch model to a Keras model. A Keras model that matches the structure of the PyTorch model must be written manually, and the mapping between the PyTorch and Keras layer names must also be specified. The semi-automatic conversion module can then automatically copy the weights from the PyTorch model to the Keras model and export it for use by one of the deployment tools.
5.6. KerasCNN2C: Conversion Tool from Trained Keras Model to Portable C Code
KerasCNN2C is a tool that we developed to automatically generate, from a trained Keras
model exported as an HDF5 file, a C library for inference. It can also be used independently of
the MicroAI framework.
In this work, only 1D models are evaluated on target. Work is underway for full support
of 2D-model deployment. Training and quantization are already supported, therefore 2D
models are evaluated offline. Here are the supported layers so far:
• Add
• AveragePooling1D
• BatchNormalization
• Conv1D
• Dense
• Flatten
• MaxPooling1D
• ReLU
• SoftMax
• ZeroPadding1D
Layers can have multiple inputs such as the Add layer, thus allowing residual neural
networks (ResNet) to be built. Sequential convolutional neural networks or multi-layer
perceptron models are also supported.
The generated library exposes a function in the model.h header to run the inference
process with the following signature:
void cnn(
    const number_t input[MODEL_INPUT_CHANNELS][MODEL_INPUT_SAMPLES],
    output_layer_type output);
where number_t is the data type used during inference defined in the number.h header, and
MODEL_INPUT_CHANNELS and MODEL_INPUT_SAMPLES are the dimensions of the input defined
in the generated model.h header. The input and output arrays must be allocated by the caller.
The model inference function does not convert the input from floating-point to fixed-point representation when the fixed-point inference code is used. The caller must perform this conversion before feeding the buffer to the model inference function (see Section 5.8).
In the fixed-point inference code, long_number_t is a type twice the size of number_t and clamp_to_number_t saturates and converts to number_t; both are defined in the number.h header. INPUT_SCALE_FACTOR is the scale factor for the first layer, defined in the model.h header.
The output array corresponds to the output of the model’s last layer, which is typically
a fully connected layer when solving a classification problem. If the purpose is to predict a
single class, the caller must find the index of the max element in the output array.
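A hypothetical usage sketch of the generated library is given below. The conversion formula, the output buffer declaration and the MODEL_OUTPUT_SAMPLES name are assumptions (the exact definitions live in the generated model.h and number.h headers and in Section 5.8), so this only illustrates the expected call sequence.

#include "model.h"    /* generated: cnn(), MODEL_INPUT_*, INPUT_SCALE_FACTOR */
#include "number.h"   /* generated: number_t, long_number_t, clamp_to_number_t */

#define MODEL_OUTPUT_SAMPLES 6   /* hypothetical: number of classes */

int classify(const float raw[MODEL_INPUT_CHANNELS][MODEL_INPUT_SAMPLES]) {
    static number_t input[MODEL_INPUT_CHANNELS][MODEL_INPUT_SAMPLES];
    static number_t output[MODEL_OUTPUT_SAMPLES];

    /* Assumed conversion: scale by 2^INPUT_SCALE_FACTOR, then saturate to number_t. */
    for (int c = 0; c < MODEL_INPUT_CHANNELS; c++)
        for (int s = 0; s < MODEL_INPUT_SAMPLES; s++)
            input[c][s] = clamp_to_number_t(
                (long_number_t)(raw[c][s] * (1 << INPUT_SCALE_FACTOR)));

    cnn(input, output);   /* run one inference */

    /* The predicted class is the index of the maximum output score. */
    int best = 0;
    for (int i = 1; i < MODEL_OUTPUT_SAMPLES; i++)
        if (output[i] > output[best]) best = i;
    return best;
}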
Internally, the generated inference function calls each of the layers' inference functions in sequence. The correct input and output buffers are passed to each layer according to the graph of the model.
6. Results
All the results presented in this section rely on the same model architecture, a ResNetv1-6
network with the layers shown in Figure 4. The number of filters per layer f is the same for
all layers, but is modified to adjust the number of parameters of the model. The convolutional
and pooling layers are one-dimensional except when handling the GTSRB dataset, for which
they are two-dimensional.
Figure 4. ResNetv1-6 architecture: input of dimensions (x, y, c), a stack of 3×3 convolutions (stride 1, padding 1, f filters) each followed by ReLU, 2×2 max-pooling (stride 2), a residual branch through a 1×1 convolution (stride 1, no padding, f filters) and 2×2 max-pooling, a final max-pooling of size (x/2, y/2), a Flatten layer and a FullyConnected layer with n_classes outputs.
For each experiment, the residual neural network is initially trained using 32-bit floating-
point numbers (i.e., without quantization), and then evaluated over the testing set. This
baseline version is depicted as float32 in the figures shown in the following.
The float32 neural network is quantized for inference with fixed-point on 16-bit integers
and is then evaluated without additional training. This version is depicted as int16 in the
figures shown hereafter. Quantization is performed using the Q7.9 format for the whole
network, meaning the number of bits n for the fractional part is fixed to 9.
The float32 neural network is also trained and evaluated for inference with fixed-point
on 8-bit integers using quantization-aware training. This version is indicated as int8 in the
figures. In this case the fixed-point precision can vary from layer to layer and is determined
using the method introduced in Section 4.1.4.
The SGD optimizer is used for all experiments. The stability of the SGD optimizer
motivated this choice, especially for the quantization-aware training. Training parameters are
described below for each dataset. Additionally, training and testing sets are normalized using
the z-score of the training set. It is worth noting that Mixup [53] is also used during training.
Accuracy is not evaluated directly on the target due to the amount of time it would
require. Only inference time for the UCI-HAR dataset is measured on the target.
In the figures, each point represents an average over 15 runs.
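As a side note on preprocessing, the z-score normalization mentioned above can be sketched as follows; the channel count and the statistics arrays are hypothetical placeholders, the actual values being computed offline on the training set.

#include <stddef.h>

#define NUM_CHANNELS 6   /* hypothetical: e.g., 3-axis acceleration + 3-axis angular velocity */

/* Mean and standard deviation of each channel, computed offline on the training set. */
static const float train_mean[NUM_CHANNELS] = { 0 };                 /* placeholder values */
static const float train_std[NUM_CHANNELS]  = { 1, 1, 1, 1, 1, 1 };  /* placeholder values */

/* Normalize an interleaved window of `samples` time steps in place. */
static void zscore_normalize(float *window, size_t samples) {
    for (size_t s = 0; s < samples; s++)
        for (size_t c = 0; c < NUM_CHANNELS; c++)
            window[s * NUM_CHANNELS + c] =
                (window[s * NUM_CHANNELS + c] - train_mean[c]) / train_std[c];
}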
Figure 5. Human Activity Recognition dataset (UCI-HAR): accuracy vs. filters.
Figure 6. Human Activity Recognition dataset (UCI-HAR): accuracy vs. parameter memory.
The quantization-aware training for fixed-point on 8-bit integers uses a batch size of 1024
over 140 epochs. Initial learning rate, momentum and weight decay are the same as for the
initial training. Learning rate is multiplied by 0.1 at epochs 40, 80, 100 and 120.
As can be observed in Figure 7 and regardless of the number of filters, the 16-bit quan-
tization (SMNIST int16) provides overall a similar accuracy compared to the floating-point
baseline (SMNIST float32). On the other hand, the accuracy drops by up to 1.07% when the
8-bit quantization is used. However, the accuracy drop slightly decreases when 48 filters per
convolution are used, and then stays around 0.5% or 0.6% for a higher number of filters.
In Figure 8, we can see that the 16-bit quantization is still the best solution in terms of
memory footprint. Despite the fact that the 8-bit quantization stays closer to 16-bit quantization
on SMNIST than on UCI-HAR, the 8-bit quantization does not provide any benefit over 16-bit
quantization in terms of accuracy vs. memory ratio, even for small neural networks.
Figure 7. Spoken digits dataset (SMNIST): accuracy vs. filters.
Figure 8. Spoken digits dataset (SMNIST): accuracy vs. parameter memory.
Figure 9. German Traffic Sign Recognition Benchmark: accuracy vs. filters.
Moreover, even though the 8-bit quantization does not outperform the results obtained
with the 16-bit quantization, Figure 10 shows that the 8-bit quantization can represent an
interesting solution when a two-dimensional network is used on an image dataset.
Figure 10. German Traffic Sign Recognition Benchmark: accuracy vs. parameter memory.
VDD_MCU is set to 1.8 V for the Nucleo-L452RE-P platform and the current is measured at the IDD jumper. This board does not have any on-board peripherals. On the SparkFun Edge board, the current is measured at the power input pin of the board (after the programmer). The built-in peripherals were unsoldered from the board to eliminate their power consumption. The current consumption was measured using a Brymen BM857s auto-ranging digital multimeter configured in max mode. The energy results are based on this maximum observed current consumption and the supply voltage of 3.3 V.
As can be seen in Table 3, and even though both platforms are built around a Cortex-M4F
core running at the same frequency, thanks to its subthreshold operation the SparkFun Edge
board consumes considerably less power than the Nucleo-L452RE-P, while also having more
Flash and RAM memory. However, results obtained with the CoreMark benchmark show that
the Ambiq Apollo3 microcontroller is slower than the STM32L452RE. It is worth noting that
the CoreMark results have been measured on the Ambiq Apollo3 microcontroller, while they
have been taken from the datasheet for the STM32L452RE microcontroller.
The deep neural network used in our experiments is the residual neural network de-
scribed in Section 6. This network has been trained on the UCI-HAR dataset presented in
Section 6.1.1. Inference time is measured on the microcontrollers over 50 test vectors from the testing set of UCI-HAR. TensorFlow Lite for Microcontrollers version 2.4.1 has been
used to deploy the deep neural network on the SparkFun Edge board, while STM32Cube.AI
version 5.2.0 has been used to deploy it on the Nucleo-L452RE-P board, both for the 32-bit
floating-point and fixed-point on 8-bit integers inference. Our framework is used to deploy
the deep neural network on both platforms for 32-bit floating-point, fixed-point on 16-bit
integers and fixed-point on 8-bit integer inference. It is worth noting that optimizations for the
Cortex-M4F provided by CMSIS-NN are enabled for both TensorFlow Lite for Microcontrollers
and STM32Cube.AI tools. Our framework does not make use of these optimizations yet. The
main characteristics of the frameworks are summarized in Table 4.
To compare software and hardware platforms, only the results with 80 filters per convolution are analyzed below. Nevertheless, results with fewer than 80 filters are still available in the tables of Appendix E to highlight how fast and efficient a small deep neural network can be when deployed on a constrained embedded target. They also highlight a higher overhead for very small neural networks, especially for TensorFlow Lite for Microcontrollers compared to our framework.
In Figure 11, we can observe that TFLite Micro has a higher overhead than
STM32Cube.AI, while MicroAI exhibits a slightly lower overhead than STM32Cube.AI. As
outlined in Table A4 of Appendix E, when the number of filters per convolution increases,
most of the ROM is used by the model’s weights.
The inference time obtained for both platforms and the different deployment tools is
illustrated in Figure 12. As can be seen, the STM32Cube.AI with the 8-bit inference provides
the best solution as it requires only 352 ms for one inference. In the same configuration,
TensorFlow Lite for Microcontrollers requires 592 ms for one inference. Finally, 1034 ms and
1003 ms are required for one inference using our framework on the Nucleo-L452RE-P board
and the SparkFun Edge board, respectively.
Figure 11. ROM footprint for TFLite Micro, STM32Cube.AI and MicroAI with 80 filters per convolution.
Figure 12. Inference time for 1 input for TFLite Micro, STM32Cube.AI and MicroAI with 80 filters per convolution.
When using fixed-point on 16-bit integers for the inference, our framework provides approximately the same performance as with 8 bits. The reason is that the inference code is the same: similar instructions are generated, and computations are performed using 32-bit registers. On the Nucleo-L452RE-P, we can observe that the inference time for one input is 1223 ms, while it is only 1042 ms on the SparkFun Edge board. We suspect this improvement is due to differences in the memory subsystem around the core, especially the cache for the Flash memory.
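The similarity between the 8-bit and 16-bit code paths can be sketched as follows (an illustration under assumptions, not the actual MicroAI kernel): whichever width number_t has, products are accumulated in a 32-bit register, and only the final shift and saturation depend on the output format.

#include <stddef.h>
#include <stdint.h>

typedef int16_t number_t;   /* int8_t for 8-bit inference; the loop body is unchanged */

static number_t dot_q(const number_t *x, const number_t *w, size_t len, int shift) {
    int32_t acc = 0;
    for (size_t i = 0; i < len; i++)
        acc += (int32_t)x[i] * (int32_t)w[i];   /* multiply-accumulate widened to 32 bits */
    acc >>= shift;                              /* realign to the output Q format */
    if (acc > 32767) acc = 32767;               /* saturation bounds assume int16_t here */
    if (acc < -32768) acc = -32768;
    return (number_t)acc;
}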
Figure 12 also shows that, whatever the tool and target, the 32-bit floating-point inference
is slower than with 16- or 8-bit quantization. We can also observe that our framework requires
1561 ms and 1512 ms for one inference on the SparkFun Edge and the Nucleo-L452RE-P boards,
respectively. The STM32Cube.AI requires 1387 ms for one inference on the Nucleo-L452RE-P
board. Our framework therefore exhibits a comparable performance to the STM32Cube.AI.
Finally, we can see that TensorFlow Lite for microcontrollers on the SparkFun Edge board
provides lower performance, requiring 2087 ms to perform one inference.
To conclude, and as outlined in Figure 13, we can say the SparkFun Edge board provides
the best power efficiency in all situations. The reason is that the SparkFun Edge board power
consumption is approximately 6 times lower than the Nucleo-L452RE-P. Using the SparkFun
Edge board and TensorFlow Lite for Microcontrollers with fixed-point on 8-bit integers, one
inference requires 0.45 µWh of energy consumption. In contrast, our framework requires
0.75 µWh and 0.78 µWh on the SparkFun Edge board for inference with fixed-point on 8-bit
and 16-bit integers, respectively. When 32-bit floating-point is used for inference on the
SparkFun Edge board, our framework provides a better energy efficiency than TensorFlow
Lite for Microcontrollers as it requires 1.17 µWh instead of 1.57 µWh.
Figure 13. Energy consumption for 1 input for TFLite Micro, STM32Cube.AI and MicroAI with 80 filters per convolution.
Concerning the energy consumed on the Nucleo-L452RE-P board, our framework requires
4.58 µWh, 5.42 µWh and 6.70 µWh for one inference using fixed-point on 8-bit integers, on
16-bit integers and 32-bit floating-point, respectively. In comparison, only 6.15 µWh of energy
is required for one inference when the STM32Cube.AI framework is used with 32-bit floating-
point. Finally, we can see that the required energy for one inference when using STM32Cube.AI
with fixed-point on 8-bit integers is 1.56 µWh on the Nucleo-L452RE-P. This amount of energy
is similar to the one obtained with TensorFlow Lite for Microcontrollers on the SparkFun Edge
board when performing floating-point inference.
7. Discussion
First, a high variance is observable when we compare the accuracy results obtained
on the three datasets versus the model size. This variability makes it difficult to draw any
definitive conclusions. However, there is a trend in our results that provides some insights
into performance for each experiment.
As has been shown, execution using fixed-point on 8-bit and 16-bit integers provides a
significant decrease in the inference time, thus also reducing the average power consumption.
As power consumption is a key parameter in embedded systems, shorter inference times are
interesting as they make it possible either to reduce the microcontroller’s operating frequency
or to put the microcontroller in sleep mode for a longer period between two inferences. In
addition, execution using 8-bit and 16-bit integers also provides a significant reduction in memory footprint. The memory required for the model parameters is divided by 4 and 2 for 8-bit and 16-bit quantization, respectively. It is worth noting that the RAM usage, which is not illustrated here, is also reduced.
Our results also show that performing inference using quantization with fixed-point on
16-bit integers does not lead to a drop in accuracy, whatever test case is considered. Moreover,
inference using 16 bits does not require quantization-aware training to achieve such results.
As both the power consumption and the memory footprint can be decreased, fixed-point
quantization on 16-bit integers is therefore always preferable to 32-bit floating-point inference.
Conversely, 8-bit quantization does not provide a substantial improvement over 16-bit
quantization. Moreover, 8-bit quantization requires performing quantization-aware training.
It is worth noting that quantization-aware training for 8-bit quantization introduces more
variance in the results over the baseline, and is also more sensitive to a change in the training
parameters. As it is quite difficult to achieve a stable training, it is preferable to use an
optimizer such as SGD with conservative parameters, instead of optimizers such as Adam
or RAdam, to reduce the variance of the results, even though it means achieving a lower
maximum accuracy.
During our experiments, it was also observed that the 8-bit post-training quantization
of TensorFlow Lite achieved better results compared to the 8-bit quantization-aware training
provided by our framework. This is likely due to the combination of per-filter quantization,
asymmetric range and non-power-of-two scale factor, as well as optimizations of TensorFlow
Lite to avoid unnecessary truncation and thus loss of precision. We also observed that using
9 bits instead of 8 bits during the post-training quantization allows us to outperform the
TensorFlow Lite quantization performance. Some results showing this improvement are
available in Appendix C for the UCI-HAR dataset. From these results, we can conclude that the slight additional precision brought by the combination of per-filter quantization, asymmetric range and non-power-of-two scale factor does in fact matter. Implementing these methods in our framework therefore seems necessary to reduce the accuracy loss of our 8-bit quantization.
Another benefit of 8-bit quantization is that SIMD instructions can be used (with some classes of microcontrollers) to improve the inference time and thus further reduce the power consumption.
8. Conclusions
In this work, we presented a framework to perform quantization and then deployment
of deep neural networks on microcontrollers. This framework represents an alternative to
the STM32Cube.AI proprietary solution and TensorFlow Lite for Microcontrollers, an open-
source but complex environment. Inference time and energy efficiency measured on two
different embedded platforms demonstrated that our framework is a viable alternative to the
aforementioned solutions to perform deep neural network inference. Our framework also
introduces a fixed-point on 16-bit integer post-training quantization which is not available
with the two other frameworks. We have shown that this 16-bit fixed-point quantization
provides an improvement over a 32-bit floating-point inference, while being competitive with
fixed-point on 8-bit integer quantization-aware training. It provides a reduced inference time
compared to floating-point inference. Moreover, the memory footprint is divided by two
while keeping the same accuracy. The 8-bit quantization provides further improvements in
inference time and memory footprint but at the cost of a slight decrease in accuracy and a
more complex implementation.
Work is still in progress to implement some optimization techniques for fixed-point on
8-bit integer inference. Three optimizations are especially targeted: per-filter quantization,
asymmetric range and non-power-of-two scale factor. In addition, using SIMD instructions
in the inference engine should help further decrease the inference time. These optimizations
would therefore make our framework more competitive in terms of inference time and
accuracy. Another possible improvement for fixed-point integer inference consists of
using 8-bit quantization for the weights and 16-bit quantization for the activations. TensorFlow
Lite for Microcontrollers is currently in the process of implementing this technique. Mixed
precision can indeed provide a way to reduce the memory footprint of layers that do not need a
high-precision representation (using 8 bits for weights and activations), while keeping a higher
precision (16-bit representation) for layers that need it. The CMix-NN [58] library already
provides an implementation of convolution functions for various data type configurations (in
2, 4 and 8 bits). To further improve power consumption and memory footprint, binary neural
networks can also be considered. However, to run them efficiently on microcontrollers, binary
neural networks would need to be implemented using bit-wise operations on 32-bit registers.
This way, as many as 32 computations could be performed in parallel.
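As an illustration of this idea (a sketch, not a feature of MicroAI), a dot product over 32 binary weights and 32 binary activations packed into 32-bit words reduces to one XNOR and one population count:

#include <stdint.h>

/* Binary dot product of 32 {-1, +1} values packed one per bit (bit set = +1).
 * XNOR marks positions with matching signs; the dot product is then
 * (#matches) - (#mismatches) = 2 * popcount(xnor) - 32. */
static int32_t binary_dot32(uint32_t activations, uint32_t weights) {
    uint32_t matches = ~(activations ^ weights);            /* XNOR */
    return 2 * (int32_t)__builtin_popcount(matches) - 32;   /* GCC/Clang built-in */
}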
Apart from quantization, other techniques can be used to improve the execution of deep
neural networks on embedded targets. One of these techniques is the big/LITTLE DNN
approach [59] where the inference is first done on a very small deep neural network. Then, if
the confidence is too low, inference is done using a larger deep neural network to reduce the
confusion of the classification task. This technique allows a fast inference response time for
most inputs, thus lowering the power consumption. In fact, it has been shown that the set
of inputs that are difficult to classify and so require running the bigger deep neural network
is small. However, this approach does not lower the memory footprint. Other techniques
such as pruning can also be used to obtain a smaller deep neural network while keeping
the same accuracy. When structured pruning [60] is used, for instance, entire filters are
completely removed from the convolutional neural network model. This reduces both the
memory footprint and the power consumption. Finally, other optimization techniques also
consider new neural network architectures. One can cite for example the recently published
MCUNet [2] framework with its TinyNAS tool that aims to identify the neural network model
that will best perform on the target.
Future works will also be dedicated to the deployment of neural network architectures
on FPGA using high-level synthesis tools such as Vivado. In fact, a feasibility study has
already been performed and has shown that our framework can be also used for deployment
on FPGA. Moreover, work is in progress to natively support automatic PyTorch deployment.
To do so, the features provided by the torch.fx module of the newly released PyTorch 1.8.0
are used.
Finally, we are currently working on a real application of our framework that consists in
integrating artificial intelligence into smart glasses [61] to perform, among other tasks, human
activity recognition in the context of elder care. Preliminary results have been published in [6].
Supplementary Materials: The open-source MicroAI software framework [8] is available online at
https://bitbucket.org/edge-team-leat/microai_public.
Author Contributions: Investigation, P.N.; methodology, P.N. and G.B.H.; software, P.N. and G.B.H.;
supervision, A.P., B.M. and V.G.; writing—original draft preparation, P.N.; writing—review and editing,
G.B.H., A.P., B.M., V.G. All authors have read and agreed to the published version of the manuscript.
Funding: This research is funded by “Université Côte d’Azur”, “CNRS”, “Région Sud Provence-Alpes-
Côte d’Azur, France” and “Ellcie Healthy”.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Conflicts of Interest: The authors declare no conflict of interest. The funders had no role in the design
of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in
the decision to publish the results.
Table A1. Microcontroller (STM32L452RE), CPU (Intel Core i7-8850H) and GPU (NVidia Quadro
P2000M) platforms. Power consumption figures for the GPU and the CPU are the TDP values from the
manufacturer and do not reflect the exact power consumption of the device.
Table A2. Comparison of 32-bit floating-point inference time for a single input on a microcontroller, a
CPU and a GPU. The neural network architecture is described in Section 6 with the number of filters per
convolution layer varying from 16 to 80, and the dataset is described in Section 6.1.1. For the CPU and
the GPU, the inference batch size is set to 512 and the dataset is repeated 10^4 times to try to compensate
for the large startup overhead compared to the total inference time. Measurements are averaged over at
least 5 runs.
Table A3. Number of arithmetic and logic operations with fixed-point on integers inference for the main
layers of a residual neural network with f the number of filters (output channels), s the number of input
samples, c the number of input channels, k the kernel size, n the number of neurons and i the number of
input layers to the residual Add layer. Conv1D is assumed to be without padding and with a stride of 1.
Figure A1. Accuracy vs. filters for baseline (float32), 8-bit post-training quantization from TensorFlow
Lite (int8 TFLite PTQ), 8-bit quantization-aware training from our framework (int8 MicroAI QAT), and
9-bit post-training quantization from our framework (int9 MicroAI PTQ). The neural network architecture
is described in Section 6 with the number of filters per convolution layer varying from 32 to 48, and the
dataset is described in Section 6.1.1.
Before being deployed and evaluated, the appropriate code must be generated and built
for the targeted platform by running the following command:
microai <config.toml> prepare_deploy
Once the binaries are generated, they can be deployed, and the model can be evaluated
on the target by running the following command:
microai <config.toml> deploy_and_evaluate
Table A4. ROM footprint vs. filters for TFLite Micro, STM32Cube.AI and MicroAI.
Table A5. Inference time for one input vs. filters for TFLite Micro, STM32Cube.AI and MicroAI.
Table A6. Energy consumption for 1 input vs. filters for TFLite Micro, STM32Cube.AI and MicroAI.
Energy (µWh)
Framework       Target            Data Type   16 Filters  24 Filters  32 Filters  40 Filters  48 Filters  64 Filters  80 Filters
TFLite Micro    SparkFun Edge     float32     0.135       0.221       0.330       0.469       0.647       1.058       1.569
MicroAI         SparkFun Edge     float32     0.040       0.116       0.195       0.297       0.428       0.765       1.174
MicroAI         Nucleo-L452RE-P   float32     0.247       0.675       1.148       1.753       2.478       4.327       6.700
STM32Cube.AI    Nucleo-L452RE-P   float32     0.378       0.771       1.202       1.789       2.412       4.083       6.146
MicroAI         SparkFun Edge     int16       0.031       0.085       0.144       0.216       0.293       0.502       0.783
MicroAI         Nucleo-L452RE-P   int16       0.199       0.533       0.910       1.410       2.038       3.528       5.421
TFLite Micro    SparkFun Edge     int8        0.070       0.098       0.130       0.169       0.211       0.314       0.445
MicroAI         SparkFun Edge     int8        0.030       0.076       0.130       0.195       0.283       0.495       0.754
MicroAI         Nucleo-L452RE-P   int8        0.191       0.477       0.801       1.209       1.700       2.924       4.581
STM32Cube.AI    Nucleo-L452RE-P   int8        0.143       0.239       0.356       0.495       0.647       1.072       1.560
References
1. Wang, Y.; Wei, G.; Brooks, D. Benchmarking TPU, GPU, and CPU Platforms for Deep Learning. arXiv 2019, arXiv:1907.10701.
2. Lin, J.; Chen, W.M.; Lin, Y.; Cohn, J.; Gan, C.; Han, S. MCUNet: Tiny Deep Learning on IoT Devices. In Proceedings of the 34th
Conference on Neural Information Processing Systems (NeurIPS 2020), Online, 6–12 December 2020.
3. Lai, L.; Suda, N. Enabling Deep Learning at the IoT Edge. In Proceedings of the International Conference on Computer-Aided
Design (ICCAD’18), San Diego, CA, USA, 5–8 November 2018; Association for Computing Machinery: New York, NY, USA, 2018;
doi:10.1145/3240765.3243473.
4. Kromes, R.; Russo, A.; Miramond, B.; Verdier, F. Energy consumption minimization on LoRaWAN sensor network by using an
Artificial Neural Network based application. In Proceedings of the 2019 IEEE Sensors Applications Symposium (SAS), Sophia
Antipolis, France, 11–13 March 2019; pp. 1–6, doi:10.1109/SAS.2019.8705992.
5. Tan, M.; Le, Q. EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks. In Proceedings of the 36th International
Conference on Machine Learning (PMLR 2019), Long Beach, CA, USA, 9–15 June 2019; Chaudhuri, K., Salakhutdinov, R., Eds.;
Volume 97, pp. 6105–6114.
6. Novac, P.E.; Russo, A.; Miramond, B.; Pegatoquet, A.; Verdier, F.; Castagnetti, A. Toward unsupervised Human Activity Recognition
on Microcontroller Units. In Proceedings of the 2020 23rd Euromicro Conference on Digital System Design (DSD), 2020, Kranj,
Slovenia, 26–28 August 2020; pp. 542–550, doi:10.1109/DSD51259.2020.00090.
7. Pimentel, J.J.; Bohnenstiehl, B.; Baas, B.M. Hybrid Hardware/Software Floating-Point Implementations for Optimized Area and
Throughput Tradeoffs. IEEE Trans. Very Large Scale Integr. (VLSI) Syst. 2017, 25, 100–113, doi:10.1109/TVLSI.2016.2580142.
8. Novac, P.E.; Pegatoquet, A.; Miramond, B. MicroAI, a software framework for end-to-end deep neural networks training, quantization
and deployment onto embedded devices. Version 1.0. 2021. doi:10.5281/zenodo.5507397.
9. Choi, J.; Chuang, P.I.J.; Wang, Z.; Venkataramani, S.; Srinivasan, V.; Gopalakrishnan, K. Bridging the accuracy gap for 2-bit quantized
neural networks (qnn). arXiv 2018, arXiv:1807.06964.
10. Esser, S.K.; McKinstry, J.L.; Bablani, D.; Appuswamy, R.; Modha, D.S. Learned step size quantization. arXiv 2019, arXiv:1902.08153.
11. Nikolić, M.; Hacene, G.B.; Bannon, C.; Lascorz, A.D.; Courbariaux, M.; Bengio, Y.; Gripon, V.; Moshovos, A. Bitpruning: Learning
bitlengths for aggressive and accurate quantization. arXiv 2020, arXiv:2002.03090.
12. Uhlich, S.; Mauch, L.; Yoshiyama, K.; Cardinaux, F.; Garcia, J.A.; Tiedemann, S.; Kemp, T.; Nakamura, A. Differentiable quantization
of deep neural networks. arXiv 2019, arXiv:1905.11452.
13. Hubara, I.; Courbariaux, M.; Soudry, D.; El-Yaniv, R.; Bengio, Y. Binarized Neural Networks. In Advances in Neural Information
Processing Systems; Lee, D., Sugiyama, M., Luxburg, U., Guyon, I., Garnett, R., Eds.; Curran Associates, Inc.: 2016, Barcelona, Spain,
5–10 December 2016; Volume 29.
14. Rastegari, M.; Ordonez, V.; Redmon, J.; Farhadi, A. Xnor-net: Imagenet classification using binary convolutional neural networks. In
Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands, 8–16 October 2016; Springer: Cham,
Switzerland, 2016; pp. 525–542.
15. Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. MobileNets: Efficient
Convolutional Neural Networks for Mobile Vision Applications. arXiv 2017, arXiv:1704.04861.
16. Iandola, F.N.; Han, S.; Moskewicz, M.W.; Ashraf, K.; Dally, W.J.; Keutzer, K. SqueezeNet: AlexNet-level accuracy with 50x fewer
parameters and <0.5 MB model size. arXiv 2016, arXiv:1602.07360.
17. Deng, J.; Dong, W.; Socher, R.; Li, L.-J.; Li, K.; Fei-Fei, L. ImageNet: A large-scale hierarchical image database. In Proceed-
ings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 248–255,
doi:10.1109/CVPR.2009.5206848.
18. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the 2016 IEEE Conference on
Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778, doi:10.1109/CVPR.2016.90.
19. Han, S.; Pool, J.; Tran, J.; Dally, W. Learning both weights and connections for efficient neural network. In Proceedings of the 28th
International Conference on Neural Information Processing Systems - Volume 1, Montreal, Canada, 7–10 December 2015; MIT Press:
Cambridge, MA, USA, 2015; pp. 1135–1143.
20. Yamamoto, K.; Maeno, K. PCAS: Pruning Channels with Attention Statistics. arXiv 2018, arXiv:1806.05382.
21. Hacene, G.B.; Lassance, C.; Gripon, V.; Courbariaux, M.; Bengio, Y. Attention based pruning for shift networks. arXiv 2019,
arXiv:1905.12300.
22. Ramakrishnan, R.K.; Sari, E.; Nia, V.P. Differentiable Mask for Pruning Convolutional and Recurrent Networks. In Proceedings of the
2020 17th Conference on Computer and Robot Vision (CRV), Ottawa, ON, Canada, 13–15 May 2020; pp. 222–229.
23. He, Y.; Ding, Y.; Liu, P.; Zhu, L.; Zhang, H.; Yang, Y. Learning Filter Pruning Criteria for Deep Convolutional Neural Networks
Acceleration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 14–19
June 2020; pp. 2009–2018.
24. Han, S.; Mao, H.; Dally, W.J. Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman
coding. arXiv 2015, arXiv:1510.00149.
25. Fard, M.M.; Thonet, T.; Gaussier, E. Deep k-means: Jointly clustering with k-means and learning representations. Pattern Recognit.
Lett. 2020, 138, 185–192.
26. Cardinaux, F.; Uhlich, S.; Yoshiyama, K.; García, J.A.; Mauch, L.; Tiedemann, S.; Kemp, T.; Nakamura, A. Iteratively training look-up
tables for network quantization. IEEE J. Sel. Top. Signal Process. 2020, 14, 860–870.
27. He, Z.; Fan, D. Simultaneously Optimizing Weight and Quantizer of Ternary Neural Network Using Truncated Gaussian Approxima-
tion. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA,
16–20 June 2019; pp. 11430–11438, doi:10.1109/CVPR.2019.01170.
28. Lee, E.; Hwang, Y. Layer-Wise Network Compression Using Gaussian Mixture Model. Electronics 2021, 10, 72,
doi:10.3390/electronics10010072.
29. Vogel, S.; Raghunath, R.B.; Guntoro, A.; Van Laerhoven, K.; Ascheid, G. Bit-Shift-Based Accelerator for CNNs with Selectable
Accuracy and Throughput. In Proceedings of the 2019 22nd Euromicro Conference on Digital System Design (DSD), Kallithea, Greece,
28–30 August 2019; pp. 663–667, doi:10.1109/DSD.2019.00106.
30. Courbariaux, M.; Bengio, Y.; David, J.P. Training deep neural networks with low precision multiplications. arXiv 2015, arXiv:1412.7024.
31. Holt, J.L.; Baker, T.E. Back propagation simulations using limited precision calculations. In Proceedings of the IJCNN-
91-Seattle International Joint Conference on Neural Networks, Seattle, WA, USA, 8–12 July 1991; Volume ii, pp. 121–126,
doi:10.1109/IJCNN.1991.155324.
32. Vanhoucke, V.; Senior, A.; Mao, M.Z. Improving the speed of neural networks on CPUs. In Proceedings of the Deep Learning and
Unsupervised Feature Learning Workshop (NIPS 2011), Granada, Spain, 12–17 December 2011.
33. Garofalo, A.; Tagliavini, G.; Conti, F.; Rossi, D.; Benini, L. XpulpNN: Accelerating Quantized Neural Networks on RISC-V Processors
Through ISA Extensions. In Proceedings of the 2020 Design, Automation & Test in Europe Conference & Exhibition, DATE 2020,
Grenoble, France, 9–13 March 2020; IEEE: New York, NY, USA, 2020; pp. 186–191, doi:10.23919/DATE48585.2020.9116529.
34. Cotton, N.J.; Wilamowski, B.M.; Dundar, G. A Neural Network Implementation on an Inexpensive Eight Bit Microcontroller.
In Proceedings of the 2008 International Conference on Intelligent Engineering Systems, Miami, FL, USA, 25–29 February 2008;
pp. 109–114, doi:10.1109/INES.2008.4481278.
35. Nair, V.; Hinton, G.E. Rectified Linear Units Improve Restricted Boltzmann Machines. In Proceedings of the 27th International
Conference on International Conference on Machine Learning (ICML’10), Haifa, Israel, 21–24 June 2010; Omnipress: Madison, WI,
USA, 2010; pp. 807–814.
36. Zhang, Y.; Suda, N.; Lai, L.; Chandra, V. Hello Edge: Keyword Spotting on Microcontrollers. arXiv 2018, arXiv:1711.07128.
37. IEEE Standard for Floating-Point Arithmetic. IEEE Std 754-2019 (Revision of IEEE 754-2008); IEEE: Piscataway, NJ, USA, 2019; pp.1–84,
doi:10.1109/IEEESTD.2019.8766229.
38. Micikevicius, P.; Narang, S.; Alben, J.; Diamos, G.; Elsen, E.; Garcia, D.; Ginsburg, B.; Houston, M.; Kuchaiev, O.; Venkatesh, G.; et al.
Mixed Precision Training. In Proceedings of the 6th International Conference on Learning Representations, Vancouver, Canada, 30
April–3 May 2018.
39. ARM. ARM Developer Suite AXD and armsd Debuggers Guide, 4.7.9 Q-Format; ARM DUI 0066D Version 1.2; Arm Ltd.: Cambridge, UK,
2001.
40. David, R.; Duke, J.; Jain, A.; Reddi, V.; Jeffries, N.; Li, J.; Kreeger, N.; Nappier, I.; Natraj, M.; Regev, S.; et al. TensorFlow Lite Micro:
Embedded Machine Learning on TinyML Systems. arXiv 2020, arXiv:2010.08678.
41. STMicroelectronics. STM32Cube.AI. Available online: https://www.st.com/content/st_com/en/stm32-ann.html (accessed on 19
March 2021).
42. Google. TensorFlow Lite for Microcontrollers Supported Operations. Available online: https://github.com/tensorflow/tensorflow/
blob/master/tensorflow/lite/micro/kernels/micro_ops.h (accessed on 22 March 2021).
43. Google. TensorFlow Lite 8-Bit Quantization Specification. Available online: https://www.tensorflow.org/lite/performance/
quantization_spec (accessed on 19 March 2021).
44. Jacob, B.; Kligys, S.; Chen, B.; Zhu, M.; Tang, M.; Howard, A.; Adam, H.; Kalenichenko, D. Quantization and Training of Neural
Networks for Efficient Integer-Arithmetic-Only Inference. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and
Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 2704–2713, doi:10.1109/CVPR.2018.00286.
45. STMicroelectronics. Supported Deep Learning toolboxes and layers, Documentation embedded in X-CUBE-AI Expansion Package
5.2.0, 2020. Available online: https://www.st.com/en/embedded-software/x-cube-ai.html (accessed on 19 March 2021).
46. Nordby, J. emlearn: Machine Learning inference engine for Microcontrollers and Embedded Devices. 2019. Available online:
https://doi.org/10.5281/zenodo.2589394 (accessed on 18 February 2021).
47. Sakr, F.; Bellotti, F.; Berta, R.; De Gloria, A. Machine Learning on Mainstream Microcontrollers. Sensors 2020, 20, 2638,
doi:10.3390/s20092638.
48. Givargis, T. Gravity: An Artificial Neural Network Compiler for Embedded Applications. In Proceedings of the 26th Asia and South
Pacific Design Automation Conference (ASPDAC’21), Tokyo, Japan, 18–21 January 2021; Association for Computing Machinery: New
York, NY, USA, 2021; pp. 715–721, doi:10.1145/3394885.3431514.
49. Wang, X.; Magno, M.; Cavigelli, L.; Benini, L. FANN-on-MCU: An Open-Source Toolkit for Energy-Efficient Neural Network Inference
at the Edge of the Internet of Things. IEEE Internet Things J. 2020, 7, 4403–4417.
50. Tom’s Obvious Minimal Language. Available online: https://toml.io/ (accessed on 19 March 2021).
51. Ioffe, S.; Szegedy, C. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. In Proceedings
of the 32nd International Conference on Machine Learning, Lille, France, 6–11 July 2015; Bach, F., Blei, D., Eds.; PMLR: Lille, France,
2015; Volume 37, pp. 448–456.
52. Jinja2. Available online: https://palletsprojects.com/p/jinja/ (accessed on 19 March 2021).
53. Zhang, H.; Cissé, M.; Dauphin, Y.N.; Lopez-Paz, D. mixup: Beyond Empirical Risk Minimization. In Proceedings of the 6th
International Conference on Learning Representations, Vancouver, Canada, 30 April–3 May 2018.
54. Davide, A.; Alessandro, G.; Luca, O.; Xavier, P.; Jorge, L.R.O. A Public Domain Dataset for Human Activity Recognition using
Smartphones. In Proceedings of the ESANN, Bruges, Belgium, 24–26 April 2013.
55. Khacef, L.; Rodriguez, L.; Miramond, B. Written and Spoken Digits Database for Multimodal Learning. 2019. Available online:
https://doi.org/10.5281/zenodo.3515935 (accessed on 18 February 2021).
56. Warden, P. Speech Commands: A Dataset for Limited-Vocabulary Speech Recognition. arXiv 2018, arXiv:1804.03209.
57. Stallkamp, J.; Schlipsing, M.; Salmen, J.; Igel, C. The German Traffic Sign Recognition Benchmark: A multi-class classification
competition. In Proceedings of the 2011 International Joint Conference on Neural Networks, San Jose, CA, USA, 31 July–5 August
2011; pp. 1453–1460, doi:10.1109/IJCNN.2011.6033395.
58. Capotondi, A.; Rusci, M.; Fariselli, M.; Benini, L. CMix-NN: Mixed Low-Precision CNN Library for Memory-Constrained Edge
Devices. IEEE Trans. Circuits Syst. II Express Briefs 2020, 67, 871–875, doi:10.1109/TCSII.2020.2983648.
59. Park, E.; Kim, D.; Kim, S.; Kim, Y.; Kim, G.; Yoon, S.; Yoo, S. Big/little deep neural network for ultra low power inference.
In Proceedings of the 2015 International Conference on Hardware/Software Codesign and System Synthesis (CODES + ISSS),
Amsterdam, The Netherlands, 4–9 October 2015; pp. 124–132, doi:10.1109/CODESISSS.2015.7331375.
60. Anwar, S.; Hwang, K.; Sung, W. Structured Pruning of Deep Convolutional Neural Networks. J. Emerg. Technol. Comput. Syst. 2017,
13, 1–18, doi:10.1145/3005348.
61. Arcaya-Jordan, A.; Pegatoquet, A.; Castagnetti, A. Smart Connected Glasses for Drowsiness Detection: a System-Level Modeling
Approach. In Proceedings of the 2019 IEEE Sensors Applications Symposium (SAS), Sophia Antipolis, France, 11–13 March 2019; pp.
1–6, doi:10.1109/SAS.2019.8706022.