Research Internship Project
FPGA-Based Parallel SNN Solution for Energy-Autonomous IoT Nodes
Context
As the Internet of Things (IoT) continues to evolve, the integration of Artificial Intelligence (AI) with
edge computing, i.e., Edge AI, emerges as a powerful synergy. This combination leverages Machine
Learning (ML) algorithms to locally process sensor data, offering real-time intelligent decision-making.
However, at this level of integration, the need to process large amounts of heterogeneous data
imposes challenging constraints in terms of efficiency, accuracy and resource utilization. Moreover, in
the context of energy-autonomous wireless sensor nodes, every computation has to cope with a limited
and time-varying harvested energy budget. As a consequence, the requirement for energy frugality severely
hampers the performance of these ML algorithms, highlighting the need for dynamic and
adaptive behavior. The next generation of Artificial Neural Networks (ANNs), known as Spiking Neural
Networks (SNNs) [1], emerges as a promising candidate for autonomous nodes. SNNs significantly
improve energy and computing efficiency thanks to their intrinsic sparsity, dynamic behavior and event-
based representation. One approach to optimizing energy consumption is to reduce the required
computations while properly managing resource utilization and memory accesses [2]. As such, the objective
of this work is to propose a low-latency Field Programmable Gate Array (FPGA) based parallel SNN
solution for energy-autonomous IoT nodes.
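To make the event-based computation model concrete, the minimal sketch below shows a leaky integrate-and-fire (LIF) layer that only accumulates synaptic weights for the inputs that actually emitted a spike in a given time step. It is a simplified illustration in plain Python/NumPy; the layer sizes, threshold, leak and quantization range are illustrative assumptions and do not describe the SPLEAT implementation.

```python
import numpy as np

# Illustrative sizes and parameters (assumptions, not project values)
N_IN, N_OUT = 64, 16
rng = np.random.default_rng(0)
weights = rng.integers(-8, 8, size=(N_IN, N_OUT)).astype(np.int32)  # quantized weights
v_mem = np.zeros(N_OUT, dtype=np.int32)    # membrane potentials
V_TH, LEAK = 64, 1                         # firing threshold and leak per time step

def lif_step(in_spikes: np.ndarray) -> np.ndarray:
    """One event-driven LIF time step: integrate only active inputs, leak, fire, reset."""
    global v_mem
    active = np.flatnonzero(in_spikes)          # indices of input neurons that spiked
    if active.size:                             # no events -> no synaptic work at all
        v_mem += weights[active].sum(axis=0)    # accumulate only the active weight rows
    v_mem -= LEAK                               # constant leak
    out_spikes = v_mem >= V_TH                  # threshold crossing -> output events
    v_mem[out_spikes] = 0                       # reset the neurons that fired
    return out_spikes

# Sparse input: only ~10% of the input neurons emit a spike in this time step
spikes_in = (rng.random(N_IN) < 0.1).astype(np.int8)
print(lif_step(spikes_in).astype(np.int8))
```

Because synaptic accumulation is triggered only by incoming events, the amount of arithmetic and memory traffic scales with spike activity rather than with the full layer size, which is the source of the energy savings mentioned above.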
Project mission
The project mission will be organized into several phases:
- Bibliographic study and introduction to SNNs (training and implementation), different SNN FPGA architectures/accelerators [1] and energy-autonomous nodes
- Develop methods to parallelize and implement a given quantized SNN onto a variable number of Neural Processing Units (NPUs) in the SPLEAT architecture [3]
- Propose efficient memory access and data grouping, i.e., binary tensors, to properly manage resource utilization and improve latency
- Compare different architectures, i.e., bit-serial, parallel, and a mix of bit-serial and parallel (see the sketch after this list), and evaluate the trade-off between quality of service, resource utilization and energy consumption
- Publication in an international conference
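As a rough illustration of the bit-serial versus bit-parallel trade-off, the sketch below computes the same weighted sum either in one wide pass or bit-plane by bit-plane. It is plain Python with made-up sizes and unsigned 4-bit activations, a simplification for intuition only, not the SPLEAT datapath.

```python
import numpy as np

rng = np.random.default_rng(1)
N, BITS = 32, 4                                        # illustrative sizes (assumptions)
w = rng.integers(-8, 8, size=N).astype(np.int64)       # quantized weights
x = rng.integers(0, 2**BITS, size=N).astype(np.int64)  # unsigned 4-bit activations

# Bit-parallel reference: one wide multiply-accumulate pass
parallel = int(np.dot(w, x))

# Bit-serial: one bit-plane per "cycle"; each cycle only needs adders, no multipliers
serial = 0
for b in range(BITS):
    bit_plane = (x >> b) & 1             # b-th bit of every activation
    partial = int(np.dot(w, bit_plane))  # reduces to summing weights where the bit is 1
    serial += partial << b               # weight the partial sum by the bit position

assert serial == parallel
print(parallel, serial)
```

The bit-serial form needs as many cycles as there are activation bits, but each cycle only moves single-bit operands and uses adders, whereas the parallel form finishes in one pass at the cost of wider datapaths and memory ports; mixing the two is one way to tune the quality-of-service, resource and energy trade-off targeted by the project.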
References
[1] M. Bouvier et al., "Spiking Neural Networks Hardware Implementations and Challenges: A Survey," ACM J. Emerg. Technol. Comput. Syst., vol. 15, no. 2, pp. 1–35, Apr. 2019.
[2] A. Castagnetti, A. Pegatoquet, and B. Miramond, "Trainable quantization for Speedy Spiking Neural Networks," Frontiers in Neuroscience, vol. 17, 2023.
[3] N. Abderrahmane et al., "SPLEAT: SPiking Low-power Event-based ArchiTecture for in-orbit processing of satellite imagery," in Proc. Int. Joint Conf. on Neural Networks (IJCNN), Jul. 2022, pp. 1–10.
Practical information
Location: LEAT Lab / SophiaTech Campus, Sophia Antipolis
Duration: 6 months, starting March 2025
Profile: embedded programming, hardware design, IoT, machine learning
Research keywords: Embedded systems, FPGA, SNN, signal processing, Edge AI
Application: CV, Motivation letter, Grades
Contact and supervision
Ghattas Akkad, Benoît Miramond, Alain Pegatoquet
LEAT Lab – Université Côte d'Azur / CNRS
Polytech Nice Sophia
04.89.15.44.39. / ghattas.akkad@univ-cotedazur.fr, Alain.PEGATOQUET@univ-cotedazur.fr