The "neuromorphic" event-based approach to vision and image sensing has recently gained substantial attention, as it offers solutions to the problems encountered with conventional image-processing technology. The output of such a sensor is a time-continuous stream of pixel events, delivered at unprecedented temporal resolution, containing no redundancy and encoding a dynamic range orders of magnitude higher than that of conventional image sensors. However, for lack of alternatives so far, the event-based, asynchronous output of these sensors has been processed on conventional computing devices such as CPUs and GPUs. This approach is clearly non-ideal and prevents fully exploiting the unique characteristics of such sensors. In this postdoctoral project, we will attempt to develop an event-based processing chain applied to realistic data in the context of moving objects. To reach this goal, we will explore and compare on-chip the most recent learning methods adapted to spiking neural networks.
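As a concrete illustration of the data format involved (a sketch, not part of the project itself): each output of an event camera is typically a tuple of pixel coordinates, a timestamp, and a polarity indicating whether brightness increased or decreased. The `Event` type and `accumulate` helper below are hypothetical names used only to show how such an asynchronous stream differs from a frame, by naively collapsing events into a signed count image.

```python
from dataclasses import dataclass

@dataclass
class Event:
    x: int         # pixel column
    y: int         # pixel row
    t: float       # timestamp (e.g. microseconds)
    polarity: int  # +1 brightness increase, -1 decrease

def accumulate(events, width, height):
    """Naively collapse an event stream into a 2D frame of signed
    event counts (illustrative only; discards the fine timing that
    event-based processing is meant to exploit)."""
    frame = [[0] * width for _ in range(height)]
    for e in events:
        frame[e.y][e.x] += e.polarity
    return frame

# Toy stream: two ON events at pixel (0, 0), one OFF event at (1, 0).
events = [Event(0, 0, 10.0, +1), Event(1, 0, 12.5, -1), Event(0, 0, 20.0, +1)]
print(accumulate(events, width=2, height=1))  # [[2, -1]]
```

Conventional CPU/GPU pipelines usually perform some such frame-like aggregation, which is precisely what a native event-based processing chain with spiking neural networks would avoid.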