Research Directory

A comprehensive resource directory for event-based vision and neuromorphic computing research, covering everything from cutting-edge algorithms to hardware platforms and datasets.

Community Resources

3.2k+
GitHub Stars
142+
Contributors
1,573+
Commits
50+
Datasets

Essential Reading

Event-based Vision: A Survey

Gallego, G., Delbruck, T., Orchard, G., et al.
IEEE Trans. Pattern Anal. Mach. Intell. (TPAMI), 44(1):154-180, Jan. 2022

The definitive survey covering all aspects of event-based vision, and essential reading for anyone entering the field.

Download PDF →

Upcoming Events & Challenges

IROS 2025 Workshop on Event-Based Vision

Latest advances in event-based vision for robotics applications.

Workshop Details →

Event-based Stereo SLAM Challenge

Deadline: September 30th, 2025 | IROS 2025 Competition

Challenge Details →

ESA ELOPE: Event-based Lunar Optical Flow Challenge

Deadline: August 31st, 2025 | Space applications of event-based vision

Challenge Details →

Hardware & Devices

Commercial Event Cameras

iniVation Cameras (DVS, DAVIS)

Pioneering sensors from the Institute of Neuroinformatics: the DVS128 and DAVIS240/346 series.

Prophesee Metavision Sensors

High-resolution event cameras with advanced software ecosystem. Up to 1280×720 resolution.

Samsung & Sony Development

Next-generation prototypes: Samsung VGA DVS, Sony 35.6-Mpixel hybrid sensors, and OmniVision 3-wafer stacked designs.

CelePixel CeleX Series

High-resolution sensors, including the CeleX-V 1-megapixel event camera.

Company Website

Neuromorphic Processors

Intel Loihi

Neuromorphic research chip for spiking neural networks and event-based processing.

DYNAP (aiCTX, now SynSense)

Dynamic Neuromorphic Asynchronous Processor with 256 neurons and 128K synapses.

IBM TrueNorth & SpiNNaker (University of Manchester)

Large-scale neuromorphic computing platforms for event-driven applications.

Research Applications

SLAM & Visual Odometry

Event-based simultaneous localization and mapping using temporal contrast changes for robust navigation in challenging environments.

Key Methods: Direct methods, feature tracking, visual-inertial fusion, stereo SLAM
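All of the methods above consume the same raw input: an asynchronous stream of per-pixel brightness-change events rather than frames. As a minimal sketch (the synthetic data and function names here are illustrative, not from any particular driver), an event stream can be modeled as timestamped `(t, x, y, polarity)` records, and its simplest reduction is accumulating signed polarities into an "event frame":

```python
import numpy as np

# A hypothetical event stream: each record is (t, x, y, polarity),
# the raw output model of DVS/DAVIS-style sensors.
rng = np.random.default_rng(0)
n = 1000
events = np.zeros(n, dtype=[("t", "f8"), ("x", "i4"), ("y", "i4"), ("p", "i1")])
events["t"] = np.sort(rng.uniform(0.0, 0.1, n))   # timestamps in seconds
events["x"] = rng.integers(0, 240, n)             # DAVIS240-like resolution
events["y"] = rng.integers(0, 180, n)
events["p"] = rng.choice([-1, 1], n)              # ON/OFF polarity

def accumulate(events, width=240, height=180):
    """Sum signed polarities per pixel into a simple 'event frame'."""
    frame = np.zeros((height, width), dtype=np.int32)
    # np.add.at handles repeated (y, x) indices correctly (unbuffered add).
    np.add.at(frame, (events["y"], events["x"]), events["p"])
    return frame

frame = accumulate(events)
print(frame.shape)  # (180, 240)
```

Direct and feature-based SLAM pipelines differ mainly in what they build from this stream: accumulated frames, time surfaces, or per-event updates.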

Optical Flow & Motion Estimation

High-speed motion field estimation using event streams, enabling real-time perception for fast-moving objects and high-speed robotics.

Applications: Drone navigation, collision avoidance, motion segmentation

3D Reconstruction & Depth Estimation

Monocular and stereo depth estimation using event cameras, including structured light and LiDAR fusion approaches.

Techniques: Contrast maximization, deep learning, semi-dense reconstruction

Object Recognition & Tracking

Pattern recognition and object tracking using sparse event representations and spiking neural networks.

Methods: HOTS, HFIRST, graph neural networks, attention mechanisms
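HOTS-style recognition builds on time surfaces: each pixel remembers its most recent event's timestamp, and an exponential decay of that memory yields a feature map that encodes local motion history. A minimal sketch with hypothetical names and toy data:

```python
import numpy as np

def time_surface(events, t_now, tau=0.05, width=32, height=32):
    """HOTS-style time surface: exponential decay of each pixel's
    most recent event timestamp relative to the current time t_now."""
    last_t = np.full((height, width), -np.inf)
    for t, x, y in events:                     # events sorted by time
        last_t[int(y), int(x)] = t
    surface = np.exp((last_t - t_now) / tau)   # in (0, 1]
    surface[np.isinf(last_t)] = 0.0            # pixels with no events
    return surface

events = [(0.00, 5, 5), (0.04, 5, 5), (0.05, 10, 10)]
s = time_surface(events, t_now=0.05)
print(round(float(s[10, 10]), 3), round(float(s[5, 5]), 3))  # 1.0 0.819
```

HOTS stacks such surfaces hierarchically, clustering local patches into increasingly abstract "time-surface prototypes" for classification.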

Specialized Applications

Space Applications

Satellite tracking, space debris monitoring, lunar navigation

Automotive

Driver monitoring, lane detection, collision avoidance

Biomedical

Retinal implants, eye tracking, neural interfaces

Industrial

Quality control, vibration analysis, high-speed monitoring

Key Datasets

Driving & Automotive

DDD20 - End-to-End Event Camera Driving Dataset

Fusing frames and events for improved steering prediction

Dataset Page

DSEC - Stereo Event Camera Dataset

Large-scale stereo dataset for driving scenarios

Dataset Page

Prophesee Automotive Detection Dataset

Large-scale event-based detection dataset for automotive applications

Code & Data

Recognition & Classification

N-MNIST & N-Caltech101

Neuromorphic versions of classic vision datasets

Dataset Collection

N-CARS Dataset

Large real-world event-based dataset for car classification

Dataset Page

DVS128 Gesture Dataset

Gesture recognition dataset used in IBM's neuromorphic system

Dataset Page

Multi-Sensor & Robotics

MVSEC - Multi Vehicle Stereo Event Camera

Event camera dataset for 3D perception

Dataset Page

VECtor - Versatile Event-Centric Benchmark

Multi-sensor SLAM dataset with IMU, LiDAR, and cameras

Dataset Page

M3ED - Multi-Robot Multi-Sensor Multi-Environment

Large-scale multi-modal dataset for robotics

Dataset Page

Software & Tools

Development Frameworks

jAER (Java Address-Event Representation)

Real-time sensory-motor processing for event-based sensors

Project Page

Prophesee OpenEB

Open source library for event-based vision

GitHub Repository

Tonic

Event datasets and transforms library (like Torchvision for events)

GitHub Repository
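Tonic's central idea, turning raw event streams into tensors that PyTorch-style pipelines can consume, can be illustrated with a pure-numpy stand-in for its ToFrame transform, which slices a stream into fixed time windows and rasterizes each slice per polarity. This sketch only mimics the concept; the actual library API is `tonic.transforms.ToFrame`:

```python
import numpy as np

def to_frames(events, time_window, sensor_size):
    """Pure-numpy stand-in for a ToFrame-style transform: bin events
    into time windows, one (polarity, height, width) frame per bin."""
    w, h = sensor_size
    t0, t1 = events["t"].min(), events["t"].max()
    n_bins = int(np.ceil((t1 - t0) / time_window))
    frames = np.zeros((n_bins, 2, h, w), dtype=np.int32)
    # Assign each event to a time bin (last bin absorbs the endpoint).
    bins = np.minimum(((events["t"] - t0) / time_window).astype(int), n_bins - 1)
    pol = (events["p"] > 0).astype(int)   # channel 0 = OFF, 1 = ON
    np.add.at(frames, (bins, pol, events["y"], events["x"]), 1)
    return frames

events = np.zeros(4, dtype=[("t", "f8"), ("x", "i4"), ("y", "i4"), ("p", "i1")])
events["t"] = [0.0, 0.3, 0.6, 0.9]
events["x"] = [1, 2, 3, 1]
events["y"] = [0, 0, 1, 1]
events["p"] = [1, -1, 1, 1]

frames = to_frames(events, time_window=0.5, sensor_size=(4, 2))
print(frames.shape)  # (2, 2, 2, 4)
```

Tonic additionally offers event-count slicing, voxel grids, and dataset wrappers (e.g. N-MNIST) that compose with these transforms.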

ROS Integration

RPG DVS ROS

ROS driver and tools for dynamic vision sensors

GitHub Repository

Prophesee ROS Wrapper

ROS driver and messages for Prophesee event-based sensors

GitHub Repository

Algorithm Libraries

EV-FlowNet

Self-supervised optical flow estimation for event cameras

GitHub Repository

E2VID

Video reconstruction from event cameras

GitHub Repository

Event Feature Tracking

Probabilistic data association for feature tracking

GitHub Repository

Featured Research Papers

Machine Learning & Computer Vision

Picking Up Quantization Steps for Compressed Image Classification

Li Ma, Peixi Peng, Guangyao Chen, Yifan Zhao, Siwei Dong, Yonghong Tian

Novel approach to reduce neural network sensitivity to compressed images by utilizing quantization steps from compressed files. Proposes quantization aware confidence (QAC) and quantization aware batch normalization (QABN) for improved classification performance.

arXiv:2304.10714

PID-NET: A Novel Parallel Image-Dehazing Network

Wei Liu, Yi Zhou, Dehua Zhang, Yi Qin

Advanced image dehazing network combining CNN and Vision Transformer architectures with lightweight attention mechanisms and redundant feature filtering for superior performance in various atmospheric conditions.

Electronics 2025

DQnet: Cross-Model Detail Querying for Camouflaged Object Detection

Raphael Berner et al.

Cross-model detail querying approach for detecting camouflaged objects in complex visual environments, advancing computer vision capabilities for challenging detection scenarios.

arXiv:2212.08296

Applied AI & Agricultural Sciences

Deep Learning Methods for Fruit Fly Regurgitation Studies

Tongzhou Zhou, Wei Zhan, Mengyuan Xiong

Comprehensive application of deep learning and computer vision techniques for studying fruit fly regurgitation behavior, including I3D behavior recognition, U-Net segmentation with CBAM attention, and YOLOv5+DeepSort trajectory tracking.

Front. Plant Sci. 2024

Q-YOLO: Efficient Inference for Real-time Object Detection

Qing Niao et al.

Optimized YOLO architecture for efficient real-time object detection with improved inference speed and accuracy, focusing on practical deployment scenarios.

arXiv:2307.04816

Cross-Disciplinary AI Applications

Emerging Topics in Computer Vision and AI

Recent advances by our research team span multiple domains including neuromorphic vision systems, event-based processing, biological behavior analysis, and agricultural AI applications.

Interdisciplinary Research Impact

Our collaborations bridge computer science with agricultural sciences, biological research, and industrial applications, demonstrating the versatility of modern AI techniques across diverse scientific domains.

Leading Research Groups

Europe

Institute of Neuroinformatics (INI)

University of Zurich & ETH Zurich | Pioneer in neuromorphic vision sensors

Robotics and Perception Group (RPG)

University of Zurich | Event-based vision for high-speed robotics

Robotic Interactive Perception

TU Berlin | Event-based robot vision algorithms

Event-Driven Perception for Robotics (EDPR)

Istituto Italiano di Tecnologia (IIT) | Bio-inspired robotics

North America

GRASP Lab

University of Pennsylvania | Kostas Daniilidis's group

Perception and Robotics Group

University of Maryland | Cornelia Fermüller's lab on event-based vision

Intel Labs

Neuromorphic Computing | Mike Davies - Loihi development

Nano(neuro)electronics Research Lab

Purdue University | Neuromorphic hardware research

Asia-Pacific

International Centre for Neuromorphic Systems

Western Sydney University | Australia

Camera Intelligence Lab

Peking University | China

Mobile Perception Lab

ShanghaiTech University | China

Visual Intelligence Lab

KAIST | South Korea

Educational Resources

Courses & Tutorials

Event-based Robot Vision (TU Berlin)

Comprehensive course with YouTube videos and slides

YouTube Channel

Event Camera Tutorial (Tobi Delbruck)

Introductory video series about DVS technology

YouTube Video

Telluride Neuromorphic Workshops

Annual workshops on neuromorphic engineering

Workshop Website

Recent PhD Theses

Event-based Algorithms for Geometric Computer Vision

Alex Zhu, University of Pennsylvania, 2019

Event Cameras: from SLAM to High Speed Video

Henri Rebecq, University of Zurich, 2019

How to See with an Event Camera

Cedric Scheerlinck, Australian National University, 2021

Get Involved

This directory is based on the community-driven Event-based Vision Resources repository with 3.2k+ GitHub stars and 142+ contributors.
