Research Directory
A comprehensive resource directory for event-based vision and neuromorphic computing research, covering everything from cutting-edge algorithms to hardware platforms and datasets.
Community Resources
Essential Reading
Event-based Vision: A Survey
Gallego, G., Delbruck, T., Orchard, G., et al.
IEEE Trans. Pattern Anal. Machine Intell. (TPAMI), 44(1):154-180, Jan. 2022
The definitive survey covering all aspects of event-based vision; essential reading for anyone entering the field.
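The survey above builds on the basic event camera data model: instead of frames, each pixel independently emits an asynchronous event e = (x, y, t, p) whenever its log-brightness changes by a threshold. A minimal sketch of that model, using synthetic events (not real sensor output):

```python
import numpy as np

# Illustrative synthetic events e = (x, y, t, p): pixel coordinates,
# timestamp in seconds, and polarity (+1 = brighter, -1 = darker).
events = np.array(
    [(12, 34, 0.001, 1), (12, 34, 0.002, -1), (13, 34, 0.004, 1)],
    dtype=[("x", "u2"), ("y", "u2"), ("t", "f8"), ("p", "i1")],
)

def accumulate(events, width, height):
    """Sum polarities per pixel to build a simple event frame."""
    frame = np.zeros((height, width), dtype=np.int32)
    np.add.at(frame, (events["y"], events["x"]), events["p"])
    return frame

frame = accumulate(events, 64, 64)
print(frame[34, 12])  # one ON and one OFF event cancel → 0
print(frame[34, 13])  # a single ON event → 1
```

Accumulating polarities into a frame like this is only one of many event representations discussed in the survey; it discards the fine temporal information that makes event cameras attractive.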
Download PDF →
Upcoming Events & Challenges
IROS 2025 Workshop on Event-Based Vision
Latest advances in event-based vision for robotics applications.
Workshop Details →
Event-based Stereo SLAM Challenge
Deadline: September 30th, 2025 | IROS 2025 Competition
Challenge Details →
ESA ELOPE: Event-based Lunar Optical Flow Challenge
Deadline: August 31st, 2025 | Space applications of event-based vision
Challenge Details →
Hardware & Devices
Commercial Event Cameras
iniVation Cameras (DVS, DAVIS)
Pioneering sensors from the Institute of Neuroinformatics: the DVS128 and DAVIS240/346 series.
Prophesee Metavision Sensors
High-resolution event cameras with advanced software ecosystem. Up to 1280×720 resolution.
Samsung & Sony Development
Next-generation prototypes: Samsung VGA DVS, Sony 35.6Mpixel hybrid sensors, OmniVision 3-wafer stacked designs.
CelePixel CeleX Series
High-resolution sensors including CeleX-V 1 Megapixel event camera.
Company Website
Neuromorphic Processors
Intel Loihi
Neuromorphic research chip for spiking neural networks and event-based processing.
DYNAP (aiCTX AG)
Dynamic Neuromorphic Asynchronous Processor with 256 neurons and 128K synapses.
IBM TrueNorth & SpiNNaker (University of Manchester)
Large-scale neuromorphic computing platforms for event-driven applications.
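The common computational unit on these chips is the spiking neuron. A minimal leaky integrate-and-fire (LIF) sketch, with purely illustrative parameters, shows the event-driven behavior they implement in silicon:

```python
# Minimal leaky integrate-and-fire (LIF) neuron: membrane potential
# leaks each step, integrates input, and emits a spike on threshold.
# Parameter values are illustrative, not tied to any specific chip.
def lif(input_current, leak=0.9, threshold=1.0):
    v, spikes = 0.0, []
    for i in input_current:
        v = leak * v + i          # leak, then integrate input
        if v >= threshold:        # fire and reset on threshold crossing
            spikes.append(1)
            v = 0.0
        else:
            spikes.append(0)
    return spikes

print(lif([0.5, 0.5, 0.5, 0.0, 0.9]))  # → [0, 0, 1, 0, 0]
```

Because the neuron only does work when input arrives, large networks of such units map naturally onto asynchronous, event-driven hardware.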
Research Applications
SLAM & Visual Odometry
Event-based simultaneous localization and mapping using temporal contrast changes for robust navigation in challenging environments.
Optical Flow & Motion Estimation
High-speed motion field estimation using event streams, enabling real-time perception for fast-moving objects and high-speed robotics.
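A common intermediate for event-based motion estimation is the "time surface": per pixel, the timestamp of the most recent event. Gradients of this surface encode motion direction and speed. A toy sketch with synthetic events sweeping along one row (illustrative only):

```python
import numpy as np

# Synthetic (x, y, t) events from an edge moving left-to-right along y=2.
events = [(0, 2, 0.00), (1, 2, 0.01), (2, 2, 0.02), (3, 2, 0.03)]

W, H = 5, 5
ts = np.full((H, W), -np.inf)   # time surface: latest event time per pixel
for x, y, t in events:
    ts[y, x] = t

# Along the traversed row, timestamps increase with x; the finite
# difference dt/dx is positive and its inverse gives speed in +x.
dtdx = ts[2, 1:4] - ts[2, 0:3]
print(dtdx)
```

Here dt/dx ≈ 0.01 s/pixel, i.e. the edge moves at roughly 100 pixels per second toward +x. Real methods (e.g. plane fitting or contrast maximization, both covered in the literature above) generalize this idea to full 2D flow fields.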
3D Reconstruction & Depth Estimation
Monocular and stereo depth estimation using event cameras, including structured light and LiDAR fusion approaches.
Object Recognition & Tracking
Pattern recognition and object tracking using sparse event representations and spiking neural networks.
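For recognition networks, the sparse event stream is often discretized into a voxel grid: events are binned over time into a dense tensor that a CNN or SNN can consume. A minimal sketch (function name and shapes are illustrative, not a specific library's API):

```python
import numpy as np

def voxel_grid(xs, ys, ts, ps, bins, height, width):
    """Bin polarity-signed events into a (bins, H, W) tensor by timestamp."""
    grid = np.zeros((bins, height, width), dtype=np.float32)
    t0, t1 = ts.min(), ts.max()
    # Map each timestamp to a temporal bin index in [0, bins-1].
    b = ((ts - t0) / max(t1 - t0, 1e-9) * (bins - 1)).astype(int)
    np.add.at(grid, (b, ys, xs), ps)
    return grid

xs = np.array([1, 2, 3]); ys = np.array([1, 1, 1])
ts = np.array([0.0, 0.5, 1.0]); ps = np.array([1, 1, -1])
g = voxel_grid(xs, ys, ts, ps, bins=3, height=4, width=4)
print(g.shape)  # → (3, 4, 4)
```

Variants interpolate each event across neighboring bins instead of hard assignment; the hard version above is the simplest correct form.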
Specialized Applications
Space Applications
Satellite tracking, space debris monitoring, lunar navigation
Automotive
Driver monitoring, lane detection, collision avoidance
Biomedical
Retinal implants, eye tracking, neural interfaces
Industrial
Quality control, vibration analysis, high-speed monitoring
Key Datasets
Driving & Automotive
DDD20 - End-to-End Event Camera Driving Dataset
Fusing frames and events for improved steering prediction
Dataset Page
Prophesee Automotive Detection Dataset
Large scale event-based detection dataset for automotive applications
Code & Data
Recognition & Classification
Multi-Sensor & Robotics
VECtor - Versatile Event-Centric Benchmark
Multi-sensor SLAM dataset with IMU, LiDAR, and cameras
Dataset Page
M3ED - Multi-Robot Multi-Sensor Multi-Environment
Large-scale multi-modal dataset for robotics
Dataset Page
Software & Tools
Development Frameworks
jAER (Java Address-Event Representation)
Real-time sensory-motor processing for event-based sensors
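Tools like jAER process events in address-event representation (AER), where each event is a timestamped hardware address whose bits encode pixel position and polarity. A decoding sketch, assuming a hypothetical DVS128-style 15-bit layout (bit 0: polarity, bits 1-7: x, bits 8-14: y); actual bit layouts vary by device and file format:

```python
# Decode a hypothetical DVS128-style AER address.
# Assumed layout (illustrative): bit 0 = polarity, bits 1-7 = x, bits 8-14 = y.
def decode_aer(address: int):
    polarity = address & 0x1
    x = (address >> 1) & 0x7F
    y = (address >> 8) & 0x7F
    return x, y, polarity

addr = (34 << 8) | (12 << 1) | 1   # encode (x=12, y=34, ON) under the same layout
print(decode_aer(addr))  # → (12, 34, 1)
```

In practice you would use jAER's own loaders (or the published AEDAT format specifications) rather than hand-rolled bit masks, since layouts differ across sensor generations.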
Project Page
ROS Integration
Algorithm Libraries
Featured Research Papers
Machine Learning & Computer Vision
Picking Up Quantization Steps for Compressed Image Classification
Li Ma, Peixi Peng, Guangyao Chen, Yifan Zhao, Siwei Dong, Yonghong Tian
Novel approach to reduce neural network sensitivity to compressed images by utilizing quantization steps from compressed files. Proposes quantization aware confidence (QAC) and quantization aware batch normalization (QABN) for improved classification performance.
arXiv:2304.10714
PID-NET: A Novel Parallel Image-Dehazing Network
Wei Liu, Yi Zhou, Dehua Zhang, Yi Qin
Advanced image dehazing network combining CNN and Vision Transformer architectures with lightweight attention mechanisms and redundant feature filtering for superior performance in various atmospheric conditions.
Electronics 2025
DQnet: Cross-Model Detail Querying for Camouflaged Object Detection
Raphael Berner et al.
Cross-model detail querying approach for detecting camouflaged objects in complex visual environments, advancing computer vision capabilities for challenging detection scenarios.
arXiv:2212.08296
Applied AI & Agricultural Sciences
Deep Learning Methods for Fruit Fly Regurgitation Studies
Tongzhou Zhou, Wei Zhan, Mengyuan Xiong
Comprehensive application of deep learning and computer vision techniques for studying fruit fly regurgitation behavior, including I3D behavior recognition, U-Net segmentation with CBAM attention, and YOLOv5+DeepSort trajectory tracking.
Front. Plant Sci. 2024
Q-YOLO: Efficient Inference for Real-time Object Detection
Qing Niao et al.
Optimized YOLO architecture for efficient real-time object detection with improved inference speed and accuracy, focusing on practical deployment scenarios.
arXiv:2307.04816
Cross-Disciplinary AI Applications
Emerging Topics in Computer Vision and AI
Recent advances by our research team span multiple domains including neuromorphic vision systems, event-based processing, biological behavior analysis, and agricultural AI applications.
Interdisciplinary Research Impact
Our collaborations bridge computer science with agricultural sciences, biological research, and industrial applications, demonstrating the versatility of modern AI techniques across diverse scientific domains.
Leading Research Groups
Europe
Institute of Neuroinformatics (INI)
University of Zurich & ETH Zurich | Pioneer in neuromorphic vision sensors
Robotics and Perception Group (RPG)
University of Zurich | Event-based vision for high-speed robotics
Robotic Interactive Perception
TU Berlin | Event-based robot vision algorithms
Event-Driven Perception for Robotics (EDPR)
Istituto Italiano di Tecnologia (IIT) | Bio-inspired robotics
North America
GRASP Lab
University of Pennsylvania | Kostas Daniilidis's group
Perception and Robotics Group
University of Maryland | Fermüller's Lab on event-based vision
Intel Labs
Neuromorphic Computing | Mike Davies - Loihi development
Nano(neuro)electronics Research Lab
Purdue University | Neuromorphic hardware research
Asia-Pacific
International Centre for Neuromorphic Systems
Western Sydney University | Australia
Camera Intelligence Lab
Peking University | China
Mobile Perception Lab
ShanghaiTech University | China
Visual Intelligence Lab
KAIST | South Korea
Educational Resources
Courses & Tutorials
Event-based Robot Vision (TU Berlin)
Comprehensive course with YouTube videos and slides
YouTube Channel
Recent PhD Theses
Event-based Algorithms for Geometric Computer Vision
Alex Zhu, University of Pennsylvania, 2019
Event Cameras: from SLAM to High Speed Video
Henri Rebecq, University of Zurich, 2019
How to See with an Event Camera
Cedric Scheerlinck, Australian National University, 2021
Get Involved
This directory is based on the community-driven Event-based Vision Resources repository with 3.2k+ GitHub stars and 142+ contributors.