FURI | Fall 2025

Event-based Video Reconstruction & Restoration with Diffusion Models


Traditional frame-based cameras often struggle to capture movement in dynamic or low-light environments, resulting in motion blur or darkened frames. Event-based cameras instead asynchronously detect pixel-level brightness changes, enabling high temporal resolution and high dynamic range. However, they generate sparse, unconventional data streams that do not readily translate into clear, coherent videos. This project investigates how diffusion models, a class of generative models known for synthesizing realistic images, can reconstruct photorealistic videos from sparse event streams. If successful, this approach would enable robust, high-fidelity video reconstruction for applications such as autonomous navigation, surveillance, and high-speed imaging.
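As context for how such a sparse event stream is typically made digestible by an image-synthesis model, a common preprocessing step is to accumulate events into a dense spatio-temporal voxel grid. The abstract does not specify the project's input representation, so the sketch below is a hypothetical illustration of that general technique; the function name, event layout, and bin count are all assumptions.

```python
import numpy as np

def events_to_voxel_grid(events, num_bins, height, width):
    """Accumulate a sparse event stream into a dense voxel grid.

    Hypothetical sketch: `events` is assumed to be an (N, 4) array of
    (timestamp, x, y, polarity) rows, with polarity in {-1, +1}.
    """
    grid = np.zeros((num_bins, height, width), dtype=np.float32)
    t = events[:, 0]
    x = events[:, 1].astype(int)
    y = events[:, 2].astype(int)
    p = events[:, 3]
    # Normalize timestamps to [0, num_bins) so each event falls in one
    # temporal slice of the grid.
    span = max(t.max() - t.min(), 1e-9)
    bins = ((t - t.min()) / span * (num_bins - 1e-6)).astype(int)
    # Signed accumulation: brightness increases add, decreases subtract.
    # np.add.at handles repeated (bin, y, x) indices correctly.
    np.add.at(grid, (bins, y, x), p)
    return grid

# Example: two events on a tiny 4x4 sensor, split across 2 temporal bins.
events = np.array([[0.0, 1, 2, 1.0],
                   [1.0, 3, 0, -1.0]])
grid = events_to_voxel_grid(events, num_bins=2, height=4, width=4)
```

A dense tensor like this can then be fed to a network in the same way an ordinary image would be, which is what makes diffusion-style architectures applicable to event data at all.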

Student researcher

Ishita Ranjan

Computer science

Hometown: Chandler, Arizona, United States

Graduation date: Spring 2026