# Neuromorphic Data Classification Tasks
This guide provides an overview of the neuromorphic event-driven vision experiments conducted with spikeDE. These experiments demonstrate the superiority of our fractional-order Spiking Neural Networks (f-SNNs) over traditional integer-order SNNs in processing spatiotemporal correlations inherent in neuromorphic data.
All source code for these experiments is open-source and reproducible. You can access the specific implementation scripts and configurations in our GitHub repository.
## Overview
Neuromorphic data, captured by Dynamic Vision Sensors (DVS), consists of asynchronous event streams with microsecond-level temporal resolution. Traditional SNNs often struggle to capture long-range temporal dependencies because their dynamics are Markovian (memoryless): each state update depends only on the previous time step.
Our f-SNN framework replaces standard integer-order neuron dynamics with fractional-order differential equations (f-ODEs). This introduces a power-law memory kernel, allowing the network to retain information from distant past events, resulting in:
- Higher Classification Accuracy: Consistently outperforming LIF-based baselines (SpikingJelly, snnTorch).
- Enhanced Robustness: Superior stability against noise, occlusion, and temporal jitter.
- Energy Efficiency: Comparable energy consumption despite the added computational complexity of fractional dynamics, thanks to lower average firing rates.
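To make the power-law memory kernel concrete, here is a minimal NumPy sketch of a sub-threshold fractional-order membrane, discretized with the Grünwald–Letnikov scheme. This is an illustrative toy, not the spikeDE implementation: the function names, the time constant `tau`, and the unit step size are assumptions made for the example. Each update sums over the *entire* history with slowly decaying weights, and setting `alpha = 1` collapses the recurrence to the ordinary one-step (memoryless) LIF update.

```python
import numpy as np

def gl_coeffs(alpha: float, n: int) -> np.ndarray:
    """Grünwald–Letnikov coefficients c_j = (-1)^j * binom(alpha, j),
    computed via the standard recurrence. For alpha = 1 they reduce to
    [1, -1, 0, 0, ...], i.e. a plain first-order difference."""
    c = np.empty(n)
    c[0] = 1.0
    for j in range(1, n):
        c[j] = c[j - 1] * (1.0 - (alpha + 1.0) / j)
    return c

def flif_membrane(inputs: np.ndarray, alpha: float, tau: float = 2.0) -> np.ndarray:
    """Sub-threshold membrane trace of a fractional LIF neuron,
    D^alpha u = -u/tau + I, discretized with unit step via GL.
    Every update sums over the whole history with power-law weights."""
    T = len(inputs)
    c = gl_coeffs(alpha, T + 1)
    u = np.zeros(T)
    for k in range(T):
        history = sum(c[j] * u[k - j] for j in range(1, k + 1))
        u[k] = (inputs[k] - history) / (1.0 + 1.0 / tau)
    return u

# A single input pulse at t = 0: the alpha < 1 trace keeps a power-law
# tail, i.e. the neuron "remembers" the pulse far longer.
pulse = np.zeros(20)
pulse[0] = 1.0
u_frac = flif_membrane(pulse, alpha=0.5)
u_int = flif_membrane(pulse, alpha=1.0)  # recovers the standard LIF
```

Running the two traces side by side shows why this matters for long event streams: the integer-order trace decays geometrically, while the fractional trace retains a measurable response many steps after the input.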
## Datasets
We evaluated our framework on five major neuromorphic benchmarks:
| Dataset | Description | Task |
|---|---|---|
| N-MNIST | Neuromorphic version of MNIST generated by moving digits in front of a DVS. | Digit Classification (10 classes) |
| DVS128 Gesture | 11 dynamic hand gestures performed by 29 subjects under varying lighting. | Gesture Recognition |
| N-Caltech101 | Event streams generated from static Caltech101 images via camera motion. | Object Classification (101 classes) |
| DVS-Lip | High-temporal resolution recordings of lip movements for visual speech recognition. | Lip Reading |
| HarDVS | A large-scale dataset containing over 100k samples of human activities. | Human Action Recognition (300 classes) |
## Experimental Setup
### Architecture & Baselines
We implemented two backbone architectures to ensure fair comparison:
- CNN-based: Following the `DVSNet` architecture.
- Transformer-based: Using the `Spikformer` backbone.
Baselines: We compared our f-LIF neurons against standard LIF neurons implemented in popular frameworks including SpikingJelly and snnTorch.
### Key Hyperparameters
- Optimizer: Adam
- Time Steps (\(T\)):
- 16 steps for N-MNIST, DVS128 Gesture, N-Caltech101, and DVS-Lip.
- 8 steps for HarDVS (due to large sequence length).
- Fractional Order (\(\alpha\)): Tuned per dataset (e.g., \(\alpha=0.5\) for DVS128 Gesture, \(\alpha=0.8\) for HarDVS). Setting \(\alpha=1\) recovers the standard integer-order model.
- Preprocessing: Event data was converted into frame representations using the standard SpikingJelly pipeline. Input resolution was uniformly adjusted to \(128 \times 128\).
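As a rough illustration of the frame representation (the actual preprocessing uses the SpikingJelly pipeline), the sketch below bins a raw event stream into \(T\) frames by event count, with one channel per polarity. All names here are hypothetical, and timestamps are ignored for brevity; it only mimics the split-by-event-count convention, not the library's full pipeline.

```python
import numpy as np

def events_to_frames(x, y, p, T, H, W):
    """Accumulate an event stream into T frames of shape (2, H, W):
    one channel per polarity, sliced into T bins of (roughly) equal
    event count. Timestamps are omitted in this simplified sketch."""
    N = len(x)
    frames = np.zeros((T, 2, H, W), dtype=np.float32)
    bounds = np.linspace(0, N, T + 1, dtype=int)  # slice boundaries
    for i in range(T):
        s, e = bounds[i], bounds[i + 1]
        # Count each event into its (polarity, row, col) cell.
        np.add.at(frames[i], (p[s:e], y[s:e], x[s:e]), 1.0)
    return frames

# Toy stream: 1000 random events on a 128x128 sensor, binned into 16 frames.
rng = np.random.default_rng(0)
n = 1000
frames = events_to_frames(
    x=rng.integers(0, 128, n), y=rng.integers(0, 128, n),
    p=rng.integers(0, 2, n), T=16, H=128, W=128,
)
```

Each event lands in exactly one bin, so the frame tensor preserves the total event count while giving the network a dense \(T \times 2 \times H \times W\) input it can process step by step.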
## Key Results
Our f-SNN models matched or exceeded state-of-the-art results across all datasets. For instance, on the DVS128 Gesture dataset with a Transformer backbone, f-SNN achieved 95.83% accuracy, significantly outperforming the SpikingJelly baseline (93.40%).
| Dataset | Architecture | LIF (SpikingJelly) | LIF (snnTorch) | f-LIF (spikeDE) |
|---|---|---|---|---|
| N-MNIST | CNN | 99.27% | 99.08% | 99.48% |
| DVS128 Gesture | Transformer | 93.40% | 88.99% | 95.83% |
| N-Caltech101 | Transformer | 72.63% | 65.67% | 76.27% |
| HarDVS | CNN | 46.10% | 46.26% | 47.66% |
## Robustness Experiments
The f-SNN framework consistently maintained higher accuracy under high-intensity noise and occlusion compared to integer-order baselines, validating the theoretical stability of fractional-order systems. You can find the detailed experiments in our ICLR 2026 Paper.
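The exact perturbation protocol is specified in the paper; purely as an illustrative sketch (not the authors' implementation), two common perturbations of this kind on frame tensors might look like the following, where the function names, noise model, and patch geometry are all assumptions of the example:

```python
import numpy as np

rng = np.random.default_rng(0)

def add_event_noise(frames, rate):
    """Inject spurious events: each (frame, polarity, pixel) bin gains
    one extra count with probability `rate` (salt-style sensor noise)."""
    return frames + (rng.random(frames.shape) < rate).astype(frames.dtype)

def occlude(frames, top, left, size):
    """Zero a square patch in every frame, simulating a static occluder
    between the scene and the sensor."""
    out = frames.copy()
    out[..., top:top + size, left:left + size] = 0.0
    return out

# Apply both perturbations to a toy (T, 2, H, W) frame tensor.
clean = np.ones((16, 2, 64, 64), dtype=np.float32)
noisy = add_event_noise(clean, rate=0.1)
blocked = occlude(clean, top=16, left=16, size=32)
```

Sweeping `rate` or the occluder `size` and re-evaluating test accuracy is the usual way to trace out robustness curves like those reported above.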
## Reproducing the Results
The experiments are organized in the `examples/ICLR_Release/Neuromorpic` directory of our repository.
### Directory Structure
```
examples/ICLR_Release/Neuromorpic/
├── file/
│   ├── scripts/   # Launch scripts (e.g., run_n101_transformer.sh)
│   └── logs/      # Training logs
├── train_n101_transformer.py
├── train_hardvs_cnn.py
└── ...
```
### Running the Experiments
You can run specific tasks using the provided shell scripts:
```bash
# Run N-Caltech101 with Transformer
bash file/scripts/run_n101_transformer.sh

# Run HarDVS with CNN
bash file/scripts/run_hardvs_cnn.sh

# Or run all neuromorphic tasks
bash file/scripts/run_all.sh
```
For detailed dataset download links and MD5 checksums, please refer to the `README.md` in the `examples/ICLR_Release/Neuromorpic` folder.
> **Tip:** For more theoretical details on fractional calculus in SNNs, please refer to other tutorials in this documentation or our ICLR 2026 Paper.