spikeDE.neuron
This module provides a flexible and extensible framework for building Spiking Neural Networks (SNNs) that seamlessly bridge standard integer-order dynamics with advanced fractional-order calculus. Unlike traditional frameworks that rely on discrete step-by-step updates, spikeDE reimagines neurons as continuous dynamical systems. This architectural shift allows users to upgrade standard models into Fractional-Order Spiking Neurons, endowing them with infinite memory and complex temporal dependencies without altering core logic.
At the heart of this module is the separation of concerns: neuron classes define instantaneous dynamics (computing derivatives), while external solvers (via SNNWrapper) handle state evolution and fractional integration. This design supports a wide range of models, from classic Integrate-and-Fire variants to sophisticated noisy-threshold and hard-reset mechanisms, all compatible with surrogate gradient learning.
Key Features
- Modular Architecture: Stateless neuron modules compute derivatives (\(dv/dt\)) independently of state history, allowing them to work interchangeably with standard (odeint) and fractional (fdeint) solvers.
- Learnable Parameters: Supports learnable membrane time constants (\(\tau\)) via exponential reparameterization and customizable surrogate gradient functions (e.g., arctan, sigmoid) for effective backpropagation through non-differentiable spikes.
- Extensibility: Provides a clear BaseNeuron interface for defining custom dynamics, ensuring that user-defined neurons automatically inherit fractional capabilities when wrapped in the appropriate solver.
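To make the derivative-computing contract concrete, here is a minimal plain-Python sketch (floats stand in for tensors, and the hard threshold is a simplification of the library's smooth surrogate): the neuron only returns `(dv_dt, spike)`, and an external solver owns the state update.

```python
def lif_forward(v_mem, current_input, tau=0.5, threshold=1.0):
    """Stateless derivative computation: the neuron returns (dv_dt, spike)
    and never updates v_mem itself. A sketch, not the library's torch code."""
    dv_dt = (-v_mem + current_input) / tau
    # Hard step for brevity; the library uses a smooth surrogate in [0, 1].
    spike = 1.0 if v_mem >= threshold else 0.0
    return dv_dt, spike

# The solver owns state evolution -- here a single explicit Euler step,
# standing in for odeint (a fractional solver such as fdeint would instead
# weight the whole derivative history):
v, dt = 0.0, 0.1
dv_dt, spike = lif_forward(v, current_input=1.5)
v = v + dt * dv_dt  # the solver, not the neuron, updates v_mem
```

This separation is what lets the same neuron class run unchanged under integer-order or fractional-order integration.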
BaseNeuron
BaseNeuron(
tau: float = 0.5,
threshold: float = 1.0,
surrogate_grad_scale: float = 5.0,
surrogate_opt: str = "arctan_surrogate",
tau_learnable: bool = False,
)
Bases: Module
Base class for spiking neuron models with configurable membrane time constant and surrogate gradients.
This abstract class provides the foundational structure for spiking neurons. It supports learnable or fixed membrane time constant (\(\tau\)) and customizable surrogate gradient functions for backpropagation through non-differentiable spikes.
The effective membrane time constant is computed as:

\[
\tau = \tau_0 \, e^{\theta}
\]

where \(\tau_0\) is the initial value and \(\theta\) is a learnable parameter.
Subclasses must implement the forward method to define specific dynamics.
Attributes:

- initial_tau (float) – Initial value of the membrane time constant \(\tau_0\).
- tau_param (Parameter | None) – Learnable parameter \(\theta\) if tau_learnable=True; otherwise None.
- tau (float) – Fixed \(\tau\) used when tau_learnable=False.
- threshold (float) – Firing threshold \(V_{\text{th}}\).
- surrogate_grad_scale (float) – Scaling factor for surrogate gradient steepness.
- surrogate_f (Callable) – Surrogate gradient function (e.g., arctan-based).
- tau_learnable (bool) – Whether \(\tau\) is trainable.
Parameters:

- tau (float, default: 0.5) – The base membrane time constant \(\tau\). Used directly if tau_learnable=False, or as a scaling factor if tau_learnable=True.
- threshold (float, default: 1.0) – The membrane potential threshold at which the neuron fires a spike.
- surrogate_grad_scale (float, default: 5.0) – Scaling factor applied inside the surrogate gradient function to control gradient magnitude during backpropagation.
- surrogate_opt (str, default: 'arctan_surrogate') – Name of the surrogate gradient function to use. Must be a key in the global surrogate_f dictionary (e.g., "arctan_surrogate").
- tau_learnable (bool, default: False) – If True, \(\tau\) becomes a learnable parameter. If False, \(\tau\) remains fixed.
Source code in spikeDE/neuron.py
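As an illustration of how surrogate_opt and surrogate_grad_scale interact, here is an arctan-style surrogate and its derivative. The exact functional form registered in the library's surrogate_f dictionary is an assumption here; this sketch only shows the general shape.

```python
import math

def arctan_surrogate(x, scale=5.0):
    # Forward: smooth step in [0, 1], centered at x = 0 (i.e. v_mem = threshold).
    # Hypothetical stand-in for the library's "arctan_surrogate" entry.
    return 0.5 + math.atan(scale * x) / math.pi

def arctan_surrogate_grad(x, scale=5.0):
    # Backward: derivative of the above. A larger `scale`
    # (surrogate_grad_scale) sharpens the peak around the threshold.
    return scale / (math.pi * (1.0 + (scale * x) ** 2))
```

Because the forward pass is smooth, gradients flow through the spike during backpropagation even though a true Heaviside step has zero derivative almost everywhere.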
forward
Computes one step of the neuron dynamics (derivative and spike).
Must be overridden by subclasses to implement specific spiking dynamics.
Parameters:

- v_mem (Tensor) – Membrane potential tensor of shape (batch_size, ...).
- current_input (Tensor) – Input current tensor, same shape as v_mem.

Returns:

- tuple[Tensor, Tensor] – A tuple (dv_dt, spike), where dv_dt is the effective derivative of the membrane potential and spike is a continuous spike approximation in [0, 1].
Raises:

- NotImplementedError – Always raised here; subclasses must implement forward.
Source code in spikeDE/neuron.py
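The override pattern can be sketched without the torch dependency. BaseNeuron below is a hypothetical plain-Python stand-in for the real Module subclass, and the hard-step spike replaces the surrogate for brevity:

```python
class BaseNeuron:
    # Plain-Python stand-in for the library's torch-based BaseNeuron.
    def __init__(self, tau=0.5, threshold=1.0):
        self.tau = tau
        self.threshold = threshold

    def forward(self, v_mem, current_input):
        # The base class defines the contract but no dynamics.
        raise NotImplementedError("Subclasses define the dynamics.")

class PureIntegrator(BaseNeuron):
    # A custom neuron: integrates input with no leak (cf. IFNeuron).
    def forward(self, v_mem, current_input):
        dv_dt = current_input  # no decay term on v_mem
        # Hard step for brevity; the library uses a smooth surrogate.
        spike = 1.0 if v_mem >= self.threshold else 0.0
        return dv_dt, spike
```

A subclass written this way would inherit fractional behavior for free when wrapped in the appropriate solver, since the solver, not the neuron, decides how the derivative is accumulated over time.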
get_tau
get_tau() -> float
Returns the effective membrane time constant \(\tau\).
Ensures positivity via exponential reparameterization when learnable.
Returns:

- float – Scalar value representing \(\tau\).
Source code in spikeDE/neuron.py
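A sketch of the reparameterization, assuming the formula \(\tau = \tau_0 e^{\theta}\) implied by the docstring (the exact expression in spikeDE/neuron.py may differ):

```python
import math

def get_tau(initial_tau=0.5, theta=None, tau_learnable=False):
    """Effective time constant: tau_0 * exp(theta) when learnable,
    the fixed tau otherwise. The exponential keeps tau positive for
    any real theta, so gradient updates can never drive it below zero."""
    if tau_learnable and theta is not None:
        return initial_tau * math.exp(theta)
    return initial_tau
```

With theta initialized to 0, the learnable and fixed cases start from the same effective \(\tau = \tau_0\).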
IFNeuron
IFNeuron(
tau: float = 0.5,
threshold: float = 1.0,
surrogate_grad_scale: float = 5.0,
surrogate_opt: str = "arctan_surrogate",
tau_learnable: bool = False,
)
Bases: BaseNeuron
Integrate-and-Fire (IF) spiking neuron model with surrogate gradients.
This model integrates input without leakage. The dynamics follow:

\[
\frac{dv}{dt} = I(t), \qquad s = \sigma\!\left(v - V_{\text{th}}\right)
\]

where \(\sigma\) is a differentiable surrogate.
Note
Despite inheriting tau, this model behaves as a pure integrator
when leakage is disabled (i.e., no decay term on v_mem).
Source code in spikeDE/neuron.py
forward
Forward pass for IF neuron dynamics (discrete-time, dt=1.0).
Parameters:

- v_mem (Tensor) – Current membrane potential.
- current_input (Tensor | None, default: None) – Input current (same shape as v_mem).

Returns:

- tuple[Tensor, Tensor] – A tuple (dv_dt, spike), as in BaseNeuron.forward.
Source code in spikeDE/neuron.py
LIFNeuron
LIFNeuron(
tau: float = 0.5,
threshold: float = 1.0,
surrogate_grad_scale: float = 5.0,
surrogate_opt: str = "arctan_surrogate",
tau_learnable: bool = False,
)
Bases: BaseNeuron
Leaky Integrate-and-Fire (LIF) spiking neuron model with surrogate gradients.
Implements classic leaky dynamics governed by:

\[
\tau \frac{dv}{dt} = -v + I(t), \qquad s = \sigma\!\left(v - V_{\text{th}}\right)
\]

where \(\sigma\) is a differentiable surrogate.
Source code in spikeDE/neuron.py
forward
Forward pass for LIF neuron dynamics (discrete-time, dt=1.0).
Parameters:

- v_mem (Tensor) – Current membrane potential.
- current_input (Tensor | None, default: None) – Input current (same shape as v_mem).

Returns:

- tuple[Tensor, Tensor] – A tuple (dv_dt, spike), as in BaseNeuron.forward.
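The qualitative difference between the two models can be checked numerically with a plain-Python Euler loop standing in for the external solver: under constant input, the IF potential grows without bound, while the LIF potential relaxes to a steady state.

```python
def if_dvdt(v, i):
    # IF: pure integrator, no decay term on v
    return i

def lif_dvdt(v, i, tau=0.5):
    # LIF: tau * dv/dt = -v + I(t)
    return (-v + i) / tau

def euler(deriv, v0=0.0, i=0.8, dt=0.01, steps=2000):
    # Explicit Euler integration standing in for the external solver;
    # thresholding/reset is omitted to isolate the subthreshold dynamics.
    v = v0
    for _ in range(steps):
        v = v + dt * deriv(v, i)
    return v

v_if = euler(if_dvdt)     # grows linearly under constant input
v_lif = euler(lif_dvdt)   # relaxes toward the steady state v* = I
```

This is the practical meaning of the note above: without a decay term on v_mem, tau has no effect on the IF trajectory, whereas for LIF it sets how quickly the potential approaches its equilibrium.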