
Liquid State Machine

Reservoir computing with spiking neurons. The reservoir provides a high-dimensional, temporally-rich representation; only the readout layer is trained.

LSM (Complete System)

LSM

Bases: Module

Liquid State Machine with reservoir and trainable readout.

The reservoir provides a high-dimensional, temporally-rich representation of the input, and the readout maps reservoir states to task outputs.

Args:
    input_size: Input feature dimension.
    reservoir_size: Number of neurons in the reservoir.
    output_size: Number of output classes/features.
    readout: Readout type ('linear' or 'mlp').
    train_reservoir: If True, reservoir weights are trainable. Default False (standard LSM approach).
    **reservoir_kwargs: Additional arguments passed to LiquidReservoir.

Examples:

    >>> lsm = LSM(input_size=64, reservoir_size=200, output_size=10)
    >>> state = lsm.init_state(batch_size=8)
    >>> x = mx.random.normal((8, 64))
    >>> out, state = lsm(x, state)
    >>> out.shape
    [8, 10]

init_state

init_state(batch_size: int) -> dict

Initialize LSM state.

Args:
    batch_size: Number of samples.

Returns:
    Reservoir state dictionary.

__call__

__call__(x: array, state: dict) -> tuple[mx.array, dict]

Process one timestep.

Args:
    x: Input of shape [batch, input_size].
    state: Reservoir state from previous timestep.

Returns:
    Tuple of (output, new_state) where output has shape [batch, output_size].

forward_sequence

forward_sequence(x_seq: array, state: dict | None = None) -> tuple[mx.array, dict]

Process a full input sequence.

Args:
    x_seq: Input sequence of shape [T, batch, input_size].
    state: Initial state. If None, initialized internally.

Returns:
    Tuple of (outputs, final_state) where outputs has shape [T, batch, output_size].
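The sequence path can be pictured with a toy numpy stand-in: a fixed input projection, a LIF reservoir scanned over the time axis, and a linear readout applied to each spike pattern. The names, weight scales, and reset-to-zero dynamics below are illustrative assumptions, not the library's implementation.

```python
import numpy as np

# Hypothetical sketch of what forward_sequence does end to end.
rng = np.random.default_rng(0)
T, batch, n_in, n_res, n_out = 20, 8, 64, 200, 10

W_in = rng.standard_normal((n_res, n_in)) * 0.1     # fixed input projection
W_rec = rng.standard_normal((n_res, n_res)) * 0.05  # fixed recurrent weights
W_out = rng.standard_normal((n_out, n_res)) * 0.1   # the (trainable) readout

mem = np.zeros((batch, n_res))   # membrane potentials
spk = np.zeros((batch, n_res))   # previous spikes
beta, threshold = 0.9, 1.0

x_seq = rng.standard_normal((T, batch, n_in))
outputs = []
for x in x_seq:                                     # scan over the time axis
    mem = beta * mem + x @ W_in.T + spk @ W_rec.T   # leaky integration
    spk = (mem >= threshold).astype(np.float64)     # spike generation
    mem = mem * (1.0 - spk)                         # reset-to-zero (assumption)
    outputs.append(spk @ W_out.T)                   # linear readout per step
outputs = np.stack(outputs)                         # [T, batch, n_out]
```

Only `W_out` would be trained in the standard LSM setting; the projection and recurrent weights stay fixed after initialization.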

LiquidReservoir

LiquidReservoir

Bases: Module

Recurrent spiking reservoir with sparse random connectivity.

The reservoir receives input via a random input projection, processes it through recurrent spiking dynamics, and outputs the resulting spike pattern.

Args:
    input_size: Dimensionality of the input.
    reservoir_size: Number of neurons in the reservoir.
    connectivity: Connection probability (for Erdos-Renyi).
    spectral_radius: Target spectral radius of recurrent weights.
    exc_ratio: Fraction of excitatory neurons (Dale's law).
    beta: Membrane decay factor for reservoir LIF neurons.
    threshold: Spike threshold.
    topology: Connectivity topology ('erdos_renyi', 'small_world', 'scale_free').
    input_scaling: Scaling factor for input weights.
    seed: Random seed for reproducibility.

Examples:

    >>> reservoir = LiquidReservoir(input_size=64, reservoir_size=200)
    >>> state = reservoir.init_state(batch_size=8)
    >>> x = mx.random.normal((8, 64))
    >>> spk, state = reservoir(x, state)
    >>> spk.shape
    [8, 200]

init_state

init_state(batch_size: int) -> dict

Initialize reservoir state.

Args:
    batch_size: Number of samples in the batch.

Returns:
    Dictionary with 'mem' (membrane potential) and 'spk' (previous spikes) tensors.

__call__

__call__(x: array, state: dict) -> tuple[mx.array, dict]

Process one timestep through the reservoir.

Args:
    x: Input of shape [batch, input_size].
    state: Reservoir state from previous timestep.

Returns:
    Tuple of (spikes, new_state) where spikes has shape [batch, reservoir_size].
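A single reservoir step presumably follows standard leaky integrate-and-fire dynamics. The numpy sketch below is a guess at those dynamics under that assumption — in particular, the reset-to-zero after a spike and the weight orientation are assumptions, not the library's confirmed behaviour.

```python
import numpy as np

def reservoir_step(x, state, W_in, W_rec, beta=0.9, threshold=1.0):
    # Decay the membrane, add input current and recurrent spike current.
    mem = beta * state["mem"] + x @ W_in.T + state["spk"] @ W_rec.T
    spk = (mem >= threshold).astype(mem.dtype)  # Heaviside spike function
    mem = mem * (1.0 - spk)                     # reset-to-zero (an assumption)
    return spk, {"mem": mem, "spk": spk}

rng = np.random.default_rng(0)
W_in = rng.standard_normal((200, 64)) * 0.5    # illustrative input projection
W_rec = rng.standard_normal((200, 200)) * 0.05 # illustrative recurrent weights
state = {"mem": np.zeros((8, 200)), "spk": np.zeros((8, 200))}
x = rng.standard_normal((8, 64))
spk, state = reservoir_step(x, state, W_in, W_rec)
```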

Topology Generators

Erdos-Renyi

erdos_renyi

erdos_renyi(n: int, p: float = 0.1, exc_ratio: float = 0.8, spectral_radius: float = 0.9, rng: Generator | None = None) -> np.ndarray

Erdos-Renyi random sparse connectivity.

Each possible connection exists independently with probability p. Weights are drawn from N(0,1) and rescaled so the matrix has the desired spectral radius. Dale's law is enforced: the first int(exc_ratio * n) neurons are excitatory (positive outgoing weights) and the rest are inhibitory (negative outgoing weights).

Args:
    n: Number of reservoir neurons.
    p: Connection probability.
    exc_ratio: Fraction of excitatory neurons.
    spectral_radius: Target spectral radius for weight scaling.
    rng: Optional numpy random generator for reproducibility.

Returns:
    Weight matrix of shape (n, n) as float32 numpy array.
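The recipe described above — independent Bernoulli connections, N(0, 1) weights, Dale's law on the first int(exc_ratio * n) neurons, and rescaling to the target spectral radius — can be sketched in a few lines of numpy. The function name, the no-self-connection choice, and treating rows as outgoing weights are assumptions for illustration.

```python
import numpy as np

def erdos_renyi_sketch(n, p=0.1, exc_ratio=0.8, spectral_radius=0.9, rng=None):
    rng = np.random.default_rng(rng)
    # Each possible connection exists independently with probability p.
    mask = rng.random((n, n)) < p
    np.fill_diagonal(mask, False)            # no self-connections (assumption)
    W = rng.standard_normal((n, n)) * mask   # N(0, 1) weights on the edges
    # Dale's law: first int(exc_ratio * n) neurons excitatory, rest inhibitory
    # (rows taken as outgoing weights -- an assumption about orientation).
    n_exc = int(exc_ratio * n)
    W[:n_exc] = np.abs(W[:n_exc])
    W[n_exc:] = -np.abs(W[n_exc:])
    # Rescale so the largest eigenvalue magnitude equals spectral_radius.
    radius = np.max(np.abs(np.linalg.eigvals(W)))
    if radius > 0:
        W *= spectral_radius / radius
    return W.astype(np.float32)
```

Keeping the spectral radius just below 1 is the usual echo-state-style heuristic for dynamics that are rich but not explosive.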

Small-World (Watts-Strogatz)

small_world

small_world(n: int, k: int = 6, p_rewire: float = 0.1, exc_ratio: float = 0.8, spectral_radius: float = 0.9, rng: Generator | None = None) -> np.ndarray

Watts-Strogatz small-world topology.

Starts with a ring lattice where each neuron connects to its k nearest neighbours, then randomly rewires edges with probability p_rewire.

Args:
    n: Number of reservoir neurons.
    k: Each node connects to k nearest neighbours (must be even).
    p_rewire: Rewiring probability.
    exc_ratio: Fraction of excitatory neurons.
    spectral_radius: Target spectral radius.
    rng: Optional numpy random generator.

Returns:
    Weight matrix of shape (n, n) as float32 numpy array.
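The ring-lattice-then-rewire construction can be sketched as follows. This is a directed variant for illustration (classic Watts-Strogatz is undirected), and the function name plus the sign/scaling steps shared with the other generators are assumptions.

```python
import numpy as np

def small_world_sketch(n, k=6, p_rewire=0.1, exc_ratio=0.8,
                       spectral_radius=0.9, rng=None):
    rng = np.random.default_rng(rng)
    A = np.zeros((n, n), dtype=bool)
    # Ring lattice: each node connects to its k nearest neighbours.
    for offset in range(1, k // 2 + 1):
        for i in range(n):
            A[i, (i + offset) % n] = True
            A[i, (i - offset) % n] = True
    # Rewire each lattice edge with probability p_rewire to a random target.
    rows, cols = np.nonzero(A)
    for i, j in zip(rows, cols):
        if rng.random() < p_rewire:
            A[i, j] = False
            new_j = int(rng.integers(n))
            while new_j == i or A[i, new_j]:   # avoid self-loops / duplicates
                new_j = int(rng.integers(n))
            A[i, new_j] = True
    W = rng.standard_normal((n, n)) * A
    # Dale's law and spectral-radius rescaling, as for the other topologies.
    n_exc = int(exc_ratio * n)
    W[:n_exc] = np.abs(W[:n_exc])
    W[n_exc:] = -np.abs(W[n_exc:])
    radius = np.max(np.abs(np.linalg.eigvals(W)))
    if radius > 0:
        W *= spectral_radius / radius
    return W.astype(np.float32)
```

Note that rewiring preserves each node's out-degree at exactly k, since every removed edge is replaced by one new edge from the same node.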

Scale-Free (Barabasi-Albert)

scale_free

scale_free(n: int, m: int = 3, exc_ratio: float = 0.8, spectral_radius: float = 0.9, rng: Generator | None = None) -> np.ndarray

Barabasi-Albert scale-free topology.

Grows the network by attaching new nodes with m edges using preferential attachment.

Args:
    n: Number of reservoir neurons.
    m: Number of edges per new node.
    exc_ratio: Fraction of excitatory neurons.
    spectral_radius: Target spectral radius.
    rng: Optional numpy random generator.

Returns:
    Weight matrix of shape (n, n) as float32 numpy array.
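Preferential attachment can be sketched as below. The seed graph (a fully connected clique of m + 1 nodes), the symmetric attachment, and the function name are illustrative choices; the library may use a different seed or a directed growth rule.

```python
import numpy as np

def scale_free_sketch(n, m=3, exc_ratio=0.8, spectral_radius=0.9, rng=None):
    rng = np.random.default_rng(rng)
    A = np.zeros((n, n), dtype=bool)
    # Seed: fully connect the first m + 1 nodes (one common choice).
    for i in range(m + 1):
        for j in range(m + 1):
            if i != j:
                A[i, j] = True
    # Growth: each new node attaches m edges, preferring high-degree targets.
    for new in range(m + 1, n):
        degree = A[:new, :new].sum(axis=0) + A[:new, :new].sum(axis=1)
        targets = rng.choice(new, size=m, replace=False, p=degree / degree.sum())
        for t in targets:
            A[new, t] = True
            A[t, new] = True   # symmetric attachment (an assumption)
    W = rng.standard_normal((n, n)) * A
    # Dale's law and spectral-radius rescaling, as for the other topologies.
    n_exc = int(exc_ratio * n)
    W[:n_exc] = np.abs(W[:n_exc])
    W[n_exc:] = -np.abs(W[n_exc:])
    radius = np.max(np.abs(np.linalg.eigvals(W)))
    if radius > 0:
        W *= spectral_radius / radius
    return W.astype(np.float32)
```

The resulting degree distribution is heavy-tailed: a few early hub neurons accumulate many connections while most neurons keep roughly m.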