Jumping Ahead: Improving Reconstruction Fidelity with JumpReLU Sparse Autoencoders

Rajamanoharan, Lieberum, Sonnerat et al. (Google DeepMind, 2024)

Read paper

Tags: architecture, jumprelu, deepmind

Abstract

We propose JumpReLU SAEs, which replace the standard ReLU with a discontinuous JumpReLU activation that has a learnable threshold, achieving improved reconstruction fidelity at a given sparsity level compared to standard ReLU SAEs.
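The JumpReLU activation itself is simple: a pre-activation passes through unchanged if it exceeds a threshold, and is zeroed otherwise. A minimal NumPy sketch (function and variable names are illustrative, not from the paper's code; in the actual SAE the per-feature thresholds are learned during training via straight-through estimators, which this sketch omits):

```python
import numpy as np

def jumprelu(z, theta):
    """JumpReLU: keep pre-activations that exceed the threshold theta,
    zero everything else. Unlike ReLU (the special case theta = 0),
    small positive activations below theta are suppressed, while the
    surviving activations are passed through without shrinkage."""
    return np.where(z > theta, z, 0.0)

# Illustrative pre-activations and a fixed per-feature threshold
# (learned in the actual SAE; hard-coded here for demonstration).
z = np.array([-0.5, 0.2, 0.8, 1.5])
theta = np.full(4, 0.5)
print(jumprelu(z, theta))  # [0.  0.  0.8 1.5]
```

Note that the plain ReLU would keep the 0.2 activation; the threshold is what lets JumpReLU trade a little reconstruction error on weak activations for sparser, less-shrunken codes.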