Invariant Slot Attention: Object Discovery with Slot-Centric Reference Frames

Ondrej Biza1*, Sjoerd van Steenkiste2, Mehdi S. M. Sajjadi2, Gamaleldin F. Elsayed2, Aravindh Mahendran2, Thomas Kipf2

1Northeastern University, 2Google Research
ICML 2023

*Work done while at Google. Equal contribution.

We equip the slots in Slot Attention with positions, scales, and (optionally) orientations. This enables us to discover objects regardless of their pose and scale.

Abstract

Automatically discovering composable abstractions from raw perceptual data is a long-standing challenge in machine learning. Recent slot-based neural networks that learn about objects in a self-supervised manner have made exciting progress in this direction. However, they typically fall short at adequately capturing the spatial symmetries present in the visual world, which leads to sample inefficiency, for example by entangling object appearance and pose. In this paper, we present a simple yet highly effective method for incorporating spatial symmetries via slot-centric reference frames. We incorporate equivariance to per-object pose transformations into the attention and generation mechanisms of Slot Attention by translating, scaling, and rotating position encodings. These changes incur little computational overhead, are easy to implement, and can yield large gains in data efficiency as well as overall improvements in object discovery. We evaluate our method on a wide range of synthetic object discovery benchmarks, namely CLEVR, Tetrominoes, CLEVRTex, Objects Room, and MultiShapeNet, and show promising improvements on the challenging real-world Waymo Open dataset.
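To make the mechanism concrete, below is a minimal sketch (in JAX) of the two ingredients described above: mapping an absolute coordinate grid into each slot's reference frame by translating, scaling, and optionally rotating it, and re-estimating slot positions and scales as attention-weighted means and standard deviations over the grid. The function names (make_grid, relative_grid, update_frames) are illustrative rather than taken from the released code, and the update of slot orientations is omitted; treat this as a sketch of the idea, not the exact implementation.

import jax.numpy as jnp

def make_grid(h, w):
    # Absolute coordinate grid over the image, in [-1, 1]^2, flattened to [h*w, 2].
    ys, xs = jnp.meshgrid(jnp.linspace(-1.0, 1.0, h),
                          jnp.linspace(-1.0, 1.0, w), indexing="ij")
    return jnp.stack([ys, xs], axis=-1).reshape(-1, 2)

def relative_grid(grid, pos, scale, theta=None):
    # Map absolute grid coordinates into each slot's reference frame.
    # grid: [n, 2]; pos, scale: [k, 2]; theta: [k] optional orientations (radians).
    rel = grid[None, :, :] - pos[:, None, :]                     # translate
    if theta is not None:
        c, s = jnp.cos(theta), jnp.sin(theta)
        rot = jnp.stack([jnp.stack([c, -s], axis=-1),
                         jnp.stack([s, c], axis=-1)], axis=-2)   # [k, 2, 2]
        rel = jnp.einsum("kij,knj->kni", rot, rel)               # rotate
    return rel / scale[:, None, :]                               # scale-normalize

def update_frames(attn, grid, eps=1e-8):
    # Re-estimate slot positions and scales from the attention map.
    # attn: [k, n], each row normalized over the n grid locations.
    pos = attn @ grid                                 # attention-weighted mean, [k, 2]
    var = attn @ grid**2 - pos**2                     # attention-weighted variance
    scale = jnp.sqrt(jnp.clip(var, eps))              # weighted std. dev., [k, 2]
    return pos, scale

# Example: 4 slots attending uniformly over an 8x8 feature grid.
grid = make_grid(8, 8)
attn = jnp.full((4, 64), 1.0 / 64)
pos, scale = update_frames(attn, grid)
rel = relative_grid(grid, pos, scale)                 # [4, 64, 2] per-slot coordinates

In the full model, these per-slot relative coordinates are embedded and combined with the input features when computing each slot's keys and values, so the attention step operates in slot-centric coordinates and becomes invariant to per-object translation and scaling.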

BibTeX

@inproceedings{biza23invariant,
  author       = {Ondrej Biza and
                  Sjoerd van Steenkiste and
                  Mehdi S. M. Sajjadi and
                  Gamaleldin F. Elsayed and
                  Aravindh Mahendran and
                  Thomas Kipf},
  title        = {Invariant Slot Attention: Object Discovery with Slot-Centric Reference
                  Frames},
  booktitle    = {ICML},
  year         = {2023}
}