Key Points

1. The paper develops a linear-algebraic framework for enforcing, discovering, and promoting symmetries in machine learning models.

2. Smooth sections of vector bundles with fiber-linear Lie group actions serve as the central objects in the study of symmetries of machine learning models.

3. The Lie derivative is shown to fully encode the connected subgroup of symmetries of a section of a vector bundle, providing a means to understand and work with equivariant machine learning models.

4. The paper shows that discovering and enforcing continuous symmetries in machine learning models are dual problems with respect to the bilinear structure of the Lie derivative (sketched in the equation after this list).

5. It introduces convex penalties based on these fundamental operators that promote symmetry as an inductive bias during training.

6. The linear-algebraic framework for discovering continuous symmetries is extended to identify the symmetries of arbitrary submanifolds.

7. Data-driven methods are described for discretizing and approximating the fundamental operators used in enforcing, promoting, and discovering symmetry in machine learning models.

8. The paper emphasizes that, while the theoretical concepts are general, they admit efficient computational implementations via simple linear algebra.

9. Finally, the paper highlights the limitations of the approach, chiefly the need to make appropriate choices for key objects such as the candidate symmetry group and the space of functions.
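
To fix ideas, here is a minimal sketch of the central object in the simplest setting of a map between vector spaces carrying linear group actions; the notation is shorthand introduced here, and the paper's vector-bundle setting is more general.

```latex
% Lie derivative of a map f : V -> W along a Lie-algebra generator \xi,
% for a group acting linearly through representations \rho_V and \rho_W:
\[
  (\mathcal{L}_{\xi} f)(x)
    \;=\; \left.\frac{\mathrm{d}}{\mathrm{d}t}\right|_{t=0}
          \rho_W\!\bigl(e^{t\xi}\bigr)^{-1}\,
          f\!\bigl(\rho_V\bigl(e^{t\xi}\bigr)\,x\bigr)
    \;=\; Df(x)\,\mathrm{d}\rho_V(\xi)\,x \;-\; \mathrm{d}\rho_W(\xi)\,f(x).
\]
% f is equivariant under the connected subgroup generated by \xi if and
% only if \mathcal{L}_{\xi} f = 0.  The assignment (\xi, f) \mapsto
% \mathcal{L}_{\xi} f is bilinear, which is the source of the duality
% between enforcing symmetry (fix \xi, solve for f) and discovering it
% (fix f, solve for \xi).
```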

Summary

Symmetry in Physics and Machine Learning
The paper examines the fundamental role of symmetry in physics and argues for carrying the same principle over to machine learning models. It introduces a unifying theoretical and methodological framework organized around three tasks: enforcing known symmetries, discovering the symmetries of a given model or data set, and promoting symmetry as far as possible during training.

Mathematical Framework for Symmetry Tasks
The authors demonstrate that all three tasks can be cast within a common mathematical framework whose central object is the Lie derivative associated with fiber-linear Lie group actions on vector bundles. They extend and unify several existing results by showing that enforcing and discovering symmetry are linear-algebraic tasks that are dual with respect to the bilinear structure of the Lie derivative. To promote symmetry, the paper introduces a class of convex regularization functions based on the Lie derivative and a nuclear-norm relaxation, penalizing symmetry breaking during the training of machine learning models.
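
As a concrete illustration of this structure, consider the simplest case of a linear model x -> Wx with a one-parameter group acting through a matrix A on the input and B on the output. The Lie derivative reduces to the bilinear map (W, (A, B)) -> WA - BW, and a symmetry-promoting regularizer is its nuclear norm. The following NumPy sketch is ours; the generator and weight matrices are illustrative, not taken from the paper.

```python
import numpy as np

def lie_derivative(W, A, B):
    """Lie derivative of the linear model x -> W x along a generator acting
    as A on the input space and B on the output space.  It vanishes exactly
    when W is equivariant: W @ expm(t*A) == expm(t*B) @ W for all t."""
    return W @ A - B @ W

def symmetry_penalty(W, generators):
    """Convex symmetry-promoting penalty: the nuclear norm (sum of singular
    values) of the Lie derivatives stacked over all candidate generators.
    Zero if and only if W is equivariant under every generator."""
    L = np.vstack([lie_derivative(W, A, B) for A, B in generators])
    return np.linalg.norm(L, ord="nuc")

# Illustrative candidate: infinitesimal rotation of R^2, acting the same
# way on inputs and outputs.
J = np.array([[0.0, -1.0], [1.0, 0.0]])
generators = [(J, J)]

W_equivariant = 2.0 * np.eye(2)                    # commutes with rotations
W_generic = np.array([[1.0, 0.5], [0.0, 2.0]])     # breaks the symmetry

print(symmetry_penalty(W_equivariant, generators))  # ~0.0
print(symmetry_penalty(W_generic, generators))      # > 0, penalizes breaking
```

Roughly speaking, adding such a penalty to a training loss nudges the model toward exact equivariance with respect to as many candidate generators as the data allows, rather than hard-coding the symmetry in advance.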

Enforcing Symmetry in Neural Networks
The paper presents methods for enforcing symmetry in multilayer perceptrons and in neural networks acting on spatial fields by placing linear constraints on the weights defining each layer. The authors also show how to compute the connected subgroup of symmetries of a model by identifying its Lie subalgebra, and how to enforce equivariance with respect to arbitrary group actions via linear constraints on integral kernels.
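
To make the layer-wise constraint concrete: for a fully connected layer x -> Wx, equivariance with respect to a generator pair (A, B) is the linear condition WA - BW = 0, which in vectorized form reads (A^T kron I - I kron B) vec(W) = 0. A basis for the admissible weights can therefore be read off the nullspace of an ordinary matrix via an SVD. This sketch is ours and uses a single rotation generator for illustration:

```python
import numpy as np

def equivariant_weight_basis(A, B, tol=1e-10):
    """Basis for all W satisfying W @ A - B @ W = 0, i.e. layer weights
    equivariant under the one-parameter groups generated by A (input)
    and B (output).  Uses the column-major vec identities
    vec(W A) = (A.T kron I) vec(W) and vec(B W) = (I kron B) vec(W)."""
    n, m = A.shape[0], B.shape[0]
    M = np.kron(A.T, np.eye(m)) - np.kron(np.eye(n), B)
    _, s, Vt = np.linalg.svd(M)
    s = np.concatenate([s, np.zeros(Vt.shape[0] - s.size)])  # pad if M is wide
    return [Vt[i].reshape((m, n), order="F")
            for i in range(Vt.shape[0]) if s[i] < tol]

# Illustrative: enforce SO(2)-equivariance of a 2x2 layer.
J = np.array([[0.0, -1.0], [1.0, 0.0]])   # rotation generator
for W in equivariant_weight_basis(J, J):
    print(np.round(W, 3))
# The two basis matrices span {a*I + b*J}: exactly the weights that
# commute with all planar rotations.
```

A layer is then parameterized by coefficients in this basis, so equivariance holds exactly by construction rather than being learned approximately.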

Methods for Discovering Symmetry
The paper further explores methods for discovering symmetry by computing the nullspaces of linear operators, including those associated with symmetries of submanifolds and of dynamical systems. It shows how the symmetries of a submanifold are revealed by the nullspace of a closely related operator and discusses how these results help identify conserved quantities of dynamical systems.
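
As a small data-driven example in the same spirit (our construction, not code from the paper): for a submanifold given as the zero set of a function h, a linear generator xi is a symmetry when its vector field x -> xi @ x is tangent to the manifold, i.e. grad h(x)^T (xi @ x) = 0 at every point. Stacking this condition over sampled points yields a linear system whose nullspace contains the generators. For the unit circle, h(x) = |x|^2 - 1, this recovers the rotation generator:

```python
import numpy as np

rng = np.random.default_rng(0)

# Sample points on the unit circle, the zero level set of h(x) = |x|^2 - 1.
theta = rng.uniform(0, 2 * np.pi, size=200)
X = np.stack([np.cos(theta), np.sin(theta)], axis=1)        # (200, 2)

# Tangency condition for a candidate generator xi (a 2x2 matrix):
# grad h(x)^T (xi @ x) = 2 x^T xi x = 0 for every sample x.
# Since x^T xi x = <x x^T, xi>_F, each sample contributes one linear
# equation in the 4 entries of xi, with coefficients x x^T flattened.
rows = np.array([np.outer(x, x).ravel() for x in X])        # (200, 4)

# Symmetry generators lie in the (approximate) nullspace of this system.
_, s, Vt = np.linalg.svd(rows)
print("singular values:", np.round(s, 3))                   # one is ~0
xi = Vt[-1].reshape(2, 2)                                   # nullspace direction
print(np.round(xi / np.abs(xi).max(), 3))
# Expected: the rotation generator [[0, -1], [1, 0]] up to sign and scale.
```

The single near-zero singular value indicates a one-dimensional symmetry algebra, and the recovered direction is the infinitesimal rotation, matching the circle's SO(2) symmetry.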

Overall, the paper provides a comprehensive framework for incorporating symmetry into machine learning models and demonstrates the practical application of these methods in enforcing, identifying, and promoting symmetries to improve the performance and interpretability of machine learning models.

Reference: https://arxiv.org/abs/2311.00212