Key Points

1. The paper introduces the concept of Neural Developmental Programs (NDPs) as a step towards creating neural networks that grow through a developmental process mirroring key properties of embryonic development in biological organisms.

2. It contrasts the self-assembling and growing nature of biological networks with the hand-designed structure and limited learning capabilities of current artificial neural networks.

3. The paper explores two instantiations of NDPs, one evolution-based and the other gradient descent-based, and demonstrates the feasibility of the approach in continuous control problems and growing topologies with specific properties such as small-worldness.

4. The NDP approach is tested on various tasks including reinforcement learning and classification, showcasing its potential in growing neural networks for different applications.

5. The paper discusses the possibilities of extending the NDP approach to incorporate activity-dependent and reward-modulated growth and adaptation, and its potential to establish a different pathway for training neural networks and developing AI systems.

6. It outlines future directions for research, including the interplay between genome size, developmental steps, and task performance, and the potential of NDPs to discover wiring diagrams and grow networks with arbitrary topological properties.

Summary

The paper investigates the role of neural developmental programs (NDPs) in growing neural networks, aiming to mirror the key properties of embryonic development in biological organisms. The approach uses a graph neural network encoding in which the growth of a policy network is controlled by an NDP that operates through local communication alone, deciding whether each neuron should replicate and what weight each connection should take. The study explores the impact of neural growth on various machine learning benchmarks and optimization methods, showcasing the potential for growing functional policy networks based solely on local communication between neurons. The approach differentiates itself from methods that grow networks during evolution, as it grows networks during the agent's lifetime.
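The following minimal sketch illustrates the general idea of one such developmental step; it is an illustrative assumption, not the paper's actual architecture, and all class and parameter names (e.g. `NDPStep`, `state_dim`) are hypothetical. Each neuron holds a state vector, aggregates its neighbors' states (local communication only), and small learned networks then decide replication and connection weights from those local states.

```python
import torch
import torch.nn as nn


class NDPStep(nn.Module):
    """One hypothetical developmental step: local message passing,
    a replication decision per neuron, and a weight per connection."""

    def __init__(self, state_dim: int):
        super().__init__()
        # Update a neuron's state from its own state plus the mean of its neighbors'.
        self.update = nn.Sequential(nn.Linear(2 * state_dim, state_dim), nn.Tanh())
        # Decide (as a score in [0, 1]) whether a neuron replicates.
        self.replicate = nn.Sequential(nn.Linear(state_dim, 1), nn.Sigmoid())
        # Set the weight of an edge from the states of its two endpoints.
        self.weight = nn.Linear(2 * state_dim, 1)

    def forward(self, states, adjacency):
        # states: (num_neurons, state_dim); adjacency: (num_neurons, num_neurons) in {0, 1}
        degree = adjacency.sum(dim=1, keepdim=True).clamp(min=1)
        neighbor_mean = adjacency @ states / degree            # local communication only
        states = self.update(torch.cat([states, neighbor_mean], dim=-1))
        grow = self.replicate(states).squeeze(-1)              # per-neuron replication score
        src = states.unsqueeze(1).expand(-1, states.size(0), -1)
        dst = states.unsqueeze(0).expand(states.size(0), -1, -1)
        weights = self.weight(torch.cat([src, dst], dim=-1)).squeeze(-1) * adjacency
        return states, grow, weights
```

Under this reading, the differentiable instantiation would train the parameters of such per-neuron and per-edge networks end to end with gradient descent, while the evolutionary instantiation would optimize them as a genome; the exact module layout above is only a sketch.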

The paper discusses the limitations of existing indirect genome-to-phenotype encodings, highlighting the importance of robustness to perturbations and unexpected changes in artificial neural networks. It points to developmental and self-organizing algorithms for growing neural networks as an underrepresented research area, introducing a new paradigm for self-assembling artificial neural networks. The paper presents two instantiations of the NDP: an evolution-based version and a differentiable version trained with gradient descent. The differentiable NDP demonstrates competitive performance on tasks such as reinforcement learning and classification.

The study also delves into the domain of developmental encodings, exploring the potential to grow networks with arbitrary topological properties and analyzing the implications of neural growth across tasks. The researchers further advocate for incorporating activity-dependent and reward-modulated growth and adaptation in future work. The paper concludes by highlighting the potential of NDPs as a unifying principle to capture properties important for biological intelligence and to pave the way for new methodologies in artificial intelligence systems. The research is supported by a GoodAI Research Award and a European Research Council (ERC) grant.
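As an aside on evaluating grown topologies, properties such as small-worldness (mentioned in key point 3) can be measured directly on a grown graph. The snippet below is an illustrative check using NetworkX; the Watts-Strogatz graph here is only a stand-in for a grown network, not the paper's construction.

```python
import networkx as nx

# Stand-in for a grown network: a connected Watts-Strogatz graph,
# which is small-world by construction.
grown = nx.connected_watts_strogatz_graph(n=64, k=4, p=0.1, seed=0)

# sigma > 1 indicates small-world structure (high clustering, short path lengths)
# relative to degree-matched random graphs; niter/nrand are kept small for speed.
sigma = nx.sigma(grown, niter=5, nrand=5, seed=0)
print(f"small-world sigma: {sigma:.2f}")
```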

Reference: https://arxiv.org/abs/2307.08197