Naturally Occurring Equivariance in Neural Networks
AI

This article is part of the Circuits thread, an experimental format collecting invited short articles and critical commentary delving into the inner workings of neural networks. Convolutional neural networks contain a hidden world of symmetries within themselves. This symmetry is a powerful tool in understanding the features and circuits […]

Understanding RL Vision
AI

In this article, we apply interpretability techniques to a reinforcement learning (RL) model trained to play the video game CoinRun. Using attribution combined with dimensionality reduction, we build an interface for exploring the objects detected by the model and how they influence its value function and policy. We leverage this
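
The excerpt compresses the method into a single phrase. Purely as an illustrative sketch (not code from the article), one common recipe matching that description is gradient-times-activation attribution on a convolutional layer, followed by non-negative matrix factorization to reduce the channel dimension into a few interpretable spatial maps. All shapes and values below are hypothetical stand-ins.

```python
# Hypothetical sketch of "attribution combined with dimensionality reduction".
# Attribution here is gradient * activation; NMF then groups the per-channel
# attributions into a small number of spatial factors.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
# Stand-ins for a conv layer's activations and the gradient of the value
# function with respect to them (shapes are made up for the example):
activations = rng.random((8, 8, 32))           # (height, width, channels)
gradients = rng.normal(size=(8, 8, 32))

attribution = activations * gradients           # per-position, per-channel contribution estimate
flat = np.abs(attribution).reshape(-1, 32)      # NMF needs non-negative input, so use magnitudes

nmf = NMF(n_components=4, init="nndsvda", max_iter=500)
spatial_factors = nmf.fit_transform(flat).reshape(8, 8, 4)   # 4 reduced spatial maps
print(spatial_factors.shape)
```

NMF is chosen here only because it yields non-negative, roughly additive factors that are easy to overlay on the game frame; any other dimensionality reduction could stand in.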

Communicating with Interactive Articles
AI

Computing has changed how people communicate. The transmission of news, messages, and ideas is instant. Anyone’s voice can be heard. In fact, access to digital communication technologies such as the Internet is so fundamental to daily life that their disruption by government is condemned by the United Nations Human Rights Council. But while the

Self-classifying MNIST Digits
AI

This article is part of the Differentiable Self-organizing Systems Thread, an experimental format collecting invited short articles delving into differentiable self-organizing systems, interspersed with critical commentary from several experts in adjacent fields. Growing Neural Cellular Automata demonstrated how simple cellular automata (CAs) can learn to self-organise into complex

Thread: Differentiable Self-organizing Systems
AI

How can we construct robust, general-purpose self-organising systems? Self-organisation is omnipresent on all scales of biological life. From complex interactions between molecules forming structures such as proteins, to cell colonies achieving global goals like exploration by means of the individual cells collaborating and communicating, to humans forming collectives in society such

Curve Detectors
AI

This article is part of the Circuits thread, an experimental format collecting invited short articles and critical commentary delving into the inner workings of neural networks. Every vision model we’ve explored in detail contains neurons which detect curves. Curve detectors in vision models

Exploring Bayesian Optimization
AI

Many modern machine learning algorithms have a large number of hyperparameters. To effectively use these algorithms, we need to pick good hyperparameter values. In this article, we talk about Bayesian Optimization, a suite of techniques often used to tune hyperparameters. More generally, Bayesian Optimization can be used to optimize any black-box function. Let us start
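
Since the teaser only names the idea, here is a minimal, hypothetical sketch of the loop it refers to: a Gaussian-process surrogate is fit to the points evaluated so far, and an expected-improvement acquisition function picks where to evaluate the black-box objective next. The objective, search range, and evaluation budget below are made up for illustration.

```python
# Minimal Bayesian optimization sketch (illustrative only, not the article's code).
# Surrogate: Gaussian process regression; acquisition: expected improvement (maximization).
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def objective(x):
    # Hypothetical black-box function standing in for "model quality vs. hyperparameter value".
    return np.sin(3 * x) + 0.5 * x

def expected_improvement(mu, sigma, best):
    # Probability-weighted improvement over the best value observed so far.
    sigma = np.maximum(sigma, 1e-9)
    z = (mu - best) / sigma
    return (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)

rng = np.random.default_rng(0)
candidates = np.linspace(0.0, 2.0, 500).reshape(-1, 1)   # search space for one hyperparameter
X = rng.uniform(0.0, 2.0, size=(3, 1))                   # a few random initial evaluations
y = objective(X).ravel()

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
for _ in range(10):                                       # evaluation budget
    gp.fit(X, y)
    mu, sigma = gp.predict(candidates, return_std=True)
    x_next = candidates[np.argmax(expected_improvement(mu, sigma, y.max()))]
    X = np.vstack([X, [x_next]])
    y = np.append(y, objective(x_next))

print("best x:", X[np.argmax(y)].item(), "best value:", y.max())
```

The point of the surrogate is that each real evaluation is assumed to be expensive, so the acquisition function is optimized over the cheap model instead of over the objective itself.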

An Overview of Early Vision in InceptionV1
AI

This article is part of the Circuits thread, a collection of short articles and commentary by an open scientific collaboration delving into the inner workings of neural networks. The first few articles of the Circuits project will be focused on early vision in InceptionV1 — for our purposes, the

Visualizing Neural Networks with the Grand Tour
AI

The Grand Tour is a classic visualization technique for high-dimensional point clouds that projects a high-dimensional dataset into two dimensions. Over time, the Grand Tour smoothly animates its projection so that every possible view of the dataset is (eventually) presented to the viewer. Unlike modern nonlinear projection methods such as t-SNE and UMAP, the Grand
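
As a rough illustration of the rotate-then-project idea described here (not the article's implementation, and without the Grand Tour's guarantee of eventually visiting every possible view), one can rotate the cloud by exp(tA) for a fixed skew-symmetric matrix A and keep only the first two coordinates. The data, dimensionality, and step size below are hypothetical.

```python
# Grand Tour-style animation step (illustrative sketch).
# Idea: apply a smoothly varying high-dimensional rotation, then project to the first two axes.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
d = 10                                   # dimensionality of the point cloud (hypothetical)
points = rng.normal(size=(500, d))       # stand-in dataset; real uses would pass activations, etc.

# A fixed skew-symmetric matrix generates a one-parameter family of rotations exp(t * A).
A = rng.normal(size=(d, d))
A = (A - A.T) / 2.0

def grand_tour_frame(t, step=0.02):
    """Return the 2-D projection of the cloud at animation time t."""
    rotation = expm(step * t * A)        # orthogonal matrix: a rotation in d dimensions
    rotated = points @ rotation
    return rotated[:, :2]                # linear projection onto the first two coordinates

# Frames for an animation loop; consecutive frames differ by a small rotation, so points move smoothly.
frames = [grand_tour_frame(t) for t in range(100)]
print(frames[0].shape)                   # (500, 2)
```

Because every frame is a linear projection, distances and cluster structure are never distorted the way nonlinear embeddings can distort them; the cost is that each single frame shows less structure than a t-SNE or UMAP plot.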

Thread: Circuits
AI

In the original narrative of deep learning, each neuron builds progressively more abstract, meaningful features by composing features in the preceding layer. In recent years, there’s been some skepticism of this view, but what happens if you take it really seriously? InceptionV1 is a classic vision model with around 10,000 unique neurons — a large number, but
