What’s Developmental Biology got to do with Machine Learning?

Slime mold fruiting bodies, photograph by Ian Lindsay on Pixabay, used under the Pixabay License.

Intelligence All the Way Down

We typically think of the brain as the seat of intelligence, and, well, we typically think this thought with the brain as well. The human brain has been described by some as “the most complex thing in the universe,” although that should be taken with a grain of salt as it is usually a human brain that makes these claims. An alternative perspective combines the ideas of general intelligence and universal computation and says, “sure, a brain is cool and all, but it’s not the fundamental unit of intelligence by most reasonable estimates.”

Society is more complicated than a single human brain, but it can be argued that a society reduces to a repetition of its building blocks (brains) and their interactions. Each individual brain doesn’t need to know much in the grand scheme of things, and yet collectively they can do things like send a few of their number to the Moon. Similarly, we can ask what the intelligence in brains and other bits of biology is made of. Individual neurons are quite complex in their own right, but on their own they know very little about anything. In fact, we can observe complex and seemingly intelligent behavior at scales at least down to single, non-neuronal cells, as in this Lacrymaria ciliate searching for food:

A single cell can exhibit decision making and learning, and when groups of cells cooperate the result is all the beauty and diversity of multicellular life. Cells that form part of a larger organism have to figure out where to go, what to do, and what to become, and they do so with no top-down controller. Instead, individual cells communicate with their neighbors and make decisions that, accumulated over time, define and accomplish the goals of a larger organism.

Collage of biological structures by flickr user sivadyt, used with permission.

We can recognize the collective decision-making and learning behavior of multicellular tissues and organisms as a form of intelligence emerging from individual cells. Crucially, these cells must maintain some form of local communication with their neighbors; without intercellular communication, cells may revert to individualistic behaviors and motivations, leading to pathologies like metastatic cancer. Collective decision making provides an alternative model for intelligent systems, one that comes with plentiful biological examples to learn from and the potential for real-world impact in clinical realms like regenerative medicine and treating tumors.

Awakening Ancient Powers From the Old Days of AI

A common plot device in fantastical fiction is the re-awakening of ancient powers beyond the current state of understanding, usually ill-fated. Bilbo finds a ring in the Misty Mountains, Godzilla wakes up grouchy thanks to nuclear testing, and Kirby naively pieces together the Star Rod, unleashing a nightmare on Dreamland.

That face when you thought you were saving the day, but you unleashed a cosmic horror instead. Neural cellular automata reconstruction of “Kirby,” trained with code from https://github.com/rivesunder/dca (MIT License)

The history of artificial intelligence is studded with a similar theme of revival, albeit with less cosmic horror. As the reader may well be aware, the deep learning paradigm of modern machine learning and artificial intelligence is rooted in earlier periods of research. Those roots go back to the McCulloch-Pitts neuron and Rosenblatt’s Perceptron, but it wasn’t until the connectionism of the 1980s, when researchers came to appreciate neural networks as universal function approximators, that neural networks began to look really promising. At some point several people thought it might be a good idea to train neural networks on a GPU, and now we have deep neural networks that fold proteins like it’s their job (it is).

Structure of Striatin-interacting protein 1 predicted by AlphaFold.

Similarly, cellular models (known as cellular automata, or CA) have had several research heydays scattered throughout the last century. CA have historically been used for modeling physical and biological processes, as well as for more theoretical studies of complexity and computation. While they have been out of the limelight for a few years, that may be changing, in part due to the speed and capability of modern computers. Many CA formulations have been demonstrated to be capable of universal computation, meaning that they can simulate any computable process given enough time and memory. You can even run Life in Life if you’re into that sort of thing.
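To make the flavor of CA concrete, here is a minimal sketch of the most famous one, Conway’s Game of Life, in NumPy. The function name and grid sizes are my own choices for illustration: each cell counts its eight neighbors, and the next state follows from two simple rules.

```python
import numpy as np

def life_step(grid):
    """One update of Conway's Game of Life on a toroidal grid.

    A live cell survives with two or three live neighbors, and a
    dead cell becomes alive with exactly three.
    """
    # Sum the eight shifted copies of the grid to count neighbors.
    neighbors = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1)
        for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    return ((neighbors == 3) | ((grid == 1) & (neighbors == 2))).astype(grid.dtype)

# A glider: a five-cell pattern that translates itself across the grid.
glider = np.zeros((8, 8), dtype=np.uint8)
glider[1, 2] = glider[2, 3] = glider[3, 1] = glider[3, 2] = glider[3, 3] = 1

# After four steps the glider reappears shifted one cell down and right.
state = glider
for _ in range(4):
    state = life_step(state)

assert np.array_equal(state, np.roll(np.roll(glider, 1, axis=0), 1, axis=1))
```

That a rule this small supports gliders, and ultimately universal computation, is the whole appeal: complexity from almost nothing.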

Accumulated cellular automata pattern by Rive Sunder, used with permission

Recently, researchers have been experimenting with a new variant of cellular automata. In short, researchers have revived a second ancient power of artificial intelligence by combining cellular automata with neural networks. Perhaps we don’t yet fully understand their potential, but we do know that CA can easily be formulated to be capable of universal computation, and that neural networks are universal function approximators. What kind of systems can be built by combining the two concepts?

Neural Cellular Automata

Neural cellular automata combine the local, self-organizing rules of cellular automata with the differentiable learning and universal approximation power of neural networks. That is, the rules for these cellular automata are not defined by a simple sum of their neighbors, as in John Conway’s famous Game of Life. Instead they are formulated as multi-layer neural networks, and the specific parameters of these networks are learned rather than designed. One striking application of neural cellular automata is as a differentiable model for the biological processes by which an organism develops its shape. In biology, these processes guide the growth of a fully-formed organism from single-cell beginnings, but also power the ability of flatworms and newts to regenerate their bodies when injured.
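As a rough sketch of what such a learned update might look like, the NumPy code below follows the general recipe of models like the one in Mordvintsev et al.’s “Growing Neural Cellular Automata”: fixed perception filters gather neighborhood information, and a small network applied independently at every cell computes a residual update. The channel sizes are made up, and the weights here are random placeholders rather than trained parameters; this is an illustration of the architecture, not a trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes; real models use more channels (e.g. 16).
CHANNELS, HIDDEN = 8, 32

# Fixed 3x3 perception filters: identity plus Sobel gradients, so each
# cell "sees" its own state and how that state varies across neighbors.
identity = np.zeros((3, 3))
identity[1, 1] = 1.0
sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float) / 8.0
filters = [identity, sobel_x, sobel_x.T]

# The learned part: a tiny two-layer network shared by all cells.
# In a real model these weights come from gradient descent; the output
# layer is zero-initialized so the very first update is a no-op.
w1 = rng.normal(0.0, 0.1, (len(filters) * CHANNELS, HIDDEN))
w2 = np.zeros((HIDDEN, CHANNELS))

def perceive(state):
    """Convolve each channel with each fixed filter (toroidal borders)."""
    out = []
    for f in filters:
        acc = np.zeros_like(state)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                acc += f[dy + 1, dx + 1] * np.roll(state, (-dy, -dx), axis=(0, 1))
        out.append(acc)
    return np.concatenate(out, axis=-1)  # shape (h, w, len(filters) * CHANNELS)

def nca_step(state):
    """One neural CA update: perceive the neighborhood, then apply the
    shared per-cell network as a residual update to the state."""
    p = perceive(state)
    hidden = np.maximum(p @ w1, 0.0)  # ReLU
    return state + hidden @ w2

state = rng.normal(size=(16, 16, CHANNELS))
new_state = nca_step(state)
```

Every cell runs the same tiny network on purely local information, yet training those shared weights end to end is what lets global patterns, like a regenerating lizard, emerge.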

A collage of damage to a pixel image of a lizard, all of which were grown and able to be regenerated by a neural CA model. Collage of screenshots of interactive figure from Mordvintsev et al. CC BY https://distill.pub/2020/growing-ca/

Neural CA can also perform the computer vision and image processing tasks that are usually the purview of convolutional neural networks. That’s not too surprising, as a CA can be understood as a particular arrangement of a convolutional network. Thus neural CA can be used for classification and image segmentation, and the collective “voting” of a community of cells can improve the interpretability of model uncertainty and feature recognition.
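One way to see the correspondence is to write the Game of Life itself as a single 3x3 convolution followed by a pointwise nonlinearity, which is exactly one layer of a convolutional network. The encoding below (each neighbor weighted 1, the cell itself weighted 10) is a well-known trick, sketched here in NumPy with my own function names:

```python
import numpy as np

# A single 3x3 kernel: each neighbor contributes 1, the cell itself 10.
# The convolved value then encodes both neighbor count and own state:
# 3 means "dead cell, three neighbors" (birth), while 12 or 13 mean
# "live cell, two or three neighbors" (survival).
KERNEL = np.array([[1,  1, 1],
                   [1, 10, 1],
                   [1,  1, 1]])

def conv_life_step(grid):
    """Game of Life as one convolution plus a pointwise nonlinearity."""
    conv = np.zeros_like(grid)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            conv += KERNEL[dy + 1, dx + 1] * np.roll(grid, (-dy, -dx), axis=(0, 1))
    # The "activation function": a lookup over the convolved values.
    return np.isin(conv, (3, 12, 13)).astype(grid.dtype)

# A blinker oscillates between a horizontal and a vertical bar of three.
blinker = np.zeros((5, 5), dtype=np.uint8)
blinker[2, 1:4] = 1
after = conv_life_step(blinker)
```

Swap the hand-designed kernel and lookup table for learned weights and a learned nonlinearity, and you have the basic ingredients of a neural CA.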

Neural CA have also been used to grow apartment buildings and machines in the video game Minecraft, some of which spring into motion when the game is started. The decentralized decision making and self-repair of neural CA are traits that we would very much like to impart to real-world machines, but what kind of machine can support the sort of decentralized computation that neural CA run on?

Full Circle

The first answer brings us back to where we started, from inspiration through simulation to application. If we can fully understand biological development and regeneration in simulation using differentiable neural cellular automata, these fields could become engineering disciplines rather than purely experimental ones. With currently available knowledge and pharmacological and optogenetic tools, Mike Levin at Tufts University and others have managed to grow flatworms with two heads, induce frogs to grow eyeballs and limbs in strange places, and cause implanted tumors to revert to behaving as normal (frog) tissue. A nuanced engineering approach to designing perturbations using neural CA should provide more fine-grained control for experiments in biological morphogenesis and regeneration. In turn, insights from biological experiments can inform better CA models, and so on, in a virtuous biomimetic cycle.

Planarian (flatworm) photo by Wikipedia contributor Eduard Solà used under CC BY SA 3.0


Welcome to the Macromoltek blog! We're an Austin-based biotech firm focused on using computers to further the discovery and design of antibodies.
