One interesting feature of the laws of physics is that many of them emerge from the bulk behavior of much smaller components. For example, atoms and molecules in a gas move over a wide range of velocities. When confined within a container, these particles continually collide with its walls, exerting forces on them.
However, it is not necessary to know the velocities of all particles to determine this force. Instead, their effects are averaged into a predictable and measurable bulk property called pressure.
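To make the averaging concrete, here is a minimal numerical sketch in Python, with illustrative numbers (the speed distribution below is a crude stand-in for the real Maxwell-Boltzmann distribution): each simulated particle has its own speed, yet the standard kinetic-theory formula P = N m ⟨v²⟩ / (3V) yields a single stable bulk pressure.

```python
import random

# Kinetic theory in miniature: individual particle speeds vary widely,
# but the pressure they produce is a stable average.
# Standard ideal-gas result: P = N * m * <v^2> / (3 * V).
N = 100_000          # number of particles (illustrative)
m = 6.6e-27          # mass of one helium atom, kg
V = 1.0e-3           # container volume, m^3 (one litre)

# Stand-in speed distribution, roughly thermal helium at room temperature.
speeds = [random.gauss(1300.0, 300.0) for _ in range(N)]
mean_sq = sum(v * v for v in speeds) / N

pressure = N * m * mean_sq / (3.0 * V)
print(f"bulk pressure ~ {pressure:.3e} Pa")   # stable run to run, despite the spread
```

Rerunning the sketch draws a completely different set of individual velocities, but the printed pressure barely moves; that stability is exactly what makes a bulk property useful.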
This and other bulk properties, such as temperature, density, and elasticity, are hugely useful because of the physical laws that govern them. More than 100 years ago, physicists such as Willard Gibbs worked out the mathematical structure of these laws, and physicists and engineers now routinely exploit them in everything from laboratory experiments to large-scale industrial processes. The discipline became known as statistical mechanics.
The success of so-called statistical physics raises the possibility that other systems composed of vast numbers of similar entities might also have their own “laws of physics.” In particular, physicists have long suspected that the collective behavior of neurons might be well suited to this kind of approach.
Neurophysics
The behavior of single neurons is well understood. But put them together into networks, and far richer behaviors emerge, such as sensory perception, memory, and thought. The hope is that statistical approaches to these systems will reveal neurophysical laws that explain the collective behavior of the nervous system and brain.
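To give a sense of how well characterized single neurons are, the sketch below simulates a leaky integrate-and-fire neuron, the textbook caricature of single-neuron dynamics (all parameters here are illustrative choices, not measurements):

```python
# Leaky integrate-and-fire neuron: a standard textbook model of
# single-neuron behavior (parameters are illustrative only).
tau_m   = 20.0    # membrane time constant, ms
v_rest  = -70.0   # resting potential, mV
v_th    = -54.0   # spike threshold, mV
v_reset = -80.0   # post-spike reset, mV
dt      = 0.1     # integration step, ms
i_ext   = 18.0    # constant input drive, in mV-equivalent units

v, spikes = v_rest, []
for step in range(int(200 / dt)):          # simulate 200 ms
    # dV/dt = (v_rest - V + I) / tau_m, integrated with forward Euler
    v += dt * (v_rest - v + i_ext) / tau_m
    if v >= v_th:                          # threshold crossed: emit a spike
        spikes.append(step * dt)
        v = v_reset
print(f"{len(spikes)} spikes in 200 ms at (ms): {[round(t, 1) for t in spikes]}")
```

Models like this capture a single cell remarkably well; the hard problem begins when billions of them are wired together.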
“Providing statistical-mechanics descriptions of these and other emergent phenomena of life is an old dream of physics,” say Leenoy Meshulam of the University of Washington and William Bialek of Princeton University, who have reviewed advances in the field.
“These aspirations appear in a new light because of developments in our ability to measure the brain’s electrical activity, sampling thousands of individual neurons simultaneously over hours or days.”
Of course, the nature of these laws is fundamentally different from that of traditional statistical physics. The core difference is that neurons link together into complex networks, so the behavior of one neuron can be closely correlated with the behavior of its neighbors.
It is relatively easy to formulate a set of equations that capture this behavior. However, it soon becomes clear that these equations are not easily solvable except in trivial situations.
Instead, physicists must consider all possible correlations between pairs of neurons and use experimental evidence to constrain what correlations are possible.
The problem, of course, is that the number of pairs grows with the square of the number of neurons, while the number of possible activity patterns grows exponentially. So, as recordings scale up, the question arises: how much data must be collected to constrain the model?
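The workhorse for this program is the pairwise maximum-entropy (Ising) model: the least-structured probability distribution over binary spike patterns that reproduces the measured firing rates and pairwise correlations. Below is a minimal sketch using synthetic data and a network small enough (N = 8) to enumerate all 2^N states exactly; real recordings require cleverer sampling methods.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Toy "recorded" spike data: T time bins for N neurons (0 = silent, 1 = spike).
N, T = 8, 5000
data = (rng.random((T, N)) < 0.15).astype(float)
data[:, 1] = np.maximum(data[:, 1], data[:, 0])   # inject one strong correlation

# Target statistics the model must reproduce.
m_data = data.mean(axis=0)        # mean activities <s_i>
C_data = data.T @ data / T        # pairwise moments <s_i s_j>

# Pairwise maximum-entropy model: P(s) proportional to exp(h.s + s.J.s / 2).
# N is small, so we can enumerate all 2^N = 256 states exactly.
states = np.array(list(itertools.product([0, 1], repeat=N)), dtype=float)
h = np.zeros(N)
J = np.zeros((N, N))

for it in range(2000):
    E = states @ h + 0.5 * np.einsum('ti,ij,tj->t', states, J, states)
    p = np.exp(E - E.max())
    p /= p.sum()
    m_model = p @ states                           # model <s_i>
    C_model = states.T @ (states * p[:, None])     # model <s_i s_j>
    # Gradient ascent on the log-likelihood: nudge parameters to match moments.
    h += 0.1 * (m_data - m_model)
    J += 0.1 * (C_data - C_model)
    np.fill_diagonal(J, 0.0)

print("max moment mismatch:", np.abs(C_data - C_model).max())
```

The number of couplings J_ij being fit is N(N−1)/2, which is why the data requirements grow so quickly with network size.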
One standard system in which this question is explored is the retina. It consists of a network of light-sensitive neurons whose activity is known to be correlated between neighbors: when one neuron fires, there is a high probability that the neuron next to it will fire too. (This correlation is the reason for the gently evolving, coral-like patterns that people sometimes notice in their vision when they first wake up.)
Experiments in this field began by monitoring the behavior of a handful of neurons, then tens, then hundreds, and are now approaching thousands (though not yet millions). These data turn out to constrain the models well enough that they can predict the behavior of neurons with surprising accuracy, for example when asked to predict how many neurons in a given set will be active at the same time.
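One such test is the distribution P(K) of the number of neurons active in the same instant. The sketch below, run on synthetic data rather than real recordings, shows why this is a sharp test: a shared fluctuating drive makes the observed P(K) much broader than the prediction for independent neurons with the same average firing rate.

```python
from math import comb
import numpy as np

rng = np.random.default_rng(1)
N, T = 100, 20_000

# Synthetic "recording": a shared, slowly varying drive correlates the neurons.
gain = 0.05 + 0.10 * rng.random(T)            # common moment-to-moment excitability
spikes = rng.random((T, N)) < gain[:, None]   # correlated binary activity

K = spikes.sum(axis=1)                        # neurons active in each time bin
p_obs = np.bincount(K, minlength=N + 1) / T   # observed P(K)

# Prediction if neurons were independent: binomial with the same mean rate.
rate = spikes.mean()
p_ind = [comb(N, k) * rate**k * (1 - rate) ** (N - k) for k in range(N + 1)]

for k in (0, 5, 10, 15, 20):
    print(f"K={k:2d}   observed {p_obs[k]:.4f}   independent {p_ind[k]:.4f}")
```

A model that matches the measured P(K) across this whole range is capturing genuinely collective structure, not just average firing rates.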
This suggests that the system of equations accurately captures the behavior of the retinal network. In other words, “the model is actually a solution to the mathematical problem we are trying to solve,” Meshulam and Bialek say.
Of course, the retina is a highly specialized part of the nervous system, so a key question is whether similar techniques can be generalized to advanced cognitive tasks performed in other parts of the brain.
Emergent behavior
One challenge here is that networks can exhibit emergent behavior that is not the result of weak, random correlations. Instead, correlations can become very strong, and activity can spread through the network like an avalanche.
Networks that exhibit this property are said to be in a critical state, and they must be connected in special ways to make such behavior possible. Criticality turns out to be common in nature, suggesting that networks can tune themselves in special ways to achieve it.
This “self-organized criticality” has been studied extensively over the past two decades, with some success in describing it mathematically. Exactly how the self-tuning works, however, remains the focus of much ongoing research.
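The simplest mathematical caricature of such avalanches is a critical branching process, in which each active unit triggers, on average, exactly one successor. The sketch below (illustrative, not a model of real cortex) shows that at this critical point avalanche sizes follow a heavy-tailed, roughly power-law distribution, rather than dying out quickly or exploding.

```python
import random
from collections import Counter

def avalanche_size(p=0.5, cap=100_000):
    """Total activity triggered by one seed when each active unit has
    two potential descendants, each firing with probability p.
    Mean offspring = 2p, so p = 0.5 sits exactly at criticality."""
    active, size = 1, 0
    while active and size < cap:
        size += active
        active = sum(1 for _ in range(2 * active) if random.random() < p)
    return size

sizes = Counter(avalanche_size() for _ in range(20_000))
for s in (1, 2, 4, 8, 16, 32, 64):
    print(f"P(size = {s:3d}) ~ {sizes[s] / 20_000:.4f}")
```

Tuning p away from 0.5 in either direction destroys the power law, which is why a network's apparent self-tuning to the critical point demands explanation.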
It is not yet clear how powerful these approaches will be. Meshulam and Bialek are mindful of the observation that some natural phenomena are amenable to the kind of analysis that physicists excel at. “All the birds in a flock agreeing to fly in the same direction is similar to the alignment of spins in a magnet,” they say.
The fact that this is just a metaphor concerns them. Although metaphors can aid understanding, the actual behavior of these systems is often much more complex and subtle.
However, there are reasons to think the mathematical models can go further. “The explosion of data on networks of real neurons presents an opportunity to move beyond metaphor,” they say, adding that data from millions of neurons should soon be able to inform this debate.
“Our experimentalist friends will continue to push the frontier, combining the tools of physics and biology to access more of the brain in this way,” Meshulam and Bialek conclude. “The prospects for the theory are bright.”
Reference: Leenoy Meshulam and William Bialek, “Statistical mechanics for networks of real neurons”: arxiv.org/abs/2409.00412