Cambridge scientists have shown that placing physical constraints on an artificially intelligent system – in much the same way that the human brain has to develop and operate within physical and biological constraints – allows it to develop features of the brains of complex organisms in order to solve tasks.
Neural systems, like the brain, must balance competing demands as they organize themselves and make connections. For example, energy and resources are needed to grow and sustain the network in physical space, while at the same time the network must be optimized for information processing. These trade-offs shape all brains, within and across species, which may help explain why many brains converge on similar organizational solutions.
Jascha Achterberg, a Gates Scholar at the Medical Research Council Cognition and Brain Sciences Unit (MRC CBSU) at the University of Cambridge, said: “The brain is not only great at solving complex problems, it also does so while using very little energy. Our new work shows that considering the brain’s problem-solving abilities alongside its goal of spending as few resources as possible can help us understand why brains look the way they do.”
“This stems from a broad principle, which is that biological systems generally evolve to make the most of the energetic resources available to them. The solutions they arrive at are often very elegant and reflect the trade-offs between the various forces imposed on them.”
Dr Danyal Akarca, co-lead author, MRC CBSU
In a study published today in Nature Machine Intelligence, Achterberg, Akarca and colleagues developed an artificial system intended to model a very simplified version of the brain, and applied physical constraints to it. They found that their system went on to develop certain key features and tactics similar to those of the human brain.
Instead of real neurons, the system uses computational nodes. Neurons and nodes are similar in function: each takes an input, transforms it and produces an output, and a single node or neuron can connect to multiple others, all of which supply information to be computed.
In their system, however, the researchers applied a ‘physical’ constraint. Each node was assigned a specific location in a virtual space, and the further apart two nodes were, the more difficult it was for them to communicate. This is similar to how neurons in the human brain are organized.
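The constraint described above can be sketched in a few lines of NumPy. This is an illustrative toy, not the authors' code: the published model embeds its units in a three-dimensional space, whereas this sketch uses two dimensions, and the node count and cost function here are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

n_nodes = 20
# Assign each node a location in a 2D virtual space (the paper uses 3D).
positions = rng.uniform(0.0, 1.0, size=(n_nodes, 2))

# Pairwise Euclidean distances between all nodes.
diffs = positions[:, None, :] - positions[None, :, :]
distances = np.sqrt((diffs ** 2).sum(axis=-1))

# One simple way to express the constraint: communicating between two
# nodes becomes more "expensive" the further apart they sit.
wiring_cost = distances  # cost of maintaining a connection i -> j
print(wiring_cost.shape)  # (20, 20); zero on the diagonal
```

Because the cost matrix is symmetric and zero on the diagonal, a connection is only penalized for spanning distance, not for existing near the node itself.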
The researchers gave the system a simple task to complete – in this case a simplified version of a maze navigation task commonly given to animals such as rats and macaques when studying their brains, in which it has to combine multiple pieces of information to decide on the shortest route to the end point.
One of the reasons the team chose this particular task is that, in order to complete it, the system must hold a number of pieces of information in mind – the start location, the end location and the intermediate steps – and once the system had learned to do the task reliably, the researchers could observe, at different moments in a trial, which nodes were important. For example, one particular cluster of nodes may encode the finish locations, while others encode the available routes, making it possible to track which nodes are active at different stages of the task.
Initially, the system does not know how to complete the task and makes mistakes. But when it is given feedback it gradually learns to get better at the task. It learns by changing the strength of connections between its nodes, just as we learn when the strength of connections between brain cells changes. Then the system repeats the task over and over until it finally learns to perform it correctly.
In their system, however, the physical constraint meant that the further apart two nodes were, the more difficult it was to build a connection between them in response to this feedback. In the human brain, too, forming and maintaining connections that span a large physical distance is costly.
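A common way to realize this kind of constraint during learning is to add a wiring penalty to the training loss, so that feedback-driven weight updates shrink long-range connections hardest. Below is a minimal sketch assuming a distance-weighted L1 penalty; the exact regularizer used in the paper may differ, and the weights, distances, and penalty strength here are made up.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20
positions = rng.uniform(size=(n, 2))
d = np.linalg.norm(positions[:, None] - positions[None, :], axis=-1)

W = rng.normal(scale=0.1, size=(n, n))  # connection weights (illustrative)
lam = 0.05                              # penalty strength (illustrative)

# Spatially weighted L1 penalty: long connections cost more to keep.
def wiring_penalty(W, d, lam):
    return lam * np.sum(d * np.abs(W))

# Its (sub)gradient shrinks distant connections hardest: each learning
# step subtracts lam * d_ij * sign(w_ij) on top of the task gradient.
grad_penalty = lam * d * np.sign(W)
```

Under such a penalty, a connection between far-apart nodes must earn a large task-performance benefit to survive training, which is the trade-off described above.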
When the system was asked to perform the task under these constraints, it used some of the same tricks found in real human brains. For example, to get around the constraint, the artificial system began to develop hubs – highly connected nodes that act as conduits for passing information across the network.
What was even more surprising was that the response profiles of individual nodes themselves began to change: in other words, instead of having a system where each node codes for one specific feature of the maze task, such as the goal location or the next choice, the nodes developed a flexible coding scheme. This means that at different moments a node might be firing for a mixture of features of the maze. For example, the same node may be able to encode multiple locations in a maze, rather than a dedicated node being required for each specific location. This is another feature seen in the brains of complex organisms.
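Mixed selectivity of this kind can be quantified by regressing a node's activity onto several task variables at once and checking whether more than one coefficient is clearly non-zero. The sketch below uses synthetic data; the features, coefficients, and noise level are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
n_trials = 200
goal = rng.integers(0, 4, n_trials)    # goal location on each trial
choice = rng.integers(0, 4, n_trials)  # route choice on each trial

# Simulate one unit whose activity depends on BOTH task variables
# (mixed selectivity), plus noise; coefficients are made up.
activity = 0.8 * goal + 0.5 * choice + rng.normal(scale=0.1, size=n_trials)

# Fit activity ~ goal + choice + intercept by least squares.
X = np.column_stack([goal, choice, np.ones(n_trials)])
beta, *_ = np.linalg.lstsq(X, activity, rcond=None)
b_goal, b_choice, _ = beta
# Both coefficients come out clearly non-zero: the unit codes a mixture
# of task features rather than one dedicated variable.
```

A purely specialized node would instead show one large coefficient and the rest near zero.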
Co-author Professor Duncan Astle, from Cambridge’s Department of Psychiatry, said: “This simple constraint – it’s harder to wire nodes that are far apart – forces the artificial system to develop some quite complex properties. Interestingly, these are properties shared by biological systems such as the human brain. I think that tells us something fundamental about why our brains are organized the way they are.”
Understanding the human brain
The team now hope that their AI system could shed light on how such constraints shape differences between people’s brains, and contribute to the differences seen in people who experience cognitive or mental health difficulties.
Co-author Professor John Duncan of MRC CBSU said: “This artificial brain gives us a way to make sense of rich and confusing data when we record the activity of real neurons in real brains.”
Achterberg added: “Artificial ‘brains’ allow us to ask questions that would be impossible to examine in real biological systems. We can train the system to perform tasks and then experimentally play with the constraints we impose, to see if it begins to look more like the brains of particular individuals.”
Implications for designing future AI systems
The results may also be of interest to the AI community, where they could enable the development of more efficient systems, particularly in situations where physical constraints are likely to apply.
Dr Akarca said: “AI researchers are constantly trying to work out how to make complex neural systems that can encode and perform tasks in a flexible way that is efficient. To achieve this, we think that neurobiology will give us a lot of inspiration. For example, the overall wiring cost of the system we created is much lower than you would find in a typical AI system.”
Many modern AI solutions use architectures that only superficially resemble the brain. The researchers say their work shows that the type of problem AI is solving will influence which architecture is most powerful to use.
Achterberg says: “If you want to build an artificially intelligent system that solves human-like problems, ultimately the system will end up looking much closer to an actual brain than to a system running on a large compute cluster that specializes in tasks very different from those performed by humans. The architecture and structure we see in our artificial ‘brain’ are there because they are beneficial for handling the specific brain-like challenges it faces.”
This means that robots that have to process a large amount of constantly changing information with limited energy resources could benefit from brain structures not dissimilar to ours.
Achterberg added: “The brains of robots deployed in the real physical world are probably going to look more like our brains, because they face the same challenges as us. They have to constantly process new information coming in through their sensors while controlling their bodies to move through space toward a goal. Many systems will need to run all their computations with a limited supply of electrical energy, so to balance these energetic constraints with the amount of information that needs to be processed, they will probably need brain structures similar to ours.”
The research was funded by the Medical Research Council, Gates Cambridge, James S. McDonnell Foundation, Templeton World Charity Foundation and Google DeepMind.
Journal Reference:
Achterberg, J., et al. (2023). Spatially embedded recurrent neural networks reveal widespread links between structural and functional neuroscience findings. Nature Machine Intelligence. doi.org/10.1038/s42256-023-00748-9.