The worlds of artificial intelligence and neuroscience are increasingly intertwined. AI, initially drawing inspiration from our understanding of the brain, is now providing powerful new tools to unravel the complexities of human cognition.
This creates a fascinating feedback loop: insights from neuroscience can inspire more sophisticated AI, while advanced AI techniques help us decode the brain’s mysteries. At the heart of this convergence is the work of researchers like Dr. Rahul Biswas, a postdoctoral scholar at the University of California, San Francisco (UCSF), who is exploring this dynamic interface.
In an interview with TechTalks, he explained how his research explores the mechanisms of communication between different brain regions, simulates brain activity using generative AI, and ultimately seeks to inform both our understanding of ourselves and the development of more human-like AI.
Charting the brain’s inner workings
Dr. Biswas’s journey into this field began with a foundation in statistical modeling. “It started when it was still just statistical modeling, from around the year 2012-2013,” he said in an interview with TechTalks. “I have been working in that area and currently it is progressing into AI and neuroscience.”
This progression led him to focus his research on a particularly challenging area: inferring causal brain networks. “Mostly during my PhD, I focused on inferring causal brain networks, so causal brain network modeling from neural signals,” Dr. Biswas notes. This early work laid the groundwork for his current explorations at the intersection of AI and neuroscience.
Understanding how the brain works often involves observing which areas become active during certain tasks. For a long time, researchers could see correlations, i.e., different brain regions lighting up together. However, correlation doesn’t tell the whole story.
The challenge, as Dr. Biswas puts it, was moving beyond these observations: “We could understand how different brain regions kind of act together—they’re correlated—but we were not able to extract causation out of it. Meaning, does brain region A influence brain region B when the person is seeing something?”
Answering this question of “who influences whom” is crucial. Is region A sending a signal to region B, or are both responding to a third, unseen factor?
Dr. Biswas’s research focuses on developing methods to move beyond simple associations to map this directed influence, or causal flow. “My main thesis was to discern those differences and be able to tell how the signal exactly flows between brain regions,” he explains. This ability to map causal networks has significant implications.
For instance, understanding how brain network dynamics change in conditions like Alzheimer’s, Parkinson’s, chronic pain, or depression could lead to earlier and more accurate diagnoses. “Knowing those changes in brain network will eventually help guide more targeted interventions, and early diagnostics for those brain diseases,” Dr. Biswas says. “Those causal networks can predict diseases well in advance; they’re like good biomarkers for diseases.”
This deeper understanding can pave the way for personalized treatments, perhaps even cognitive exercises designed to strengthen or dampen specific brain interactions.
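The notion of directed influence between regions can be illustrated with a Granger-style test, one common statistical proxy for causal flow (this is a toy sketch on synthetic signals, not Dr. Biswas's specific method): if region A's past activity improves the prediction of region B's future beyond B's own history, we score an A-to-B influence.

```python
import numpy as np

def granger_influence(a, b, lag=1):
    """Score whether past values of signal `a` help predict signal `b`
    beyond b's own history (a Granger-style proxy for directed influence)."""
    y = b[lag:]

    def residual_var(X):
        # Ordinary least squares with an intercept; return residual variance
        design = np.column_stack([np.ones(len(y)), X])
        coef, *_ = np.linalg.lstsq(design, y, rcond=None)
        return np.var(y - design @ coef)

    var_restricted = residual_var(np.column_stack([b[:-lag]]))          # b's own past
    var_full = residual_var(np.column_stack([b[:-lag], a[:-lag]]))      # plus a's past
    # Positive score: a's past reduces the prediction error for b
    return np.log(var_restricted / var_full)

# Synthetic example: region A drives region B with a one-step delay
rng = np.random.default_rng(0)
a = rng.standard_normal(1000)
b = 0.8 * np.roll(a, 1) + 0.2 * rng.standard_normal(1000)

print(granger_influence(a, b) > granger_influence(b, a))  # True: A -> B
```

Because the driver signal here is constructed, the asymmetry comes out cleanly; with real neural recordings, confounding by a third region is exactly the hard problem causal network modeling has to address.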
Generative AI and “foundation models for the brain”
Beyond understanding existing networks, Dr. Biswas is also working on generative AI algorithms that can simulate parts of the brain, which he describes as "a foundation model for the brain," or a ChatGPT for the brain. To build these foundation models, Dr. Biswas's team explores various advanced AI architectures, including diffusion models, transformer models, and vision-language models (VLMs), similar to those powering leading-edge AI.
Just as large language models are trained on vast amounts of text to understand and generate language, these brain foundation models would be trained on rich, multimodal neural data. This includes “the stimulus, the reaction, and the brain signals,” Dr. Biswas explains, forming a three-pronged approach to capturing brain dynamics.
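That three-pronged structure can be sketched as a single training record (the field names and shapes below are illustrative assumptions, not a real dataset schema):

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class BrainRecord:
    """One hypothetical training example for a brain foundation model,
    following the three-pronged structure: stimulus, brain signal, reaction."""
    stimulus: np.ndarray      # e.g. pixels of the image shown to the subject
    brain_signal: np.ndarray  # e.g. spike counts per neuron per time bin
    reaction: np.ndarray      # e.g. behavioral output such as a button press

# A toy record: a 16x16 stimulus, 100 neurons over 50 time bins, a 2D reaction
record = BrainRecord(
    stimulus=np.zeros((16, 16)),
    brain_signal=np.zeros((100, 50)),
    reaction=np.zeros(2),
)
```

A model trained on many such records could then be asked, LLM-style, to generate the likely brain signal and reaction given only a stimulus.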
The appeal of such in silico (computer-simulated) neuroscience is clear. “Then you don’t need to basically do live experiments so much,” Dr. Biswas notes, pointing out that “experimentation is very expensive, not only in terms of finances, but also in terms of resources.” These models could allow researchers to query a simulated brain, run virtual experiments, and rapidly test hypotheses.
This ambitious endeavor is fueled by a “data explosion” in neuroscience. Dr. Biswas highlights the impact of technologies like Neuropixels, state-of-the-art silicon neural probes that have revolutionized electrophysiology by enabling simultaneous recording of electrical activity from thousands of individual neurons across multiple brain regions.
Coupled with a growing culture of open data sharing, researchers have access to unprecedented amounts of neural data. The Allen Institute, for instance, spearheads this open science approach by providing large-scale, high-quality datasets that are crucial for this type of research.

An example is MICrONS (Machine Intelligence from Cortical Networks), a multi-modal dataset that provides a functional wiring diagram of a cubic millimeter of mouse visual cortex. This massive dataset combines high-resolution electron microscopy (EM) reconstructions of neural circuits, revealing the morphology and connectivity of over 200,000 cells and 500 million synapses, with corresponding functional imaging data from tens of thousands of those same neurons as the mouse responded to visual stimuli. Such rich, multimodal datasets are invaluable not only for training the complex foundation models Dr. Biswas is developing but also for informing their architecture with real anatomical data. Dr. Biswas's lab works with these public resources and also collaborates to collect their own.
Interestingly, the known physical structure of the brain can also guide the development of these AI models. “We’re taking the structural information that has been recorded through electron microscopy of the brain and adding it as a constraint into the AI models so that they respect this structural information or anatomical connectivity information between the neurons,” Dr. Biswas says.
This involves modifying the AI's architecture to mirror, to some extent, how real neurons are connected, potentially making the models more biologically plausible and efficient. These models currently range from 10 million to 500 million parameters, with explorations into billion-parameter models underway.
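One simple way such a structural constraint can work is as an elementwise mask: a hypothetical binary connectivity matrix derived from electron microscopy zeroes out any learnable weight between neurons that have no anatomical connection (all values below are made up for illustration).

```python
import numpy as np

n_neurons = 4

# Hypothetical anatomical wiring diagram: entry (i, j) = 1 if EM data shows
# neuron j synapses onto neuron i, else 0.
connectivity = np.array([
    [0, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [0, 0, 1, 0],
], dtype=float)

rng = np.random.default_rng(42)
weights = rng.standard_normal((n_neurons, n_neurons))  # learnable parameters

# Enforce the structural constraint: the model can only express influence
# along connections that actually exist in the anatomy.
constrained = weights * connectivity

assert np.all(constrained[connectivity == 0] == 0)
```

In a real network the same mask would be reapplied after every gradient update (or baked into a masked layer), so training never violates the recorded anatomy.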
Decoding experiences and the power of causal insight
One specific application of generative AI in Dr. Biswas's work is "visual experience reconstruction" from functional magnetic resonance imaging (fMRI) signals. The goal is to look at brain activity and reconstruct the visual input that generated it. This technology could aid individuals with visual or communication disabilities.
While powerful, predictive AI models, including these foundation models for the brain, have inherent limitations. They are, as Dr. Biswas points out, “dependent on the training data.” If a model hasn’t encountered a specific scenario during training, it may struggle to predict outcomes accurately for that new situation. This is where causal models retain a distinct advantage.
Dr. Biswas illustrates a counterfactual—a “what if” question about an unobserved scenario, such as what would have happened if a patient had taken medication A instead of medication B. “Causal models are typically good at answering counterfactual questions,” he states, “even when that specific scenario was not within the distribution of data that the model was trained on.” For this reason, he sees both approaches as complementary: “It will always be that way, those two going hand-in-hand so that we have both sides with us: the predictive model, and the causal model.”
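The medication example can be made concrete with a toy structural causal model following the standard abduction-action-prediction recipe (all numbers and effect sizes here are hypothetical, invented purely for illustration): from the observed outcome under medication B, infer the patient-specific noise term, then replay the model under medication A.

```python
# Toy structural causal model: recovery = treatment_effect + patient noise.
EFFECT = {"A": 0.9, "B": 0.5}  # hypothetical treatment effects

def outcome(medication, noise):
    return EFFECT[medication] + noise

# Observation: the patient took medication B and scored 0.7 on recovery.
observed = 0.7

# Abduction: infer the patient-specific noise consistent with the observation.
noise = observed - EFFECT["B"]

# Action + prediction: replay the same patient under medication A instead.
counterfactual = outcome("A", noise)
print(round(counterfactual, 2))  # 1.1
```

A purely predictive model trained only on patients who took B has no basis for this query, whereas the causal model answers it by holding the patient-specific factors fixed while changing only the treatment.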
Towards human-like AI and deeper understanding of ourselves
Ultimately, the insights gained from studying the brain can profoundly influence the trajectory of AI development. “Firstly, if we understand the brain better… we can have AI that is more human-like and mimics the brain,” Dr. Biswas suggests.
Beyond achieving high performance on benchmarks, the goal is to create AI with the flexibility, adaptability, and contextual understanding characteristic of human intelligence. Such AI could more seamlessly integrate into complex, real-world scenarios where human-like reasoning and predictability are paramount.
Looking forward, Dr. Biswas is also keen to apply these advanced tools to explore the more nuanced aspects of human experience, such as different cognitive states during activities like meditation. With today’s sophisticated causal and AI models, “we are able to get a richer picture of what goes on in the brain,” he says.
By dissecting the causal mechanisms of brain function and leveraging generative AI to simulate its complexities, scientists are not only unlocking the secrets of human intelligence but also charting a course for artificial intelligence that is more robust, adaptable, and perhaps, more aligned with our own cognitive abilities. The journey promises profound discoveries about the mind and the machines we endeavor to build in its likeness.