The 2022 publication of Marek and colleagues’ paper on brain-wide association studies (BWAS) ignited one of the most intense methodological debates the neuroimaging field has seen in years.1 Its central message, that many reported associations between functional connectivity and behavioural measures are unstable or inflated when tested in large samples, was not new.2,3 But its forceful demonstration, paired with striking visualizations and clear recommendations, resonated far beyond its original scientific context.
Within weeks, the paper was discussed across scientific social media, neuroscience conferences, and journal clubs, and it spilled into mainstream media coverage. The dominant narrative quickly narrowed to a reductive takeaway: small-sample BWAS are essentially doomed, and functional magnetic resonance imaging (fMRI)-based markers of cognition or mental health are unreliable at best and invalid at worst.
It was a moment that revealed as much about the sociology of science as about neuroimaging methodology. Healthy methodological critique is central to scientific progress. But the pace and amplification of reaction, where subtleties were lost in 280-character summaries, distorted the conversation. Instead of prompting reflection on how to design better studies, the discourse often tipped into sweeping dismissals of an entire research program.
Against this backdrop, a new paper by Ooi et al.4 arrives as a welcome rebalancing force. Published in Nature, the study offers a careful, empirically grounded demonstration of how to improve the reliability and predictive utility of BWAS. Their central message is straightforward but consequential: longer fMRI acquisitions substantially stabilize functional connectivity estimates, boost behavioural prediction accuracy, and ultimately reduce the effective cost per reliable association.
The strength of the work lies in its breadth. The authors validate their findings across multiple cohorts, including ADNI, UK Biobank, and HCP, which span different populations, scanners, and acquisition protocols.5–7 They examine both resting-state and task fMRI. They quantify improvements with increasing scan length, showing diminishing returns but clear gains up to (and beyond) 20–30 minutes.8,9
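The intuition behind this scan-length effect can be sketched with a toy simulation (this is not the authors' analysis, and all parameters below are illustrative): if each subject's observed connectivity value is modelled as a stable trait plus measurement noise that averages out over T timepoints, split-half reliability rises with scan length and shows diminishing returns.

```python
import numpy as np

rng = np.random.default_rng(0)

def splithalf_reliability(n_subjects, n_timepoints,
                          sigma_trait=1.0, sigma_noise=10.0):
    """Toy model: observed connectivity = stable trait + measurement noise
    whose variance shrinks as 1/T; reliability = correlation of two halves."""
    trait = rng.normal(0.0, sigma_trait, n_subjects)
    noise_sd = sigma_noise / np.sqrt(n_timepoints / 2)  # noise per half-scan
    half_a = trait + rng.normal(0.0, noise_sd, n_subjects)
    half_b = trait + rng.normal(0.0, noise_sd, n_subjects)
    return np.corrcoef(half_a, half_b)[0, 1]

for T in (100, 400, 1600, 6400):  # timepoints, a stand-in for scan length
    r = np.mean([splithalf_reliability(500, T) for _ in range(20)])
    print(f"T={T:5d}  split-half r ~ {r:.2f}")  # rises, with diminishing returns
```

Under this simple trait-plus-noise assumption, reliability follows var(trait) / (var(trait) + var(noise)/T), which is exactly the "clear gains, then diminishing returns" shape reported for longer acquisitions.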
An important aspect of the original debate deserves renewed attention: multivariate models fared substantially better than univariate ones. In the original analyses, multivariate approaches (e.g., support vector regression and canonical correlation analysis) showed higher predictive accuracy and degraded more gracefully with decreasing sample size than single-edge associations. This reflects a deeper property: the cognitive and clinical phenomena BWAS aims to predict are distributed, and methods that model distributed patterns naturally capture more useful variance.10,11 The point becomes relevant again in the strongest results of the new work by Ooi and colleagues: the gains from longer scans are largest for multivariate models. This convergence between the Marek critique and the Ooi solution reinforces a central lesson. Multivariate models succeed not because they are more “sophisticated,” but because they better reflect the distributed biological organization of the system they aim to predict.12,13
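Why distributed signal favours multivariate models can be shown with a minimal numerical sketch (toy data, not any published analysis; sample sizes, edge counts, and the ridge penalty are arbitrary): when behaviour depends weakly on many "edges," the best single edge predicts poorly out of sample, while a penalized model pooling all edges does much better.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: behaviour depends weakly on many 'edges' (distributed signal).
n_train, n_test, n_edges = 400, 200, 300
w = rng.normal(0.0, 1.0, n_edges) / np.sqrt(n_edges)  # many small effects
X_train = rng.normal(size=(n_train, n_edges))
X_test = rng.normal(size=(n_test, n_edges))
y_train = X_train @ w + rng.normal(0.0, 1.0, n_train)
y_test = X_test @ w + rng.normal(0.0, 1.0, n_test)

# Univariate: pick the single edge most correlated with behaviour in training.
corrs = np.array([np.corrcoef(X_train[:, j], y_train)[0, 1]
                  for j in range(n_edges)])
best = np.argmax(np.abs(corrs))
uni_r = np.corrcoef(X_test[:, best] * np.sign(corrs[best]), y_test)[0, 1]

# Multivariate: ridge regression over all edges (closed form).
lam = 10.0
beta = np.linalg.solve(X_train.T @ X_train + lam * np.eye(n_edges),
                       X_train.T @ y_train)
multi_r = np.corrcoef(X_test @ beta, y_test)[0, 1]

print(f"best single edge r = {uni_r:.2f}; ridge over all edges r = {multi_r:.2f}")
```

The univariate pick also suffers selection noise (the training-best edge is partly best by chance), which mirrors the instability of single-edge BWAS effects in small samples.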
In contrast to the outsized reaction to the Marek et al. critique, the Ooi et al. solution has generated relatively little noise on social or traditional media. This should not be surprising. Solutions rarely go viral; critiques do. Solutions require attention to detail, careful engineering, and a willingness to embrace complexity. They do not lend themselves to sound bites.
Still, longer scans are not a panacea. They address one set of limitations exceptionally well, namely measurement noise, sampling variability, and unstable estimates. But they do not address another set of challenges that run deeper: the mismatch between what BWAS attempts to measure and what accumulating evidence suggests about brain organization.
Even with perfect measurement, BWAS will always be constrained by its conceptual framing. At their core, BWAS applications assume that functional connectivity reflects a relatively stable, trait-like property of the individual—much like height or blood pressure. The hope is that, by sampling connectivity over time and averaging, one can converge on a person’s “true” intrinsic connectivity profile.
This assumption simplifies analysis but conflicts with a large body of evidence that brain function is inherently dynamic. Functional connectivity is not a fixed trait; it is an emergent property of ongoing, metastable neural activity.14–17 The brain moves through a landscape of states, some transient, some stable, some nested within broader cycles. These transitions depend on internal dynamics, task context, neuromodulatory tone, and spontaneous fluctuations that are not noise but structure.18,19
A particularly clear articulation of this view comes from work by Shine and colleagues, who show that large-scale cortical integration and segregation fluctuate over time, shaping cognitive performance via changes in neuromodulatory state and manifold structure.20,21 From this perspective, the connectome per se is not a trait, but rather a pattern generated by trajectories through a dynamical landscape. Complementary work from Sagar linked behavioural differences to individual topologies that encapsulate network dynamics.22
Longer scans improve our measurement of these states. They increase the reliability of connectivity estimates within a given dynamical regime.23 But they do not tell us how many regimes were visited, how long the system spent in each, or whether individuals differ more in their transitions than in their stationary configurations. This is not a critique of Ooi et al., whose goal was not to provide a theory of brain dynamics; it is a reminder that statistical reliability is not equivalent to mechanistic understanding.
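What a state-aware summary adds can be illustrated with a toy sketch (simulated data; the "integrated"/"segregated" labels and all parameters are hypothetical): sliding-window connectivity patterns are clustered into recurring states, from which occupancy and transition counts can be read off, quantities that a single static average discards.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy two-regime signal: 6 regions alternating every 120 samples between
# an "integrated" regime (one shared driver couples all regions) and a
# "segregated" regime (two independent triads). Labels are hypothetical.
n_regions, block = 6, 120
segments = []
for b in range(8):
    seg = rng.normal(size=(block, n_regions))
    if b % 2 == 0:  # integrated: a global signal couples all regions
        seg += 1.5 * rng.normal(size=(block, 1))
    else:           # segregated: each triad gets its own driver
        seg[:, :3] += 1.5 * rng.normal(size=(block, 1))
        seg[:, 3:] += 1.5 * rng.normal(size=(block, 1))
    segments.append(seg)
ts = np.vstack(segments)

# Sliding-window connectivity: upper-triangle correlations per window.
win, step = 40, 10
iu = np.triu_indices(n_regions, k=1)
feats = np.array([np.corrcoef(ts[s:s + win].T)[iu]
                  for s in range(0, len(ts) - win + 1, step)])

# Tiny 2-means clustering, initialized from one window in each regime.
centroids = feats[[0, 14]]  # window 14 lies inside a segregated block
for _ in range(25):
    d = ((feats[:, None, :] - centroids[None]) ** 2).sum(-1)
    labels = d.argmin(1)
    centroids = np.array([feats[labels == k].mean(0) for k in (0, 1)])

occupancy = np.bincount(labels, minlength=2) / len(labels)
n_transitions = int((np.diff(labels) != 0).sum())
print("state occupancy:", occupancy, "transitions:", n_transitions)
```

Two subjects could have identical time-averaged connectivity yet differ sharply in occupancy and transition counts; those are exactly the individual differences a static average cannot see.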
To put it differently: longer scans improve what BWAS measures, but do not resolve the question of what BWAS should be trying to measure. This distinction is important because association studies often implicitly conflate measurement stability with biological interpretation. A reliable measure is necessary for prediction, but reliability alone does not guarantee that the measure corresponds to a stable underlying mechanism.24 Two individuals can show equally reliable connectivity matrices while engaging fundamentally different dynamical patterns.25,26 Under these circumstances, increasing the precision of static estimates can only take us so far.
A dynamical systems perspective offers an alternative view. Instead of treating the functional connectome as a trait, we can think of it as a set of transient patterns generated by trajectories through a structured landscape.27,28 From this perspective, what matters is not only how much data we sample, but which states the brain occupies during sampling. Individual differences may arise more from differences in state transitions, flexibility, or accessibility of dynamical regimes than from differences in stationary connectivity profiles.29 Prediction models will succeed when the sampled regime aligns with the regime relevant to the behaviour and may fail otherwise, regardless of sample length.
This perspective does not diminish the value of longer scans. If anything, it strengthens their importance. Richer sampling enhances our ability to observe a broader portion of the dynamical landscape. It also suggests that the next step is not simply to extend acquisition time indefinitely. Rather, it is to design BWAS that acknowledge the multiscale, metastable, and context-dependent nature of brain function.
Adopting a dynamical systems perspective does not require a wholesale change in experimental practice. Many existing resting-state and task fMRI datasets already contain rich temporal information. The challenge lies less in data acquisition than in how these data are summarized and interpreted. Treating fMRI time series as samples from evolving state spaces, instead of collapsing them into static averages, allows existing datasets to be analyzed in ways that better align with the brain’s dynamical organization.
The path forward may involve hybrid approaches: combining longer acquisitions with computational models that estimate underlying dynamical regimes using generative models that link observed functional patterns to plausible mechanisms.25,30,31 These approaches move the field beyond the binary of “large samples vs. small samples,” toward a conception of brain-behaviour mapping that respects statistical rigour and biological reality.
The BWAS debate has already reshaped the field in important ways. It has encouraged transparency, sharper thinking about sample sizes, and a recognition of the pitfalls of small-N, high-dimensional inference.32 The new work by Ooi et al. represents exactly the kind of constructive response we should celebrate: empirically motivated, methodologically sound, and pragmatically oriented.
This debate also highlights an opportunity: to move beyond asking whether BWAS is “viable” and to reconsider what kinds of brain-behaviour relationships we aim to discover. If we continue to treat functional connectivity as a static trait, even perfect measurement will leave us short of the mechanistic insights needed to understand cognition or inform clinical interventions.
The future of BWAS will be strongest if it embraces both sides of this equation. We need the rigour exemplified by Ooi et al.—careful measurement, thoughtful design, and honest quantification of stability. We also need a conceptual shift that situates these measurements within a broader understanding of the brain as a dynamical system.
Longer scans are an important part of the solution. A richer theory of brain dynamics may be the other part. Together, they offer a path forward where prediction and explanation need not be at odds, and where methodological precision aligns naturally with biological insight.
Funding Sources
ARM reports funding from the Natural Sciences and Engineering Research Council of Canada (RGPIN-2024-05969) and Canadian Institutes of Health Research (PJT204049).
Conflicts of Interest
ARM declares no competing interests.
