Neuroscience, Computing, Performance, and Benchmarks: Why It Matters to Neuroscience How Fast We Can Compute
About this Research Topic
Over the past decades, computing has become an integral part of neuroscience. Novel methods and tools in computational neuroscience, together with advances in our computational capabilities, have allowed the study of increasingly complex models and questions. The confluence of our ability to simulate and the availability of better experimental data has recently given rise to a number of detailed models of brain tissue.
As recently reviewed, this has been possible because simulation tools (e.g., NEURON/CoreNEURON, NEST, LFPy) have matured into reliable research instruments that scientific groups can use for their respective questions. Importantly, by using these open-source, community-standard simulators, computational groups can focus on their scientific questions and leave the details of how the computations are performed to the community of simulator developers, exactly as it should be.
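To illustrate this division of labor, the sketch below builds and runs a toy network in NEST's Python interface (assuming NEST 3.x; the model choice and all parameter values are illustrative, not taken from any publication). The scientist states what to simulate; the simulator decides how.

```python
import nest  # NEST 3.x assumed; a community-standard spiking network simulator

nest.ResetKernel()

# Scientific content: a small population of integrate-and-fire neurons
# driven by Poisson noise. All parameter values are illustrative.
neurons = nest.Create("iaf_psc_alpha", 100)
noise = nest.Create("poisson_generator", params={"rate": 8000.0})
recorder = nest.Create("spike_recorder")

nest.Connect(noise, neurons, syn_spec={"weight": 10.0})
nest.Connect(neurons, recorder)

# How this is executed (threads, MPI ranks, event queues) is the
# simulator developers' concern, not the modeler's.
nest.Simulate(1000.0)  # biological time in ms
print(recorder.get("n_events"), "spikes recorded")
```

The same few lines run unchanged on a laptop or, with the appropriate launch configuration, on a supercomputer, which is precisely why the performance of such community tools matters to everyone who builds on them.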
However, two developments leave room for uncertainty. The first is that, despite the advances in neurosimulators, certain prevalent scientific questions remain challenging to address computationally. Among them are whole-brain models, which require a continued increase in computing resources and thus an expansion of our computational capabilities. Similarly, questions requiring long time scales, for example in plasticity studies but also in the extensive training runs of the emerging field of neuro-inspired machine learning, fundamentally challenge how quickly we can obtain answers, that is, how much biological time we can simulate in a day. Lastly, clinically relevant simulations of brain signals, for use cases such as surgery planning, will require delivering the computations in practical settings and with enough trajectories to give good confidence in the predictions.
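To make the "biological time per day" figure of merit concrete, here is a back-of-the-envelope sketch; the function name and all numbers are hypothetical placeholders, not measurements of any real simulator:

```python
# Hypothetical figure of merit: biological time simulated per day of compute.
# All numbers are made-up placeholders, not measurements.

def biological_time_per_day(t_bio_s: float, t_wall_s: float) -> float:
    """Return simulated biological seconds per day of wall-clock time."""
    real_time_factor = t_bio_s / t_wall_s  # > 1 means faster than real time
    return real_time_factor * 86_400.0     # 86,400 seconds in a day

# Example: suppose 10 s of biological time took one hour to simulate.
per_day = biological_time_per_day(t_bio_s=10.0, t_wall_s=3600.0)
print(f"{per_day:.0f} biological seconds per day (~{per_day / 60:.0f} minutes)")
```

At that illustrative rate of roughly four biological minutes per day, a plasticity protocol spanning a single day of biological time would occupy the machine for about a year, which is exactly the kind of gap between scientific question and computational capability that concerns us here.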
The second development is that our ability to cram ever more transistors into computer chips, and thereby to obtain exponential performance growth essentially for free (at least from the user's perspective), is slowing down and may halt in the next decade or become prohibitively expensive. While this “end of Moore’s law” has so far been mostly a computer-engineering challenge, it is increasingly becoming a problem for computational disciplines as well, including neuroscience.
This Research Topic aims to bring the importance of the interplay between what we can compute and which neuroscientific questions we can ask to the attention of the neuroscience community at large, rather than leaving it to be discussed only amongst neurosimulator developers and the computer science community. The goal is to give the neuroscience community an overview of which questions we can ask today, a glimpse of what it will take to ask novel questions in the years to come, and possibly to lay out which questions may remain out of reach until we find fundamentally new solutions.
To this end, we want to assemble original research papers on neurosimulation technology (software and hardware) that address simulator performance and how the community can readily benefit from it (e.g., by integrating the solutions into community tools).
Potential topics for contributions include, but are not limited to:
● Large-scale simulation of neuronal models
● Energy-efficient simulations for neuroscience
● Performance of neuronal simulators
● Understanding of computational cost and bottlenecks
● Strategies for benchmarking generic simulators (a minimal sketch follows this list)
● Frameworks and approaches for online and offline analysis of large-scale simulations
● Computing architectures for neurosimulation (e.g., neuromorphic, hybrid)
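As a minimal, simulator-agnostic illustration of such a benchmarking strategy, the sketch below sweeps a model-size parameter and reports wall-clock cost per unit of biological time; run_model is a hypothetical stand-in (with a dummy workload so the sketch runs on its own) for any simulator's build-and-simulate entry point:

```python
import time

def run_model(n_neurons: int, t_bio_ms: float) -> None:
    """Hypothetical stand-in for a simulator's build-and-simulate entry
    point. The dummy workload below only makes the sketch self-contained;
    replace it with, e.g., network construction plus nest.Simulate()."""
    sum(i * i for i in range(n_neurons * 100))

T_BIO_MS = 1000.0  # biological time per trial

for n in (1_000, 10_000, 100_000):  # sweep over model size
    start = time.perf_counter()
    run_model(n, T_BIO_MS)
    wall_s = time.perf_counter() - start
    # Wall-clock seconds per biological second: a simple,
    # simulator-agnostic cost metric.
    print(f"N={n:>7}: {wall_s / (T_BIO_MS / 1000.0):.3f} wall-clock s / biological s")
```

Normalizing by biological time makes results comparable across simulators and model sizes, and is a natural starting point before drilling into per-phase bottlenecks such as network construction, communication, and I/O.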
Keywords: Brain-Scale Simulation, Performance, Benchmarks, Neuromorphic, Performance Models
Important Note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.