Understand and improve animal and machine intelligences worldwide.
Our central motivation is to understand animal and machine intelligences, and in particular how learning and memory arise in such systems. We believe that all intelligences operate under certain shared principles, much like all matter and energy are governed by certain principles. By determining these foundational principles, we can increase the rate at which we understand and improve intelligences.
To do so, we design, build, study, and apply statistical machine learning and big data science techniques. The tools we develop and use follow some basic design principles: philosophical (validity, uncertainty), statistical (consistency, efficiency, robustness), and computational (scalability, tunability); see this blog post for details. While we don't fully understand how intelligence works, we do have some current working hypotheses, subject to revision given new data or theories.
For animal intelligence, our current conjecture is that, to a first approximation, intelligence arises largely from communication between disparate entities with specialized properties across multiple scales. In other words, we believe that the key to understanding various animal intelligences is determining which connections between which entities, with which properties, at which scales are the mechanisms underlying the intelligences. To answer these questions, we collaborate extensively with some of the best experimental neuroscientists in the world at different scales, each of whom designs experiments and collects data amenable to answering these questions. We then design, build, study, and apply statistical models and estimators to reveal the latent structures in these big connectome datasets. Because the raw data are never in a form amenable to direct study, we also build big data systems to manage, visualize, and wrangle the data.
For machine intelligence, we focus on studying the foundations of learning and memory. We are currently pursuing a number of research threads, including learning in non-Euclidean contexts (e.g., populations of networks with vertex and edge attributes), geodesic learning (the first step to learning from wide data), and lifelong learning (using experience from many disparate tasks, past, present, and future, to improve performance on each).
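To make the non-Euclidean thread concrete, here is a minimal toy sketch of learning from a population of networks. It is an illustration, not the lab's actual method: the two simulated populations, the Erdős–Rényi generator, and the top-eigenvalue feature are all assumptions chosen for simplicity, standing in for richer network-valued data and estimators.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_er(n, p):
    """Sample a symmetric Erdos-Renyi adjacency matrix (no self-loops)."""
    upper = rng.random((n, n)) < p
    A = np.triu(upper, 1)
    return (A + A.T).astype(float)

# Two hypothetical populations of networks that differ in edge density.
graphs = [sample_er(30, 0.2) for _ in range(50)] + \
         [sample_er(30, 0.4) for _ in range(50)]
labels = np.array([0] * 50 + [1] * 50)

# One graph-level feature per network: the top adjacency eigenvalue,
# which roughly tracks average degree.
feats = np.array([np.linalg.eigvalsh(A)[-1] for A in graphs])

# Classify by thresholding at the midpoint of the two class means.
thresh = (feats[labels == 0].mean() + feats[labels == 1].mean()) / 2
acc = ((feats > thresh).astype(int) == labels).mean()
print(f"accuracy: {acc:.2f}")
```

The point of the sketch is that each sample is an entire graph rather than a feature vector, so learning requires first mapping networks into a space where standard statistical tools apply.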
What is NeuroData?
What is the Open Connectome Project?
In 2011, we launched the Open Connectome Project, an open-source stack of web services that store, analyze, and visualize large imaging datasets. However, as technology changed, features were added, and scale increased, our academic development team and resources became overwhelmed. We therefore overhauled our custom stack into a community-built and maintained software ecosystem, deployed in the commercial cloud at neurodata.io, integrating multiple open-source projects and extending them for our needs. The ecosystem enables analyses on disparate datasets by re-using components originally designed for other applications.