Gathering dust on what I imagine to be a dimly lit shelf in Widener Library, Cambridge, Massachusetts, is a biochemistry dissertation I’m sure nobody has ever pulled down and read.
It’s a shame. After all these years, I’m not concerned with its overall significance. But I’d bet a casual reader, pulling it off and blowing the detritus from it, would get a hoot out of the dedication page, including this quote (from Tolkien):
He that breaks a thing to find out what it is has left the path of wisdom.
Bit of an inside joke. But here I was, reducing the disulfides that hold together the halves of the insulin receptor molecule so I could deduce something about its structure.* Literally, breaking it to find out what it is.
Any home mechanic or budding IT tech can tell you: Any idiot can take something apart. You have to be on the ball, though, to put it back together.
Which brings us to my belated re-entry into PSC blogging, with a nod to Aaron Dubrow, my opposite number at the Texas Advanced Computing Center, who covered a keynote speech at the XSEDE13 conference by Terrence Sejnowski, Director of the Computational Neurobiology Laboratory at the Salk Institute for Biological Studies, about the NIH’s Brain Initiative.
The Brain Initiative, Sejnowski said (I was there too, though I covered other talks for XSEDE’s External Relations group), is meant to be the moon shot of our generation, harnessing HPC to really understand the brain and how it works. I’ve been around long enough to remember the Moon Shot, capital em ess, and from that tender-age experience I can only say: Go. Go. Go.
But enough of channeling a particular seven-year-old who could name the parts of the Apollo spacecraft (including the Saturn V booster) and just about every dinosaur at the Museum of Natural History in Manhattan. We’ve got a heck of a job ahead of us, if so.
The work began some time ago; hopefully the attention and money of the new program will accelerate it, but it has begun. For PSC’s slice of it, our National Resource for Biomedical Supercomputing has been working with a group at Harvard that’s using HPC resources to knit electron micrograph images of ultra-thin slices of brain tissue together, computationally, into their original structure in the living brain. It’s not easy; slicing the tissue creates wrinkles and other distortions, which have to be corrected in the computations. And that’s not the end of it: they then correlated those reconstructed structures with the activity of the live brain, as shown by two-photon calcium imaging. Pairing structure with function gave a new perspective on how nerve cells in the visual cortex process information.
This is the gig for MMBioS,^ the National Center for Multiscale Modeling of Biological Systems, which we’re undertaking with CMU, Pitt, and the Salk Institute (Terry Sejnowski, by the way, is a project co-leader). It aims, ultimately, at nothing less than knitting together biological systems — the brain being an initial focus — from the behavior of individual atoms to brain functions, such as recognizing the orientation of an object.
In my day, we sketched out pictures of arrows between proteins in the cell, showing how signaling pathways were supposed to work. Today we realize it’s a network of interactions that can’t be sketched — it has to be simulated in a computer to be understood. MMBioS’s mission is that phenomenon on steroids: a heck of a putting-back-together problem. Obviously, making the connections between the atomic, molecular, subcellular, cellular, and tissue levels is going to have to be done in steps. But the fact that this task is imaginable with technology that is with us, or soon to be developed — that in 10 years, modeling a mouse brain in its entirety isn’t out of the question — is simply stunning.
Exciting times, my friends.

*Wrap your head around the primitiveness of the protein biochemistry I was doing. I dare you.

^Say it with me: “mmmmmmBIos.” (Sorry, Markus.)