DeciBio's Spatial Omics Q&A with Garry Nolan of Stanford University

September 15, 2021

If there were a Mount Rushmore dedicated to spatial omics, Dr. Garry Nolan would certainly be on it. Dr. Nolan has played a key role in the development of multiple single-cell and spatial omics technologies, and I recently had the opportunity to speak with him about the path of an inventor. You can find a transcript of our discussion below.

Garry Nolan, PhD, is the Rachford and Carlota A. Harris Professor in the Department of Pathology at Stanford University School of Medicine. Within the field of spatial omics, Dr. Nolan has played a pivotal role in the development of CyTOF (the mass spectrometry-flow cytometry hybrid on which the Fluidigm Hyperion / IMC is built), Multiplexed Ion Beam Imaging (MIBI), which is being commercialized by Ionpath, and most recently CODEX, the foundational technology for Akoya Biosciences. Dr. Nolan has also received numerous awards and grants for both his work as an inventor and his work in immunology, oncology, and virology.

Dr. Nolan, I really appreciate the opportunity to speak with you today. As people who are familiar with spatial omics know, you need no introduction. Your work and inventions in this space are what originally sparked my interest in the field, so I’m quite excited to be able to chat. Now, I know I just said you need no introduction, but for our readers who are less familiar with spatial omics, can you provide some background on your work at the Stanford University School of Medicine?

I am a professor in the Department of Pathology at the Stanford Medical School, and I’ve been involved in single-cell omics since I was a graduate student here at Stanford with Len Herzenberg back in 1983. He was the developer of the flow cytometer, or at least of FACS—Fluorescence-Activated Cell Sorting. So, I got a lot of my original training with Len and Lee Herzenberg, a husband-and-wife team, on how to think about single cells, how to think about populations of them, and the heterogeneity that might exist inside these populations of immune cells. As my career progressed and we realized we needed to measure more and more parameters at a time, that was what led to the development of CyTOF, which allowed us to go beyond the limits of what standard flow cytometry could support with fluorescence.

This then led us to think about how we could increase the number of parameters being read out in a spatial context, since we were having to remove cells from the tissue in order to use CyTOF. From there, Yury Goltsev, a member of my lab, developed what would become CODEX at a time when you could only look at 3 or 4 markers. This then led to the start of Akoya. We had the proof of concept, and it was great to be able to hand that off to a company, such as Akoya, that could serve the global community.

Your background started with this single-cell research and an academic grounding in immunology, and you’ve gone on to become a prolific inventor and member of the life sciences startup community. You founded Rigel, which went public; you founded BINA and APPRISE, both of which were later sold to Roche; you were involved with DVS Sciences, which was later acquired by Fluidigm, a relationship that later spawned the Hyperion; you are an inventor of MIBI, which was commercialized by Ionpath; and you are an inventor of CODEX, the founding technology for Akoya Biosciences. Could you describe this transition from being an academic researcher to being named one of Stanford’s top 25 inventors? It’s quite a transition.

The best way to think about it is by realizing where government support ends and where commercial support begins. Most of the time in the development of technologies, even if you are successful at publishing and getting other people to adopt it, you realize fairly quickly that once the methodology is published, other people can very quickly become as expert as you. Suddenly, you’re competing against the people you trained. I’ve never liked being in situations like that. As an example, when I was a postdoctoral fellow in David Baltimore’s lab and involved in the cloning of NF-κB p65, every single week I was looking through the journals to see whether someone else had cloned it and ruined my day.

As my career progressed, I realized a talent of mine was developing methodologies and platforms, and if you have a new methodology that no one else has, you basically get to bring a new technology to a problem that nobody else can address. You can solve the problem in a different way, and other people want to use your technology. Rather than arguing hypotheses with scientists, you get to provide tools. Then, you let other people on all sides of a problem argue with each other using your tools. Other people will come to you to collaborate, and this ended up being a useful approach in my early days as an assistant professor, when I developed the 293T cell retroviral producer system, which everyone around the world now uses for gene therapy applications. We never commercialized it, per se, but everybody wanted to use it, and everybody wanted me on their grants. It brought an incredible amount of money to my lab in the early days, and I saw a pattern. Develop the inevitable—inevitably, we needed to make retroviruses faster than before. Inevitably, for single-cell applications, we needed to measure more biological attributes per cell—or, in other words, more parameters. If you can see what the inevitable is, develop it before somebody else thinks of it, and then excite the field about it, you can get there first and provide a service to the community. Then, while everyone else is catching up, you can move on to what’s next.

One other issue there that’s important is that once you have gotten a technology to the 95% point, there’s no use in staying with it much longer. If it’s going to be useful to other people, you want to hand it off to a commercial entity so that they can deal with all the QC, fortify the technology, and be the outward-facing entity for it. That leaves your mind open to think about what else you might want to accomplish.

CODEX, MIBI, and IMC are foundational technologies in spatial omics. You briefly covered your transition from single-cell focused work to spatial work, but what sparked your initial interest in high-plex imaging and what was the driving force behind these developments?

To be honest, it was rejections of certain articles from journals, or at least the critiques of reviewers. They’d say, “well, I know you’ve made a lot of interesting conclusions about these splenic cancers, but they’re out of their spatial context—how do you know you didn’t disturb them in the process of isolating them?” So, that led me and some of my post-docs to ask, “what can we do to actually leave things in their spatial context?” That was it. A light bulb went off: we had already developed a lot of the math around CyTOF and high-plex imaging, so there was more to this than just reading the spatial context. The communities and neighborhoods of cells were going to be extremely important, not just for working out mechanisms, but also for thinking about predictions.

People were already talking about predictive biology to determine outcomes. We could already see it when the cells were taken out of their spatial context; imagine how much more powerful it could be to see the cells next to each other. That was, at the time, a hypothesis, and it has since been borne out by all the papers we have been publishing. The subtle levels of proteins on a cell’s surface, which on a flow cytometer people would assume are just a Poisson distribution of expression, are in fact not. The cells on one side of the distribution, versus the other, predict where they are in the spleen. So, the cell is in a discussion with its neighbors to reorganize and reorient the proteins on its surface so that it either moves to the right place or, once it gets to the right place, achieves the correct levels of expression.
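To make that point concrete, here is a toy illustration (synthetic data and made-up parameters, not the actual spleen study): if a marker’s apparent noise really encodes position, then cells drawn from the two halves of the intensity distribution should occupy measurably different locations.

```python
# Toy sketch: does marker intensity predict spatial position?
# All values below are synthetic stand-ins, not real measurements.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(3)
n = 1000
dist_to_boundary = rng.uniform(0, 100, n)  # distance (um) to a tissue landmark
# expression drifts with position, plus lognormal measurement noise
marker = np.exp(0.01 * (100 - dist_to_boundary)) * rng.lognormal(0, 0.3, n)

# split cells into the two halves of the intensity distribution
lo = marker < np.median(marker)
stat, p = mannwhitneyu(dist_to_boundary[lo], dist_to_boundary[~lo])
print(f"positions differ between intensity halves: p = {p:.2e}")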

That made us realize, just as I argued in an opinion piece I wrote long ago in Nature Biotechnology, well before anybody was doing this stuff, that the network is what you want to understand. Everything in the body is some number of levels of influence away from every other part of the body, and you can follow that through a network. That’s really what a diagnostic is. It’s a surrogate of an event. If there’s a way to mathematicise these networks, you can get a better understanding of the body. That’s what we did in both the single-cell arena and spatial proteomics. What has really excited me is seeing the whole RNA field come in and take the foundational aspects of what we did and use them for RNA. All power to them.

Spatial genomics has become a major factor in this field over the past couple of years. What you just mentioned about the surrogacy of these networks makes the need for more parameters in diagnostics more obvious; it goes to show that there’s so much opportunity to move from using one marker, one surrogate, to using three or four in relation to each other to produce a better model of what’s going on in the body. You mentioned working on developing these platforms with the post-docs in your lab. Personally, I’ve always wondered how you’ve managed to develop these systems. These platforms can be immensely complex technological marvels. How do you ideate and think through the development process for these platforms? Are these step-by-step, small realizations which build towards the technology, or a single, massive aha moment?

There are whole books written about where ideas come from, the whole process of intuition. The idea for one company just came to me in a mental download during a talk I was watching one day many years ago. That’s a whole different story, though. CODEX was one which took a step-by-step process. It was really the brainchild of Yury Goltsev, who had the idea and published the first paper on it in Cell. I then realized with the postdocs that there was a faster way to do it, which became the technology commercialized by Akoya. It was just an alternate way of approaching it. As you were alluding to, though, this instrument began with us pasting together Lego blocks, tubes, and pumps, with Yury writing computer drivers for the pumps. At first, we did it all by hand; Yury’s a bit of a polymath, so he built the pumps, and then we made more robust versions of the instrument. We probably went through three or four versions. Some were funded by the Gates Foundation, then by the Parker Foundation, and that work became the commercial version which Akoya designed and developed. One that, unlike our early versions, wouldn’t be likely to burn your lab down when you plugged it in.

Well, I can’t say my own love of Legos has gone so far as building anything of this magnitude, or as impactful as something like CODEX. Speaking of CODEX, it’s a rather unique instrument in the sense that it’s effectively a microfluidics device which connects to a microscope. Because of this, it’s much easier for an academic lab to get up and running compared to other platforms. What was the inspiration behind developing this flexible add-on rather than an entire standalone instrument?

Oh, definitely. Developing an instrument that has within it both optics and fluidics requires a level of expertise which often doesn’t come together in one lab or even one company. And, speaking of Akoya, one of the reasons they purchased the Phenoptics division of PerkinElmer was that the latter had the optics, plus the reach into the translational and clinical markets, which would give an instant leg up in those applications. There are two steps in the utility of any biomedical technology: first, developing a prototype diagnostic that passes muster with the journals; second, actually getting into the clinic and being accepted and used by doctors. That’s a whole new level of commercialization and expertise that, as an academic, I didn’t have. Akoya saw the purchase of Phenoptics as a perfect add-on because it already had that reach and use in pathology suites.

It also really helped prime the pump for when CODEX eventually launched. Akoya has also gone on to pen agreements with Nikon, ZEISS, CrestOptics, and Andor as microscopy providers to help continue the rollout of the CODEX platform.

As an addendum to what you were asking: as an academic, you hate being siloed. I’ve already got a microscope, so why should I pay for another microscope just because it’s attached to your instrument? The idea, at least, was that this is an add-on you can use to expand your current capabilities without burdening yourself with another microscope that, maybe in three or four years, would be out of date when version two comes out. With CODEX, you can buy a second iteration of the add-on or the microscope without replacing the entire setup.

One thing you began to touch on there is that a number of these platforms are extremely expensive—in some cases approaching $1M. That being said, these end-to-end solutions do have their place, but I imagine it is nice for many academics to be able to purchase an add-on to instruments they may already have. I mentioned earlier that you trained in immunology and single-cell research, and you mentioned oncology as a key aspect of developing these technologies. These instruments cater to a wide variety of applications, driving novel insights in neuroscience in addition to oncology and immunology, fields which were historically limited by the low-plex solutions available. For you, personally, what have been some of the most exciting developments in these fields which your platforms have helped generate?

The most exciting developments are always the ones you didn’t expect people to do with the platform. For instance, I’ve seen people start to use CODEX as a means for novel encoding systems and new ways to reveal more targets, and one exciting approach is in scarce data. There are ways to fill in missing information based on the notion that everything is sitting together in a network. You can make predictions of the missing information based on your knowledge of the data you have. People in the RNA field use this type of analysis all the time to fill in missing information. Very often what is read out in RNA-seq is a single RNA per cell, and they make a lot of utility out of that limited readout. Sometimes they don’t have the cells, so they “borrow” them from others. We’ve been doing the same kind of thing now with our CODEX work. One of the other exciting developments has been around AI for understanding tissue organization. There have been some great developments there with the concept of neighborhoods. We’re going to be coming out with some data soon around neighborhood schematics, or tissue schematics, where we’ve been able to start decoding the rules on why certain cells are near each other for biological purposes.
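As a flavor of that neighbor-based “borrowing,” the sketch below is a minimal, hypothetical example: synthetic cell coordinates, a single made-up marker, and a plain k-nearest-neighbor average standing in for the far more sophisticated network models described above.

```python
# Minimal sketch of neighbor-based imputation (illustrative, not a real pipeline):
# fill in an unmeasured marker for some cells from nearby measured cells.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
n_cells = 500
xy = rng.uniform(0, 1000, size=(n_cells, 2))  # synthetic cell centroids (um)
marker = rng.lognormal(2.0, 0.5, n_cells)     # hypothetical marker intensity
missing = rng.random(n_cells) < 0.2           # pretend 20% of cells are unmeasured

# fit neighbors on the measured cells only, then query from the unmeasured ones
nn = NearestNeighbors(n_neighbors=10).fit(xy[~missing])
_, idx = nn.kneighbors(xy[missing])
observed = marker[~missing]

filled = marker.copy()
filled[missing] = observed[idx].mean(axis=1)  # average of 10 nearest measured cells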

For example, in lymph nodes, you always find the same basic constructs and arrangements of cells relative to each other across all lymph nodes. However, some arrangements are more complex and limited to, say, intestinal lymph nodes. Across the 220 different lymph nodes in the body, some structures are common to all of them, but there are also tissue-specific constructs depending on their location in the body. Then, as it turns out, you can find some of these structures in cancers, as the immune system deploys and creates immune structures inside tumors. Some of the things we found in lymph nodes have now been found in tumors, and they can be used to disable tumor growth. Learning the basic rules of tissue organization allows you to find entirely new classes of drug targets, because now you can enable or disable the cellular arrangements present in the cancers. That’s what’s been exciting. Years later, we have found things we couldn’t have predicted we’d be able to see, and we can now mathematicise them.
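A simplified sketch of the neighborhood idea, in the spirit of published CODEX neighborhood analyses (the counts, labels, and cluster numbers here are synthetic stand-ins): describe each cell by the cell-type composition of its k nearest neighbors, then cluster those composition vectors to find recurring arrangements.

```python
# Sketch of cellular-neighborhood discovery on synthetic data.
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
n_cells, n_types, k = 2000, 8, 10
xy = rng.uniform(0, 2000, size=(n_cells, 2))   # centroids from cell segmentation
cell_type = rng.integers(0, n_types, n_cells)  # labels from marker-based gating

# each cell's "window": the cell-type composition of its k nearest neighbors
_, idx = NearestNeighbors(n_neighbors=k).fit(xy).kneighbors(xy)
comp = np.stack([(cell_type[idx] == t).mean(axis=1) for t in range(n_types)], axis=1)

# recurring arrangements = clusters of those composition vectors
neighborhood = KMeans(n_clusters=6, n_init=10).fit_predict(comp)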

You definitely see a lot of biopharma companies beginning to implement these technologies because they know it can help them develop the next blockbuster multi-billion-dollar drug like Keytruda or Opdivo. Going back to something you mentioned earlier, I started thinking about the high-plex proteomics approaches that have been developed without spatial context, things like SomaLogic and Olink. What originally drove you to develop these spatial platforms was pushback around a lack of spatial context. Thinking about the approaches I just mentioned, or even things like liquid biopsies, will we always need spatial platforms to pair with these?

In a clinical setting, you can only get whatever tissues the ethics board and patient will tolerate. So, while you can use spatial omics on a tissue with a boil-the-ocean approach for discovery, you’ll never do that for a diagnostic. It’s too expensive. You use these approaches to find the minimum number of markers sufficient to create a diagnostic that is cheap enough and easy enough to run. The Phenoptics platforms can already cover this with the 8-10 markers that they can run. But it becomes a springboard as well when you can say, “If we know that this is the arrangement of cells, is there a blood-based or simple biopsy which can reflect this?” There’s a lot of information coming from the body, and you need to have a filter to see the information you need so that you can target it without the spatial context. Maybe you can instead use mass spec and identify a protein within a narrow band that you’ve identified with spatial platforms. Without the ability to isolate to that narrow band, the mass spec readout is just noise. You need to filter and narrow the focus onto the things you can use that still provide the information a doctor needs. A doctor doesn’t care about the spatial context; they just need to know if it will help their patient.
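One common way to do that winnowing, sketched below with synthetic data (the markers, penalty, and threshold are illustrative assumptions, not a validated diagnostic workflow), is L1-regularized logistic regression, which zeroes out the coefficients of markers that don’t earn their keep.

```python
# Sketch: shrinking a 50-marker discovery panel to a minimal diagnostic set.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n_samples, n_markers = 200, 50
X = rng.normal(size=(n_samples, n_markers))    # per-sample marker summaries (synthetic)
# synthetic outcome driven by only 4 of the 50 markers
y = X[:, [3, 11, 27, 40]].sum(axis=1) + rng.normal(0, 0.5, n_samples) > 0

model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)
selected = np.flatnonzero(model.coef_[0])      # markers kept for the cheap assay
print(len(selected), "markers retained:", selected)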

Shifting focus a bit and coming back to the platforms, how do you expect the use of these instruments to change in the future? Spatial omics got its start in hyper-plexed proteomics, and, as we’ve discussed, spatial genomics has been all the rage this past year and likely will be for the near term as well. What do you expect from the next generation of spatial omics platforms?

Well, there’s certainly going to be a merging of the omics. People are already doing it as reasonably as they can. We already published an RNA amplification technology in Science with Karl Deisseroth’s group called STARmap, which allows you to map 1,000 genes at a time in a slice of mouse brain. So, you can imagine that being applicable to CODEX; it’s just a matter of developing it enough to be compatible and enabling it for broader use. You could also imagine this for ATAC-seq or anything that is amplifiable and readable through a signature, particularly fluorescence in the case of CODEX. If it’s a mass spec signal, then a platform like MIBI could do it. That’s what’s coming.

What holds things back, unfortunately, is the limit of spatial resolution. Most of these amplification technologies require an area much larger than the marker they are detecting, so, unless you go down the road of super-resolution imaging across a whole tissue, which would take you a century, you’re going to be limited by the size of the nanoball or whatever you have amplified. Even super-resolution has a limit. It relies on low-level quantum fluctuations to pinpoint where the fluorescence came from. Maybe there will be new detection approaches. One of the more interesting ones I’ve seen makes use of quantum entanglement to reduce noise: you send information into a cell with a quantum-entangled photon, and the information you get back can be used to increase your signal-to-noise ratio. There have been a couple of papers which have used it for measuring proteins already. The problem is—do you know how difficult it is to create an entangled photon? Someday it will be as simple as a diode laser, and it will be easy to use to increase signal-to-noise and allow us to detect things we aren’t able to detect today.

Even if you go back just 10 years, the capabilities of the platforms we have today would have been unimaginable. To wrap up, do you have any final thoughts to conclude our discussion?

Part of the reason you asked me here was to talk about how one develops a new technology. This is probably more of a message to all the post-docs and graduate students, but don’t let older professors tell you something can’t be done if you’re sure about it. If you really have a mental path to accomplishing it and you think it will be better, sometimes it’s more advantageous to just do that than it is to fight everybody else with the same set of tools. If you can make a better tool that can then go on to help others, and if you can see the Gantt chart in your head towards making it, then do it. Read the methods sections of papers, because you need those mental tools at your disposal to put all the pieces together for a new technology. Your intuition needs the raw materials, so never ignore the methods sections. That’s one of the things people who come to my lab come for: they want to learn how to do that. It’s not hard. You can train intuition; you just have to believe in the ideas you come up with and that they’re possible. The old adage is that if an old professor tells you something can’t be done, there’s a good chance they’re wrong, and if they tell you something can be done, there’s a good chance they’re right.

Note: While Garry Nolan played a pivotal role in the development of CyTOF applications, the technology was originally developed by Scott Tanner's group at the University of Toronto and at DVS Sciences. Additionally, the Hyperion was developed in collaboration between DVS Sciences, Bernd Bodenmiller (University of Zurich), and Detlef Günther (ETH Zurich).
