Following Seer’s first earnings call as a public company, we had the opportunity to sit down and discuss Seer’s technology and the proteomics market with Omead Ostadan. You can find a transcript of our discussion below.
Omead Ostadan is the President and Chief Operating Officer of Seer and currently oversees the company’s commercial, product development, and operations functions. Prior to joining Seer in 2020, Omead served as the Chief Product and Marketing Officer at Illumina, where he had worked for 13 years after initially joining the company as the Vice President of Marketing.
Omead, thank you for joining us today and congratulations on Seer’s first earnings call a few weeks back! For our readers less familiar with Seer, can you tell us a few words about the company and its vision?
Absolutely. Seer was founded in 2017 by Dr. Omid Farokhzad, Dr. Robert Langer, and Dr. Philip Ma, and the core premise of the technology came from work Omid had been doing for nearly 20 years with nanoparticles for a range of end uses, including in the therapeutics and drug delivery fields. One of the things he and his colleagues observed was that nanoparticles have these incredibly reproducible binding affinities. So what was, in the world of therapeutics delivery, a bug could very well be leveraged and turned into a feature in the world of discovery and, ultimately, clinical diagnostics. By leveraging the specific physicochemical properties of these nanoparticles, conceivably one could target different classes of biomolecules very precisely and very reproducibly. With that came the idea to use this capability to enable unbiased, deep proteomics at scale. This solves one of the most significant technological bottlenecks in science, which is how do we access the proteome, particularly in plasma, with the same ease, reproducibility, and scale that one can access the genome or transcriptome.

Seer is a focused life sciences tools company. Our goal is to empower scientists to make great discoveries by eliminating technology bottlenecks. It’s really about putting tools in the hands of people who can accomplish great things. We’re very clean in our business strategy as a tools provider. We’re helping people remove technology bottlenecks that have prevented them from tackling some of biology’s most complex and important problems, especially those that can be explored using unbiased, deep proteomics at scale.
I’d love to dive in a bit on some of the traditional barriers of proteomics and how Seer is poised to alleviate those, but, first, speaking holistically and at a higher level here, the last decade has really seen a revolution in genomics. Do you believe the next decade can be the decade of proteomics, and how might it differ from the advancements we’ve witnessed with genomics?
Before coming and taking an operational role at Seer, I had spent the previous 20 years on the genomics side. I started at Applied Biosystems managing a range of capillary sequencing platforms when capillary sequencing was just hitting the scene. Leaving Applied Biosystems, I was fortunate to join Solexa, which was subsequently acquired by Illumina. So, to be able to go from capillary electrophoresis, the technology which allowed our first pass at the human genome, to then the genome analyzer, and to witness it fundamentally shift the trajectory of genomics, transcriptomics, and epigenomics, it became very clear to me that revolutionary technologies can fundamentally change the shape of research, discovery, and industries. What was amazing and enabling about next generation sequencing (NGS), and what ultimately led to my “Aha” moment of leaving Illumina to join a small startup, was the parallel between what Seer’s technology can do today and what Illumina’s technology did 15 years ago.

The key enabling aspect of Illumina’s technology was massively parallel sampling: the ability to access hundreds of thousands to tens of millions of fragments of RNA and DNA simultaneously in a highly miniaturized format. You could now go from generating 1,000 bases per sample to generating gigabases by accessing many millions of fragments, and now even billions of them. Providing access to the sample in an easy-to-use, automated, scalable platform pivoted the way people were able to access the genome and transcriptome. Fast forward 15 years, and what Seer is poised to do is create a similar pivot for the proteomics market. The science is clear, and all other things being equal, people would love to be able to access the plasma proteome in a dynamic way at scale, with the ease with which they can interrogate the genome.
People want to do this, but the only way they can assess the proteome over a 10-log dynamic range in plasma is through extensive depletion and fractionation workflows.

Almost 99% of protein mass is made up of approximately 20 proteins, so these extensive workflows are the only way you can look at low abundance proteins, which comprise the majority of the proteins in any plasma sample at any point in time. This process may work for a dozen or tens of samples, but you cannot scale to thousands or millions of samples to power a discovery pipeline. I believe this is where Seer’s technology can create a fundamental shift. Our nanoparticles obviate the need for depletion and fractionation steps and essentially provide a “lab on a nanoparticle” to allow for a reproducible, automated workflow that enables unbiased and deep interrogation of the plasma proteome at scale. Using the Proteograph Product Suite, researchers can prepare 16 plasma samples with our automated workflow, which requires less than 30 minutes of hands-on time to go from sample to peptides in just 7 hours, ready for injection on a mass spec. This can eliminate weeks’ worth of highly variable sample prep and replace it with a quick, streamlined, and highly reproducible workflow that requires very little user intervention.

I firmly believe that by reducing this burden, researchers will have the opportunity to interrogate the proteome the same way they have the genome or transcriptome, and by doing so make new discoveries. The proteome is also dynamic, and as we have seen from RNA-seq and gene expression, frequent sampling leads to a more complete picture of the biology. By creating a technology which automates sample prep, is scalable, and is attractive for any lab to deploy, I believe Seer has the potential to bring to proteomics the same type of shift that NGS brought to genomics and transcriptomics.
Thank you for giving us an overview of the technology and its advantages. The value of Seer’s technology definitely resonates with common pain points that have come up around proteomics in our work. It’s a joke that proteomics, specifically with mass spec, is a three-headed beast of expertise. You need specialists for sample prep, instrument operation, and data analytics. I think it’s clear how Seer addresses the sample prep need, but I know from your commercialization partnerships that Seer’s intention is to create an end-to-end proteomics solution, so could you elaborate further on how Seer may address other pain points in proteomics?
Step number one is to collaborate deeply with mass spec providers, and we have announced three such agreements with Bruker, Thermo Fisher, and SCIEX. These are three of the leaders in the plasma proteomics field. Our visions for the proteome are shared with these mass spec providers. They see the same opportunity we do, that plasma proteomics at scale is a largely untapped opportunity and that one of the primary bottlenecks is the workflow. They are committing to working with us to create an improved workflow with greater continuity. On the point of mass spec operation, once you have a standardized workflow and the level of in-process controls that we provide, the actual operation of the instrument becomes easier. Users have to control for fewer variables upfront. The systems themselves are very robust; of course there is a learning curve, but with improved sample prep the operation is less complex. On the back end, we will continue to work with mass spec providers and third-party developers for data analytics.

Increasingly, companies like Google, Microsoft, and Amazon are trying to broaden their activity in life sciences beyond cloud storage and are looking to provide data analysis and machine learning. We recently hired Serafim Batzoglou, who was previously at Illumina and a professor at Stanford. He is one of the leading experts in machine learning in our domain. The intent is to use machine learning for two purposes. One is certainly on the back end with data analysis. It’s estimated that roughly a million proteoforms are out there and only a very minor fraction of them have been discovered. So, machine learning can aid in this discovery and make sense of mass spec signals that may not have been seen before, and we want to help make these tools available to researchers. The second area machine learning can be applied is connecting proteomic data to other omics data, such as genomics, transcriptomics, or epigenomics, to create a far more complete picture of biology.
So, these are the two avenues we plan on internally developing our machine learning capabilities for, and we will likely be doing this in concert with third parties as well. Machine learning can also help further the development of our nanoparticles. We have seen that when you alter the physicochemical properties of the nanoparticles, they behave in very reproducible ways. Machine learning can help us get increasingly more intelligent about the way we design nanoparticles so that we can build custom particles for researcher-defined use cases. There is so much shared interest and global IQ that collaboration can be used to build an ecosystem around the Proteograph that allows researchers to have a far simpler workflow to go from sample to insight.
Speaking about the nanoparticle development, is there a plan to make particles that are tailored to specific post-translational modifications?
Our nanoparticles detect post-translational modifications, and we have shown that in data generated to date. We also have enough proof-of-concept evidence to believe that we can make nanoparticles that are partial to certain post-translational modifications, say phosphorylation as a hypothetical, and will enrich for proteins that either have or do not have the modification of interest. The nanoparticles allow you to cast a broad net for specific traits in protein variants without having to narrow your targets down to a pre-specified set. So, we may be able to enrich for a particular class of post-translational modifications without needing to target specific proteins or PTMs. It’s almost like being able to look at methyl-C on DNA without having to target every methylation site. There remains a lot of work to be done on this, but we do believe that we can develop nanoparticles to enrich for proteins with specific characteristics and allow you to analyze them against the backdrop of proteins that do not have that characteristic.
That’s great to hear that this is in the works for Seer. It seems like it could be massively enabling for discovery research and for understanding the biology at a deeper level. To jump back, we talked about the partnerships Seer has with mass spec providers, but I know that Seer also has high-profile research collaborations announced as well with OHSU, The Broad, Discovery Life Sciences, and recently with the Salk Institute. I would be curious to understand the unique value each of these adds.
At the highest level, biology is a crazy complicated problem. So, to the extent that we can use global IQ to solve this problem, we are going to do that. I believe that these academic partnerships can help exemplify how Seer can solve problems in a unique way, but they also allow us to learn through the relationship. One of the things I learned from sequencing is that being engaged with academic scientists who are at the forefront of science and technology is critically important. These are the people who are better than anyone at seeing around the corner to what is coming next. So, engaging with them, understanding how they are using our technology, and learning from them is going to be critical to our own internal product development.

In the case of the four collaborators we have, what we wanted to do was very intentional. We wanted to have a couple of deep collaborators who come from a proteomics background, and also a few who didn’t and who come from deeper multiomic or genomic backgrounds. The Broad Institute and OHSU’s Knight Cancer Institute both have deep proteomics experience, but even those two collaborations are differentiated. The Broad has a deep and wide multiomics background but is more focused on translational and therapeutic research, whereas the Knight Institute is more focused on translational and diagnostic research, such as early detection and detection in complex disease. So, by partnering with these two leading proteomic research institutes, we are covering therapeutics research and biomarker discovery, as well as how this translational research may develop into diagnostics.

In the case of Discovery Life Sciences (DLS) and the Salk Institute, these are two highly established multiomic institutes, but DLS didn’t have a proteomics core, and Salk’s core was used nominally with cell lysates and some tissue samples. Salk was hardly researching plasma proteomics, and certainly not in model organisms.
So, with DLS we hope to couple deep, unbiased proteomics with other large-scale omics and catalyze proteogenomics in a way that has not been done before. DLS will also show that a new adopter of proteomics can get up and running, be super effective, and perform great research with our technology. With Salk, we hope to exemplify large-scale proteomics in model organisms. Honestly, if you are working on a model organism, there really aren’t a whole lot of good, large-scale proteomics tools available. There certainly aren’t aptamers or targeted panels available for you, so a lot of that work is unexplored. Inside a multi-disciplinary institute, you can have a platform that a very broad spectrum of researchers can use, almost like a core facility in an academic institution. Each one of these instances is going to give us a reference customer site showing how they incorporate our technology, and proof points on the utility of our technology. It allows us to cover a broad range of customers and helps potential customers see the applicability of our technology.
Perfect, that’s great to hear the coverage of a number of different potential research applications where the proteome can be explored. Maybe to begin to wrap our conversation, how do you see use of the Seer Proteograph Product Suite evolving over time? A lot of what we’ve spoken about has hinted towards potential diagnostic applications coming out of the discovery applications being targeted today. How does the current product suite evolve with that shift and how does it interface with Seer’s spinout company PrognomIQ?
I’ll start with the tail end of your question and work my way back up to the beginning. We set up and spun out PrognomIQ for a handful of reasons, some of which I alluded to earlier. We want to have a very clean business model. We, Seer, are a tools provider. We want to bring innovative solutions to researchers who can solve complex problems. That is already hard to do, and we’re building a core skillset around it. When you are laser focused, you can do great things. Now, we also know that there are applied uses for this technology, and that’s why we spun out PrognomIQ. We had done a study with 141 lung cancer samples published in Nature Communications, which, at the time, I believe was the largest unbiased proteomics study ever published, and what we saw, even in a relatively small sample of 141, was very interesting. This allowed us to set up a company to capitalize on that and to pursue large-scale unbiased proteomics studies for the development of early detection, disease discovery, and proteogenomic diagnostics. We were able to get a great slate of investors behind the company, and both companies now have clear visions and business models for their goals.

Coming back to the Proteograph Suite, we’re still at the starting gates. If you had asked me 15 years ago where genome analyzers would be, even in my wildest imaginations, I would not have seen where the world is today with genomics. The value creation of genomics has been massive. So much of what I take for granted on a daily basis goes back to that technology 15 years ago, which has continued to evolve. I see a similar trajectory for the Proteograph here. These nanoparticles can be applied to proteins, and they can be applied to other biomolecules like metabolites or lipids. We’re just now starting the development of these nanoparticles. You can make them to go broader and deeper, or shallower and faster.
We’re starting in the research setting, but we intend to submit for 510(k) clearance and to build the capabilities that can support the creation of in vitro diagnostics for our partners (we won’t be in the content business ourselves). I really do believe that this technology is going to have a role to play in the translational and, ultimately, the clinical fields. We are building the company, building the capabilities, and building the product to be able to support that entire continuum. Now, this is very forward-looking, and a lot of work remains to be done (we’re at step one of many), but that is the goal and the ambition: to span everything from discovery and biomarker development to triaging and monitoring clinical populations.

Why do we believe we can do that? Because proteins are an essential element of life. I can’t think of an application where the biology isn’t mediated by proteins, or where someone doesn’t want to know how the proteins are being expressed. If there is biological analysis, there is room for proteomic analysis. If there is room for protein analysis, why wouldn’t I want Seer’s unbiased approach? That’s been our belief and our approach, and how we will continue to build out the company.
There’s certainly a lot of exciting room for growth ahead and we’ll definitely be watching for all the developments coming out of Seer in the future. Is there anything else you’d like to add as a closing note for our readers?
At the highest level, having been in this industry for 20-some-odd years and having seen the transformation, just imagine if we were dealing with this pandemic without the pervasive use of sequencing or real-time PCR. Where would we be in either testing or vaccine development? It’s inconceivable, and we take it a bit for granted, but sequencing has made all of this possible. As amazing as all of this progress is, we are early. We are so early. This is like the internet in the 1980s. That’s where we are. Think about it, how much of biology do we understand, whether it’s genomics, transcriptomics, epigenomics, or proteomics? Hardly any, and that’s just in humans, so now think about the rest of the living organisms on Earth. This can have a global impact, because, ultimately, you can trace it back to biology. We really are at the early stages of what I expect to be a biological revolution. Technology has been and will continue to play a prominent role in how that future is shaped, and I only hope that Seer has the capability to contribute to that over the coming decades. That is, quite frankly, why I show up to work and why my colleagues show up to work. To have a chance to be a part of that and to have an impact on how we manage human health is a pretty awesome thing to do. That’s what excites me the most.