Proteomics Tools: A Resurrection
By Hannah Glazier and Miguel Edwards, DeciBio

While the genome serves as the immutable blueprint of biological life, the proteome represents the dynamic execution of that code. Understanding this functional landscape requires tools capable of navigating a biochemical environment of profound complexity.
For decades, the evolution of proteomics has lagged that of genomics, constrained by a fundamental technological asymmetry. DNA, with its capacity for amplification, could be interrogated at scale through PCR and sequencing, turning a handful of molecules into billions of identical copies. Proteins, however, offer no such amplification mechanism; they must be detected in their native abundance, across a dynamic range spanning more than ten orders of magnitude in samples such as plasma. This constraint confined the field to relatively low-throughput methods, which still have crucial applications today. The discipline did not die, but it plateaued, particularly in clinical adoption, awaiting a technological resurrection akin to the one next-generation sequencing (NGS) brought to genomics.
Today, that resurrection is underway. The definition of a “proteomics tool” has expanded beyond the foundational bedrock of mass spectrometry (MS) or immunohistochemistry (IHC) to encompass high-plex affinity assays, aptamer-based platforms, spatial proteomics, and emerging single-molecule protein sequencing. These modern instruments are characterized not only by their ability to identify and quantify proteins but also by their capacity to do so with unprecedented sensitivity, scale, or even spatial resolution. Together, they are transforming the proteome from an opaque feature into a digitally readable layer of biology.
Traditional Proteomics: The Bedrock Of Discovery
For decades, mass spectrometry has served as the foundational engine of proteomics, earning its reputation as the gold standard for unbiased protein identification. In bottom-up, or “shotgun,” proteomics, proteins are enzymatically digested into peptides, separated by liquid chromatography (LC), ionized, and analyzed by their mass-to-charge ratios in tandem mass spectrometers. From these spectra, researchers infer protein identities, relative abundances, and sometimes post-translational modifications.
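The bottom-up workflow can be made concrete with a toy in-silico digestion. The sketch below is purely illustrative (not any vendor's pipeline): it cleaves a protein sequence at classic tryptic sites and computes the monoisotopic m/z values a tandem mass spectrometer would search for.

```python
import re

# Monoisotopic residue masses (Da) for the 20 standard amino acids.
RESIDUE_MASS = {
    "A": 71.03711, "C": 103.00919, "D": 115.02694, "E": 129.04259,
    "F": 147.06841, "G": 57.02146, "H": 137.05891, "I": 113.08406,
    "K": 128.09496, "L": 113.08406, "M": 131.04049, "N": 114.04293,
    "P": 97.05276, "Q": 128.05858, "R": 156.10111, "S": 87.03203,
    "T": 101.04768, "V": 99.06841, "W": 186.07931, "Y": 163.06333,
}
WATER = 18.01056   # H2O added to each peptide on hydrolysis
PROTON = 1.00728   # proton mass, added per charge

def tryptic_digest(sequence: str) -> list[str]:
    """Cleave after K or R, except when followed by P (the classic trypsin rule)."""
    return [p for p in re.split(r"(?<=[KR])(?!P)", sequence) if p]

def peptide_mz(peptide: str, charge: int = 2) -> float:
    """Monoisotopic m/z of a peptide at the given charge state."""
    mass = sum(RESIDUE_MASS[aa] for aa in peptide) + WATER
    return (mass + charge * PROTON) / charge

# First residues of human serum albumin, used here only as a familiar example.
for pep in tryptic_digest("MKWVTFISLLFLFSSAYSR"):
    print(pep, round(peptide_mz(pep), 4))
```

In a real shotgun experiment, the observed fragment spectra are matched against exactly this kind of theoretical peptide list, computed across an entire proteome database.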
This approach has fundamentally shaped our understanding of cellular machinery. It has enabled comprehensive analyses, mapping of signaling pathways, and global quantification of protein dynamics under different perturbations. However, the reliance on traditional LC–MS/MS has historically presented a trade-off between depth and throughput. Discovery-driven workflows achieve impressive coverage but are time-consuming and technically demanding, limiting their suitability for routine or high-throughput settings.
An additional persistent challenge is the immense dynamic range of the human proteome. Highly abundant proteins can dominate the signal and obscure low-abundance proteins that could carry the most biologically relevant information. Depletion, fractionation, and extensive sample preparation can partially mitigate this problem but add complexity and variability. While automation improves robustness, the technique remains challenging to deploy broadly in clinical laboratories that prioritize speed, standardization, and cost-effectiveness. Despite these hurdles, mass spectrometry remains indispensable for de novo discovery, structural characterization, and hypothesis generation and is the workhorse that continues to define the “parts list” of biology. It also maintains strong penetration in select clinical applications, such as M-protein detection in multiple myeloma.
Alongside mass spectrometry, low-plex immunoassays such as ELISA and IHC have long anchored traditional protein measurement in both research and clinical laboratories, where each enables sensitive, relatively low-cost measurement of single analytes or small panels in serum, plasma, and other biofluids. Despite their robustness and regulatory familiarity, both ELISA and IHC are inherently low-plex, requiring one assay per target or a small handful of markers, which limits their ability to capture the high-dimensional complexity of the proteome at scale.
A Second Generation: Spatial Proteomics
If traditional proteomics provided the biological inventory, the second generation of tools offers an architectural point of view. A protein’s mere presence is informative, but its precise location, whether at the cell membrane, in the nucleus, or at the interface of a tumor cell and a T cell, can be determinative of its function. Spatial proteomics represents a pivot from simply cataloging proteins to mapping their organization, neighborhoods, and interactions in intact tissues.
This technological leap is enabled by the convergence of high-resolution microscopy, advanced fluidics, and multiplexed tagging strategies. Imaging mass cytometry (IMC) and related platforms can routinely detect over 40 protein markers on a single tissue section with minimal signal interference, preserving morphological context while dramatically increasing multiplexing.
Cyclic immunofluorescence approaches, such as those implemented by the PhenoCycler and other iterative staining platforms, push plex even higher. They repeatedly stain, image, and chemically erase fluorophores, gradually building up dozens to hundreds of protein measurements per tissue section. This capacity to visualize dozens of proteins simultaneously in situ enables sophisticated phenotyping of cell populations while maintaining tissue architecture.
A highly compelling case study for spatial proteomics lies in oncology, particularly in the characterization of the tumor microenvironment. In the era of immunotherapy, the spatial context of proteins could be more predictive of therapeutic response than total immune cell counts alone. Spatial tools have revealed that the “neighborhood” of a cell shapes its proteomic state. By mapping these relationships, researchers are uncovering unique cellular clues and interaction motifs that drive metastasis, immune evasion, and drug resistance, all insights that were invisible in bulk assays.
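The "neighborhood" concept can be illustrated with a minimal sketch. Assuming a hypothetical single-cell table of positions and phenotype labels, as might come out of a segmented multiplexed image, the code below counts phenotype pairs that fall within a fixed radius of one another; this is the simplest version of the interaction metrics spatial pipelines compute.

```python
import math
from collections import Counter

# Hypothetical segmented cells: (x, y) position in microns plus a phenotype label.
cells = [
    ((10.0, 12.0), "tumor"), ((11.5, 13.0), "T_cell"),
    ((40.0, 42.0), "tumor"), ((90.0, 15.0), "T_cell"),
    ((41.0, 44.0), "macrophage"), ((12.0, 10.5), "tumor"),
]

def neighbor_counts(cells, radius: float = 20.0) -> Counter:
    """Count ordered phenotype pairs whose cells lie within `radius` of each other."""
    pairs = Counter()
    for i, (pos_a, label_a) in enumerate(cells):
        for j, (pos_b, label_b) in enumerate(cells):
            if i != j and math.dist(pos_a, pos_b) <= radius:
                pairs[(label_a, label_b)] += 1
    return pairs

contacts = neighbor_counts(cells)
# How often a tumor cell has a T cell within 20 microns is exactly the kind of
# spatial statistic that a bulk assay, which grinds the tissue up, cannot provide.
print(contacts[("tumor", "T_cell")])
```

Production spatial analyses replace this brute-force pairwise loop with spatial indexing and add permutation testing, but the underlying quantity, who sits next to whom, is the same.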
Beyond oncology, spatial proteomics is beginning to inform neurodegeneration, autoimmunity, and infectious disease, where local microenvironments and tissue niches play critical roles. As analytical pipelines mature, these tools are transforming the tissue slide into a high-dimensional, quantitative data set linking morphology to molecular function.
The Next Generation: Affinity-Based Profiling
While spatial tools address where proteins are, the next generation of high-plex affinity-based assays addresses how many proteins can be measured with scale and sensitivity. These platforms move beyond the physical separation and ionization constraints of mass spectrometry by translating protein binding events into digital nucleic acid signals.
Olink and SomaLogic are emblematic of this paradigm. Olink’s Proximity Extension Assay (PEA) uses pairs of antibodies, each conjugated to unique DNA oligonucleotides. When both antibodies bind the same protein, their attached oligos are brought into proximity, allowing them to be extended into a unique barcode, which is then quantified by qPCR or NGS. SomaLogic’s SomaScan platform uses chemically modified aptamers (SOMAmers) that bind target proteins with high specificity; the bound aptamers are then quantified as DNA surrogates by hybridization or sequencing.
By decoupling affinity binding from signal detection, these technologies exploit the sensitivity, dynamic range, and multiplexing capacity of genomic readouts. Modern Olink panels can measure thousands of proteins per sample, while SomaScan assays now cover over 10,000 protein targets, with dynamic ranges that span many orders of magnitude via strategies such as serial dilution. These techniques allow for simultaneous quantification of low-abundance cytokines and high-abundance structural proteins in a single assay, without complex fractionation or depletion steps.
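The dilution strategy can be sketched in a few lines. This is a toy model, not the actual Olink or SomaScan data pipeline: binding events appear as barcode read counts, high-abundance targets are assayed in a more dilute sample group, and the known dilution factor rescales counts back onto a common relative scale.

```python
# Hypothetical barcode read counts and per-target dilution groups.
barcode_reads = {"IL6": 1_840, "CRP": 52_300, "ALB": 48_900}
dilution_factor = {"IL6": 1, "CRP": 1_000, "ALB": 100_000}

def relative_abundance(reads: dict, dilution: dict) -> dict:
    """Rescale barcode counts by their dilution group so targets are comparable."""
    return {protein: reads[protein] * dilution[protein] for protein in reads}

levels = relative_abundance(barcode_reads, dilution_factor)
# Albumin lands several orders of magnitude above IL-6 despite similar raw
# counts: the dilution scheme, not the detector, extends the dynamic range.
print(levels)
```

Keeping every target's raw counts inside the readout's linear range, then multiplying back, is what lets a single assay span cytokines and structural proteins at once.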
However, unlike mass spectrometry, these platforms do not offer de novo discovery; they are constrained by the availability and quality of their antibody or aptamer libraries. That said, their strengths complement mass spectrometry in other ways. As their content expands and as cross-platform comparison studies mature, affinity-based tools are increasingly positioned as the industrial workhorses of proteomic quantification.
A Future Frontier: Single-Molecule Protein Sequencing
Beyond affinity-based readouts lies an even greater ambition: to sequence proteins molecule by molecule, akin to how NGS reads DNA. Single-molecule protein sequencing aims to identify amino acid sequences and post-translational modifications directly, without reliance on antibodies or ensemble averaging.
The promise of protein sequencing is profound. Proteoforms arising from alternative splicing, differential cleavage, and post-translational modifications (PTMs) often define functional states and drug responses in ways that genomic sequences cannot capture. Directly sequencing individual proteoforms could unlock new biomarkers, mechanisms of action, and therapeutic targets that are invisible to bulk protein or RNA measurements.
Still, the challenges are significant. Proteins cannot be amplified, so single-molecule platforms must operate at the native abundance of analytes, placing immense demands on detector sensitivity and noise suppression. Chemical diversity across 20 amino acids, plus a vast set of PTMs, makes discrimination more complex than distinguishing four nucleotides. Signal processing, error models, and data interpretation are still in their infancy. Despite these obstacles, early-access platforms and proof-of-concept studies continue to hint at the value of these tools.
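The discrimination problem can be put in information-theoretic terms with a back-of-envelope calculation: a DNA sequencer resolves log2(4) = 2 bits per base, while a protein sequencer must resolve log2(20) ≈ 4.3 bits per residue before PTMs are even considered.

```python
import math

# Per-symbol information a single-molecule sequencer must resolve.
bits_per_base = math.log2(4)       # 4 nucleotides
bits_per_residue = math.log2(20)   # 20 standard amino acids

# Each distinguishable PTM state multiplies the alphabet further; with an
# illustrative 10 modified residue states the requirement grows again.
bits_with_ptms = math.log2(20 + 10)

print(bits_per_base, round(bits_per_residue, 2), round(bits_with_ptms, 2))
```

The extra bits per symbol, combined with the lack of amplification, are why the error models and detector chemistry for protein sequencing are so much harder than their genomic counterparts.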
If these technologies achieve sufficient scale, accuracy, and cost-efficiency, they will not merely complement existing tools; they will redefine the standard of biological observation, turning proteomics into a truly sequence-resolved, information-dense layer on par with genomics.
Proteomics And Drug Discovery: An Ecosystem For All
Drug discovery has historically relied on indirect proxies of biology to infer mechanisms and identify targets. Proteomics offers a more proximal and comprehensive view of the drug–cell interface by directly measuring the proteins that drugs bind, modulate, or displace.
Mass spectrometry–based proteomics has already reshaped target identification and mechanism of action studies. High-plex affinity-based platforms extend this reach into large-scale translational and clinical studies. Olink and SomaScan have been deployed in biomarker discovery programs for cardiometabolic disease, oncology, and neurodegeneration, where hundreds to thousands of proteins are profiled across patient cohorts to identify signatures of drug response, toxicity, and disease progression.
Spatial proteomics adds yet another dimension by linking drug effects to tissue architecture. In oncology, spatially resolved protein measurements before and after treatment can reveal whether a therapy remodels the tumor microenvironment, recruits effector cells, or induces immune exclusion. These insights can guide patient stratification, help design rational drug combinations, and generate hypotheses for resistance mechanisms.
Looking forward, single-molecule protein sequencing may enable even more precise drug discovery. By reading proteoforms at single-molecule resolution, researchers could differentiate between active and inactive enzyme states, characterize isoform-specific drug binding, and detect rare, PTM-defined subpopulations of proteins that drive resistance or toxicity. In this vision, the proteome becomes not just a catalog of targets but a dynamic digitized landscape where modifications can be linked to function and therapeutic intervention.
The Elusive Clinical Integration
Despite this technological renaissance, the translation of proteomic discoveries into routine clinical diagnostics remains a formidable challenge.
Mass spectrometry, although increasingly standardized, still struggles with the speed, automation, and regulatory simplicity desired for high-throughput clinical laboratories. It excels in centralized reference labs and specialized centers but remains difficult to deploy widely at the point of care. As a result, clinical proteomics is anchored in low-plex modalities such as ELISA and IHC, which, while limited in multiplexing, are familiar, inexpensive, and deeply entrenched.
Affinity-based and spatial platforms are beginning to bridge this gap. Their fixed panels, standardized workflows, and digital readouts are more amenable to clinical validation than bespoke discovery pipelines. However, regulatory approval depends not only on analytical performance but also on standardizing pre-analytical variables, harmonizing data analysis, and proving clinical utility in prospective trials. Additionally, the Big Data nature of these tools creates a new challenge: how to translate high-dimensional proteomic signatures into simple interpretable scores that clinicians can trust.
Here, machine learning and AI may prove essential. By integrating proteomic, genomic, and clinical data, AI models could identify robust, multivariate signatures and compress them into actionable outputs. To be clinically embraced, these models will need to be transparent, explainable, and validated across diverse populations.
With appropriate analytical rigor, regulatory adaptation, and computational support, the same features that make proteomics challenging can become its greatest strengths in precision medicine.
Conclusion: The Proteomic Renaissance
The trajectory of proteomic technologies suggests not merely a resurrection but a fundamental renaissance of the field: one that parallels, and may eventually rival, the genomic revolution of prior decades. The limitations of traditional mass spectrometry and low-throughput immunoassays are being systematically dismantled by a diverse arsenal of next-generation tools. From the high-fidelity mapping provided by spatial proteomics to the expansive dynamic range and throughput of platforms like Olink and SomaScan or the emerging Alamar platform, researchers are now equipped to interrogate the proteome at a granularity and scale previously deemed impossible.
At the same time, the emergence of single-molecule protein sequencing promises to democratize access to the proteome much as NGS democratized the genome, offering sequence-level resolution of proteoforms and PTMs. While technical hurdles around sensitivity, dynamic range, and data interpretation remain significant, the pace of innovation suggests that direct protein sequencing will transition from novelty to research staple within the next decade.
The resurrection of proteomics will be fully realized only when these sophisticated tools are translated into robust, cost-effective clinical assays that drive decision-making at the point of care. By leveraging AI, multi-omics integration, and rigorous standardization, the vast data sets generated by modern platforms can be distilled into actionable biomarkers for precision medicine.
Ultimately, the revitalization of proteomic tools signifies a maturation of systems biology. As the comprehensive digitization of the proteome becomes routine, our ability to understand mechanisms of action, stratify patients, and design targeted therapies will be transformed. These tools are not simply resurrecting an old discipline but rather redefining the resolution at which we can view life and act on it clinically.
About The Authors
Hannah Glazier is a product manager with a strong foundation in consulting at DeciBio. She has a broad and strategic understanding of therapeutic development within precision medicine. Her expertise includes life science tools and diagnostics, as well as more emerging domains such as next-generation therapeutics, radiomics, and applying AI to enable precision-driven R&D.
Miguel Edwards is a partner at DeciBio and specializes in supporting the development, commercialization, and application of emerging tools across basic research and clinical diagnostics. He has extensive experience guiding clients in refining product specifications, defining target markets and customers, and selecting the most appropriate business models.