Next-generation sequencing: Examining an often oversimplified & misunderstood technology–Part I

In the last decade or so, genetic testing has evolved from single-gene, Sanger-based assays to much more complex next-generation sequencing (NGS) based assays. This incredible technology allows many genes to be evaluated rapidly and in parallel, by sequencing hundreds of thousands of DNA fragments at once. Although NGS is now being used in the clinical setting—to look for pathogenic variants (gene mutations) that cause or predict disease—some health care professionals tend to oversimplify or under-scrutinize the results, and testing is sometimes mis-ordered. Hence, the purpose of this article, the first in a series, is to build a greater understanding of the complexities behind NGS technology, which are still being unraveled.

Genetic testing is not a simple blood test

Physicians order genetic tests to diagnose disease, to assess the hereditary risk of developing or carrying a disease, or to help determine whether a patient may respond to a specific drug after diagnosis. Perhaps because we are used to getting a simple positive or negative result from a blood draw, we expect the same from genetic testing. But the complexity of this technology should not be underestimated.

Benjamin Kipp, Ph.D.

“In a simple blood test, you have a reference range, where anything above or below a certain quantity is associated with a phenotype or disease,” says Benjamin Kipp, Ph.D., Consultant at Mayo Clinic’s Department of Medical Genetics and the Division of Anatomic Pathology. “In large-scale genetic testing, you assess billions of individual nucleotides and compare these findings to a reference genome and ask, ‘how does that set of nucleotides differ from the reference?’ This assessment can result in thousands of differences between you and me, like differences in genes associated with height, hair, and eye color, et cetera. Then, you have to assess unique variants individually and try to decipher: does this variant associate with disease, or is this variant just a benign difference between you and me? So we know there are differences, but are those differences pathogenic (i.e., disease causing) or not?”

Dr. Kipp continues: “When there is a long list of unique variants and many of these lack good data surrounding their importance, this can get extremely complex. Multiple tools are now available to assist in determining whether a variant is pathogenic or not, but none of them are perfect. And even if these tools suggest a variant is pathogenic, does the change truly associate with a patient’s phenotype or why that patient is being tested? That's why it's not always a simple task to evaluate a patient’s genome.”
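For illustration, here is a minimal sketch of the kind of comparison Dr. Kipp describes. The sequences, positions, and classification table below are invented for demonstration; real laboratories rely on curated databases and formal interpretation guidelines, not a hard-coded look-up.

```python
# Minimal sketch: find positions where a sample differs from a reference,
# then ask whether each difference is known to be benign, pathogenic, or of
# uncertain significance. All data here are invented for illustration only.

reference = "ATGGCCTAAGCTTACGGA"
sample    = "ATGGCTTAAGCTTACGCA"

# Hypothetical look-up table (real labs use curated databases and guidelines,
# e.g., ClinVar and ACMG criteria, not a dictionary like this).
known_classifications = {
    (5, "C", "T"): "benign",
    (16, "G", "C"): "pathogenic",
}

for pos, (ref_base, sample_base) in enumerate(zip(reference, sample)):
    if ref_base != sample_base:
        label = known_classifications.get(
            (pos, ref_base, sample_base), "uncertain significance"
        )
        print(f"position {pos}: {ref_base}>{sample_base} -> {label}")
```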

Evolving tools for an evolving technology

Tools themselves add another layer of complexity: finding and choosing the best applications available for testing. A genetic test is a blend of extraction, chemistry, analysis, and interpretation, and companies now offer solutions in each of these distinct areas. Such tools have only become available on the market in the last couple of years.

Eric Klee, Ph.D.

“When NGS technology first came out, what didn't come with it were really good applications to make sense of the data,” says Eric Klee, Ph.D., Consultant in Mayo Clinic’s Department of Health Science Research & Bioinformatics, and Director of the Bioinformatics Program. “So the instruments, all they did was create massive outputs of very small, short fragments of DNA sequence information. And to biologically understand what those meant, you had to align those to some kind of reference, or put them together in some way and then call out the differences and understand what they mean. This is bioinformatics.” 
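A toy sketch of that bioinformatics step follows, under heavy simplifying assumptions (exact-match read placement, no gaps or sequencing errors handled). Production pipelines use dedicated aligners and variant callers rather than anything like this.

```python
# Toy sketch: place short reads on a reference (here by naive exact matching
# of the read's first bases), tally the bases observed at each position, and
# call out where the reads disagree with the reference. Illustrative only.

from collections import Counter, defaultdict

reference = "ACGTTAGCCATGACGT"
reads = ["ACGTTAGC", "CATGACGT", "TAGCCTTG", "AGCCTTGA"]  # short fragments

pileup = defaultdict(Counter)  # position -> counts of observed bases

for read in reads:
    start = reference.find(read[:4])  # naive placement by a 4-base seed
    if start == -1:
        continue  # unplaced read; real aligners tolerate mismatches and gaps
    for offset, base in enumerate(read):
        pileup[start + offset][base] += 1

for pos in sorted(pileup):
    consensus, _ = pileup[pos].most_common(1)[0]
    if consensus != reference[pos]:
        print(f"position {pos}: reference {reference[pos]}, reads support {consensus}")
```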

Whereas a Sanger-based assay looks at only one gene, albeit exhaustively, and may turn up just a couple of variants, NGS can look at many genes and variants at once. “With NGS, you can now look at 150 genes, and you’re going to potentially have 400 variants,” says Dr. Klee. “But how can you thoroughly and exhaustively look at every one of those? So you need to create some tools to produce efficiency and data aggregation to facilitate interpretation.”
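As a rough illustration of that kind of aggregation, the sketch below filters a handful of hypothetical annotated variants down to a shortlist worth manual review. The fields, thresholds, and records are invented examples, not Mayo’s actual criteria.

```python
# Minimal sketch of variant aggregation and filtering: reduce a panel's
# variant list to the calls a reviewer should look at first.
# All fields, thresholds, and records are hypothetical.

variants = [
    {"gene": "BRCA2", "consequence": "frameshift", "population_freq": 0.00001, "read_depth": 180},
    {"gene": "TP53",  "consequence": "missense",   "population_freq": 0.00004, "read_depth": 95},
    {"gene": "KRAS",  "consequence": "synonymous", "population_freq": 0.21,    "read_depth": 210},
    {"gene": "APC",   "consequence": "missense",   "population_freq": 0.08,    "read_depth": 60},
]

REVIEW_CONSEQUENCES = {"frameshift", "nonsense", "missense", "splice_site"}

def needs_review(v):
    """Keep rare, adequately covered variants with a potentially damaging consequence."""
    return (
        v["population_freq"] < 0.01           # rare in the general population
        and v["consequence"] in REVIEW_CONSEQUENCES
        and v["read_depth"] >= 50             # enough coverage to trust the call
    )

shortlist = [v for v in variants if needs_review(v)]
for v in shortlist:
    print(v["gene"], v["consequence"], v["population_freq"])
```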

New ways to analyze the data are constantly emerging, and on top of this, there is a race to figure out how this information can be used in a clinical context to better serve the patient.

“As our understanding of both the technology and the science got better, we were starting to think about things like mutation signatures,” says Dr. Klee. “Now, we're starting to think about fusion events that are maybe identifiable only in the RNA and not the DNA. We're thinking about tumor mutation burden and other sorts of things. And all of that has been pushing the boundaries of the analytics.”
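One of those analytics, tumor mutation burden, is commonly expressed as eligible somatic mutations per megabase of coding territory covered by the panel. A minimal illustration, with invented numbers:

```python
# Rough sketch of a tumor mutational burden (TMB) calculation. The numbers are
# invented for illustration; real assays apply detailed rules about which
# variants count and how the panel footprint is measured.

panel_coding_bases = 1_200_000           # hypothetical panel footprint (1.2 Mb)
eligible_somatic_mutations = 18           # e.g., nonsynonymous somatic variants passing filters

tmb = eligible_somatic_mutations / (panel_coding_bases / 1_000_000)
print(f"TMB ≈ {tmb:.1f} mutations/Mb")    # 18 / 1.2 = 15.0
```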

A maturing market

Many laboratories use commoditized, off-the-shelf tools to run tests for more common mutations. And there’s nothing wrong with that, since these tools are starting to deliver similar feature sets.

Shawn McClelland, Ph.D.

“Now, market solutions are more mature and have filled out capabilities,” says Shawn McClelland, Ph.D., Director of Clinical Bioinformatics in Mayo Clinic’s Department of Laboratory Medicine and Pathology. “And although these tools each probably have slight differentiations, for the most part they deliver over 80% of what someone might need, whereas before, it might have been below 50%.”

Mayo, however, separates itself from the crowd by continually refining standardized tests while also developing its own. This is because its clinicians have extensive experience with the rarest variants and most challenging patient cases from all over the world.

“Our clinicians are very vocal about not missing certain variants that they know are important to detect,” says Dr. McClelland. “So if we take an off-the-shelf solution and it misses some of these, then we might want to build our own solution. I think our consultants and labs and the pathologists are always pushing the limits of detection in these assays, so that constant iteration and feedback with them is important.” 

Test development standards may vary

There are also differences in development standards between labs, from the bare minimum required to validate a test for the market, to the highest standards self-imposed by an institution. Mayo Clinic falls into the latter category, following College of American Pathologists (CAP) as well as New York state guidelines—which are more rigorous with very prescriptive standards (e.g., a certain number of samples must be run to validate a test).

Jesse Voss, CT(ASCP), MB(ASCP)

“We tend to clinically validate our panels before they even go live,” says Jesse Voss, Research and Development Technologist Coordinator in the Office of Translational Research, Innovation, and Test Development. “So we run them on samples that we’ve run previously, or clinicians may have an interesting cohort of patients that we’ll run just to get the assay’s performance characteristics. We run these through the process before they go live.”
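A simplified illustration of how such performance characteristics might be summarized, comparing the new panel’s calls on previously characterized samples against the expected results (sample IDs and calls below are invented):

```python
# Sketch: compare a new panel's calls on previously characterized samples with
# the known (expected) results and summarize basic performance metrics.
# All sample IDs and results are invented for illustration.

expected = {"S01": "positive", "S02": "negative", "S03": "positive", "S04": "positive", "S05": "negative"}
observed = {"S01": "positive", "S02": "negative", "S03": "negative", "S04": "positive", "S05": "negative"}

tp = sum(1 for s in expected if expected[s] == "positive" and observed[s] == "positive")
fn = sum(1 for s in expected if expected[s] == "positive" and observed[s] == "negative")
tn = sum(1 for s in expected if expected[s] == "negative" and observed[s] == "negative")
fp = sum(1 for s in expected if expected[s] == "negative" and observed[s] == "positive")

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
concordance = (tp + tn) / len(expected)
print(f"sensitivity {sensitivity:.2f}, specificity {specificity:.2f}, concordance {concordance:.2f}")
```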

Another part of test development is what Voss calls a “bake off,” whereby his lab picks the best chemistry and the best sequencers for running a test. “We really want to get the best chemistry,” he says. “And if the best chemistry can be improved on, we'll do that as well. We're very focused on being able to analyze as many samples as possible and we've pushed the limits to use as little tissue as possible.”

Voss continues: “In development, what we really want to do is break the chemistry. We want to know its limits: how little of a tissue sample do we need, how little nucleic acid can go into it. And I would hope that other labs do that, but they may not have the resources or the samples to be able to do that. What’s unique about Mayo is that we have a huge repository of samples on hold that we can validate. We can take advantage of that.”
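One common way to probe those limits is a dilution series: run a known-positive sample at decreasing DNA inputs and find the lowest input at which the expected variant is still reliably detected. A minimal sketch with invented numbers:

```python
# Sketch of a dilution-series check on minimum input: for each DNA input level,
# count how many replicates still detect the expected variant, and report the
# lowest input that meets a detection-rate threshold. Numbers are invented.

# input (ng of DNA) -> (replicates detecting the variant, replicates run)
dilution_series = {
    50: (8, 8),
    20: (8, 8),
    10: (7, 8),
    5:  (4, 8),
}

MIN_DETECTION_RATE = 0.95

passing_inputs = [
    ng for ng, (detected, total) in dilution_series.items()
    if detected / total >= MIN_DETECTION_RATE
]
print("lowest reliable input:", min(passing_inputs), "ng")  # -> 20 ng
```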



Chris Bahnsen

Chris J. Bahnsen covers emerging research and discovery for Mayo Clinic Laboratories. His writing has also appeared in The New York Times, Los Angeles Times, and Smithsonian Air & Space. He divides his time between Southern California and Northwest Ohio.