Scientists are driving new discoveries about the role of genetic variation in specific human disorders at an exciting and unprecedented pace, and physicians are increasingly incorporating genetic tests into pediatric clinics as a diagnostic tool. But with high-throughput sequencing methods continuously yielding floods of new information, how can clinicians keep up with updated data for patients who have already received genetic test results?
At Children’s Hospital of Philadelphia, Mahdi Sarmady, PhD, assistant professor of Clinical Pathology and director of bioinformatics in the Division of Genomic Diagnostics, has an innovative, potentially game-changing proposal. In a JAMA Pediatrics Viewpoint published in December, Dr. Sarmady and co-author Ahmad Abou Tayoun, PhD, a former CHOP geneticist, describe a new model for genomic test interpretation and continuous reanalysis that enables faster, more systematic, and more effective use of new evidence and discoveries. Dr. Sarmady believes that implementing the proposed model would relieve the bottleneck between discovery and diagnostics and position CHOP as a leader in using informatics to drive genomics. Implementation will build on the tools Dr. Sarmady is currently putting into place in the Division of Genomic Diagnostics.
“I think this is a natural progression for the genomic diagnostics community,” Dr. Sarmady said. “If you go to PubMed, there are more and more papers on the importance of reanalysis, but everybody complains that there is no way to do such reanalysis systematically. Once we implement this new model, I believe CHOP will be at the forefront of this space and in a position to lead other institutions.”
In this Q&A, we sat down with Dr. Sarmady to find out more about the model he proposed in JAMA Pediatrics and learn how such an innovation would improve outcomes for children.
What inspired you to develop this new model for genetic tests?
With the advent of high-throughput sequencing methods (aka next-generation sequencing), we now have the ability to read the entire genome and test all genes potentially associated with a disorder in a person’s DNA. The challenge, however, becomes interpretation and re-interpretation. Our knowledge of the genomic underpinnings of various diseases is not perfect and remains limited. Right now, we only know about 5,000 disease genes out of about 21,000 genes. But even within a gene, there can be a virtually unlimited number of variants, and we cannot predict or functionally validate the pathogenicity of every single variant in clinical labs. When we get testing done for a patient, about 70 percent of the time we do not report a disease-causing variant: either the test is inconclusive, or we identify a change in the DNA but don’t know whether it causes disease. These questionable variants are termed “variants of uncertain significance” (VOUS). But with scientists learning more and more about genes, variants, and their disease associations every day, the challenge is deciding when and how to go back to previous cases and re-interpret them based on new knowledge.
The New York Times reported on a major announcement from Myriad Genetics, which maintains one of the largest genetic testing databases for cancer: the company re-analyzed data from 1.5 million patients tested over the past 10 years and issued new reports for 60,000 of them. The main reason for that? New knowledge. But currently, there is no systematic way established in the field to decide when and how to reanalyze previous cases in this way.
How does your new model approach this challenge?
What we propose in the JAMA Viewpoint is coming up with a new systematic approach using a software platform that conceptually sits in between the lab and the clinician and has a user interface for each.
Scientists and laboratory directors can add new data to the platform’s knowledgebase as new evidence becomes available, for example when we report and classify new variants. The software also knows how each of those variants was initially reported for the patient, so it can notify the clinician and suggest requests for reanalysis. Basically, it can notify the physician that, “The lab has new evidence for a variant that was reported as benign when you ordered it for your patient. Now is a good time for reanalysis.”
This platform is bi-directional: The system will be able to automatically pick up updates from the clinician’s side, too. This is important because when the clinical lab interprets genomic test data, we use patient phenotypes. We use the physician’s notes and chart information about a patient, and of course that information evolves over time, too. But right now, a test order is more like a snapshot: we get a snapshot of the phenotype at that time, and then we report back with what we know about the genomic results at the time of the report. Information on both sides evolves, but there is no communication between them.
We suggest that this platform can determine if the new information is likely to result in a classification change by calculating a risk score, and if the risk is over a set threshold, the clinician can be advised to order reanalysis.
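To make the idea concrete, here is a minimal sketch of that trigger logic. The class, field names, scoring weights, and threshold are all hypothetical illustrations, not part of any real CHOP system; the Viewpoint describes the concept, not an implementation.

```python
# Hypothetical sketch of a reanalysis trigger: combine evidence that the lab's
# classification changed with evidence that the patient's phenotype evolved,
# and advise reanalysis when the combined risk score crosses a threshold.
# All names and weights here are illustrative assumptions.
from dataclasses import dataclass

REANALYSIS_THRESHOLD = 0.5  # assumed cutoff, to be tuned by the lab


@dataclass
class ReportedVariant:
    variant_id: str
    reported_classification: str     # classification at time of report, e.g. "benign"
    current_classification: str      # latest classification in the knowledgebase
    phenotype_change: float          # 0..1, how much the patient's phenotype evolved


def risk_score(v: ReportedVariant) -> float:
    """Estimate how likely new information is to change the reported result."""
    score = 0.0
    if v.current_classification != v.reported_classification:
        score += 0.6                 # the lab's own classification has changed
    score += 0.4 * v.phenotype_change  # the clinical picture has evolved
    return min(score, 1.0)


def should_notify_clinician(v: ReportedVariant) -> bool:
    """Advise the ordering physician to request reanalysis above the threshold."""
    return risk_score(v) >= REANALYSIS_THRESHOLD


# A variant reported as benign that the lab has since reclassified as VOUS:
v = ReportedVariant("VAR001", "benign", "VOUS", phenotype_change=0.2)
```

In a real system, the score would draw on many more signals (new publications, population frequency updates, other labs’ submissions), but the shape of the decision is the same: accumulate evidence on both the lab and clinician sides, then compare against a threshold.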
How exciting! What are the next steps to bringing this closer to use in a clinic?
Currently, labs and all of their genetic data and information systems are not equipped to do something like this. Even up until a few years ago, the electronic health record system (EHR) was not open, meaning that you couldn’t build a system that interacts with it programmatically. Now, however, things have changed, and EHR systems are making programming interfaces available to build apps. Similarly, on the lab side, most labs do not have flexible information systems that can be extended to support the new model. At CHOP, in the Division of Genomic Diagnostics, we’re creating an in-house software for variant interpretation, reporting, and to serve as a comprehensive knowledgebase. This will be the foundational platform on which we can layer the new genomic interpretation model by tying it in with the EHR. So that’s next. Once the in-house software is live in production in coming months, it will enable us to create this model first as a proof-of-concept in the research setting, and once we’ve showed the validity, especially on the clinician’s side, then we can consider launching it officially.
What are some of the current challenges in developing an effective automated reanalysis tool across institutions?
Knowledgebases are often very siloed between labs. It’s useful for laboratory scientists to know whether some other lab has seen the same variant in a similar patient and how they reported it, because that lab may have access to information that we don’t have, especially since most variants under question occur in patients with rare diseases. Though there are efforts in the field to address this, there’s still the challenge of a lack of standards: Even with guidelines, there are so many different pieces of evidence and classification criteria that there are discrepancies in how different labs report the same thing.
Another challenge is that the same variant can be described in different ways: there are multiple nomenclatures and multiple valid representations of a single variant. So if you go simply by matching the text, you might not find the same variant. Again, these are problems the field is actively working on, with more and more standards, and that will help.
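A tiny example of why plain text matching fails: in HGVS nomenclature, a deletion may be written with or without spelling out the deleted bases, so two strings that describe the same variant compare as unequal. The sketch below normalizes just that one spelling difference; real variant matching handles many more equivalences (transcript versions, genomic vs. coding coordinates, indel alignment) and this regex is only an illustration.

```python
# Illustrative normalizer for one common HGVS spelling difference:
# "c.68_69delAG" and "c.68_69del" describe the same deletion, but a naive
# string comparison treats them as different variants.
import re


def normalize_hgvs(hgvs: str) -> str:
    """Canonicalize a deletion/duplication by dropping the redundant bases
    spelled out after 'del' or 'dup' (illustrative only, not a full HGVS parser)."""
    hgvs = hgvs.strip()
    return re.sub(r"(del|dup)[ACGT]+$", r"\1", hgvs)


a = "NM_007294.3:c.68_69delAG"   # older style, deleted bases spelled out
b = "NM_007294.3:c.68_69del"     # current style, bases omitted
# a != b as raw strings, but normalize_hgvs(a) == normalize_hgvs(b)
```

This is the kind of equivalence a shared knowledgebase has to resolve before it can tell two labs that they have, in fact, seen the same variant.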
You recently published a paper in the European Journal of Human Genetics about a computational phenotype-driven tool. Can you explain how that tool relates to the new model you are developing?
In the JAMA Pediatrics Viewpoint, we talk about the need for automated reanalysis. The tool presented in the EJHG paper allows clinical laboratories to prioritize potentially causal variants and ultimately automate most of the review process. It takes a patient’s phenotypes and raw variants from sequencing and uses a computer model to predict which variants are most likely to be causal for the given phenotypes. It will be an essential component of the ultimate tool we’d like to build to implement the Viewpoint’s vision of automated reanalysis. You can imagine feeding this tool updated phenotypes and seeing how the prioritization of specific variants changes compared with the initial phenotypes, which will guide whether reanalysis is necessary.
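The core idea of phenotype-driven prioritization can be sketched very simply: rank each candidate variant by how well the phenotypes associated with its gene overlap the patient’s phenotypes. The gene-to-phenotype table, term names, and Jaccard scoring below are invented for illustration; the published tool uses a far more sophisticated model.

```python
# Illustrative phenotype-driven prioritization: score each variant's gene by
# overlap (Jaccard index) between its associated phenotype terms and the
# patient's phenotypes, then sort variants by that score. Hypothetical data.

GENE_PHENOTYPES = {  # assumed gene -> phenotype-term associations
    "GENE_A": {"seizures", "hypotonia", "developmental delay"},
    "GENE_B": {"cardiomyopathy", "arrhythmia"},
}


def prioritize(variants, patient_phenotypes):
    """Return variants sorted from most to least phenotype-concordant."""
    def score(variant):
        gene_terms = GENE_PHENOTYPES.get(variant["gene"], set())
        if not gene_terms:
            return 0.0
        union = gene_terms | patient_phenotypes
        return len(gene_terms & patient_phenotypes) / len(union)
    return sorted(variants, key=score, reverse=True)


variants = [
    {"gene": "GENE_B", "hgvs": "c.100A>G"},
    {"gene": "GENE_A", "hgvs": "c.5del"},
]
ranked = prioritize(variants, {"seizures", "hypotonia"})
# GENE_A's variant ranks first for this neurological presentation
```

Re-running `prioritize` with an updated phenotype set and comparing the resulting order against the original ranking is exactly the kind of signal that could tell the platform a reanalysis is worthwhile.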
Now, speaking more generally about clinical consequences, can you comment on how the new model for genetic tests will improve outcomes for patients?
Absolutely. First of all, making the right diagnosis is the first step toward effective management, treatment, and family counseling. The literature on reanalysis reports a 15 to 20 percent increase in diagnostic rate, providing answers and improved management and treatment for a significant number of families. This is especially true for rare diseases, including mitochondrial diseases. Rare diseases are more of a mystery in terms of genetic diagnoses, though in principle the proposed platform will apply to all kinds of diseases.
What excites you the most about moving forward with your proposed model?
I’m lucky to have been at CHOP since 2013, when next-generation sequencing was first becoming available for clinical genetic testing. Working with a multidisciplinary team, we dealt with many challenges as early adopters. The first challenge was just implementing the sequencing: the data quality wasn’t good enough initially. We had to deal with a lot of inherent noise informatically, removing artifacts and making sure we had good quality data. Then the problem became how to interpret data more and more efficiently, and now the problem is how to re-interpret the data in an efficient way. I’m lucky to be part of this whole journey. For me, coming from a bioinformatics background, solving these problems is exciting. Many of the projects we’ve worked on so far solve problems on the lab side, but as we start implementing this project, we’ll have the opportunity to work on how the data touches the clinical side, and that will be great.