
Summary
Written by Andreas Laner

The biennial Mutation Detection Training Courses were initiated by Prof. Richard Cotton in 1998, alternating with the biennial International Workshops on Mutation Detection. The training courses were the more practical of the two, focusing on the laboratory and the technologies used to detect changes in DNA. After a break between 2010 and 2014, the Training Course was revived from October 31 to November 2, 2016 in Heraklion, Crete, Greece.

The course was aimed at people working in DNA laboratories, including clinical diagnostic labs, who are involved in the analysis of DNA sequencing data; 81 delegates from 21 countries registered. Compared to the previous courses, there were two major changes. First, we renamed the course from "mutation" to "variant" detection. Second, as a consequence of the NGS revolution, which has shifted attention from the technologies used to detect variants in DNA to the evaluation of their possible consequences, we refocused the course on "Variant Effect Prediction".

The course was grouped into four sessions (The Basics, Gathering Information, Predicting Consequences & Variant Classification, and Functional Testing & Reporting), with lectures given by experts from universities, research institutions, private diagnostic laboratories, and biotech companies. The sessions were interspersed with presentations from selected abstracts and seven repeated Concurrent Practicals, covering Alamut (Interactive Biosoftware), VarAFT & UMD Predictor, the Ensembl Genome Browser, the RD-Connect Platform, Sophia Genetics, the UCSC Genome Browser and HPO (Phenomizer), and WES/WGS analysis using Exomiser. Slides of all presentations and the practicals can be found here: http://vep.variome.org/page/display/id/19

The course was opened by Johan den Dunnen, who fittingly defined the topic of the course as "everything that I can do to help decide whether a variant has deleterious consequences or not." That is quite a challenge, given the plethora of sources that need to be taken into account when assessing the pathogenicity of variants: various databases, genome browsers, functional tests, prediction tools, prioritization tools, etc. One primary goal of the course was therefore to provide an overview of the tools and methods that can be used to interpret variants.

The first session essentially addressed the fact that no single NGS platform is suitable for detection of the wide variety of variants at the DNA level (SNVs, CNVs, large indels, rearrangements, etc.) and that different biases, some common to all platforms and some unique to specific platforms, have to be considered. Consequently, each currently available technology has its advantages and disadvantages (Henk Buermans).

On the technical side there are large discrepancies in variant calling between different NGS platforms, strategies and pipelines. Independent of these factors, accurate communication between biologists, computer scientists and clinicians (e.g. a detailed description of the relevant problems) is essential to avoid additional obstacles. When implementing or running an NGS platform, this "human" factor must also be considered (Steve Laurie).

Several studies over recent years have demonstrated that rare "unusual" variants (e.g. silent or synonymous variants that are pathogenic, and apparently loss-of-function/truncating variants that are benign) and particular disease mechanisms (e.g. trans genomic-contextual variants, deep intronic variants affecting regulatory elements, variants causing unexpected splicing aberrations, etc.) are an important source of variant misinterpretation, and that care must be taken during pathogenicity assessment not to jump to conclusions. Although individually rare, taken together these "unusual" variants and mechanisms might account for a notable fraction of misinterpreted variants. This also applies to the various potential consequences of DNA variants at the RNA level (Jan Traeger-Synodinos, Andreas Laner and André Blavier).

Gathering information is the key to robust variant interpretation, but identifying the right tools and approaches can be arduous. The UCSC Genome Browser and the Ensembl Genome Browser are arguably among the most fundamental and versatile tools; Robert Kuhn and Helen Sparrow provided an excellent overview of the many ways to extract information and collect evidence from them, and demonstrated how to use the browsers more efficiently.
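Both browsers can also be queried programmatically. As a minimal sketch (not part of the course material), the snippet below uses Python and the requests library to fetch basic gene annotation from the public Ensembl REST API (rest.ensembl.org); the endpoint path and response fields are given to the best of our knowledge and should be checked against the current API documentation.

```python
import requests

SERVER = "https://rest.ensembl.org"

def lookup_gene(symbol, species="homo_sapiens"):
    """Fetch basic annotation (location, biotype, description) for a gene symbol."""
    url = f"{SERVER}/lookup/symbol/{species}/{symbol}"
    resp = requests.get(url, headers={"Content-Type": "application/json"})
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    gene = lookup_gene("BRCA2")  # gene symbol chosen purely as an example
    print(gene.get("display_name"), gene.get("seq_region_name"),
          gene.get("start"), gene.get("end"), gene.get("biotype"))
```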

Standardization is a prominent issue in the highly complex process of NGS-based diagnostics, and it was demonstrated in several sessions that a lack of standardization at certain levels can result in erroneous interpretations and incorrect clinical reports. For example, a standardized phenotypic description of patients using HPO (Human Phenotype Ontology) terms paves the way for phenotype-based prioritization and filtering of exome and genome data, thereby increasing the chance of identifying a causative variant (Sebastian Köhler). Likewise, the application of a standardized variant description nomenclature according to the HGVS recommendations is absolutely key to comparing variants and clinical reports across labs and to locating variants in databases. Not only humans are confused by inconsistent variant nomenclature; the computer algorithms in our variant analysis pipelines also struggle to annotate and interpret a variant that is listed under different descriptions in different databases (Johan den Dunnen). Last but not least, the standardization of variant classification based on defined quantitative and qualitative criteria, as proposed in the ACMG classification guidelines, improves intra- and inter-laboratory concordance and facilitates variant interpretation (Andreas Laner).
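To illustrate the idea behind HPO-driven prioritization, the toy sketch below ranks genes by how many of a patient's HPO terms they are annotated with. The gene-to-term mapping is invented for illustration; real tools such as Phenomizer and Exomiser rely on curated annotations and semantic-similarity measures rather than simple term overlap.

```python
# Hypothetical gene -> associated HPO term IDs (invented for illustration)
GENE_HPO = {
    "GENE_A": {"HP:0001250", "HP:0001263", "HP:0000252"},
    "GENE_B": {"HP:0001631", "HP:0011675"},
    "GENE_C": {"HP:0001250", "HP:0002376"},
}

def prioritize(patient_terms, gene_hpo):
    """Rank genes by the fraction of the patient's HPO terms each gene is annotated with."""
    if not patient_terms:
        return []
    scores = {gene: len(patient_terms & terms) / len(patient_terms)
              for gene, terms in gene_hpo.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Example patient described with two HPO terms (seizures, developmental delay)
patient = {"HP:0001250", "HP:0001263"}
for gene, score in prioritize(patient, GENE_HPO):
    print(f"{gene}\t{score:.2f}")
```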

NGS pipeline performance crucially depends on the variant annotation and filtering steps applied and, even in the absence of a gold standard, some general parameters can be defined and used reliably by software tools (Christophe Béroud). Implementing NGS in routine diagnostics requires rigorous validation procedures and, given the high complexity of an NGS workflow, many things can go wrong (and eventually will go wrong) if thorough quality control is not established for each step in the process (Anna Benet-Pagès). This is also exemplified in the EuroGentest NGS Guidelines: "The one thing that should prevent people from prematurely offering NGS diagnostics is poor quality".
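As an illustration of a single annotation-driven filtering step of the kind such pipelines apply, the sketch below keeps only rare, potentially damaging variants; the field names and the 1% frequency cut-off are assumptions made for this example, not a recommended or validated configuration.

```python
# Variants as they might come out of an annotation step (field names are assumptions)
variants = [
    {"id": "var1", "gnomad_af": 0.12,   "consequence": "missense_variant"},
    {"id": "var2", "gnomad_af": 0.0001, "consequence": "stop_gained"},
    {"id": "var3", "gnomad_af": None,   "consequence": "synonymous_variant"},
]

MAX_AF = 0.01  # illustrative population-frequency cut-off, not a recommendation
DAMAGING = {"stop_gained", "frameshift_variant", "missense_variant",
            "splice_acceptor_variant", "splice_donor_variant"}

def keep(variant):
    """Retain variants that are rare (or unobserved) and potentially damaging."""
    af = variant["gnomad_af"]
    rare = af is None or af <= MAX_AF
    return rare and variant["consequence"] in DAMAGING

filtered = [v for v in variants if keep(v)]
print([v["id"] for v in filtered])  # -> ['var2']
```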

We all depend on public databases for information gathering, yet few of us are willing to share the data collected in our own labs. Johan den Dunnen aptly addressed this by stating that "without sharing, no DNA diagnostics" - it's just that simple.

In addition to the lectures, there were three presentations from selected abstracts, showing the potential of NGS-based panel diagnostics for inherited cardiac disorders (Sara Benedetti), the accurate analysis of full-length CYP2D6 diplotypes using the PacBio RS II platform (Henk Buermans), and targeted RNA sequencing for improved diagnostics in hereditary breast and ovarian cancer (HBOC) patients (Bernd Dworniczak).

Although the collective term "NGS" is now commonly used to denote Illumina sequencing, many new technologies are waiting in the wings, and the field of genetic diagnostics will surely continue to change at least as swiftly as it has in the last decade. Continued development depends on continued education and training for professionals and beginners alike. Based on the feedback from many participants and the inspiring discussions during the course and the evening events, we are inclined to conclude that this training course was a great success. The HVP (Human Variome Project) will be running another course in 2017, so look out for the announcement.