Maneuvering through the world of clinical bioinformatics tools for next generation sequencing (NGS) is complicated in its own right, and further complicated by its dependence on upstream processes such as the wet lab and sequencing. When it comes to using variant interpretation pipelines for NGS under the In Vitro Diagnostic Regulation (IVDR), things get even trickier. There are two distinct phases in NGS analysis: handling the measurements (secondary analysis) and interpreting their outcome (tertiary analysis). Most companies providing tertiary analysis also provide secondary analysis, since from the user's perspective both are part of the same journey from the NGS instrument's raw data output to the clinical report.

Several companies have already secured IVDR certification, but in general its scope is not exposed publicly. It is therefore unclear how far a given certification fulfils the spirit of the regulation and gives customers confidence in the usefulness and reliability of a clinical diagnostics solution. A central challenge lies in defining an intended purpose and the scope of applicability. Sticking to that defined purpose while the reference genome, biomedical knowledge, chemistry, analytical technologies, and secondary analysis pipelines all keep evolving then runs counter to product evolution. This article skims through some core principles for achieving IVDR compliance in secondary as well as tertiary analysis.

Ashwini Nagaraj, Quality Manager at Euformatics, points out: “From the manufacturer’s perspective, defining the intended use for variant classification software presents several complexities because this type of software operates in a rapidly evolving field. Key challenges include diverse applications: the software may support various patient management decisions, such as those related to inherited disorders, oncology, or pharmacogenomics. Clearly specifying or limiting the intended use to a specific condition is particularly challenging when the software is designed to interpret genetic variants, as such tools often possess broad capabilities spanning multiple clinical and research contexts.”

Why is IVDR Compliance Challenging? 

Clinical diagnostics in NGS relies on a long chain of minutely organised, interdependent, and fragile procedures, whether performed by humans, laboratory robotics and complex instruments, or software. Firstly, procedures involve patient sample handling, targeted DNA capture, molecule library construction, and sample indexing and pooling. Secondly, it is about sequencing, which means collecting a trustworthy signal from molecular events, recognising the true signal in a background of noise, handling measurement error models, and transforming the signal into DNA sequence information. This sequence has to be compared to a reference genome, and the differences identified – the variants. Thirdly, it is a matter of associating the hundreds, thousands or even millions of variants from each patient sample with annotations, or information, from the scientific biomedical literature and other knowledge sources. This gathered information allows an analyst to then query the landscape of the patient’s variants with the aim of identifying and classifying relevant variants in the context of the patient’s condition, to help guide treatment.
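
To make the tertiary analysis step concrete, the toy sketch below (a simplified illustration, not any particular product’s logic) matches a handful of called variants against a hypothetical, hard-coded knowledge base and keeps those relevant to the condition under investigation; gene entries, coordinates and classifications are placeholders, and real tools operate on full VCFs with far richer evidence models.

```python
# Toy tertiary-analysis sketch: annotate called variants from a hypothetical,
# hard-coded knowledge base and keep those relevant to the clinical question.
# Real pipelines work on VCF files with thousands to millions of records and
# far richer evidence (population frequencies, predictions, literature, etc.).

KNOWLEDGE_BASE = {  # keyed by gene symbol; classifications are placeholders
    "CFTR":  {"condition": "cystic fibrosis", "classification": "pathogenic"},
    "BRCA1": {"condition": "hereditary breast and ovarian cancer", "classification": "likely pathogenic"},
}

def annotate_and_filter(variants, condition_of_interest):
    """Attach knowledge-base annotations and keep variants matching the condition."""
    relevant = []
    for variant in variants:  # each variant: chrom, pos, ref, alt, gene
        entry = KNOWLEDGE_BASE.get(variant["gene"])
        if entry and entry["condition"] == condition_of_interest:
            relevant.append({**variant, **entry})
    return relevant

# Coordinates below are placeholders, not real genomic positions.
calls = [
    {"chrom": "7",  "pos": 117_000_001, "ref": "ATCT", "alt": "A", "gene": "CFTR"},
    {"chrom": "17", "pos": 43_000_002,  "ref": "G",    "alt": "A", "gene": "BRCA1"},
]
print(annotate_and_filter(calls, "cystic fibrosis"))
```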

Maneuvering through IVDR compliance along that long chain of highly interdependent steps requires a multitude of very different areas of expertise and is a challenge in its own right. The general approach is to compartmentalise the NGS chain of actions, consider each of its roughly three sections as a distinct entity, and define regulatory conformity for each independently. Indeed, we have IVDR compliant kits for capturing defined regions of the genome, IVDR compliant sequencing instruments, IVDR compliant variant calling pipelines and IVDR compliant interpretation tools. In what follows we concentrate on the last two steps, the bioinformatics tools. Even here the regulatory landscape remains complex, and software companies face multiple challenges in ensuring continuous compliance throughout the lifecycle of their solutions.

The Intended Purpose of the Tool

The intended purpose is a key concept in IVDR compliance. Let’s, for the sake of simplification, disregard the fact that IVDR guidance is still evolving and that full implementation was again, in 2024, postponed to a later date. IVDR compliance comes in different risk classes, from A to D, the last being the most demanding because such devices have a direct bearing on patient treatment. In our case, class D is excluded, and typically the bioinformatics tools will be assessed in class C. For this class, the intended purpose is defined along several dimensions, including but not limited to what is detected, the overall function (such as screening, diagnosing, or predicting), and the information that is intended to be provided in the context of a physiological or pathological state. Performance evaluation under IVDR is a demanding, compulsory process covering both the scientific and analytical performance of the bioinformatics tools, and this is where the knot becomes more apparent.

Euformatics Quality Manager Ashwini Nagaraj notes: “From a regulatory perspective, defining the intended use for a variant classification software requires a structured approach to ensure clarity, compliance, and alignment with regulatory expectations. Manufacturers must conduct a comprehensive assessment of the software’s capabilities, carefully delineating its scope, target population, intended users, regulatory classification, and its applications in clinical and real-world settings”.

As seen above, the bioinformatics pipelines are highly dependent on the quality of their input, which, to put it simply, can be disturbed by events in the wet lab or during sequencing. Several other factors also influence the performance of the pipelines. The genome is uneven, with regions of high sequence complexity that are comparatively easy to align and analyse, and regions of low complexity or with pseudogenes that are not. Certain types of variants span large regions and can be called more confidently from long read sequencing than from the standard and most frequently used short read sequencing. There are genes with high internal variability and a structure of repeated modules that are hard to sequence, not to mention the ensuing alignment to the reference genome afterwards. The quality of the biological input material is also of cardinal importance: fresh DNA, frozen DNA, formalin-fixed DNA, cellular DNA and fragmented cell-free circulating DNA all have different characteristics that have a significant bearing on the quality of the sequencing libraries and the sequence output, even before the data hits the bioinformatic pipeline.

On a higher conceptual level, conditions and diseases depend on anything from one to several tens of genes, some of which are fully penetrant, others only partially so. This means that identifying a variant in a gene can, in the case of monogenic Mendelian conditions, provide sufficient information to support a treatment decision. In the case of quantitative traits, however, the contribution from different variants varies considerably. While most monogenic Mendelian conditions are known, this is not the case for quantitative traits, as exemplified by the concept of the polygenic risk score or by conditions such as diabetes or autism spectrum disorder. Clinical genetics and genomics is thus about understanding the contribution of genetic components in a clinical treatment context. Of course, most biological processes, and hence disease conditions, depend on more than one gene, some of them still unidentified today, and often on genes with variable expressivity resulting from epigenetic and other influences. Clinically assessing the role of genetic variants is therefore straightforward in some medical conditions, while in others it is several orders of magnitude more complex than measuring and recognising an abnormal electrocardiogram. What does scientific and analytical performance evaluation under IVDR look like under these conditions?
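
The difference in evidence models can be made concrete with a minimal sketch: a fully penetrant monogenic Mendelian finding can reduce to the presence or absence of a single qualifying variant, whereas a polygenic risk score aggregates small weighted contributions across many loci. The weights and allele dosages below are invented purely for illustration, not a validated clinical model.

```python
# Minimal illustration of two evidence models; all numbers are made up.

def monogenic_call(qualifying_variant_present: bool) -> str:
    # For a fully penetrant Mendelian condition, a single qualifying variant
    # can be decision-relevant on its own.
    return "reportable" if qualifying_variant_present else "not detected"

def polygenic_risk_score(dosages, weights):
    # Classical PRS: weighted sum of allele dosages (0, 1 or 2 copies),
    # PRS = sum_i(beta_i * dosage_i). Individual contributions are small
    # and only meaningful relative to a reference population.
    return sum(beta * dosage for beta, dosage in zip(weights, dosages))

print(monogenic_call(True))
print(polygenic_risk_score(dosages=[0, 1, 2, 1], weights=[0.12, 0.05, 0.30, 0.08]))
```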

High granularity of evidence is required by IVDR in a context where many bioinformatic tools operate on datasets with millions of data points. Pipelines that rely on advanced algorithms face difficulties in generating the level of evidence necessary for approval, both in terms of granularity and of repeatability, as new information keeps changing the knowledge landscape on which the tools operate. Demonstrating reliability when big datasets, statistical models, and machine learning techniques are involved often requires exhaustive testing and documentation, far beyond what is typically needed for traditional IVDs. Compounding these issues is the dynamic nature of bioinformatic pipelines, where variant calling, annotation and classification tools are frequently updated to incorporate new data, improve algorithms, or fix bugs. Under IVDR, each significant update – however significance is defined – might trigger the need for re-validation and re-certification. This has a strong detrimental effect, slowing down innovation. Ensuring continuous compliance while maintaining the agility to update tools as needed is like living on the edge.
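
One practical consequence is that laboratories typically guard every pipeline update with regression testing on characterised reference samples before releasing it into production. The sketch below shows the idea in its simplest form, comparing the variant call sets produced by two pipeline versions on the same sample; the call sets and the acceptance threshold are assumptions made up for this example.

```python
# Hypothetical regression check between two pipeline versions on the same
# reference sample; in practice the call sets would be loaded from VCFs
# produced on characterised materials (e.g. Genome in a Bottle samples).

def concordance(old_calls, new_calls):
    """Fraction of the union of calls shared by both versions (Jaccard index)."""
    union = old_calls | new_calls
    return len(old_calls & new_calls) / len(union) if union else 1.0

# Each call is represented as (chrom, pos, ref, alt); values are placeholders.
calls_v1 = {("1", 1000, "A", "G"), ("2", 2000, "C", "T"), ("7", 7000, "G", "GA")}
calls_v2 = {("1", 1000, "A", "G"), ("2", 2000, "C", "T"), ("7", 7001, "G", "GA")}

score = concordance(calls_v1, calls_v2)
print(f"concordance: {score:.4f}")
if score < 0.99:  # acceptance threshold chosen purely for illustration
    print("Update changes results beyond the validated envelope; re-validation needed.")
```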

Resolving the Knot: How Euformatics Solves IVDR Compliance Challenges

Euformatics provides comprehensive tools and solutions to address the challenges in clinical next-generation sequencing (NGS) workflows in compliance with the EU IVD regulation. By embedding IVDR compliance processes early in the development of bioinformatic pipelines for variant calling and ensuring rigorous validation of the pipeline output, Euformatics also helps laboratories assess parts of the wet lab and sequencing processes, since failures there leave traces in the various outputs of the secondary analysis tools.

Early Integration of Compliance Requirements 

Euformatics supports laboratories in transitioning smoothly from IVDD to IVDR by embedding compliance into the architecture of their products from the start. The Genomics Hub integrates essential components of quality control and assay validation, ensuring that all processes meet regulatory standards such as ISO 13485:2016, which aligns with IVDR requirements. This approach helps to automate and systematise testing and to streamline the approval process by meeting safety and quality management expectations from the outset.

Comprehensive Validation and Quality Control 

The omnomicsQ tool enables labs to maintain high-quality outputs by monitoring over 75 quality metrics, automating quality checks based on use case requirements, and providing tools for daily quality control management. This continuous quality status monitoring, combined with participation in External Quality Assessment (EQA) rounds with organisations like EMQN and GenQA, ensures high standards of clinical diagnostic quality and demonstrates vigilance and compliance with IVDR requirements.
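
As a generic illustration of what automated, use-case-driven quality gating can look like in code (a sketch of the concept, not the omnomicsQ interface), the snippet below compares a run’s metrics against acceptance thresholds defined per assay and flags anything out of range; all metric names and limits are invented for the example.

```python
# Generic QC gating sketch; metric names and thresholds are illustrative only.

THRESHOLDS = {  # per-assay acceptance criteria a lab might define during validation
    "germline_panel_v1": {
        "mean_target_coverage": ("min", 100.0),
        "pct_target_bases_20x": ("min", 0.95),
        "duplication_rate":     ("max", 0.30),
    }
}

def qc_check(metrics: dict, assay: str):
    """Return (metric, value, limit) tuples that violate the assay's thresholds."""
    failures = []
    for metric, (kind, limit) in THRESHOLDS[assay].items():
        value = metrics.get(metric)
        if value is None:
            failures.append((metric, None, limit))  # a missing metric is itself a failure
        elif kind == "min" and value < limit:
            failures.append((metric, value, limit))
        elif kind == "max" and value > limit:
            failures.append((metric, value, limit))
    return failures

run_metrics = {"mean_target_coverage": 87.2, "pct_target_bases_20x": 0.97, "duplication_rate": 0.18}
print(qc_check(run_metrics, "germline_panel_v1"))  # flags the low mean coverage
```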

Proactive Notified Body Engagement 

Euformatics provides support in maintaining comprehensive documentation for NGS bioinformatic pipelines for variant calling, which facilitates smoother interactions with notified bodies. The integration of quality management into their products, along with frequent validation runs, helps customers confidently present their workflows for certification and maintain compliance throughout the product lifecycle. 

By offering modular, automated, and validated solutions tailored for clinical genomics, Euformatics ensures that laboratories can effectively manage the transition to IVDR compliance, minimize disruptions, and maintain a high level of operational quality while advancing towards precision medicine goals. 

Learn More 

Interested in learning how Euformatics can support your lab’s journey towards compliance and precision medicine? Explore more about the Genomics Hub and its capabilities to see how it can help streamline your workflows and maintain regulatory compliance.
