
Retrospective non-target analysis to support regulatory water monitoring: from masses of interest to recommendations via in silico workflows



Applying non-target analysis (NTA) in regulatory environmental monitoring remains challenging—instead of having exploratory questions, regulators usually already have specific questions related to environmental protection aims. Additionally, data analysis can seem overwhelming because of the large data volumes and many steps required. This work aimed to establish an open in silico workflow to identify environmental chemical unknowns via retrospective NTA within the scope of a pre-existing Swiss environmental monitoring campaign focusing on industrial chemicals. The research question addressed immediate regulatory priorities: identify pollutants with industrial point sources occurring at the highest intensities over two time points. Samples from 22 wastewater treatment plants obtained in 2018 and measured using liquid chromatography–high resolution mass spectrometry were retrospectively analysed by (i) performing peak-picking to identify masses of interest; (ii) prescreening and quality-controlling spectra; and (iii) tentatively identifying priority “known unknown” pollutants by leveraging environmentally relevant chemical information provided by Swiss, Swedish, EU-wide, and American regulators. This regulator-supplied information was incorporated into MetFrag, an in silico identification tool whose “post-relaunch” features were used here. This study’s unique regulatory context posed challenges in data quality and volume that were directly addressed with the prescreening, quality control, and identification workflow developed.


One confirmed and 21 tentative identifications were achieved, suggesting the presence of compounds as diverse as manufacturing reagents, adhesives, pesticides, and pharmaceuticals in the samples. More importantly, an in-depth interpretation of the results in the context of environmental regulation and actionable next steps are discussed. The prescreening and quality control workflow is openly accessible within the R package Shinyscreen, and adaptable to any (retrospective) analysis requiring automated quality control of mass spectra and non-target identification, with potential applications in environmental and metabolomics analyses.


NTA in regulatory monitoring is critical for environmental protection, but bottlenecks in data analysis and results interpretation remain. The prescreening and quality control workflow, and interpretation work performed here are crucial steps towards scaling up NTA for environmental monitoring.


Organic pollutants are well-documented in aquatic environments [59]. Traditionally, target strategies that look for chemicals known in advance have been used to identify these compounds [27]. In contrast, non-target analysis (NTA) helps discover previously undetected, unexpected and/or unknown substances. NTA has been under intense development in recent years, aided by advances in instrumentation and computational approaches [17, 27]. Considering the vast chemical space of possible environmental pollutants [65], the need for NTA is becoming more pressing in order to tackle the growing challenge of identifying chemical unknowns in samples. Yet, data analysis in NTA remains a formidable challenge. To ease the “identification burden” in NTA, simplifying approaches like Suspect Screening, where chemicals on discrete lists suspected to be present in the sample are screened, are being taken in the interim [17].

Various successful examples of NTA [1, 4, 5, 19, 28, 50, 53, 60] have inevitably encouraged interest in its potential role to monitor and manage chemical pollutants in the environment [17]. As the field matures, there is some consensus that NTA is “Ready to Go”, with calls for it to be applied more widely within the regulatory frameworks of local, regional, and national authorities [17, 18]. Data-mining routines like enviMass have contributed to such initiatives [34]; enviMass facilitates NTA by peak-picking and prioritising unknown features of interest worthy of further identification efforts. It does so by connecting mass spectral features based on criteria such as having signals of sufficient intensity, grouping together isotopologues and adducts of the same component, and detecting temporal trends, ultimately giving as output a list of m/z-retention time pairs, plus accompanying information for further identification efforts.

However, challenges for regulators to perform NTA persist, particularly with respect to high-throughput data analysis and identification following the mass prioritisation and peak-picking steps described above. For example, regulators may lack specific NTA expertise and/or resources to apply the potentially many and complicated computational workflows [15, 33] available for analysing the copious amounts of data. In addition to the time-consuming and complex nature of data interpretation, issues related to standardisation and reproducibility exist, as there is currently no ‘one size fits all’ approach to identifying compounds using NTA [16]. As a result, NTA is currently often considered by regulators as “too much effort for too little sound evidence”.

Another, more systemic obstacle to applying NTA in a regulatory context relates to the divergent interests of scientists in academia, who currently drive most NTA developments, and scientists in regulatory practice, who would implement these developments. While the former often aim to develop and publish novel work, the primary mandate of the latter is regulatory compliance towards environmental protection. One possible consequence of this reality is that academic research outcomes resulting from NTA may not be directly relevant, or in a form that is readily usable, for regulators. In other words, researchers’ questions may not be regulators’ questions—what is possibly scientifically interesting may not be of priority or directly useful to regulators.

Despite these aforementioned challenges, it is possible (and important) to navigate both research and regulatory needs in NTA. The present work is an example of academic research driven primarily by regulatory priorities. In this “top-down” approach, pre-existing data were used to generate results of direct environmental relevance and with immediate implications for environmental management.

Three practical challenges characteristic of applying NTA in a regulatory environmental monitoring context arose in this study: (i) the study was framed by superlative questions that required a large volume of data to be analysed, i.e. identify unknown compounds occurring at the highest intensities and highest temporal frequency with point sources across all the samples of the sampling campaign; (ii) a strict and limited timeframe was allowed for the study, following the project management procedures of the regulatory body; and (iii) the data originally collected had been repurposed for this NTA study, as there was no capacity nor further resources available within the scope of the project to do additional measurements. The latter point was all the more critical as preliminary manual inspection of the available data revealed that not all measurements were fully suitable for the intended non-target identification. These challenges called for a high-throughput approach capable of processing large volumes of data of variable quality in a fast and reproducible way that would be compatible with identification approaches downstream. Additionally, in contrast to the seemingly ever-increasing complexity of existing workflows [33], an uncomplicated, ‘minimal, bare-bones’ yet fully functional approach that is transparent and easily explainable is critical in a regulatory context.

MetFrag, used in this work to support identification efforts, is an example of an open in silico identification approach which satisfies the aforementioned criteria. Released in 2010 [68], it first retrieves potential candidates with matching mass from compound databases such as PubChem [23] (111 million chemical structures, August 2020), ChemSpider [7, 48] (103 million chemical structures, February 2021), or smaller biological databases like the Human Metabolome Database [20, 67] (114,304 metabolites, February 2021). These candidates are then scored according to how well the experimental spectrum matches the in silico fragments generated per candidate using a bond dissociation approach [68], and subsequently ranked according to this FragmenterScore (sometimes referred to as the Fragmentation Score or FragScore, or simply the MetFrag Score when it is the only component thereof). For the identification of environmental “known unknowns”, using fragmentation information alone in this way can give mediocre results (e.g., ~ 22 and 6% of 473 environmentally relevant standards ranked first with ChemSpider and PubChem, respectively [51]). This outcome may have various causes: (i) the search databases used are too large and/or do not contain only environmentally relevant compounds, therefore resulting in too many candidates that are not meaningful, and/or (ii) there is simply not enough information to distinguish candidates when considering their fragmentation alone.
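MetFrag's retrieve-then-score strategy can be sketched in a few lines. The following is a toy illustration, not MetFrag's implementation: the database entries and fragment lists are invented, and a simple fraction-of-peaks-explained score stands in for the bond-dissociation-based FragmenterScore.

```python
# Toy sketch of retrieve-score-rank candidate identification. All masses,
# names, and fragment lists are hypothetical.

def retrieve_candidates(database, neutral_mass, ppm=5.0):
    """Return database entries whose monoisotopic mass matches within ppm."""
    tol = neutral_mass * ppm / 1e6
    return [c for c in database if abs(c["mass"] - neutral_mass) <= tol]

def fragment_score(candidate, measured_peaks, mzabs=0.001):
    """Simplified stand-in for the FragmenterScore: fraction of measured
    peaks matched by the candidate's (precomputed) fragment masses."""
    matched = sum(
        any(abs(p - f) <= mzabs for f in candidate["fragments"])
        for p in measured_peaks
    )
    return matched / len(measured_peaks)

database = [
    {"name": "cand_A", "mass": 277.0990, "fragments": [121.0648, 160.0757]},
    {"name": "cand_B", "mass": 277.0991, "fragments": [105.0699]},
    {"name": "cand_C", "mass": 290.1012, "fragments": [121.0648]},  # outside window
]
peaks = [121.0648, 160.0757]

candidates = retrieve_candidates(database, 277.0990)
ranked = sorted(candidates, key=lambda c: fragment_score(c, peaks), reverse=True)
print([c["name"] for c in ranked])  # ['cand_A', 'cand_B']: cand_A explains both peaks
```

As in the description above, the candidate explaining the most measured peaks ranks first, while candidates outside the mass window are never retrieved.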

To address these limitations, MetFrag was ‘relaunched’ in 2016 to incorporate further identification strategies beyond fragmentation, such as retention time information, substructure in/exclusion, availability of literature and patent information, presence/absence in suspect lists, and user-defined scoring terms [51]. Over time, spectral similarity comparison with spectra from the MassBank of North America (MoNA) [12], with and without a MetFusion approach [14], was also integrated into MetFrag. Since then, two further open-science/environmental chemistry developments have contributed significantly to MetFrag’s extended capabilities for identifying environmental unknowns. Firstly, the release and integration of the United States Environmental Protection Agency’s CompTox Chemicals Dashboard [66] (hereafter, “CompTox”) into MetFrag provides a search database of > 850,000 compounds of environmental and toxicological relevance [54], while allowing users to leverage the “MS-Ready” concept [37] and various forms of chemical metadata availability in CompTox as user-defined scoring terms. Secondly, critical information from international regulatory bodies can now be exploited through MetFrag towards identifying environmental chemicals. Beyond (i) the US EPA’s Chemicals and Products database (CPDat) [10, 62] and other CompTox-related metadata terms that are already integrated via CompTox, MetFrag’s user-defined scoring terms can also be configured to incorporate information such as (ii) hazard and exposure from the Swedish Chemicals Agency KEMI [13], (iii) European chemicals registration, i.e. REACH [2], and (iv) the NORMAN Network’s merged suspect list of chemicals of emerging concern known as SusDat [43], representing knowledge gathered from NORMAN members, which include > 70 regulatory and academic reference laboratories throughout the world, as well as external contributions.
Used in this way, MetFrag connects disparate resources from various regulatory agencies and academic researchers towards identifying environmental unknowns, practically ‘helping researchers and regulators help each other’ by providing an interconnected information platform with identification functionality.

Since MetFrag’s relaunch in 2016, work on the identification of environmental unknowns has used MetFrag’s post-relaunch functionality to varying extents. Some research simply uses MetFrag purely for its in silico fragmentation capabilities, i.e. not paired with any compound database [9, 40, 49]. Many examples use only the FragmenterScore to rank candidates retrieved from ChemSpider alone [3, 31, 35], PubChem alone [29, 61, 64], or a combination of either or both with other databases [8, 25, 45, 47] like KEGG [22], FOR-IDENT [30] and MassBank [36]. Several studies have begun to use one or more of MetFrag’s post-relaunch capabilities such as data source, patent, and/or reference counts for the respective compound database used [4, 5, 11, 39, 41, 42, 63], spectral library similarity [4, 5, 11, 21, 63], and presence in suspect lists [5, 28, 41]. Albergamo and colleagues [1] were amongst the first to use MetFrag’s post-relaunch capabilities heavily, in particular those provided via CompTox and by international regulators and scientists.

The present work aimed to exploit “post-relaunch” MetFrag and Open Science developments towards retrospectively identifying non-target environmental pollutants in a regulatory context, as summarised in Fig. 1. The main subjects of this study were pollutants deemed of regulatory concern, originating from industrial activities and found in Swiss wastewater treatment plant (WWTP) effluents; the study focused on developing the open in silico workflow to identify them. A prescreening and quality control workflow for high-throughput automated data processing was developed to analyse a provided list of unknown m/z prioritised by enviMass. The use of MetFrag in this work leverages the state-of-the-art open resources mentioned above, chief among them regulatory information from multiple international sources, in addition to exploiting many of MetFrag’s post-relaunch capabilities. The identifications provided by MetFrag were analysed with respect to the specific environmental regulatory context of this study and communicated using an established system of confidence levels, discussed in detail in the next section.

Fig. 1

Visual project overview showing analytical and computational steps. Analytical “wet lab” steps are indicated in yellow, while “in silico” computational steps are indicated in green. The current study focuses on Retrospective Non-target Analysis, shown in dark green. Dotted arrows and boxes indicate possible future work based on the results of the current study, highlighted in blue to represent decisions to be made based on regulatory priorities


Daily water samples were collected from 25 sites based at 22 WWTPs distributed across Switzerland within sampling campaigns focusing on point sources of industrial chemicals. Of these 25 sampling sites, 19 correspond to WWTP effluents (i.e., 1 site per WWTP), while 6 constitute paired influent and effluent sampling sites of 3 WWTPs (i.e., 2 sites per WWTP) which employ ozonation. The effluent from these 3 WWTPs employing ozonation came from secondary clarifiers. Five sites were sampled twice each (in June and October 2018, respectively), while 20 were sampled only once (June 2018), giving a total of 30 samples.

During each sampling campaign, 2 L of the 24-h flow-proportional composite samples were collected daily at each sampling site over seven consecutive days. Each daily sample was divided between two 1-L glass bottles, which were kept closed at 4 °C until the last day of the respective sampling campaign. On that day, all samples were transported cooled to an analytical laboratory, where they were filtered and flow-proportionally mixed before being sent, cooled, for MS analysis. The final samples used for measurement were thus flow-proportional 7-day composites.

Sample measurement

Prior to analysis, samples were filtered through a glass fibre filter and isotopically labelled internal standards were added (26 for positive and 7 for negative ionisation mode, respectively). Samples were analysed without enrichment by direct injection of 100 μl into the chromatographic system. Chromatographic separation of the analytes was performed using a Waters Atlantis T3 column (150 × 3 mm, 3 μm particle size) connected to a Thermo Scientific Accela liquid chromatography system equipped with a 1250 pump, open autosampler, and Thermo Scientific Column Oven 300. The mobile phase eluent A consisted of ultrapure water (ELGA LabWater Purelab Ultra from Labtec Services AG, 5 mM ammonium formate), while eluent B consisted of LC–MS grade methanol (Scharlau Chemie S.A, 5 mM ammonium formate). The gradient programme started with 10% B, which was kept for 1 min before a linear ramp to 95% B over 12 min. This condition was kept for 5 min before returning to starting mobile phase conditions at 18.5 min. The column was re-equilibrated for 4.5 min, giving a total run time of 23 min at a flow rate of 300 μl/min.
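For reference, the gradient programme can be written out as (time, %B) breakpoints. The breakpoint times below are inferred from the description above; the last entry serves as a sanity check that the segments add up to the stated 23 min total run time.

```python
# Gradient programme as (time_min, percent_B) breakpoints, inferred from the
# text: 10% B held 1 min, linear ramp to 95% B over 12 min, 5 min hold,
# return to 10% B by 18.5 min, then 4.5 min re-equilibration.
gradient = [
    (0.0, 10),   # start: 10% B
    (1.0, 10),   # end of 1 min hold
    (13.0, 95),  # end of 12 min linear ramp
    (18.0, 95),  # end of 5 min hold at 95% B
    (18.5, 10),  # return to starting conditions
    (23.0, 10),  # end of 4.5 min re-equilibration
]
print(gradient[-1][0])  # 23.0 min total run time
```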

A full-scan single MS measurement was performed using a Thermo Scientific QExactive Orbitrap LC/MS system with resolving power of 70,000 (at m/z = 200) within 7 days of sample collection and preparation. A scan range of m/z 100 to 1000 was used in both positive and negative electrospray ionisation modes. A heated electrospray ionisation (HESI) source with a vapouriser temperature of 350 °C, sheath gas flow of 35 arbitrary units (au), auxiliary gas flow of 10 au, spray voltage of 3400 V (positive) and 3000 V (negative), S-lens level of 50, and capillary temperature of 270 °C was used. The samples were then stored at 4 °C.

Following the prioritisation of non-target masses (described in Part 1 of the prescreening workflow of the next section), the resulting list of non-target masses formed the inclusion list for MS2 measurements of the same samples in data-dependent acquisition mode in February 2019. Normalised collision energy of 35 was used. The same measurement protocol as described above was applied with resolving power of 17,500 (at m/z = 200).

Computational methods

Part 1—enviMass prioritisation of masses of interest

enviMass (v.3.5, [34]) was used to prioritise non-target masses of interest based on the following criteria: high-intensity MS1 peaks (used as a proxy for high concentration), presumed point source (occurring at one or only a few sampling sites), multiple temporal occurrences across the sampling campaign, i.e. high-frequency occurrences, and existing isotopologue and adduct linkages. Initially, a list of 300 non-target masses of interest was identified and used as an inclusion list for MS2 acquisition in the second round of measurements in February 2019 using the same samples that had been stored at 4 °C as described above. Of these 300 masses, 125 masses with associated [M + H]+ and [M − H]− information from enviMass (117 and 8, respectively) were considered for further processing in the next step and constituted “List A”. A further 60 masses with associated [M + H]+ and [M − H]− information (28 and 32, respectively) were also considered for the next step (“List B”), but had not been measured as part of the inclusion list. The enviMass parameters used to derive Lists A and B are detailed in the SI. These lists were the starting point for the workflows described here.
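The prioritisation criteria listed above can be illustrated with a minimal filter. The thresholds and feature fields below are invented placeholders, not the actual enviMass parameters (those are detailed in the SI).

```python
# Illustrative filter mirroring the enviMass prioritisation criteria:
# intensity (proxy for concentration), point source, temporal frequency.
# Thresholds and feature values are hypothetical.

def prioritise(features, min_intensity=1e6, max_sites=3, min_timepoints=4):
    return [f["mz"] for f in features
            if f["intensity"] >= min_intensity        # high-intensity MS1 peak
            and f["n_sites"] <= max_sites             # presumed point source
            and f["n_timepoints"] >= min_timepoints]  # recurring over the campaign

features = [
    {"mz": 278.1062, "intensity": 5e6, "n_sites": 1,  "n_timepoints": 6},  # kept
    {"mz": 301.1410, "intensity": 5e6, "n_sites": 18, "n_timepoints": 6},  # diffuse source
    {"mz": 190.0503, "intensity": 2e4, "n_sites": 1,  "n_timepoints": 6},  # too weak
]
print(prioritise(features))  # [278.1062]
```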

Part 2—prescreening and quality control workflow

Data files in .RAW format were first converted to .mzML format using MSConvert from Proteowizard (v.3.0.19182-51f676fbe, [6]), with full settings available in the SI (Additional file 1: Figure S1). The data were preliminarily inspected manually using XCalibur Qual Browser (v., Thermo Fisher Scientific, Waltham MA, USA). Then, a workflow to extract, prescreen, and quality control the spectra of the precursor masses in Lists A and B was developed and performed prior to further identification efforts.

The prescreening workflow first extracts all MS1 and MS2 ion chromatograms of each m/z from each mzML file supplied to it as input. No post-processing of mass spectral features, such as peak removal, filtering, or scaling, is performed during the extraction of spectra. Unless specified otherwise, extracted MS1 precursors whose retention times fell within 2 min of the mean retention time given by enviMass were deemed to match the original list entries, allowing for possible drifts caused by wastewater matrix effects and normal variations in the LC analytical set-up.
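The retention time matching rule can be expressed as a small helper. This is a hypothetical sketch of the rule as described, not Shinyscreen's actual code.

```python
# Sketch of the retention-time matching rule: an extracted MS1 precursor is
# kept only if its retention time falls within 2 min of the enviMass value.

RT_TOLERANCE_MIN = 2.0  # allows for matrix effects and normal LC drift

def matches_list_entry(extracted_rt_min, envimass_rt_min, tol=RT_TOLERANCE_MIN):
    return abs(extracted_rt_min - envimass_rt_min) <= tol

print(matches_list_entry(7.4, 8.9))  # True: 1.5 min apart, kept
print(matches_list_entry(4.1, 8.9))  # False: 4.8 min apart, rejected
```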

A ‘case’ was defined as a measurement whose chromatograms and corresponding spectra have the same m/z, retention time, and file source (essentially, a single unique measurement). As part of the prescreening, each case was subject to quality control: the MS1 and MS2 ion chromatograms were checked automatically by an algorithm within the workflow in a stepwise fashion as per checks and thresholds 1–5 listed in Table 1. Failure to meet any of the criteria in the checks caused the case to be rejected from further identification efforts.
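The stepwise, fail-fast behaviour of this quality control can be sketched as follows. The check names and thresholds here are placeholders, not the published values of Table 1; the point is that a case is rejected at the first failed check.

```python
# Illustrative fail-fast quality control chain: apply checks in order and
# reject a case at the first failure. Check names/thresholds are invented.

def run_checks(case, checks):
    """Apply checks in order; return (passed, name_of_failed_check_or_None)."""
    for name, check in checks:
        if not check(case):
            return False, name
    return True, None

checks = [
    ("ms1_intensity", lambda c: c["ms1_intensity"] >= 1e5),       # placeholder threshold
    ("ms2_present",   lambda c: c["n_ms2_events"] > 0),
    ("rt_alignment",  lambda c: abs(c["ms1_rt"] - c["ms2_rt"]) <= 0.5),
]

good = {"ms1_intensity": 3e6, "n_ms2_events": 2, "ms1_rt": 7.1, "ms2_rt": 7.2}
bad  = {"ms1_intensity": 3e6, "n_ms2_events": 0, "ms1_rt": 7.1, "ms2_rt": 7.2}

print(run_checks(good, checks))  # (True, None)
print(run_checks(bad, checks))   # (False, 'ms2_present')
```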

Table 1 Quality control checks within the prescreening workflow applied to the MS1 and MS2 spectral data for each case

Cases that passed quality control checks 1–6 were manually inspected for peak shape and width (check 7, Table 1). Only cases that passed all quality control checks 1–7 were used as input for MetFrag identification in the next part of the workflow.

The prescreening workflow developed and used in this work has been embedded into the openly available R package Shinyscreen (v.0.1.1-paper, [24]).

Part 3—identification using MetFrag

Tentative identification was performed using MetFrag (command line v.2.4.5, [51, 68]). CompTox was used as the candidate database in the form of a local .csv file [54]. R scripts, building on the code bases of ReSOLUTION (v.0.1.8, [55]) and RChemMass (v.0.1.27, [56]), were written to accomplish the following steps.

First, the neutral monoisotopic mass corresponding to the [M + H]+ or [M − H]− adducts indicated by enviMass in positive and negative mode, respectively, was calculated. Then, candidates of matching mass with a relative deviation of 5 ppm (selected to reflect the analytical mass error, also known as “Search ppm”) were retrieved from CompTox. Subsequently, candidates were fragmented in silico using the following fragmentation settings: Absolute Fragment Peak Match Deviation 0.001 Da (“Mzabs”), Relative Fragment Peak Match Deviation 5 ppm (“Mzppm”), and Maximum Tree Depth 2. Then, candidates were ranked according to the MetFrag Score, calculated as the sum of ten weighted scoring terms summarised in Table 2 and explained in detail below. These terms are either already built-in, or can easily be configured within MetFrag since its relaunch [51]. Candidates with identical first block InChIKeys (i.e., stereoisomers, with the same structural skeleton) were grouped together.
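The adduct-to-neutral-mass step and the 5 ppm retrieval window amount to standard mass arithmetic, illustrated below with one of the study's m/z values (the proton mass used is 1.007276 Da).

```python
# Worked example of converting a precursor m/z to a neutral monoisotopic mass
# and building the 5 ppm candidate search window ("Search ppm").

PROTON = 1.007276  # Da

def neutral_mass(mz, adduct):
    """Neutral monoisotopic mass for [M+H]+ or [M-H]- precursors."""
    if adduct == "[M+H]+":
        return mz - PROTON
    if adduct == "[M-H]-":
        return mz + PROTON
    raise ValueError(f"unsupported adduct: {adduct}")

def search_window(mass, ppm=5.0):
    """Symmetric mass window used to retrieve candidates from the database."""
    tol = mass * ppm / 1e6
    return mass - tol, mass + tol

m = neutral_mass(278.1062, "[M+H]+")  # one of the study's m/z values
lo, hi = search_window(m)
print(round(m, 4))        # 277.0989
print(round(hi - lo, 6))  # 0.002771 (window width in Da)
```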

Table 2 MetFrag scoring terms and weights used in tentative identification

Three scoring terms within the MetFrag Score reflect the contribution of the fragmentation spectra to the proposed identification: the FragmenterScore (in silico fragments explaining measured peaks, a function of peak count and bond dissociation energy), OfflineMetFusion (spectral similarity to entries in the MassBank of North America (MoNA) using a MetFusion approach [14]), and OfflineIndivMoNA (maximum spectral similarity with MoNA entries having an exact InChIKey match). Four scoring terms relate to the availability of the chemical’s metadata: CPDAT_COUNT [66] (number of entries within US EPA’s Chemicals and Products database), DATA_SOURCES [66] (number of data sources underlying CompTox, which performs similarly to a reference count), KEMIMARKET_HAZ (v.S17.0.1.3, [13]) (scaled and normalised hazard score calculated by the Swedish Chemicals Agency KEMI), and KEMIMARKET_EXPO (v.S17.0.1.3, [13]) (scaled and normalised exposure score calculated by KEMI). The remaining three terms account for the candidate’s presence or absence in suspect lists, another form of metadata availability: INDACT (Industrial Activity chemicals known to be used near the sampling sites, supplied by the regulator), REACH2017 (v.S32.0.1.3, [2]) (chemicals registered under the European legislation framework REACH), and NORMANSUSDAT (v.S0.0.2.0, [43]) (chemicals in the merged NORMAN Suspect List Exchange). All metadata scoring terms were weighted 1 except for REACH2017 and NORMANSUSDAT, which were both weighted 0.5 due to the high redundancy across the two databases.

To calculate the MetFrag Score for each candidate, all the scoring terms except NORMANSUSDAT, REACH2017, INDACT, and OfflineIndivMoNA are first normalised to their respective largest values among the candidate set, scaling them between 0 and 1. These normalised values are then summed together with the weighted presence/absence scores of NORMANSUSDAT, REACH2017, and INDACT (0.5, 0.5, and 1.0 if present; 0 if absent) and the similarity score from OfflineIndivMoNA (not scaled, as it is already defined between 0 and 1). The maximum possible MetFrag Score is therefore 9.
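A minimal sketch of this aggregation follows, using term names from Table 2 but entirely invented candidate values; it is an illustration of the normalise-and-sum logic, not MetFrag's own code.

```python
# Sketch of MetFrag Score aggregation: continuous terms are normalised to the
# candidate set's maximum (0-1), then summed with weighted presence/absence
# terms and the already 0-1 OfflineIndivMoNA similarity. Values are invented.

CONTINUOUS = ["FragmenterScore", "OfflineMetFusion", "CPDAT_COUNT",
              "DATA_SOURCES", "KEMIMARKET_HAZ", "KEMIMARKET_EXPO"]
PRESENCE_WEIGHTS = {"INDACT": 1.0, "REACH2017": 0.5, "NORMANSUSDAT": 0.5}

def metfrag_scores(candidates):
    maxima = {t: max(c[t] for c in candidates) or 1.0 for t in CONTINUOUS}
    scores = []
    for c in candidates:
        s = sum(c[t] / maxima[t] for t in CONTINUOUS)            # normalised terms
        s += sum(w * c[t] for t, w in PRESENCE_WEIGHTS.items())  # 0/1 suspect lists
        s += c["OfflineIndivMoNA"]                               # already in [0, 1]
        scores.append(round(s, 3))
    return scores

cand1 = {"FragmenterScore": 250, "OfflineMetFusion": 0.8, "CPDAT_COUNT": 12,
         "DATA_SOURCES": 40, "KEMIMARKET_HAZ": 0.6, "KEMIMARKET_EXPO": 0.9,
         "INDACT": 1, "REACH2017": 1, "NORMANSUSDAT": 1, "OfflineIndivMoNA": 0.998}
cand2 = {"FragmenterScore": 125, "OfflineMetFusion": 0.4, "CPDAT_COUNT": 0,
         "DATA_SOURCES": 10, "KEMIMARKET_HAZ": 0.3, "KEMIMARKET_EXPO": 0.45,
         "INDACT": 0, "REACH2017": 1, "NORMANSUSDAT": 0, "OfflineIndivMoNA": 0.0}

print(metfrag_scores([cand1, cand2]))  # [8.998, 2.75]
```

With these invented values the first candidate approaches the theoretical maximum of 9 (six normalised terms, plus 1 + 0.5 + 0.5 for the suspect lists and up to 1 for a perfect MoNA match).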

Tentative identifications by MetFrag were communicated using an established system of confidence levels [57], reiterated here with study-specific context for clarity. As MetFrag is an in silico method, it generally gives identifications of Level 3 confidence, based on evidence for a possible chemical structure from MS1, MS2 and experimental data/context. These identifications are tentative and require further validation before achieving higher confidence levels, as do Level 2a identifications of probable structure based on a library spectrum match, corresponding to a high MoNA individual similarity score (> 0.9) in the present work. Level 1 identifications require confirmation of the structure using a reference standard and include target compounds.
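The level assignment described above can be summarised as a toy mapping. The 0.9 MoNA similarity cut-off comes from the text; the reference-standard flag is a hypothetical input added for illustration.

```python
# Toy mapping of the confidence-level convention used in this study:
# Level 1 requires a reference standard, Level 2a a strong library match
# (MoNA similarity > 0.9), Level 3 otherwise (in silico evidence only).

def confidence_level(mona_similarity, confirmed_with_standard=False):
    if confirmed_with_standard:
        return "1"
    return "2a" if mona_similarity > 0.9 else "3"

print([confidence_level(0.998), confidence_level(0.42),
       confidence_level(0.998, confirmed_with_standard=True)])
# ['2a', '3', '1']
```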


Prescreening and quality control

Preliminary manual inspection of the data using XCalibur Qual Browser (v., Thermo Fisher Scientific, Waltham MA, USA) indicated that not all measurements of each individual m/z were suitable for non-target identification because, e.g., MS1 precursors were often at low intensity, some MS2 spectra were absent, and spikes and/or noise were observed in the MS1 extracted ion chromatogram instead of actual peaks. Therefore, the prescreening workflow consisting of 7 quality control checks (Table 1) was implemented to isolate measurements that were suitable for non-target identification. Figure 2 provides examples of measurements visualised using Shinyscreen which passed all quality control checks (Panel A) and failed either one or more checks (Panels B-E), respectively. The latter were automatically eliminated from further consideration by the workflow because they were deemed unsuitable for use in non-target identification.

Fig. 2

Examples of cases which pass and fail quality control within the prescreening workflow. Quality control helped isolate measurements which were suitable for non-target identification and discard those which were not. Panel A shows Shinyscreen’s graphical user interface and an example of a case whose MS1–MS2 measurement is suitable for non-target identification—its extracted ion chromatogram shows an MS1 peak of sufficiently high intensity, a corresponding MS2 event that is temporally well-aligned, and its MS2 spectrum. The remaining panels show examples of cases that were eliminated from further identification efforts by the workflow as they were deemed unsuitable due to an excessively noisy MS1 spectrum (B; check 3 in Table 1), the absence of an MS2 event (C; check 4), misaligned MS1 and MS2 events (D; check 5), and poor MS1 peak shape and width (E; check 7)

For identification, a total of 185 non-target m/z from Lists A and B were prescreened in each of the 30 mzML files, resulting in 5,550 possible cases for identification. For the 117 positive-mode m/z of List A, the prescreening workflow runtime was approximately 8 h over all 30 mzML files on a laptop with 8 GB RAM and 2 physical cores. Runtime was estimated based on timestamps from results file generation.

Of the 5,550 cases, 899 satisfied checks 1–5 listed in Table 1. Duplicate cases by m/z (e.g., where the same m/z was detected at more than one site) were eliminated by prioritising those with the highest MS1 intensity (check 6), leaving 157 cases (approximately 3% of total cases) to be manually inspected for peak width and shape (check 7, Fig. 2e). Of these 157 cases, only 22 passed manual inspection and qualified for further identification efforts using MetFrag (listed in full in Additional file 1: Table S2). Figure 3 summarises this data reduction outcome as a result of quality control within the prescreening workflow.

Fig. 3

Quality control checks within prescreening resulted in data reduction prior to identification using MetFrag. Each check is represented by a bar whose height indicates the number of cases which passed that check. Going from left to right within each group of bars reflects the sequence of quality control checks (checks 1–7, Table 1)

Tentative identification using MetFrag

Tentative identifications for the 22 m/z that passed quality control checks were obtained using MetFrag. Candidates for each m/z were proposed as ranked lists according to their respective MetFrag Scores comprising the ten scoring terms described in Table 2 (full MetFrag results with lists of ranked candidates available in MassIVE). Figure 4 shows the distribution of MetFrag Scores classified into tertiles for the top-ranked candidate for each of the 22 m/z.

Fig. 4

Distribution of MetFrag Scores for the top candidate of each m/z (n = 22). Shaded regions indicate distribution tertiles corresponding to Low, Moderate, and High MetFrag Scores, respectively. A rug plot is included along the x-axis to give an indication of the actual MetFrag Score values within each histogram bin

Interpretation of MetFrag results

Given the background and context of this work (i.e. NTA in environmental monitoring to identify high-priority unknowns), the MetFrag results described above do not represent a satisfactory end-point of this study. In other words, it does not suffice to present MetFrag’s outputs (lists of ranked candidates, one list per m/z) alone, as these do not provide sufficient direction for the next regulatory steps. Rather, it is crucial that these scientific outcomes are translated into transparent and actionable information for regulatory scientists to aid their future decision-making with respect to the following questions:

  1. What does the distribution of MetFrag Scores mean and what are the implications?

  2. How can this information guide evidence-based decision-making regarding further identification efforts? (e.g., by adding candidates to suspect lists for future Suspect Screenings, purchasing reference standards for confirmation, etc.)

The following section addresses these two questions through in-depth interpretation of MetFrag’s results at two levels: at a global level across all 22 m/z studied, and at a candidate level per m/z, respectively. The aim of these interpretations is to deliver information based on scientific premises that is actionable from a regulatory point of view and in doing so, present ‘complex’ MetFrag results in an interpretable way using Scenario Analysis.

The distribution of MetFrag Scores of the top candidates for each m/z (Fig. 4) arises from four possible combinations of the Spectral and Metadata Score components contributing to the final MetFrag Score (Table 3). The distribution is split into tertiles based on the range of MetFrag Scores possible (0–9), and each tertile is assigned an associated scenario, as explained below.

Table 3 Four different scenarios corresponding to the four possible combinations of Spectral and Metadata scores

Scenario 1 features both strong spectral and metadata evidence supporting a given candidate, resulting in a High MetFrag Score. Moderate MetFrag Scores result when one of these two scoring components, Spectral or Metadata, is low and the other is high, leading to Scenarios 2 and 3. Finally, Scenario 4 describes situations where both Spectral and Metadata scores are low, resulting in Low MetFrag Scores. Table 4 shows the breakdown of the MetFrag Score into its component Spectral and Metadata terms for four illustrative examples, one for each scenario. These representative examples were selected from the distribution (Fig. 4) and are the respective top-ranked candidates for 4 m/z.
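A sketch of this scenario assignment follows, assuming illustrative component cut-offs. Note that the text does not specify which of Scenarios 2 and 3 corresponds to which single high component, so that mapping (and the cut-off value) is arbitrary here.

```python
# Illustrative mapping from Spectral/Metadata score components to the four
# scenarios of Table 3. The cut-off and the Scenario 2 vs 3 assignment are
# assumptions for the sketch, not the published definitions.

def scenario(spectral, metadata, cutoff=1.5):
    """Classify a top candidate from its two score components (toy cutoff)."""
    high_s, high_m = spectral >= cutoff, metadata >= cutoff
    if high_s and high_m:
        return 1  # strong spectral AND metadata evidence -> High MetFrag Score
    if high_s:
        return 2  # one component high -> Moderate score
    if high_m:
        return 3  # the other component high -> Moderate score
    return 4      # neither -> Low MetFrag Score

print([scenario(2.8, 3.9), scenario(2.5, 0.7),
       scenario(0.4, 3.1), scenario(0.5, 0.9)])
# [1, 2, 3, 4]
```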

Table 4 MetFrag Score breakdown for the top candidates of four m/z

The implications of this distribution (Fig. 4) can guide future actions depending on whether depth or breadth of the NTA study is more important. For example, if the ultimate goal is to fully identify one or two high-priority non-target unknowns to Level 1 confidence, pursuing candidates with High MetFrag Scores (3rd tertile, dark red region in Fig. 4; Scenario 1 in Table 3) is recommended. Alternatively, if a wide survey of possibly relevant but as yet unknown environmental pollutants throughout the sampling campaign is preferred (akin to a ‘first approximation’ of the situation), then candidates with moderate or low scores can also be considered further, depending on the relevance of the scoring terms to the context. Additionally, further decisions on future actions can be made based on possible limitations of the study which may be known from the outset (see Discussion).

Close inspection of the MetFrag Score, namely its component Spectral and Metadata scoring terms, enables results interpretation at the individual candidate level for each m/z. Irrespective of whether a breadth or depth strategy is chosen, the lists of ranked candidates should always be scrutinised for plausibility: although each identification has a top candidate ranked first by MetFrag, the top candidate may not be the only candidate worth considering (if at all) given the context of the study. An in-depth analysis and interpretation of the top four candidates for selected m/z is presented in the following tables as examples of each of the scenarios (Table 3). Distributed Structure-Searchable Toxicity (DSSTox) Substance Identifiers from CompTox, known as DTXSIDs, are used as identifiers. The choice to use DTXSIDs rather than compound names is addressed in the Discussion.

m/z 278.1062

Scenario 1: high Spectral and Metadata scores (high MetFrag Score; > 6)

Thirty-three compounds with matching mass were retrieved from CompTox and scored by MetFrag using the ten scoring terms (Table 2). The top-ranked candidate, DTXSID4058156, has the highest total MetFrag Score out of all the candidates proposed (Table 5). In terms of spectral information, it has the highest FragmenterScore and OfflineMetFusion score of all the candidates, as well as a MoNA library match of 0.998, while all other candidates had a MoNA library match of 0.

Table 5 MetFrag Score breakdown by scoring term for the top 4 candidates for m/z 278.1062 (ultimately identified as metazachlor with Level 1 confidence)

In terms of metadata and presence on suspect lists, DTXSID4058156 has abundant metadata, is present on several suspect lists compiled by the NORMAN Network (REACH2017, SusDat and KEMIMARKET), and has 47 underlying data sources in CompTox. Based on this evidence, the identification was assigned confidence Level 2a.

Overall, both the spectral and metadata evidence strongly support Candidate 1 over the others, as seen in the large difference between the candidates’ MetFrag Scores.

Candidate recommendation: Candidate 1 should be strongly considered for further identification efforts.

A reference standard of DTXSID4058156 (metazachlor) provided a retention time match within 0.03 min, thereby confirming the identification of this unknown as metazachlor with Level 1 confidence.

m/z 187.0938

Scenario 2: low Spectral but high Metadata scores (moderate MetFrag Score; 3–6)

For m/z 187.0938, identified as an [M + H]+ adduct by enviMass, the top candidate scored poorly in the Spectral terms compared to subsequent candidates. However, its strong performance in the Metadata terms ultimately drove its top-ranked MetFrag Score (Table 6).

Table 6 MetFrag Score breakdown by scoring term for the top 4 candidates for m/z 187.0938

The distribution of MetFrag Scores in Table 6 indicates that the top three (or even four) candidates have relatively similar scores. Although the spectral data better support Candidates 2 or 3 as matching the experimental data, the high KEMIMARKET_EXPO score for Candidate 1 indicates that it may be of greater concern in a regulatory context due to potentially large exposure volumes, and it could be prioritised for confirmation efforts so that it can either be acted upon or eliminated from consideration in future campaigns.

Candidate recommendation: All top four candidates should be considered for further identification efforts due to high exposure and hazard scores.

m/z 249.0728

Additional example for Scenario 2: low Spectral but high Metadata scores (moderate MetFrag Score; 3–6)

High Metadata scores can serve as the discriminating factor between candidates when their Spectral scores are poor or uninformative and would, on their own, give little indication of how to rank the candidates. In this sense, Metadata scoring terms contribute an extra layer of information beyond spectral evidence towards identifying potentially relevant unknowns.

For example, the top four candidates for m/z 249.0728 (Table 7) have comparably poor Spectral scores, meaning there is little spectral evidence supporting these identifications overall. However, Candidate 1 clearly distinguishes itself from the other candidates through its relatively high Metadata scores, in particular its KEMIMARKET_EXPO and KEMIMARKET_HAZ scores and its presence in REACH2017. It therefore has higher environmental relevance than subsequent candidates, which explains its top ranking.

Table 7 MetFrag Score breakdown by scoring term for the top 4 candidates for m/z 249.0728

Candidate recommendation: Candidate 1 should be considered for further identification efforts given the moderate KEMI exposure and hazard scores, indicating potential environmental relevance in Europe.

m/z 142.0975

Additional example for Scenario 2: low Spectral but high Metadata scores (moderate MetFrag Score; 3–6)

Similar to the previous example, the candidates for m/z 142.0975 have comparable Spectral scores and would be practically indistinguishable from each other if not for the large differences in their Metadata scores (Table 8). Candidate 1 differs strongly from subsequent candidates because of its relatively high KEMIMARKET_EXPO, KEMIMARKET_HAZ and REACH2017 scores, which support its top ranking.

Table 8 MetFrag Score breakdown by scoring term for the top 4 candidates for m/z 142.0975

Candidate recommendation: Candidate 1 should be considered for further identification efforts given its high Europe-relevant Metadata scores.

m/z 152.0198

Scenario 3: high Spectral scores but low Metadata scores (moderate MetFrag Score; 3–6)

For the top candidates of m/z 152.0198, practically no metadata exists except for DATA_SOURCES: each candidate has one, indicating that these are not particularly well-known chemicals (or are potentially newly discovered and not yet well documented in public databases). However, the FragmenterScores of the candidates differed sufficiently to discriminate between them and indicate that Candidate 1 may be the best match in this case (Table 9).

Table 9 MetFrag Score breakdown by scoring term for the top 4 candidates for m/z 152.0198

Candidate recommendation: Candidate 1 may be considered for further identification efforts, but candidates for other masses are more promising in the regulatory context (Table 10).

Table 10 MetFrag Score breakdown by scoring term for the top 4 candidates for m/z 199.1050

m/z 199.1050

Scenario 4: low Spectral scores, low Metadata scores (low MetFrag Score; < 3)

Candidates proposed for m/z 199.1050 had neither particularly strong spectral nor metadata information, resulting in low overall MetFrag Scores. In this case, there is no strong evidence that any of the candidates available in CompTox are of particular interest in the context of the investigation.

Candidate recommendation: Candidate 1 may be considered for further identification efforts, but candidates for other masses are more promising.

Information for regulatory decision-making on further identification efforts/next steps

Table 11 summarises the candidate recommendations presented above, where 7–9 candidates are recommended for further identification efforts for the 6 m/z presented here.

Table 11 Candidates for six m/z meriting further identification efforts based on individual evaluations

The top four candidates for each of the remaining 16 m/z were analysed in the same way as discussed above, and candidates were evaluated based on the same criteria: prioritisation according to tertile, scenario, and Spectral and Metadata scores, including potential exposure and hazards (Additional file 1: Tables S3–S18). For these 16 m/z, a total of 25–49 candidates (out of a possible 16 × 4 = 64) are recommended for further identification efforts (Additional file 1: Table S19). Thus, for all 22 m/z which underwent MetFrag identification in this study, an overall total of 32–58 candidates (out of a possible 22 × 4 = 88) are recommended for further identification efforts. These candidate numbers are given as ranges to allow for flexibility in project management and future steps, which may depend on available resources (see Discussion).


In this study, non-target analysis was performed retrospectively on samples from Swiss WWTP effluents that had been collected as part of an existing regulatory environmental monitoring campaign. Instead of an exploratory approach that is still common amongst NTA studies, the research questions that directed this study were derived from regulatory priorities, thereby ensuring outcomes of direct and immediate relevance for environmental monitoring and protection.

Unknowns of regulatory interest were defined as those with the highest intensities and highest temporal frequency with point sources across all the samples of the sampling campaign. These criteria had been predefined by the regulatory coauthors of this study, and resulted in a list of m/z of interest that were manually selected after filtering and sorting the masses using enviMass. In the current work, the mass spectra of the m/z of interest from this list were subjected to prescreening and quality control (Fig. 2) to ensure their suitability for non-target identification. Quality control isolated measurements worthy of further identification efforts and eliminated those of poor standard, effectively resulting in data reduction (Fig. 3). The prescreening workflow was written in R and is now openly available within the package Shinyscreen [24].

Then, MetFrag [51, 68] was employed to provide tentative identifications for these unknowns, leveraging its extensive metadata capabilities “post-relaunch”, as well as several open resources/information sources, including chemical information from regulators around the world. MetFrag analysis was performed via the command line using scripts based on ReSOLUTION [55] and RChemMass [56].

Tentative identifications for 22 m/z were obtained using MetFrag (21 at Level 3, 1 at Level 2a, whose identity was eventually confirmed to Level 1). These identifications were evaluated in terms of (i) a score distribution for the top candidates (Fig. 4) and (ii) Scenario Analysis (Table 3) according to the regulatory context and research questions underlying this work. Final candidate recommendations were given based on MetFrag Score breakdowns, thereby providing in-depth and transparent analyses of the spectral and metadata evidence for proposed candidates. For the 22 m/z analysed, 32–58 candidates were recommended for further identification efforts.

Regarding the analytical method, direct injection without enrichment was used here, as non-target compounds of high intensity were of primary interest and enrichment was not considered necessary. Additionally, Mechelke et al. recently found that direct injection is comparatively better suited to capturing a broader range of compounds, including highly polar compounds that would otherwise experience poor recovery during enrichment [38]. The spectral data in this study were recorded using data-dependent acquisition mode with an inclusion list. While future NTA work could explore the use of data-independent acquisition (DIA), which obviates the need for an inclusion list, this adds other complexities: lower-intensity precursors may not yield fragments of sufficient intensity, and data interpretation inevitably becomes more complicated, especially when complex matrices like wastewater with many co-eluting compounds are being studied.

Quality control was a critical element in the prescreening workflow, as preliminary manual inspection of the data using XCalibur revealed variable data quality. In fact, most data (> 80% of cases) were not fully suitable for the intended non-target identification. R scripts (now embedded within the Shinyscreen package) were written to automate most of the quality control checks (Table 1, checks 1–5). Automated quality control allowed quick and reproducible processing of the large quantity of data needed to answer the superlative (‘highest intensity’) research questions guiding this work. The variable quality of the data had several likely causes: (i) List B masses were not in the inclusion list; (ii) MS2 spectra were not measured immediately after MS1, so sample degradation over the long storage time between MS1 and MS2 measurements could have occurred; and (iii) the enviMass prioritisation criteria were possibly over-restrictive. Thus, the small number of cases (~ 0.03% of the total) passing all quality control checks and qualifying for MetFrag identification was not unexpected.
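As a rough illustration of how such an automated quality control gate reduces data volume, consider the following hypothetical Python sketch. The actual checks are implemented in R within Shinyscreen and listed in Table 1 of the paper; the specific checks, thresholds, and field names below are assumptions for illustration only.

```python
# Hypothetical QC gate in the spirit of the prescreening workflow.
# Every check must pass for a mass-of-interest entry to survive.

def passes_qc(entry: dict,
              min_ms1_intensity: float = 1e5,
              require_ms2: bool = True) -> bool:
    """Return True only if an entry clears every (illustrative) check."""
    checks = [
        entry.get("ms1_intensity", 0.0) >= min_ms1_intensity,  # MS1 strong enough
        (not require_ms2) or bool(entry.get("ms2_peaks")),     # MS2 spectrum present
        entry.get("mz_error_ppm", float("inf")) <= 5.0,        # precursor mass accuracy
    ]
    return all(checks)

entries = [
    {"ms1_intensity": 3e6, "ms2_peaks": [(121.05, 900)], "mz_error_ppm": 1.2},
    {"ms1_intensity": 4e3, "ms2_peaks": [], "mz_error_ppm": 0.8},
]
kept = [e for e in entries if passes_qc(e)]
print(len(kept))  # data reduction: only entries passing all checks survive
```

Chaining simple boolean checks in this way is what makes the data reduction both fast and reproducible, in contrast to manual inspection.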

MetFrag was configured to comprise both Spectral and Metadata scoring terms, including chemical suspect lists and scoring terms from international regulators within the latter such as KEMIMARKET_EXPO, KEMIMARKET_HAZ, REACH2017, NORMANSUSDAT, and CPDAT_COUNT. Paired with CompTox as its candidate database, MetFrag was thus specifically customised to perform non-target identification of environmental unknowns in WWTP samples within a regulatory context in this work. Beyond using fragmentation information alone, using metadata to inform MetFrag’s identifications proved to be especially important in certain situations, e.g., when Spectral scores based on fragmentation were not informative enough to distinguish candidates from each other (Tables 7 and 8). Crucially, the information provided by metadata can serve as guidance for future regulatory actions in the context of the environmental protection aims of this study. For example, although certain candidate(s) may not be top-ranked or have strong spectral evidence (Table 6), potentially concerning hazard and exposure scores may qualify a certain candidate for serious consideration in future work in the spirit of applying the Precautionary Principle.

Regarding the components of the MetFrag Score, a total of ten scoring terms, three Spectral and seven Metadata, were used to score candidates. Compared to most previous studies which used MetFrag as mentioned in the Introduction, this number may seem large. However, adding extra scoring terms does not appear to compromise MetFrag’s identification capabilities. In fact, the additional scoring terms were beneficial because further bases for differentiating between candidates became available. In other words, using more scoring terms can provide more granularity when distinguishing candidates, which is important for candidate evaluation and recommendation. Further scoring terms based on physical–chemical properties could be integrated in the future such as correlation of the partitioning coefficient logKow (or log P) with retention time as already available in MetFrag [51]. While such scoring criteria would help filter out any unrealistic candidates based on objective criteria like ionisability and polarity, insufficient information was available to perform retention time correlation via MetFrag in this study.
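Conceptually, the final MetFrag Score combines the individual scoring terms as a weighted sum of normalised values. The Python sketch below illustrates only this idea; the term names echo those used in the text, but the candidate values and the unit weights are invented for illustration and do not reproduce MetFrag's actual configuration.

```python
# Minimal sketch of combining multiple scoring terms into one candidate
# score, in the spirit of MetFrag's weighted sum of normalised terms.
# Values and weights below are illustrative assumptions.

def combined_score(terms: dict, weights: dict) -> float:
    """Weighted sum of scoring terms, each already normalised to [0, 1]."""
    return sum(weights.get(name, 1.0) * value for name, value in terms.items())

candidate = {
    "FragmenterScore": 0.95,        # spectral terms
    "OfflineMetFusionScore": 0.90,
    "OfflineIndividualMoNA": 0.998,
    "KEMIMARKET_EXPO": 1.0,         # metadata terms
    "REACH2017": 1.0,
}
weights = {name: 1.0 for name in candidate}  # unit weights as a neutral default
print(round(combined_score(candidate, weights), 3))  # -> 4.848
```

Adding a term (e.g., a retention-time correlation score) simply extends the sum, which is why extra terms add granularity without disturbing the existing ones.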

With respect to the individual terms, CPDAT_COUNT, INDACT, and OfflineIndividualMoNA proved relatively uninformative in this particular study, as evidenced by their frequent zero-value scores. As a database of consumer chemical products ranging from those used in home maintenance (paints, sealants, lubricants, cleaners, etc.) to personal care products (hair gel, nail polish, face cream, makeup, etc.), CPDAT’s limited applicability in wastewater studies such as the present one is unsurprising; it may instead be more suitable for exposomics studies involving, e.g., household dust. INDACT, the list of industrial activity chemicals known to be used in the vicinity of the WWTPs as disclosed to the regulator, had the strongest potential to improve the identification results. However, not a single candidate across all the MetFrag results was present on this suspect list. This could suggest that the chemical disclosures made by the industries were incomplete or unsuitable for identification purposes (e.g., parent compounds were disclosed but possibly only transformation products are present in the environment or detectable, or UVCBs with unspecific chemical identities were listed), and/or that the disclosed compounds inherently do not end up in wastewater because they are used in closed circuits, are recycled, or, if very non-polar, partition into sludge. Lastly, while mass spectral libraries are inherently incomplete [44], a low OfflineIndividualMoNA score does not necessarily indicate a poor spectral library match. Rather, low OfflineIndividualMoNA scores could also signify that the candidate is not present in MoNA to begin with, or result from noisy experimental spectra even if the match would otherwise be good. Therefore, candidates must be evaluated on this scoring term with these factors in mind, and improvements to its design to avoid faulty interpretations could constitute future work.
Other future work on MetFrag itself could involve the addition of new Spectral scoring terms which do not require scaling via normalisation of the maximum value, as this maximum value is highly dependent on the candidate database chosen. For instance, a simple spectral similarity metric such as cosine similarity would evaluate how well the in silico and experimental fragmentation spectra align, independent of those of other candidates.
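As a sketch of such a database-independent metric, the Python snippet below computes a cosine similarity between an experimental and an in silico fragment spectrum after a greedy peak alignment within an m/z tolerance. All spectra and values are invented for illustration; this is not MetFrag code.

```python
import math

def cosine_similarity(spec_a, spec_b, tol=0.01):
    """spec_a/spec_b: lists of (mz, intensity) pairs; align peaks within `tol` Da,
    then compute the cosine of the matched intensity vectors."""
    matched = []
    used = set()
    for mz_a, int_a in spec_a:
        for j, (mz_b, int_b) in enumerate(spec_b):
            if j not in used and abs(mz_a - mz_b) <= tol:
                matched.append((int_a, int_b))
                used.add(j)
                break
    dot = sum(a * b for a, b in matched)
    norm_a = math.sqrt(sum(i ** 2 for _, i in spec_a))
    norm_b = math.sqrt(sum(i ** 2 for _, i in spec_b))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

experimental = [(121.0509, 1000.0), (133.0647, 450.0)]
in_silico = [(121.0512, 800.0), (133.0650, 500.0)]
print(round(cosine_similarity(experimental, in_silico), 3))
```

Because the result lies in [0, 1] regardless of how many other candidates exist, such a metric would not need rescaling against the maximum score in the candidate database.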

CompTox, the candidate database chosen here, remains one of the most environmentally-focused open databases of chemical compounds as it exclusively contains chemicals of environmental and toxicological relevance. Compared to other open databases like PubChem (111 million chemical structures, August 2020), CompTox is also smaller in size (883,000 chemicals, February 2021). Therefore, MetFrag paired with CompTox is likely to suggest smaller lists of candidates which are de facto environmentally-meaningful, making workflow runtimes shorter and candidate evaluation relatively easier. However, using CompTox has drawbacks, principally stemming from its lack of comprehensiveness when compared to PubChem. In some cases, there may be a lack of candidates matching the identification criteria when using CompTox with MetFrag simply because they may not exist within CompTox itself to begin with due to its limited size and scope. PubChemLite [55, 56, 58] represents one complementary alternative to these issues, as it is by design essentially a subset of environmentally relevant compounds based on compound classifications. Overall, the ability to subset databases based on usage and classification information of chemicals can be beneficial, as different regulatory bodies may have different mandates, and studies can be designed to align with those mandates accordingly, e.g., focus only on chemicals with (i) known usage in industrial manufacturing, or (ii) agricultural chemicals, or (iii) pharmaceuticals, etc.

Using scenarios as a framework to interpret MetFrag’s results was critical considering the specific regulatory aims of this work: tentatively identify pollutants of high priority (with minimum Level 3 confidence) to guide further monitoring and identification efforts.

Scenario Analysis revealed in detail whether Spectral, Metadata, or both contributed to a given MetFrag Score and in turn provided the rationale behind proposed candidates. As our evaluation has shown, multiple candidates are worth considering especially if they have very similar scores (e.g., Table 6), or have more compelling evidence represented by individual scoring terms as described above. In this way, Scenario Analysis as used here is highly suitable for transparently communicating scientific results in a regulatory context. On a larger scale, such analyses address a key weakness common to NTA studies: the current lack of ability to perform detailed data interpretation – especially in a high-throughput, automatable and reproducible manner.

Furthermore, Scenario Analysis as used here can inform decision-making regarding next steps. Besides addressing study priorities based on “depth vs. breadth” as discussed in the Results, the scenarios can be used to devise a prioritisation scheme for future work. For example, if authentic standards can only be purchased and analysed for 10 compounds due to resource limitations, those compounds should be the recommended candidates prioritised by MetFrag Score in the order Scenario 1 > Scenarios 2/3 >>> Scenario 4. Alternatively, if it is known from the outset that the spectral data may be of poor quality, Scenario 2 candidates may take precedence over Scenario 3 candidates, as the former rely on high Metadata scores rather than high Spectral scores for their MetFrag Scores. Additionally, applying the Precautionary Principle may motivate prioritising identity confirmation of candidates with concerning metadata such as high toxicity and/or exposure (corresponding to the KEMIMARKET_HAZ and KEMIMARKET_EXPO scores), even if those candidates are not necessarily ranked highly by MetFrag.
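Such a prioritisation scheme under a fixed standards budget can be sketched as follows. The scenario ordering (Scenario 1 before Scenarios 2/3 before Scenario 4, with an option to promote metadata-driven Scenario 2 when spectral data are known to be poor) follows the text; the candidate data and function are illustrative inventions.

```python
# Illustrative prioritisation of recommended candidates when only `budget`
# reference standards can be purchased.

def prioritise(candidates, budget, prefer_metadata=False):
    """Order candidates Scenario 1 > Scenarios 2/3 > Scenario 4, then by
    MetFrag Score. If spectral data are known to be poor, `prefer_metadata`
    demotes Scenario 3 (spectrum-driven) below Scenario 2 (metadata-driven)."""
    def key(c):
        rank = {1: 0, 2: 1, 3: 1, 4: 3}[c["scenario"]]
        if prefer_metadata and c["scenario"] == 3:
            rank = 2
        return (rank, -c["score"])
    return sorted(candidates, key=key)[:budget]

pool = [
    {"id": "DTXSID-A", "scenario": 1, "score": 7.8},
    {"id": "DTXSID-B", "scenario": 3, "score": 5.2},
    {"id": "DTXSID-C", "scenario": 2, "score": 5.6},
    {"id": "DTXSID-D", "scenario": 4, "score": 2.1},
]
top = prioritise(pool, budget=2, prefer_metadata=True)
print([c["id"] for c in top])  # -> ['DTXSID-A', 'DTXSID-C']
```

Encoding the scheme as a sort key makes the prioritisation reproducible and auditable, in keeping with the transparency aims described above.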

Practically speaking, next steps in environmental monitoring based on the results here (besides identity confirmation using authentic standards) could include expanding suspect lists with the recommended candidates to improve future suspect screening activities. These new suspects could in turn be added to the inclusion lists of future measurements, thereby gaining an analytical ‘upper hand’ for future NTA studies. Expanding suspect and inclusion lists in this way, possibly in combination with a rarity score [26] that prioritises high-intensity, infrequently occurring peaks, represents an evidence-based approach towards more meaningful environmental monitoring in the long run, as these candidate compounds were tentatively ‘observed’ and are therefore site-specific. Otherwise, suspect lists are typically expanded based on information from national or international chemical registration lists, whose applicability may be limited depending on the actual usage and exposure in the region of concern. An additional outcome of this study is therefore a means to bridge target and non-target analysis by supplying meaningful candidates for suspect screening.
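A minimal sketch of this feedback loop from recommended candidates into a suspect list and an instrument inclusion list might look like the following; all identifiers and field names are invented for illustration.

```python
# Illustrative feedback loop: recommended candidates -> suspect list
# (by identifier) and inclusion list (by precursor m/z) for future campaigns.

existing_suspects = {"DTXSID4058156"}  # suspects already on the list
recommended = [
    {"dtxsid": "DTXSID-X", "mz": 187.0938},       # hypothetical new candidate
    {"dtxsid": "DTXSID4058156", "mz": 278.1062},  # already a known suspect
]

new_suspects = {c["dtxsid"] for c in recommended} - existing_suspects
inclusion_list = sorted({c["mz"] for c in recommended})  # precursor m/z to target

print(sorted(new_suspects), inclusion_list)
```

Deduplicating against the existing suspect list keeps the lists lean, while targeting all recommended precursor m/z in the inclusion list secures MS2 coverage in future measurements.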

This work is one contribution to a much larger discussion surrounding (i) how NTA can support regulatory environmental monitoring, and (ii) the practical feasibility of applying NTA in routine environmental monitoring. (For an example of current discourse, see Germany’s guidelines for non-target screening in water analysis [52].) Regarding the former, this work demonstrates that NTA can be used to address the concerns of regulators by translating research questions arising from regulatory priorities into peak-picking/mass prioritisation criteria: in this case, high concentration unknown pollutants with point sources that occurred persistently were taken to be high-intensity precursors found at one or few sampling sites at both sampling time points. Without the ability to perform quantification, the assumption that high ion intensity represents high concentration could be validated by using different chromatographic solvent systems as a test of ionisation efficiency in future work, or implementing ionisation efficiency models [32, 46].
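The translation of these regulatory criteria into a computational filter can be sketched as follows. The actual prioritisation was performed with enviMass; the thresholds, data, and function below are illustrative assumptions only.

```python
# Illustrative mass-prioritisation filter: keep high-intensity features
# detected at only one or a few sites (point source) at both time points.

def prioritise_masses(features, min_intensity=1e6, max_sites=3,
                      required_timepoints=("t1", "t2")):
    """features: {mz: {"sites": set, "timepoints": set, "max_intensity": float}}"""
    selected = []
    for mz, f in features.items():
        point_source = len(f["sites"]) <= max_sites                      # one/few sites
        persistent = all(t in f["timepoints"] for t in required_timepoints)
        intense = f["max_intensity"] >= min_intensity                    # high intensity
        if point_source and persistent and intense:
            selected.append(mz)
    return selected

features = {
    278.1062: {"sites": {"WWTP_03"}, "timepoints": {"t1", "t2"},
               "max_intensity": 5e6},
    187.0938: {"sites": {"WWTP_01", "WWTP_02", "WWTP_05", "WWTP_09"},
               "timepoints": {"t1", "t2"}, "max_intensity": 8e6},
}
print(prioritise_masses(features))  # only the single-site, persistent mass survives
```

Stating the criteria as explicit predicates makes the link between the regulatory question and the peak-picking step transparent and easy to adjust for other mandates.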

On the feasibility of performing NTA as part of routine regulatory environmental monitoring, the overall method described here offers a highly automated approach via (i) feature prioritisation using enviMass, (ii) prescreening and quality control (plus a manual step), and (iii) in silico identification, of which (ii) and (iii) were developed in this work. The results interpretation and candidate recommendation processes performed manually in this work form the basis of future efforts towards automated reporting based on Scenario Analysis, MetFrag Score distributions, and evaluation of critical parameters like thresholds for potential toxicities and exposure levels. Such automated reporting would not only allow scalability of future regulatory NTA studies, but could also eliminate potential biases in unknown identification: analysts would not be able to ‘cherry-pick’ candidates based on their familiarity with certain compounds, because non-descriptive identifiers (e.g., DTXSIDs) would be used up until the final results are delivered at the end of the entire method. Furthermore, while the prescreening, quality control, and identification workflow was applied retrospectively, the improvements to workflow automation detailed here could allow for quicker data analysis turnaround in the future, which would help guide future sampling and measurements planned in the short to medium term and prevent the long delays between remeasurements still commonly observed in NTA investigations, effectively moving towards ‘real-time’ instead of retrospective NTA approaches. Two concrete follow-up initiatives are foreseen: (i) build an interface connecting Shinyscreen and MetFrag, including the automated reporting features described above, and (ii) develop a set of ‘default’ scoring terms and settings tailored for NTA of wastewater samples.
Further collaborations involving non-target wastewater studies and database hosts will help augment expert knowledge on more use cases, which would be leveraged to develop this approach further.

On a community level, standardisation would play a role in increasing the feasibility of NTA as part of routine regulatory environmental monitoring. As previously mentioned, there exist considerable, albeit nascent, efforts towards standardising analytical protocols for non-target screening on a national level in, e.g., Germany in the form of guidelines [52]. Such activities suggest that standardisation is certainly of priority to the community and may be achievable over time. However, NTA may not be widely adopted by regulators in the short- to medium-term until analytical protocols are successfully standardised. In turn, it continues to be challenging from a data analysis perspective to implement standardised workflows if the analytical parameters used for measuring data are not themselves standardised. Thus, the status quo demands that current data processing methods remain flexible to accommodate the variety of analytical parameters used, as is the case with the method presented here.


A prescreening and identification workflow for analysing non-target compounds was developed in this study to retrospectively identify unknowns detected at WWTP sites, in the context of directly supporting regulatory decision-making for environmental monitoring. Using open data and open tools, including the US EPA CompTox Chemicals Dashboard, NORMAN Network resources such as SusDat and the Suspect List Exchange, and MetFrag, tentative identifications for 21 unknown compounds were provided at Level 3 confidence, and one compound’s identity was confirmed using a reference standard, yielding a Level 1 identification. These results were achieved despite limited data quality.

This study heavily emphasised results interpretation on two levels: on a global level across the chemical unknowns investigated, and on an individual candidate level. Through these analyses, specific candidates were recommended for further identification efforts, and transparent justifications were provided based on the MetFrag score breakdown (i.e., spectral vs. metadata evidence). These recommendations, and not just MetFrag’s outputs, represent the final results in the regulatory and environmental monitoring context of this study, and may serve as a template to drive future developments in NTA.

The prescreening and quality control workflow developed here is embedded in the open R package Shinyscreen [24], which is freely available online, as is code from ReSOLUTION [55] and RChemMass [56] used for performing command-line MetFrag identification. The CompTox database version with the metadata terms used here is likewise also publicly available [54].

Availability of data and materials

The mass spectrometry dataset generated and analysed during the current study, including the complete MetFrag results for the 22 m/z that were tentatively identified, is available as an open MassIVE dataset (MSV000086631).

Project name: Shinyscreen.

Project home page:

Archived version used in this study: Shinyscreen v.0.1.1-paper.

Operating system(s): Windows, Mac OSX, Linux.

Programming language: R.

Other requirements: OpenJDK and other R package dependencies listed in Shinyscreen’s README.

License: Apache Version 2.0.

Code availability

All code used to run the prescreening and quality control workflow and the MetFrag command-line analysis is open/publicly available, including via Shinyscreen (see below). All other datasets and databases used as part of the MetFrag identification are open/publicly available (links available in References).



Abbreviations

NTA: Non-target analysis

WWTP: Wastewater treatment plant

US EPA: United States Environmental Protection Agency

CompTox: US EPA CompTox Chemicals Dashboard

DTXSID: DSSTox Substance Identifier (from CompTox)

CPDat: Chemicals and Products Database

REACH: Registration, Evaluation, Authorisation and Restriction of Chemicals

MoNA: MassBank of North America

UVCB: Chemical substances of Unknown or Variable composition, Complex reaction products, and Biological materials




Acknowledgements

The authors acknowledge Dr. Martin Loos (enviBee GmbH) for his technical support with the enviMass analyses. Contributors to CompTox, MetFrag, the suspect lists on the NORMAN Suspect List Exchange, the open software used here, and Open Science in general are gratefully acknowledged.


Funding

Open Access funding enabled and organized by Projekt DEAL. ELS, AL, and TK are supported by the Luxembourg National Research Fund (FNR) under project A18/BM/12341006.

Author information

Contributions

LK conceived the study and set up the sampling campaigns; OJ measured the data; AL, ELS, and RRS designed the workflow presented; AL, ELS, and TK wrote the software; AL interpreted the data and drafted the manuscript with input from ELS, RRS, LK, and OJ; AL, LK, OJ, RRS, TK, and ELS revised the submitted version. All authors read and approved the final manuscript.

Corresponding authors

Correspondence to Adelene Lai or Emma L. Schymanski.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1: Table S1. enviMass parameters used for the Orbitrap measurements in this study. Figure S1. Screenshot of the MSConvert (v.3.0.19182-51f676fbe) graphical user interface showing the settings used to convert the .RAW mass spectrometry data to .mzML format. Table S2. List of the 22 m/z prioritised by enviMass that passed quality control to qualify for MetFrag identification. Table S3. m/z 216.0930. Table S4. m/z 177.1126. Table S5. m/z 212.0889. Table S6. m/z 173.1649. Table S7. m/z 301.1396. Table S8. m/z 218.1040. Table S9. m/z 176.0707. Table S10. m/z 193.0721. Table S11. m/z 249.1848. Table S12. m/z 184.0427. Table S13. m/z 171.1492. Table S14. m/z 199.1190. Table S15. m/z 185.1033. Table S16. m/z 251.1491. Table S17. m/z 211.0285. Table S18. m/z 546.2622. Table S19. Candidate recommendations for all 22 m/z.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit


About this article


Cite this article

Lai, A., Singh, R.R., Kovalova, L. et al. Retrospective non-target analysis to support regulatory water monitoring: from masses of interest to recommendations via in silico workflows. Environ Sci Eur 33, 43 (2021).

Keywords


  • Non-target analysis
  • Suspect screening
  • Retrospective
  • Wastewater
  • Micropollutants
  • Cheminformatics
  • Identification
  • Monitoring
  • Regulation