e-Delphi Study

A Delphi technique will be used to develop an internationally accepted evaluation framework for public health environmental surveillance. The Delphi method is an iterative multi-round approach that uses a series of sequential surveys, interspersed by controlled feedback, to elicit consensus among a group of individuals while maintaining anonymity (1,2). An electronic Delphi (e-Delphi) method will be used to overcome geographic barriers and allow us to engage panellists internationally across various time zones.

The study protocol is currently in publication with PLOS One; in the meantime, a preprint is available on medRxiv.

Recruitment of Panellists

We will recruit a multinational, multidisciplinary panel of wastewater-based surveillance experts, knowledge users, and engaged members of the public to complete the e-Delphi survey. Panellists will be selected to capture the multiple perspectives of those who influence the design, implementation, evaluation, use, and reporting of wastewater surveillance activities, across the following discipline subgroups: public health, infectious disease, and epidemiology; environmental and physical sciences; mathematical sciences; social sciences; and communication, knowledge translation and exchange.

We will aim to recruit at least 50 panellists, preferably with at least 8–10 people per discipline subgroup. The study working group will monitor the demographic information of registered panellists and aim for an appropriate distribution across discipline subgroups and other demographic markers.

Eligibility Criteria for e-Delphi Survey Panellists

Discipline subgroup: Inclusion criteria

Public health, infectious disease, epidemiology; environmental and physical sciences; mathematical sciences; social sciences; communication, knowledge translation and exchange: An adult1 who is proficient in English and has a graduate degree in one of the listed specializations2, or ≥ 3 years of professional experience, or ≥ 2 peer-reviewed publications relating to wastewater surveillance.

Knowledge users: An adult1 who is proficient in English and is a professional without specialized training or qualifications in wastewater-based surveillance, but who uses surveillance to inform policy and action in their workplace.

Engaged public: An adult1 who is proficient in English and has relevant lived experience.

1 Adult: ≥ 18 years of age.
2 The list of specializations associated with each discipline can be found in S2 Table of the study protocol once published.

Definitions

Professional experience: Paid employment or professional practice (current or former) in a listed specialization.

Graduate or professional degree: A master's or doctoral degree in a listed discipline/specialization.

Procedure

We will conduct a two-round e-Delphi survey to generate consensus on evaluation criteria (Figure 2). Summaries of Round 1 will be compiled for the subsequent round. Custom survey pathways will be generated for each stakeholder group (i.e., panellists from different stakeholder groups will be shown a different collection of candidate items). Within each custom stakeholder survey pathway, panellists will also have the option to skip items or self-declare that they are not qualified to assess certain candidate items.

Round 1

Panellists will be invited to rate their level of agreement with candidate items generated from scoping review results and consultation with the study executive group. Free-text boxes will be included for panellists to provide feedback or identify additional candidate items to be included in the next e-Delphi survey round.

Round 2

Regardless of whether they participated in the previous round, panellists will be invited to participate in Round 2 of the e-Delphi survey. All panellists will be invited to rate new items and re-rate previous items that did not reach consensus. When re-rating their level of agreement, panellists will be presented with their previous round scores alongside the aggregate group results. Anonymous feedback from Round 1 will also be compiled and presented during Round 2. Any newly suggested items during Round 2 will be deliberated on during the consensus meeting.

Defining Consensus

Candidate items will be assessed on the following four rating properties: (1) relevance and practical utility; (2) scientific rigor, validity and reliability; (3) feasibility, adaptability and resource implications; and (4) equity, inclusiveness, and mitigation of bias.

A 7-point Likert scale (1 = highly irrelevant to 7 = highly relevant) will be used to rate each property for all candidate items. A summary score for each candidate item will be created by calculating the median of the four property ratings. Panellist summary scores will then be categorised as excluded item (irrelevant: 1–2), further discussion (equivocal: 3–5), or core item (relevant: 6–7). Consensus for each item is defined as ≥ 70% of the panellist votes falling within the same category (1–2, 3–5, or 6–7).
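The scoring rules above can be expressed as a short computation. The sketch below (in Python, with hypothetical rating data) is illustrative only; the handling of non-integer medians at category boundaries (e.g., a median of 5.5) is an assumption, as the protocol does not specify it.

```python
from statistics import median

def categorise(ratings):
    """Return one panellist's category from their four property ratings
    (relevance; rigor; feasibility; equity), each on the 7-point scale.

    The summary score is the median of the four ratings.
    """
    score = median(ratings)
    if score <= 2:
        return "exclude"   # irrelevant: 1-2
    elif score <= 5:
        return "discuss"   # equivocal: 3-5
    return "core"          # relevant: 6-7

def consensus(panel_ratings, threshold=0.70):
    """Check whether >= 70% of panellist votes fall in the same category.

    panel_ratings: one 4-tuple of ratings per panellist.
    Returns (reached, winning_category, proportion).
    """
    votes = [categorise(r) for r in panel_ratings]
    top = max(set(votes), key=votes.count)
    proportion = votes.count(top) / len(votes)
    return proportion >= threshold, top, proportion

# Hypothetical example: 10 panellists rating one candidate item.
# Eight score it as a core item (median 6), two as equivocal (median 3.5),
# so 80% of votes fall in the 6-7 category and consensus is reached.
panel = [(6, 7, 6, 5)] * 8 + [(3, 4, 4, 3)] * 2
reached, category, proportion = consensus(panel)
```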

The choice of a 7-point, fully labelled Likert scale – more response categories than many Delphi studies use – reflects the consideration that panellists are professionals and engaged members of the public with strong cognitive skills, for whom more categories and labels yield more discriminating and reproducible ratings (5,6). The ≥ 70% consensus cut-off – lower than in many studies – reflects the rapidly evolving nature of environmental surveillance for public health and the wide range of disciplines invited to the panel, among which a higher level of agreement may not be achievable (3).

Figure 2. e-Delphi consensus process for the Public Health Environmental Surveillance Evaluation Framework (PHES-EF)

References

1. Shang Z. Use of Delphi in health sciences research: A narrative review. Medicine [Internet]. 2023 Feb 17;102(7). Available from: http://dx.doi.org/10.1097/MD.0000000000032829
2. Niederberger M, Spranger JS. Delphi Technique in Health Sciences: A Map. Frontiers in Public Health [Internet]. 2020 Sep 22;8. Available from: http://dx.doi.org/10.3389/fpubh.2020.00457
3. Diamond IR, Grant RC, Feldman BM, Pencharz PB, Ling SC, Moore AM, et al. Defining consensus: A systematic review recommends methodologic criteria for reporting of Delphi studies. Journal of Clinical Epidemiology [Internet]. 2014 Apr;67(4). Available from: http://dx.doi.org/10.1016/j.jclinepi.2013.12.002
4. Belton I, MacDonald A, Wright G, Hamlin I. Improving the practical application of the Delphi method in group-based judgment: A six-step prescription for a well-founded and defensible process. Technological Forecasting and Social Change [Internet]. 2019 Oct;147. Available from: http://dx.doi.org/10.1016/j.techfore.2019.07.002
5. Weijters B, Cabooter E, Schillewaert N. The effect of rating scale format on response styles: The number of response categories and response category labels. International Journal of Research in Marketing [Internet]. 2010 Sep 1;27(3). Available from: http://dx.doi.org/10.1016/j.ijresmar.2010.02.004
6. Lange T, Kopkow C, Lützner J, Günther KP, Gravius S, Scharf HP, et al. Comparison of different rating scales for the use in Delphi studies: different scales lead to different consensus and show different test-retest reliability. BMC Medical Research Methodology [Internet]. 2020 Feb 10;20(1). Available from: http://dx.doi.org/10.1186/s12874-020-0912-8