In part one, we explored factors that impact signal detection in schizophrenia clinical trials and training methods that improve rating reliability. Now, we’ll dive into real-world examples of additional data quality indicators and strategies to detect and remediate the underlying issues effectively.
Many data quality issues are linked to remediable rating practices
Data quality issues detected early in schizophrenia clinical trials can predict future issues after randomization [13]. Data quality aberrations, such as increased or decreased variability, can impact placebo response and drug response differently, negatively affecting drug-placebo separation [14].
For instance, high within-subject visit-to-visit variability, such as erratic score changes, has been associated with increased placebo response and reduced signal detection in both acute schizophrenia and prominent negative symptom clinical trials [14,15]. Sites showing a high frequency of erratic ratings can often be identified through blinded data analysis. These sites may indicate issues like frequent rater changes, inconsistent interviewing technique, patient selection anomalies, medication non-compliance, or unstable ward environments [14-17].
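To make this concrete, here is a minimal sketch of how erratic visit-to-visit ratings might be flagged in blinded data. It assumes a pandas DataFrame of visit-level PANSS total scores with hypothetical columns (site_id, subject_id, visit, panss_total); the change and site-rate thresholds are illustrative, not published criteria.

```python
import pandas as pd

# Hypothetical blinded dataset: one row per subject-visit with a PANSS total
# score. Column names and thresholds are illustrative assumptions.
def flag_erratic_sites(df: pd.DataFrame, change_threshold: float = 15.0,
                       site_rate_threshold: float = 0.10) -> pd.DataFrame:
    """Flag sites with a high frequency of large visit-to-visit PANSS swings."""
    df = df.sort_values(["site_id", "subject_id", "visit"])
    # Absolute change in PANSS total between consecutive visits, per subject.
    df["delta"] = df.groupby(["site_id", "subject_id"])["panss_total"].diff().abs()
    transitions = df.dropna(subset=["delta"])
    # Proportion of visit-to-visit transitions at each site exceeding the threshold.
    site_rates = (transitions["delta"] > change_threshold).groupby(
        transitions["site_id"]).mean().rename("erratic_rate")
    out = site_rates.reset_index()
    out["flagged"] = out["erratic_rate"] > site_rate_threshold
    return out
```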
Centralized analysis for early detection
Centralized analysis of blinded data for rating anomalies and patient selection patterns, paired with audio/video recording of subject interviews, is an efficient and cost-effective way to identify sites at risk of signal degradation [18,19]. Audio/video recording coupled with external expert review of site PANSS interviews appears to reduce identical ratings on all 30 PANSS items across consecutive visits (a putative measure of non-independent PANSS assessments) by over 50% [20].
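As an illustration of one such centralized check, the sketch below counts consecutive-visit pairs in which all 30 PANSS item scores are unchanged. The item column names and data layout are assumptions for illustration, not the actual analysis used in the cited work.

```python
import pandas as pd

# Hypothetical blinded item-level data: one row per subject-visit with
# 30 PANSS item columns named p1..p7, n1..n7, g1..g16 (illustrative names).
ITEM_COLS = ([f"p{i}" for i in range(1, 8)] + [f"n{i}" for i in range(1, 8)]
             + [f"g{i}" for i in range(1, 17)])

def count_identical_consecutive_ratings(df: pd.DataFrame) -> pd.Series:
    """Per-site count of consecutive-visit pairs with all 30 items unchanged."""
    df = df.sort_values(["site_id", "subject_id", "visit"])
    # True where every item equals its value at the subject's previous visit.
    # A subject's first visit yields NaN diffs, which eq(0) correctly excludes.
    unchanged = (
        df.groupby(["site_id", "subject_id"])[ITEM_COLS].diff().eq(0).all(axis=1)
    )
    return unchanged.groupby(df["site_id"]).sum()
```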
Combining eCOA with audio/video recording further reduces the frequency of scoring errors [21]. Audio recording is often viewed as less intrusive and more conducive to patient confidentiality than video recording. Audio-recorded site-based interviews may have the further advantage of avoiding detection of “functional” treatment-emergent adverse events that could bias ratings [22,23].
Patient-reported outcomes (PROs) are increasingly incorporated into outpatient schizophrenia clinical trials [24], and quality concerns also occur at a high frequency in PRO data. Concerning patterns can be readily detected in blinded electronic (ePRO) data, either by visual inspection or by programmed quality indicator alerts. Examples include implausible values, repetitive responses, unexpected variability, and unusual administration times and time stamps [25].
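A minimal sketch of programmed ePRO quality indicator alerts of this kind is shown below. It assumes one record per administration, hypothetical column names, and a 0-4 response scale; a real instrument would apply its own defined ranges and rules.

```python
import pandas as pd

# Hypothetical ePRO records: subject_id, timestamp, and item response columns
# on a 0-4 scale (column names and ranges are illustrative assumptions).
def epro_quality_alerts(df: pd.DataFrame, item_cols: list[str],
                        valid_range: tuple[int, int] = (0, 4)) -> pd.DataFrame:
    alerts = pd.DataFrame(index=df.index)
    lo, hi = valid_range
    # Implausible values: responses outside the instrument's defined range.
    alerts["implausible_value"] = (
        df[item_cols].lt(lo) | df[item_cols].gt(hi)).any(axis=1)
    # Repetitive responses: the same answer to every item ("straight-lining").
    alerts["repetitive_response"] = df[item_cols].nunique(axis=1).eq(1)
    # Unusual administration time: completed between midnight and 5 a.m.
    hour = pd.to_datetime(df["timestamp"]).dt.hour
    alerts["unusual_time"] = hour.between(0, 4)
    return alerts
```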
Machine learning has opened new doors for proactive data quality monitoring in schizophrenia clinical trials. It can help identify raters and sites at risk of developing quality concerns, allowing for early remediation or limits on enrollment. However, it is essential to use only highly accurate and clinically relevant models, as inaccurate predictions can themselves deteriorate data quality [26].
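For illustration, the following sketch shows the general shape of such a model using scikit-learn. The feature set, file name, and outcome label are hypothetical, not the published models, and any real model would require far more rigorous validation before its predictions are acted upon.

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical rater-level features derived from blinded data (all names are
# illustrative): e.g., rate of erratic ratings, rate of identical consecutive
# visits, mean interview duration.
features = pd.read_csv("rater_quality_features.csv")  # assumed file
X = features.drop(columns=["rater_id", "had_quality_finding"])
y = features["had_quality_finding"]  # assumed label from past remediation

model = GradientBoostingClassifier(random_state=0)
# Check discrimination before acting on predictions: an inaccurate model can
# itself degrade data quality by flagging or clearing the wrong raters.
auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
print(f"Cross-validated AUC: {auc:.2f}")
model.fit(X, y)
at_risk = features.loc[model.predict_proba(X)[:, 1] > 0.8, "rater_id"]
```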
Machine learning also offers the opportunity to seamlessly assess subjects’ suitability for a clinical trial or to monitor rater performance, and other, currently unforeseen, applications are likely to emerge as the methodologies evolve.
Age matters: Including adolescents in schizophrenia trials
Regulatory initiatives increasingly require pharmaceutical sponsors to include adolescent patients (ages 13-17) in their schizophrenia trials [27]. Schizophrenia is less common in adolescents, and diagnosis can be challenging due to diagnostic ambiguity and a reluctance among practitioners to diagnose schizophrenia in this age group [28].
A further challenge comes from the measures themselves: instruments such as the PANSS were designed for adults but are used ubiquitously as primary efficacy measures in adolescent schizophrenia trials [29,30]. Conventions have emerged over the years for interviewing the parent/caregiver, as well as the patient, on each of the 30 PANSS items in adolescent trials. This differs from the approach taken with adults and adds another layer of complexity for investigators not experienced or skilled in working with this population.
In addition to the learnings from adult patients with schizophrenia discussed throughout this paper, there are learnings unique to the adolescent population. These include:
Diagnosis in Pediatric Trials: Following focused expert training on the symptomatic presentation and differential diagnostic considerations of the disorder, we recommend external expert review of diagnostic interviews and outside verification of the diagnostic eligibility of each selected participant.
Efficacy Assessment in Pediatric Schizophrenia Trials: As in studies with adults, we recommend external review of PANSS interviews for interview adequacy and scoring appropriateness. Regulators often permit a limited number of adolescents to be enrolled in adult trials, and it is not uncommon for sponsors to allow adult-trained investigators to enroll adolescents into their ongoing schizophrenia trials. Investigators who have worked in adult studies may not adhere to the special PANSS conventions for adolescents and are often not versed in probing, following up on, and scoring PANSS items in this age group. Our group has shown high variability across PANSS items when raters attempt to score standardized adolescent patients with schizophrenia using the PANSS [31,32].
In an effort to improve signal detection and reduce burden, much research has been devoted to shortening the PANSS for specific use in the 13- to 17-year-old population. A 10-item, psychometrically derived version has been developed from a government-funded trial of adolescents with schizophrenia, and the findings have now been replicated in two large, independent, industry-sponsored pivotal trials in adolescents with schizophrenia [33-35].
In addition, a structured interview that assists raters in appropriately querying, probing, and scoring the 10 items is in the final stages of development, as is an eCOA version that will provide independent quality assurance metrics to help identify potential rating errors [36].
There appears to be a limited but measurable benefit to endpoint data quality from many of the rater-centered procedures described. Moreover, certain putatively detrimental data quality indicators (e.g., erratic ratings) appear to be associated with increased placebo response and diminished placebo-drug separation. While these results are consistent with a beneficial effect of rigorous training and data monitoring, interpretation is limited by the post-hoc nature of the analyses and the often uncontrolled or inadequately controlled comparisons. Salient future directions for research include how much training and data quality monitoring is enough; the extent to which high-quality data and placebo-drug separation at a site are state vs. trait phenomena; and how accurately a site’s pattern of quality indicators in blinded data predicts drug-placebo separation.
The content of this blog was derived from our recent article in the Journal of Schizophrenia Research.
Want to discuss your data quality monitoring strategy with experts? Contact us to get started!
ABOUT THE AUTHORS