
Inclusionary Issues in Early Alzheimer's Disease Trials: The Challenge of Baseline Shifts

Part 3 of our 3-part series on blinded data analytics in CNS Clinical Trials. 

Clinical trials for Alzheimer's disease therapeutics face numerous challenges, from patient recruitment to measuring cognitive decline accurately.  

Among these challenges, one often-overlooked issue can significantly impact trial outcomes: the phenomenon of baseline shifts in cognitive assessment scores between screening and baseline visits.  

This phenomenon deserves closer attention, as it can potentially undermine trial validity and complicate the interpretation of results. 

The Curious Case of the Changing MMSE Score 

Consider this scenario from one of our recent analyses: A patient enters an early Alzheimer's disease trial with a Mini-Mental State Examination (MMSE) score of 24 at screening, which comfortably qualifies them for inclusion in a trial targeting mild cognitive impairment or early Alzheimer's disease.


However, when the same patient returns for their baseline visit—often just weeks later—their MMSE score has dropped to 16. 

What happened? Did the patient's cognition genuinely decline by 8 points in such a short period? From a clinical perspective, this rapid deterioration seems implausible, as Alzheimer's disease typically progresses gradually over months and years, not weeks. 

Even more puzzling, after this dramatic drop, the patient's subsequent MMSE scores stabilize around 18 for the remainder of the trial. This pattern—a significant drop followed by stabilization—raises important questions about data quality and trial methodology. 

How Common Is This Phenomenon? 

Our analysis of over 8,300 patients across multiple Alzheimer's disease trials revealed that this isn't an isolated incident. When examining the distribution of MMSE scores relative to inclusionary cutoffs, we found distinct patterns depending on the trial design: 

Impact of protocol on baseline MMSE

Trials Requiring Only Screening to Meet Inclusion Criteria

In these trials (N = 5,271 patients), screening scores naturally clustered above the cutoff, yet almost 20% of baseline scores drifted below the inclusionary threshold.

Trials Requiring Both Screening and Baseline to Meet Inclusion Criteria

These trials (N = 3,044 patients) showed a markedly different pattern: MMSE scores clustered predominantly above the inclusionary cutoff at both visits, with very few scores falling below. This is surprising: one would expect a comparable proportion of patients to drop below the baseline cutoff and be screen-failed, but this did not happen.

Either way, trials may end up including patients who no longer meet the originally defined criteria, or patients whose baseline scores have been substantially inflated.

The Impact on Trial Outcomes

This phenomenon has several important implications:

Risk factors of trial requirements

Inconsistent Patient Populations

When patients' baseline cognitive scores differ substantially from their screening scores, the trial may end up including a more heterogeneous population than intended. This heightened variability can reduce statistical power and make treatment effects harder to detect.

Risk of Regression to the Mean

For patients with particularly high or low screening scores, subsequent assessments may naturally trend toward the population mean. However, the patterns we observed go beyond this statistical phenomenon, suggesting other factors are at play.
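To illustrate what regression to the mean alone can produce, here is a small simulation sketch. The cutoff, measurement-noise level, and population parameters below are illustrative assumptions for demonstration, not values from our trials:

```python
import random
import statistics

random.seed(0)

CUTOFF = 22      # hypothetical inclusion cutoff: screening MMSE >= 22
NOISE_SD = 2.0   # hypothetical measurement noise, in MMSE points

# Simulate stable patients: true cognition does not change between visits,
# but each assessment adds independent measurement noise.
true_scores = [random.gauss(23, 3) for _ in range(20_000)]
screening = [t + random.gauss(0, NOISE_SD) for t in true_scores]
baseline = [t + random.gauss(0, NOISE_SD) for t in true_scores]

# Enroll only patients whose screening observation passes the cutoff.
passed = [(s, b) for s, b in zip(screening, baseline) if s >= CUTOFF]
mean_screen = statistics.mean(s for s, _ in passed)
mean_base = statistics.mean(b for _, b in passed)

# Because selection favors "lucky" (noise-inflated) screening scores, the
# group's mean baseline score drops even though no one actually declined.
print(f"mean screening (enrolled): {mean_screen:.2f}")
print(f"mean baseline  (enrolled): {mean_base:.2f}")
```

With these assumed parameters the selected group's mean drops by well under a point between visits, which is the scale regression to the mean typically operates on: nowhere near the 8-point shift described above.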

Inflated Placebo Responses

Patients who show substantial declines between screening and baseline may appear to "improve" in subsequent visits simply by returning to their true baseline state. This can artificially inflate placebo response rates and mask true treatment effects.

Wasted Resources and Ethical Concerns

For sponsors, enrolling patients who no longer meet entry criteria represents wasted resources. For patients and families, participation in potentially inappropriate trials raises ethical questions about exposure to experimental treatments without meeting the intended clinical profile. 

Predictors of Baseline Shifts: Risk Factors and Mitigation Strategies 

Our analysis identified several factors associated with screening-to-baseline cognitive score shifts: 

Predicting baseline MMSE score

MMSE Performance Metrics 

We found that both the raw MMSE score at screening and the time taken to administer the assessment predicted baseline shifts. Patients with scores close to the inclusionary cutoff, and assessments that took markedly longer than usual to complete, carried a significantly higher risk of substantial baseline shifts.

Screening Score as a Predictor 

Logistic regression models revealed that screening MMSE scores strongly predicted the probability of failing inclusion criteria at baseline: 

  • A patient with an MMSE of 22 at screening had a nearly 50% chance of scoring below the inclusion threshold at baseline in screening-only trials
  • The risk decreased substantially with higher screening scores, approaching zero for scores above 28
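As a rough sketch, this pattern can be reproduced with a simple logistic curve. The coefficients below are illustrative values chosen to match the two figures above (roughly 50% risk at a screening MMSE of 22, approaching zero above 28); they are not the fitted coefficients from our models:

```python
import math

# Illustrative coefficients (assumed, not fitted values from the analysis),
# chosen so the curve matches the reported pattern.
B0, B1 = 14.3, -0.65

def p_fail_baseline(screening_mmse: float) -> float:
    """Logistic model: probability of scoring below the inclusion
    threshold at baseline, given the screening MMSE."""
    return 1.0 / (1.0 + math.exp(-(B0 + B1 * screening_mmse)))

for score in (22, 24, 26, 28, 30):
    print(f"screening MMSE {score}: P(fail at baseline) = {p_fail_baseline(score):.2f}")
```

The steep slope near the cutoff is the practical takeaway: patients enrolling with scores just above the threshold carry a substantially elevated risk of falling below it at baseline.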

Assessment Quality Indicators 

Combining score, duration, and other metrics created a powerful predictive model for identifying at-risk subjects. Our data showed that a combined model incorporating all elements provided significantly better predictive accuracy than any single metric alone. 
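The value of combining metrics can be shown with a toy example. Everything below is synthetic: the risk function, its coefficients, and the duration effect are assumptions for demonstration, not our fitted model. The point is simply that when risk genuinely depends on several assessment metrics, a model using all of them achieves a lower log-loss than a score-only model:

```python
import math
import random

random.seed(1)

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

# Assumed data-generating process: true risk of a baseline shift depends
# on BOTH the screening MMSE and how long the assessment took.
def true_risk(score: float, duration_min: float) -> float:
    return sigmoid(14.3 - 0.65 * score + 0.35 * (duration_min - 10.0))

subjects = []
for _ in range(5_000):
    score = random.uniform(22, 30)
    duration = random.gauss(10.0, 3.0)
    failed = random.random() < true_risk(score, duration)
    subjects.append((score, duration, failed))

def log_loss(predict) -> float:
    """Mean negative log-likelihood of the observed outcomes."""
    total = 0.0
    for score, duration, failed in subjects:
        p = min(max(predict(score, duration), 1e-9), 1 - 1e-9)
        total += -math.log(p if failed else 1 - p)
    return total / len(subjects)

score_only = lambda s, d: sigmoid(14.3 - 0.65 * s)   # ignores duration
combined = lambda s, d: true_risk(s, d)              # uses both metrics

print(f"score-only log-loss: {log_loss(score_only):.3f}")
print(f"combined   log-loss: {log_loss(combined):.3f}")
```

In practice, the combined model would of course be fitted to real assessment data with standard logistic-regression software rather than specified by hand.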

Practical Solutions for Sponsors and Researchers 

Based on these findings, we recommend several approaches to address this challenge:

Implement Blinded Inclusion Criteria

Concealing the criteria from sites and subjects substantially reduces the risk of including patients who don't match the intended population.  

While this may increase screen failure rates, it ultimately creates a more homogeneous and appropriate study population.

Utilize Cognitive Assessment Quality Indicators

By monitoring not just the raw scores but also assessment duration and other quality metrics, sites can identify potentially problematic assessments in real time.

Apply Predictive Analytics to Screening Data

Using predictive models that incorporate multiple assessment metrics can help identify patients at high risk for baseline shifts. This allows for additional monitoring, training, or assessment validation before proceeding with enrollment.

Implement Blinded Data Analytics

Continuous monitoring of cognitive assessment data through blinded data analytics can identify sites or raters with unusual patterns of baseline shifts, enabling targeted intervention without compromising trial blinding. 

Conclusion 

The phenomenon of screening-to-baseline shifts in cognitive assessments represents a significant challenge in early Alzheimer's disease trials. By understanding the prevalence, patterns, and predictors of these shifts, sponsors and researchers can implement effective strategies to mitigate their impact. 

Ultimately, addressing these inclusionary issues will lead to more homogeneous trial populations, increased statistical power, and more reliable outcomes. For a field that has faced numerous setbacks in the quest for effective treatments, improving methodological rigor in this manner could contribute significantly to eventual success. 

As we continue to refine our understanding of these patterns and develop more sophisticated predictive models, we move closer to the goal that matters most: developing effective treatments for patients with Alzheimer's disease. 

This article is the final installment in our three-part series exploring blinded data analytics in CNS Clinical Trials: 

Part 1: Impact of Data Concerns in Neurological Clinical Trials: Why Quality Matters 
Part 2: Using Clinical Insights to Design Analytical Solutions in Parkinson's Disease Trials

About the Author

Headshot of Alan Kott, MUDr

Dr. Alan Kott is the Practice Leader for Data Analytics at Signant Health, with both academic and industry experience in clinical trials. He has led the development of Signant’s Data Analytics Program, overseeing data analytics in over 200 clinical trials across multiple indications. Prior to joining Signant, Dr. Kott was an Assistant Professor at Charles University and a house officer in psychiatry at General Teaching Hospital in Prague. He holds a Medicinae Universae Doctor (MUDr.) from Charles University.

Dr. Alan Kott's presentation, "Application of BDA Methodologies in Clinical Trials Focus on Neurology," was supported by Petra Reksoprodjo (Director of Clinical Program & Performance, Operations) and Chris Murphy (Associate Director Clinical Service, Operations), and presented at ISCTM in Washington, D.C.
