Data is the single most important thing in a clinical trial.
The data serves as the cornerstone for critical decisions determining the fate of therapies and the performance of trials. It shows whether recruitment is fast or slow and how the drug supply is being managed.
Study data is always subject to scrutiny by regulators, which further highlights its importance. Data can also be used in many ways and in many different circumstances; how depends on who is consuming it.
Because the industry is a highly regulated one, new regulations are always being issued. But existing regulations and guidance can also move into sharper focus as the industry adapts and evolves.
The Medicines and Healthcare products Regulatory Agency (MHRA) in the UK, the European Medicines Agency (EMA) in the EU, and the Food and Drug Administration (FDA) in the US all have regulations that require a documented audit trail review. They all say that audit trails must be available, human-readable (or variations of this phrase), and regularly reviewed.
Notably, the EMA guidance from March 2023 highlights the investigator’s responsibility for audit trail review and for ownership of the data. While this makes sense, as ultimately the investigator is responsible for trial conduct, it creates a potentially complex problem for the providers of this data. Given these regulations are not new, it is strange that the industry has not produced a complete solution already.
Thus, the question arises: how can audit trail data, stored in proprietary formats, be effectively accessed and utilized?
In its purest form, an audit trail provides a record of who did what, when, where, and (ideally) why an electronic record was created, edited, or removed. One way to think about an audit trail is that it’s the narrative of how the final data in a trial was arrived at. Simply put, it’s the story of the trial.
We can say that because, per the regulations, “Record changes shall not obscure previously recorded information” (21 CFR Part 11). This does not mean an audit trail must capture individual keystrokes; rather, it must capture any data created, updated, or deleted within the system used to store that data.
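To make that distinction concrete, here is a minimal sketch of an append-only audit trail in Python. The class, field, and method names are illustrative assumptions, not any particular system’s design; the point is simply that a correction adds a new entry rather than overwriting the old one.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass(frozen=True)
class AuditEntry:
    """One immutable entry: who did what, when, and (ideally) why."""
    user: str
    action: str                 # "create", "update", or "delete"
    data_point: str
    old_value: Optional[str]
    new_value: Optional[str]
    reason: str
    timestamp: datetime

class AuditTrail:
    """Append-only log: a change adds an entry, it never overwrites one."""
    def __init__(self) -> None:
        self._entries: list[AuditEntry] = []

    def record(self, user: str, action: str, data_point: str,
               old_value: Optional[str], new_value: Optional[str],
               reason: str) -> None:
        self._entries.append(AuditEntry(
            user, action, data_point, old_value, new_value, reason,
            timestamp=datetime.now(timezone.utc)))

    def story_of(self, data_point: str) -> list[AuditEntry]:
        """The full history of one data point, earliest entry first."""
        return [e for e in self._entries if e.data_point == data_point]

# A correction adds a second entry; the original remains visible.
trail = AuditTrail()
trail.record("dr_smith", "create", "subj-001/weight", None, "72 kg", "initial entry")
trail.record("dr_smith", "update", "subj-001/weight", "72 kg", "74 kg", "transcription error")
```

Both entries for the corrected value remain in the log, so the earlier information is never obscured, which is exactly what the regulation asks for.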
There is nothing within the regulations that requires paper records to form part of the audit trail, but there is a case for their inclusion, especially in the event of manual amendments. In these instances, paper records serve as evidence of the rationale behind changes, with the expectation that the actual results of manual amendments are traceable back to the initial request.
So, if an audit trail is a complete electronic record of how the data within a clinical trial was created, edited, and deleted, we then need to turn our focus to the systems housing this information.
There is a myriad of systems involved in a clinical trial, including CTMS, EDC, LIMS, RTSM, eCOA/ePRO, and eConsent — and they all have their respective audit trails. These systems, developed by multiple vendors, operate under individual requirements and functions, storing data in diverse formats tailored to their respective needs.
So we have a complete record of data, stored in various formats to meet differing functional requirements, yet with a requirement to provide that data to non-technical stakeholders in an accessible form at any time. It’s a complicated problem that demands an effective solution. When we factor in manual changes performed by a vendor as part of a helpdesk request, tracing the resulting electronic records back to the original request is a further complication to consider.

Furthermore, given the data it holds, the RTSM specifically must consider that a full and complete audit trail would be unblinding in nature, since all data points would need to be included. Creating a blinded version of an audit trail means the resulting file is not full and complete, and what is blinded or unblinded varies per study, so blinding adds an important additional complexity.
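As a sketch of why blinding and completeness pull against each other, consider the following Python fragment. The field names and the per-study configuration are hypothetical, not any vendor’s actual schema; it simply shows that redacting unblinding data points necessarily produces an incomplete trail.

```python
# Hypothetical per-study configuration; which data points are
# unblinding varies per study, so this set cannot be fixed product-wide.
UNBLINDING_DATA_POINTS = {"treatment_arm", "kit_number", "dispensed_drug"}

def blinded_export(entries: list[dict]) -> list[dict]:
    """Redact unblinding values for a blinded recipient.

    The shape of the trail is preserved, but by definition the
    result is no longer a full and complete record.
    """
    blinded = []
    for entry in entries:
        copy = dict(entry)
        if copy.get("data_point") in UNBLINDING_DATA_POINTS:
            copy["old_value"] = "[REDACTED]"
            copy["new_value"] = "[REDACTED]"
        blinded.append(copy)
    return blinded
```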
The regulations governing clinical data and its review emphasize the need for completeness, accuracy, and human readability, aligning with the ALCOA principles (Attributable, Legible, Contemporaneous, Original, Accurate). However, when data is stored in proprietary formats, releasing it in an understandable format can present challenges.
Historically across the RTSM industry, audit trail data release was not as efficient a process as it could have been. Requests were reviewed and processed by a data management group, where the data was often manually collated and transformed to the requestor’s requirements. It was then delivered to that requestor manually, perhaps via an sFTP upload, by email, or on physical media.
While a manual approach can satisfy the requirements and regulations, it is less than ideal. It could even be argued that this process is not as compliant as it should be, given the data is often not available through the system itself (the RTSM, for example) but from something loosely connected to it at best.
Manual data collation should also raise questions about completeness. If, for whatever reason, a manual change is not included in the released data, those manual actions must be provided separately, making it harder for a recipient to fully understand the story of the data in the study.
Manual distribution of the data also poses challenges. There is always the risk that the information is not sent to the correct recipient, and there are few ways of confirming receipt outside of manual means. sFTP transfer means that credentials and permissions need to be managed.
Depending on file sizes, email transmission may hit inbox limits, causing additional work on both sides of the request. And assuming the recipient is a site user, typically a non-technical clinical person, these challenges will distract that site user from their primary role: seeing and treating their patients.
If we agree that releasing audit trail data carries risks, such as premature or unauthorized access, then the release process becomes as critical as the data itself.
It necessitates strict controls on who can release and receive data, plus tracking of those releases so there is a log of activity in this area. There is also a pressing need to control the data’s lifespan, as it may be irrelevant by the time a recipient checks their email to retrieve a file, for example.
Given that this entire process needs management, these controls must be present in the RTSM or whichever system holds the data. And given that releasing this data at the end of a study is a requirement, at a point where time is critical because the study needs to be closed out to allow for the next steps, there is every reason for the solution to be self-service for the study teams. A systematic solution that reduces data collation and distribution times while increasing data quality and process oversight will help those teams perform the required work.
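The following Python sketch illustrates the kind of controls described above. The roles, the expiry window, and the function names are assumptions for illustration, not any product’s actual design: a release can only be created by an authorized role, every access is logged, and the download token expires.

```python
import secrets
from datetime import datetime, timedelta, timezone

AUTHORIZED_RELEASERS = {"study_lead", "sponsor_admin"}  # assumed roles
RELEASE_LIFESPAN = timedelta(days=14)                   # assumed expiry window

class ReleaseError(Exception):
    """Raised when a release or retrieval violates the controls."""

def create_release(requestor_role: str, recipient: str, access_log: list) -> dict:
    """Create a tracked, time-limited release of audit trail data."""
    if requestor_role not in AUTHORIZED_RELEASERS:
        raise ReleaseError(f"role '{requestor_role}' may not release audit data")
    now = datetime.now(timezone.utc)
    access_log.append(("released", recipient, now))   # log who got what, when
    return {
        "token": secrets.token_urlsafe(16),           # one-off download token
        "recipient": recipient,
        "expires": now + RELEASE_LIFESPAN,
    }

def retrieve(release: dict, recipient: str, access_log: list) -> None:
    """Check identity and expiry before serving the file; log every attempt."""
    now = datetime.now(timezone.utc)
    access_log.append(("retrieval_attempt", recipient, now))
    if recipient != release["recipient"]:
        raise ReleaseError("not the intended recipient")
    if now > release["expires"]:
        raise ReleaseError("release has expired")
    # ...serve the audit trail file here...
```

Because both the release and every retrieval attempt land in the same log, the process itself gains the kind of audit trail the data already has.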
It’s no longer an option to rely on manual methods. Any platform and study solution should empower its users to perform their jobs effectively, with end-of-study data release seamlessly integrated into their workflow.
Over the last year, our Signant SmartSignals® RTSM module has been updated to include industry-leading features to make the process of study data release easier for our customers. It was this design and development effort that led me to write this blog, not as self-promotion, but to remind everyone that we need to reduce cycle times and increase quality across all trial needs. I’m proud to represent an RTSM that manages ongoing trial conduct as a key competence, not just the core randomization and supply management.
End-of-study data release, a requirement that regulators are paying closer attention to now, must be part of the platform and study solution. Embracing a systematic approach is non-negotiable in meeting regulatory requirements while maintaining operational efficiency.