If you missed Signant’s September 20 webinar on artificial intelligence (AI) use cases and practical applications for clinical trial data management, it’s not too late. You can watch the recorded webinar here.
We’ve also summarized the key takeaways and expert insights for you.
Our speakers agreed that AI can provide a solid foundation for data aggregation, which is helpful at several stages of a study. The webinar offered many additional takeaways, including the highlights below from each speaker.
Jan Breemans, Signant’s Senior Director of Global Solutions, kicked things off by noting that the industry has been talking about the benefits of AI and machine learning for some time. However, we haven’t always seen examples that deliver practical value in settings that fit our needs. As a result, AI can come across as a buzzword rather than a legitimate solution, which is why we wanted to bring in experts to present practical AI use cases.
Pankaj Manon, Thoughtsphere Chief Technology Officer, explained how AI is applied across the clinical data chain, from ingestion of many data sources through aggregation, harmonization, and standardization. AI automates and streamlines these processes while simultaneously applying validation and conformance checks aligned with the study design. The model also learns from how humans update the configurations, adjusting subsequent outputs accordingly. AI-driven data ingestion, mapping, and validation results in faster study startup and faster data review.
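To make that ingest-map-validate loop concrete, here is a minimal Python sketch of the general idea. The field names, the fuzzy-matching approach, and the functions (suggest_mapping, record_human_correction, validate_record) are illustrative assumptions, not Thoughtsphere’s actual implementation.

```python
# Hypothetical sketch of the ingest -> map -> validate loop described above.
# All names and the fuzzy-matching logic are illustrative, not a vendor API.
import difflib

STANDARD_FIELDS = ["SUBJID", "VISITNUM", "AESTDTC", "AETERM"]  # target standard fields
learned_overrides = {}  # source column -> confirmed standard field, learned from human edits


def suggest_mapping(source_column: str) -> str | None:
    """Suggest a standard field for a raw source column, preferring human-confirmed mappings."""
    if source_column in learned_overrides:
        return learned_overrides[source_column]
    match = difflib.get_close_matches(source_column.upper(), STANDARD_FIELDS, n=1, cutoff=0.6)
    return match[0] if match else None


def record_human_correction(source_column: str, confirmed_field: str) -> None:
    """Store a reviewer's correction so later suggestions reuse it (the 'learning' step)."""
    learned_overrides[source_column] = confirmed_field


def validate_record(record: dict, required: list[str]) -> list[str]:
    """Basic conformance check: flag required standard fields missing from a mapped record."""
    return [f for f in required if not record.get(f)]


# Example: map a raw EDC column, apply a human correction, then re-suggest.
print(suggest_mapping("SUBJ_ID"))             # -> "SUBJID" (fuzzy match)
record_human_correction("AE_VERBATIM", "AETERM")
print(suggest_mapping("AE_VERBATIM"))         # -> "AETERM" (learned from the correction)
print(validate_record({"SUBJID": "001"}, ["SUBJID", "VISITNUM"]))  # -> ["VISITNUM"]
```

The point of the sketch is the feedback loop: suggestions are automated, but each human correction is captured and changes the subsequent outputs, which is how the configuration “learns” over the course of a study.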
Divya Ravishankar, Director of Product Strategy for Thoughtsphere, shared an example in which machine learning was applied to monitoring quality tolerance limits. In this case, the algorithm generated a projection line contrasted against the actual and expected values for that study. This gave users a holistic picture and let them take preventive measures when needed.
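A simple way to picture this is projecting a study metric forward and checking when it would cross its tolerance limit. The sketch below uses made-up deviation rates, a linear trend, and an assumed threshold purely for illustration; it is not the model shown in the webinar.

```python
# Hypothetical sketch of quality tolerance limit (QTL) monitoring with a projection line.
# The metric, numbers, and linear trend are illustrative assumptions only.
from statistics import linear_regression

# Observed protocol-deviation rate (%) by month for a running study (made-up data).
months = [1, 2, 3, 4, 5, 6]
observed_rate = [1.0, 1.3, 1.7, 2.1, 2.4, 2.9]

expected_rate = 2.0   # rate the study was planned around
qtl_threshold = 4.0   # quality tolerance limit that triggers escalation

# Fit a simple trend to the observed data and project the rate out to month 12.
slope, intercept = linear_regression(months, observed_rate)
projected = {m: round(slope * m + intercept, 2) for m in range(7, 13)}

# Flag the first projected month at which the QTL would be breached.
breach_month = next((m for m, r in projected.items() if r >= qtl_threshold), None)

print(f"Current vs expected rate: {observed_rate[-1]} vs {expected_rate}")
print(f"Projected rates: {projected}")
print(f"Projected QTL breach at month: {breach_month}")  # act before the limit is actually hit
```

Plotting the observed points, the expected rate, and the projection line together gives the holistic picture described above: teams can see not just where the metric is today, but when it is likely to breach the limit if nothing changes.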
Alan Kott, MUDr., Signant’s Practice Leader for Data Analytics, pointed out that machine learning gives us, for the first time, the ability not only to identify data quality concerns, as we have been doing for years with traditional analytical methods, but also to predict future data quality concerns and address them before they propagate throughout a study.