CASE STUDY
A global pharma client needed a data solution delivered on a tight timeline to ensure uninterrupted data flow to their central clinical risk monitoring team.
Previously, all data from the various sites and CROs was stored in a legacy data warehouse. An in-house data aggregator tool pulled this data from the warehouse, grouped it into sas7bdat files, each representing one dataset for a particular study, and packaged them into a zip archive. The zip file was then uploaded to a third-party SaaS tool, which the central monitoring team used for their periodic checks.
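For illustration, the sketch below shows one way such an aggregation step might look, assuming per-study sas7bdat datasets already exist in the source storage. The paths, study IDs, and SaaS endpoint are placeholders, not the client's actual systems or tool.

```python
import zipfile
from pathlib import Path

import requests  # assumed available for the SaaS upload

# Hypothetical locations and credentials, used only to illustrate the flow.
LAKE_ROOT = Path("/mnt/legacy-warehouse/clinical")
UPLOAD_URL = "https://saas.example.com/api/v1/uploads"
API_TOKEN = "REPLACE_ME"


def package_study(study_id: str, out_dir: Path) -> Path:
    """Collect the sas7bdat datasets for one study into a single zip archive."""
    archive = out_dir / f"{study_id}.zip"
    with zipfile.ZipFile(archive, "w", compression=zipfile.ZIP_DEFLATED) as zf:
        for dataset in (LAKE_ROOT / study_id).glob("*.sas7bdat"):
            # Each sas7bdat file represents one dataset for the study.
            zf.write(dataset, arcname=dataset.name)
    return archive


def upload_archive(archive: Path) -> None:
    """Push the zipped study package to the third-party SaaS tool."""
    with archive.open("rb") as fh:
        resp = requests.post(
            UPLOAD_URL,
            headers={"Authorization": f"Bearer {API_TOKEN}"},
            files={"file": (archive.name, fh, "application/zip")},
            timeout=300,
        )
    resp.raise_for_status()


if __name__ == "__main__":
    staging = Path("/tmp/aggregator-staging")
    staging.mkdir(parents=True, exist_ok=True)
    for study in ("STUDY-001", "STUDY-002"):  # illustrative study IDs
        upload_archive(package_study(study, staging))
```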
The company wanted to retire the legacy data warehouse and move to a unified workspace that would give them consistent and reliable insights across the organization. The new central data lake was built on the Databricks platform, which made the previous data aggregation tool incompatible. The older tool also required manual collation for certain datasets requested by the monitoring team that fell outside the data available in the legacy warehouse.
The client wanted to build a new, upgraded data aggregation tool while also reducing the manual intervention the older tool still required.
They were looking for a technology partner who understood not only the technology but also the clinical trial data management ecosystem, and who could deliver quickly, since deprecation of the legacy data warehouse would leave the central monitoring team with no data to review for ongoing clinical trials.
Our data experts examined the client's existing data systems and, after a thorough analysis of their requirements, recommended the following approach.