Evaluation of the Champion Programme
- To gather evidence that research sponsors, platform service providers and data providers (e.g. hospitals) can use to demonstrate the benefits and organisational value of reusing health data for research.
- To identify issues, obstacles and success factors for scaling up the reuse of health data for research, to inform future i~HD initiatives and other stakeholders about possible mitigations and incentives.
- Starting from the Champion Programme, to eventually develop and promote a standard set of evaluation indicators across clinical research platform networks, to enable findings to be pooled and shared.
The i~HD Evaluation Task Force
- Rainer Thiel, empirica, Germany
- Veli Stroetmann, empirica, Germany
- Dipak Kalra, i~HD, Belgium
- Pascal Coorevits, EuroRec, Belgium
- Christel Daniel, AP-HP, France
- Mats Sundgren, AstraZeneca, Sweden
- Nadir Ammour, Sanofi, France
- Philippe Rocolle, Sanofi, France
- Aurèle N’Dja, Sanofi, France
- Tine Lewi, Janssen Pharmaceuticals, Belgium
- Nicole Trewarthe, ICON, Ireland
The research sponsors of the Champion Programme (CP), and Custodix, agreed at the outset to collaborate on a joint and shared evaluation that would help generate evidence of the value of this innovative platform and of this novel method for optimising protocols and facilitating recruitment into studies. It is intended that this evidence will be used in three ways: internally by each of the participating research sponsors, to justify scaling up future investments in the platform; by Custodix (with agreed levels of de-identification and aggregation), to support its promotion of InSite to sponsors and hospitals; and by i~HD, to promote the positive value of this kind of clinical research ecosystem and to better understand its strengths and weaknesses.

The evaluation method and metrics are generic to any clinical research platform: none of the criteria and indicators within the evaluation framework is specific to one platform provider. Methodologically, the evaluation of the CP consists of two tracks:
- a programme evaluation, to better understand the drivers, incentives and barriers to joining and implementing the CP;
- a technology assessment, to capture usage and the technical implementation of the platform.

The evaluation framework encompasses costs and benefits, both monetary and non-monetary, for two stakeholder groups: research sponsors and hospitals.
Achievements during 2016
- Establishing an evaluation task force comprising industry representatives and academic representatives, developing a consensus set of evaluation indicators and progressing this towards a formal evaluation methodology.
Plans for 2017
- Finalising the evaluation methodology, and employing a dedicated part-time staff member at empirica to undertake much of the fieldwork throughout 2017-18, starting with baseline data gathering early in 2017.
- Periodically releasing interim findings to Champion Programme decision-makers.