by Maria Christofidou
As the regulatory landscape around the governance of artificial intelligence, health data reuse and innovation continues to evolve, our research and innovation community finds itself navigating a transition from a known set of governance expectations to a new layer of challenges.
Reflecting on the impact the GDPR has had on the healthcare and research sectors, we find that current projects and initiatives are tentatively starting to explore the value of AI for improving care outcomes and making a real impact on the sector.
i~HD is pleased to be at the centre of this transition. We are part of several EU IMI and Horizon 2020 projects that are navigating this space. In each case, a solid understanding of existing regulation and a collegial approach with our project partners are helping us to address the challenges, and the need for interpretation, around legal requirements and expectations when it comes to protecting data, assuring its quality and supporting the development of AI tools.
That is not to say that the path is clear or the waters smooth. The glue that binds all our efforts comes down to the human aspects of our work. Whilst ethical concerns around the use of AI (including transparency, autonomy for individuals and bias) are widely known, they may not be well understood in a practical sense, and they are certainly not always clear in their operational context.
How these challenges play out against existing and forthcoming regulations is usually unclear until they surface in practice. For example: is AI decision-making ever truly a black box in operation, or can we do better at articulating how personal data is used and the decisions that are made?
There will always be nuance, especially around rare diseases and those that affect a particular demographic. So, as savvy as any consortium might be around regulatory affairs, it will all be for naught if that consortium fails to understand the concerns of the community it is trying to benefit, or to demonstrate value to those charged with their care.
A safe approach to developing new tooling is to focus on solutions that suit the majority – a version of the 80/20 rule, where you can manage 80% of cases and build tools that work for them. But what about the remaining 20%?
We can only be equitable if we understand the context within which we work. In each of our projects we apply a data protection by design and by default approach to understanding risk and engineering requirements in a technical sense. We use this as a basis for assessing how AI and machine learning can be made transparent and explainable, and how biases can be addressed.
These are the key challenges we are aiming to address, and how:
- Is the data used biased towards particular demographics, including ethnicity, gender and social status? (A simple representation check, sketched after this list, is one way to surface this.)
- Is it equitable for patients who may not see benefits directly, and do they have a say in whether their data is used?
- Is the AI safe, explainable and transparent in its operation?
- New regulations (the AI Act and the European Health Data Space) are forthcoming, and existing ethical principles can help to address these issues but need interpretation. How best can we do this?
- i~HD can help guide the understanding of these issues and the interpretation of the forthcoming regulations and existing ethical principles, and can help navigate the specifics of development and deployment;
- This informs and guides AI developers, health providers and researchers in developing machine learning and deploying AI in a safe, ethical, transparent and equitable manner, helping to enhance health outcomes.
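To make the first of these challenges concrete, here is a minimal, illustrative sketch of one practical step a project team might take: comparing the demographic makeup of a dataset against a reference population and flagging under-represented groups. This is not an i~HD tool or method; the column names, reference shares and threshold are hypothetical assumptions for illustration only.

```python
# Illustrative sketch only: flag demographic groups that appear in the data
# far less often than in a reference population. All names and numbers here
# are hypothetical assumptions, not an i~HD tool.
import pandas as pd

def representation_check(df: pd.DataFrame, column: str,
                         reference: dict[str, float],
                         tolerance: float = 0.5) -> pd.DataFrame:
    """Flag groups whose share in the data falls below `tolerance` times
    their share in the reference population."""
    observed = df[column].value_counts(normalize=True)
    rows = []
    for group, expected in reference.items():
        share = observed.get(group, 0.0)
        rows.append({
            "group": group,
            "share_in_data": round(share, 3),
            "share_in_population": expected,
            "under_represented": share < tolerance * expected,
        })
    return pd.DataFrame(rows)

# Hypothetical usage: census-style reference shares for an 'ethnicity' field.
data = pd.DataFrame({"ethnicity": ["A"] * 90 + ["B"] * 8 + ["C"] * 2})
print(representation_check(data, "ethnicity",
                           reference={"A": 0.70, "B": 0.20, "C": 0.10}))
```

A check like this does not answer the ethical question by itself, but it turns "is the data biased?" into something a consortium can measure, document and discuss with the communities concerned.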
About Maria Christofidou
Data Protection Officer, i~HD
Maria graduated from the University of Kent with an LLB in Law (2012-2015) and went on to study at the University of Edinburgh, where she graduated with an LLM in European Law (2015-2016). In 2020 she successfully applied for the Marie Skłodowska-Curie HELICAL Grant and is currently undertaking her PhD at the University of Ghent on GDPR and health data.