i~HD AI Policy

1. Introduction

This AI Policy outlines the principles, responsibilities, and compliance measures for the ethical and lawful use of artificial intelligence (AI) within i~HD.

The policy aligns with the EU AI Act, the General Data Protection Regulation (GDPR), the EU Ethics Guidelines for Trustworthy AI, the Network and Information Systems Directive (NIS2), and other relevant European regulations, together with their implementations in Belgian and other Member State or third-country law, to ensure responsible AI deployment.

At all times this Policy must be read alongside i~HD’s Data Protection and Transparency Notice and Cookies Policy available here.

2. Policy rationale

This policy is designed to guide i~HD employees and Freelance Consultants on the responsible use of AI tools in their work and across all activities in which i~HD participates, including collaborations in research consortia.

While i~HD does not currently license or provide AI software to clients, AI tools may be utilised by employees and Freelance Consultants for routine tasks that support their professional responsibilities when working on behalf of i~HD.

The policy clarifies the organisation’s position on the use of AI systems as defined in the EU AI Act (2024). It applies to both staff and freelance consultants; where measures apply specifically to one group, this will be clearly articulated.

The deployment of AI in health data innovation must align with i~HD’s commitment to patient safety, fairness, and equitable access to healthcare advancements.

3. Scope

The policy governs the use of AI-powered tools and software within i~HD. It does not cover the development of AI systems, which i~HD is not engaged in. However, as an advisory body, i~HD provides guidance on AI development, regulatory compliance, and best practices to external institutions.

Furthermore, this policy must be read alongside any other policies that are developed as part of i~HD’s collaboration in any EU funded projects, collaborations or partnerships.

This policy should be read alongside the i~HD Data Protection and Transparency Policy.

The scope of AI use within i~HD falls into four categories:

For any inquiries or requests related to AI use within these categories, i~HD personnel must consult the DPO and CEO.

4. Policy ownership

The Data Protection Officer (DPO) and CEO oversee this policy. Any updates must be agreed upon between them and the core leadership team.

5. Guiding principles

The following guiding principles should be adhered to at all times. If you are uncertain about them in any way, please speak with the DPO in the first instance.

5.1. Legal compliance

  • Ensure full compliance with the EU AI Act, GDPR, and other relevant laws.
  • Conduct AI risk classification to categorise AI systems according to the EU AI Act (e.g., minimal, limited, high, or unacceptable risk).

5.2. Ethical AI

  • Promote fairness, transparency, accountability, and human oversight in AI usage.
  • Avoid AI systems that may lead to discrimination, bias, or human rights violations.

5.3. Data protection & privacy

  • Enforce privacy-by-design and by-default principles.
  • AI systems must comply with GDPR, ensuring data minimisation and lawful processing.
  • AI systems should not process personal data unless necessary and legally justified.

5.4. Transparency & explainability

  • AI-driven decisions must be understandable and explainable to users.
  • i~HD personnel and stakeholders must have mechanisms to challenge AI-driven decisions and request human intervention where necessary.

5.5. Security & robustness

  • Implement cybersecurity best practices to secure AI systems against threats.
  • Regularly audit AI models for performance, bias detection, and security risks.

6. Prohibited practices

The following AI-driven systems are strictly prohibited under the EU AI Act and will not be used within i~HD:

  • AI that employs subliminal or manipulative techniques beyond an individual’s awareness
  • AI that exploits vulnerabilities of individuals based on age, disability, or socio-economic status
  • AI used for social scoring or evaluating individuals based on inferred or predicted personal or personality characteristics
  • AI systems for predictive risk assessments of criminal behaviours
  • AI systems that expand facial recognition databases through untargeted internet or CCTV footage scraping
  • AI for emotion recognition in workplaces and educational settings
  • AI-driven biometric categorisation that infers sensitive attributes such as race, political opinions, trade union membership, religious or philosophical beliefs, or sexual orientation

7. Permitted uses and conduct

The following guidance applies:

  • When using any AI tools (including Generative AI systems such as ChatGPT), only anonymous and non-personal data may be used (for example, asking for a report to be structured or a letter to be composed, with names, addresses, and any personal attributes of any individual withheld).
  • AI Systems must not process commercially sensitive, confidential, or intellectual property (IP) information without explicit approval.
  • AI Systems should be used under licence agreements that protect confidentiality.
  • When using AI to conduct research for project-related work or academic purposes, please ensure that personal data is never included, especially when using newer, experimental systems.

Where there is a compelling case for access to additional AI tools or licences, please obtain prior approval from the CEO and DPO.

8. Use of transcription and AI in meetings

This section clarifies the position of i~HD on the use of AI-driven transcription and meeting assistance tools. At present, i~HD will only use Microsoft Transcription Services and will not yet deploy third-party vendor tools. The use of the Microsoft Transcription Services will occur under the i~HD data protection and transparency notice.

In the event that i~HD deploys a third-party tool, it will operate within the following guidelines:

  • AI transcription may only be used where all participants have provided explicit consent prior to recording or transcription
  • Transcripts generated by AI tools must be securely stored and access limited to authorised personnel only
  • Users must recognise the limitations of AI-generated transcripts, including potential errors or misinterpretations. AI-generated transcripts should not be relied upon as the sole record of a meeting without human verification by the meeting chair and/or their delegate
  • Confidential, commercially sensitive, or personal data must not be shared with external AI transcription services unless they comply with agreed intellectual property (IP), GDPR, and company data protection policies
  • AI meeting assistants capable of summarising discussions or taking actions based on meeting content (e.g. sending invitations or notifications to participants) must be periodically reviewed for accuracy and compliance by the participating i~HD team member or nominated partner

For further clarification or to seek approval for new AI meeting tools, please contact the CEO and DPO.

9. Use of AI for research and clinical research

This section outlines the responsible and ethical use of AI for research and clinical research within i~HD and across its activities, partnerships, and consortia. This policy must be read alongside any existing policies and procedures as agreed and outlined in the relevant consortium.

  • AI tools must be used in compliance with ethical research standards, GDPR, and applicable health regulations as established and implemented across partners
  • The use of AI in clinical research must be transparent, explainable, and subject to human oversight
  • AI-driven data analysis for research must prioritise data protection, patient rights and confidentiality, bias mitigation, and the accuracy and reliability of data and results
  • Any AI-based research initiatives involving health data must be reviewed and approved by relevant ethical and regulatory committees
  • AI systems used for research should undergo regular validation and risk assessments to ensure accuracy, reliability, and compliance with medical research guidelines
  • Researchers using AI tools must be trained in AI ethics, data governance, and responsible AI implementation

For approval of new AI research tools or methodologies, contact the CEO and DPO.

10. AI governance & risk management

i~HD will establish an AI Ethics & Compliance Committee to oversee AI activities, covering the following key areas of governance and risk management.

10.1. AI risk assessment

  • Conduct AI Impact Assessments (AIIA) before deploying high-risk AI systems.
  • Identify and mitigate risks related to AI-driven decisions, especially in regulated sectors such as healthcare, HR, and finance.

10.2. Human oversight

  • Implement human-in-the-loop mechanisms for high-risk AI applications.
  • Clearly assign AI oversight roles within the organisation.

10.3. Responsible AI procurement

  • Assess third-party AI vendors for compliance with EU regulations and ethical standards.
  • Require suppliers to provide transparency reports and risk assessments for their AI solutions.

11. AI training & awareness

  • Conduct mandatory AI ethics training for employees using AI tools.
  • Educate i~HD personnel and stakeholders on AI rights, risks, and responsible use.

12. Monitoring, reporting & compliance

  • Establish an AI Ethics & Compliance Committee to oversee AI activities.
  • Implement monitoring mechanisms to track AI performance, fairness, and compliance.
  • Provide an AI grievance mechanism for i~HD personnel or users affected by AI decisions.

13. Policy review & updates

  • This policy will be reviewed every six months, or as required by regulatory changes, new data processing approaches and techniques, or in response to a breach or investigation.
  • Updates will be communicated to all i~HD personnel and relevant stakeholders.

14. Contact information

For enquiries or AI compliance concerns, contact DPO@i-hd.eu.

Last updated 24 April 2025