Last Updated 10th March 2025
Introduction

This AI Policy outlines the principles, responsibilities, and compliance measures for the ethical and lawful use of artificial intelligence (AI) within i~HD.

The policy aligns with the EU AI Act, General Data Protection Regulation (GDPR), the EU Ethics Guidelines for Trustworthy AI and other relevant European regulations to ensure responsible AI deployment.

Policy Rationale

This policy is designed to guide i~HD employees and Freelance Consultants on the responsible use of AI tools in their work and across all activities in which i~HD participates, including collaborations in research consortia.

While i~HD does not currently license or provide AI software to clients, AI tools may be utilised by employees and Freelance Consultants for routine tasks that support their professional responsibilities when working on behalf of i~HD.

The policy clarifies the organisation’s position on the use of AI Systems as defined in the EU AI Act 2024. It applies to staff and Freelance Consultants; where specific measures apply to either group, they will be clearly articulated.

The deployment of AI in health data innovation must align with i~HD’s commitment to patient safety, fairness, and equitable access to healthcare advancements.

Scope

The policy governs the use of AI-powered tools and software within i~HD. It does not cover the development of AI systems, which i~HD is not engaged in. However, as an advisory body, i~HD provides guidance on AI development, regulatory compliance, and best practices to external institutions.

Furthermore, this policy must be read alongside any other policies that are developed as part of i~HD’s collaboration in any EU-funded projects, collaborations or partnerships.

This policy will form part of the i~HD Data Protection and Transparency Policy.

The scope of AI use within i~HD falls into four categories:

  1. Use of AI by Employees in Daily Work
       • Employees may use AI-powered tools to support their tasks, provided that only approved AI tools are utilised. These will be available and updated regularly on i~HD’s intranet.
       • The use of AI must align with organisational data protection policies and ethical AI principles.
  2. Use of AI by Employees in Research Activities
       • Any AI tool employed for research must be disclosed to the DPO and CEO in line with our existing procedures for recording data processing activity.
       • AI implementation (as opposed to AI development) within research activities is permitted under compliance guidelines.
       • Research projects must ensure that AI tools align with data protection, security, and transparency standards; this policy operates alongside those standards and policies as outlined.
  3. Use of AI in Educational Activities
       • AI tools used for educational purposes, including training or knowledge dissemination, must be disclosed to the DPO and CEO.
       • AI-driven educational tools must adhere to the ethical AI principles outlined in this policy.
  4. Use of AI in Collaborative Projects
       • Please note that this policy relates to i~HD and its operations. As a collaborator on European and international projects, i~HD recognises that our partners will operate their own policies and will work effectively with them.
       • i~HD will therefore provide guidance to our employees and consultants on a case-by-case basis, in line with the principles outlined in this policy, when engaging in collaborative projects and subject to collaboration agreements.

For any inquiries or requests related to AI use within these categories, staff must consult the DPO and CEO.

Policy Ownership

The Data Protection Officer (DPO), Chief Business Officer, and CEO oversee this policy. Any updates must be agreed upon between them and the core leadership team.

Guiding Principles

The following guiding principles should be adhered to at all times. If you are uncertain about them in any way, please speak with the DPO in the first instance.

I. Legal Compliance

  • Ensure full compliance with the EU AI Act, GDPR, and other relevant laws.
  • Conduct AI risk classification to categorise AI systems according to the EU AI Act (e.g., minimal, limited, high, or unacceptable risk).

II. Ethical AI

  • Promote fairness, transparency, accountability, and human oversight in AI usage.
  • Avoid AI systems that may lead to discrimination, bias, or human rights violations.

III. Data Protection & Privacy

  • Enforce privacy-by-design and by-default principles.
  • AI systems must comply with GDPR, ensuring data minimisation and lawful processing.
  • AI systems should not process personal data unless necessary and legally justified.

IV. Transparency & Explainability

  • AI-driven decisions must be understandable and explainable to users.
  • Employees and stakeholders must have mechanisms to challenge AI-driven decisions and request human intervention where necessary.

V. Security & Robustness

  • Implement cybersecurity best practices to secure AI systems against threats.
  • Regularly audit AI models for performance, bias detection, and security risks.

Prohibited Practices

The following AI-driven systems are strictly prohibited under the EU AI Act and will not be used within i~HD:

  • AI that employs subliminal or manipulative techniques beyond an individual’s awareness;
  • AI that exploits vulnerabilities of individuals based on age, disability, or socio-economic status;
  • AI used for social scoring or evaluating individuals based on inferred or predicted personal or personality characteristics;
  • AI systems for predictive risk assessments of criminal behaviours;
  • AI systems that expand facial recognition databases through untargeted scraping of the internet or CCTV footage;
  • AI for emotion recognition in workplaces and educational settings;
  • AI-driven biometric categorisation that infers sensitive attributes such as race, political opinions, trade union membership, religious or philosophical beliefs, or sexual orientation.

Permitted Uses and Conduct

Authorised AI Tools and systems can be accessed via this link: <<LINK>>.

Below is some guidance:

  • When using any AI tools (including Generative AI systems such as ChatGPT), only anonymous and non-personal data may be used (for example, asking for a report to be structured, or for a letter to be composed where names, addresses, and any personal attributes of individuals are withheld).
  • AI Systems must not process commercially sensitive, confidential, or intellectual property (IP) information without explicit approval.
  • AI Systems should be used under licence agreements that protect confidentiality.
  • When using AI to conduct research for project-related work or academic purposes, please ensure that personal data is never included, especially when using newer, experimental systems.

Where there is a compelling case for access to additional AI tools or licences, please obtain prior approval from the CEO and DPO.

Use of Transcription and AI in Meetings

This section clarifies i~HD’s position on the use of AI-driven transcription and meeting-assistance tools. At present, i~HD will use only Microsoft Transcription Services and will not yet deploy third-party vendor tools. Use of the Microsoft Transcription Services will occur under the i~HD data protection and transparency notice.

In the event that i~HD deploys a third-party tool, it will work within the following guidelines:

  • AI transcription may only be used where all participants have provided explicit consent prior to recording or transcription.
  • Transcripts generated by AI tools must be securely stored, with access limited to authorised personnel only.
  • Users must recognise the limitations of AI-generated transcripts, including potential errors or misinterpretations. AI-generated transcripts should not be relied upon as the sole record of a meeting without human verification by the meeting chair and/or their delegate.
  • Confidential, commercially sensitive, or personal data must not be shared with external AI transcription services unless they comply with agreed IP, GDPR and company data protection policies.
  • AI meeting assistants capable of summarising discussions or taking actions (e.g. sending invitations or notifications to participants) based on meeting content must be periodically reviewed for accuracy and compliance by the participating i~HD team member or a nominated partner.

For further clarification or to seek approval for new AI meeting tools, please contact the CEO and DPO.

Use of AI for Research and Clinical Research

This section outlines the responsible and ethical use of AI for research and clinical research within i~HD and across activities, partnerships and consortia. This policy must be read alongside any existing policies and procedures as agreed and outlined in the relevant consortium.

  • AI tools must be used in compliance with ethical research standards, GDPR, and applicable health regulations as established and implemented across partners.
  • The use of AI in clinical research must be transparent, explainable, and subject to human oversight.
  • AI-driven data analysis for research must prioritise data protection, patient rights and confidentiality, bias mitigation, and the accuracy and reliability of data and results.
  • Any AI-based research initiatives involving health data must be reviewed and approved by relevant ethical and regulatory committees.
  • AI systems used for research should undergo regular validation and risk assessments to ensure accuracy, reliability, and compliance with medical research guidelines.
  • Researchers using AI tools must be trained in AI ethics, data governance, and responsible AI implementation.

For approval of new AI research tools or methodologies, contact the CEO and DPO.

AI Governance & Risk Management

I. AI Risk Assessment

  • Conduct AI Impact Assessments (AIIA) before deploying high-risk AI systems.
  • Identify and mitigate risks related to AI-driven decisions, especially in regulated sectors such as healthcare, HR, and finance.

II. Human Oversight

  • Implement human-in-the-loop mechanisms for high-risk AI applications.
  • Clearly assign AI oversight roles within the organisation.

III. Responsible AI Procurement

  • Assess third-party AI vendors for compliance with EU regulations and ethical standards.
  • Require suppliers to provide transparency reports and risk assessments for their AI solutions.

AI Training & Awareness

  • Conduct mandatory AI ethics training for employees using AI tools.
  • Educate staff and stakeholders on AI rights, risks, and responsible use.

Monitoring, Reporting & Compliance

  • Establish an AI Ethics & Compliance Committee to oversee AI activities.
  • Implement monitoring mechanisms to track AI performance, fairness, and compliance.
  • Provide an AI grievance mechanism for employees or users affected by AI decisions.

Policy Review & Updates

  • This policy will be reviewed every six months, or as required by regulatory changes, new data processing approaches and techniques, or in response to a breach or investigation.
  • Updates will be communicated to all employees and relevant stakeholders.

Contact Information

For enquiries or AI compliance concerns, contact DPO@i-hd.eu

i~HD