AI – With Great Power Comes Great Responsibility



The final text of the new European legal framework on Artificial Intelligence (AI), known as the EU AI Act (the ‘Act’), was circulated to Member States on the 21st of January 2024 and was unanimously approved on the 2nd of February 2024. The obligations under the Act are phased in over a period of 36 months, with the key obligations to be put in place within 24 months.


With countries such as the Netherlands setting aside more than €200 million to invest in support, regulation and integration with AI in light of the new framework, it seems AI is set to change the way in which we do business.


As Amara’s Law famously states, we tend to overestimate the short-term consequences of new technologies and underestimate the long-term ones.


The Act, which takes the form of an EU Regulation, is designed to safeguard the rights, health, and safety of individuals while encouraging ethical innovation in AI systems. Its main goal is to ensure that AI technologies are developed and used responsibly and in an ethical manner.


The Act’s risk categories are broken down as follows:


  • Unacceptable Risk – Prohibited AI: cognitive behavioural manipulation of people or specific vulnerable groups; biometric identification and categorisation of people; and real-time and remote biometric identification systems, such as facial recognition.


  • High Risk – AI systems falling under product legislation or used in areas such as healthcare, recruitment, and law enforcement must meet mandatory compliance standards. These systems must adhere to monitoring, development, and transparency guidelines. Interestingly, this category also covers “deep fakes”, which the EU has decided will be dealt with under transparency requirements.


  • Low or Minimal Risk – anything not falling into the Unacceptable or High-Risk categories.


The Act sets out strict requirements for companies developing or utilising AI within the European Union, and those found to be in breach can find themselves on the receiving end of hefty penalties. Companies in violation may face fines of up to €15 million or 3% of their annual global turnover, rising to €35 million or 7% of annual global turnover for violations involving prohibited AI practices.


If you are in the High or Low-Risk categories, you should be looking ahead to protect your business or organisation against the risks associated with the new technology and, of course, aiming for compliance. Step one will involve conducting a gap analysis: identifying the areas where existing governance structures, policies, and processes need to be enhanced to meet the regulatory requirements, so that any regulators’ inquiries can be addressed efficiently, accurately, and to the standard expected.


Step two may then involve some or all of the following:


  1. Appoint an AI Officer or Team: Assign or develop the role of AI compliance officer within your business.


  2. Develop and Implement Programs and Policies:


  • Governance – Who will be flagged if there is a technical or ethical issue with AI? Whose role is it to ensure the systems comply with regulatory requirements? What is the internal reporting regime?


  • Compliance – Assess your GDPR obligations. Will AI breach your GDPR obligations if you are using it for things like metric analysis (e.g. for analysing outputs, or scribing platforms for call recording)? How will you ensure that inbuilt bias within the AI program doesn’t land you in hot water before the equality tribunal, the WRC, or other bodies?


  • Understanding and Training – Understand how your AI is operating and inform yourself about how it is making its decisions. Train staff using these tools so that they understand what to look for and how to work with AI, ensuring proper oversight.


  • Safeguard Your Business – Explore how you can anonymise some of your AI-created outputs to protect against corporate espionage and other risks. The ability to copyright, or obtain other IP rights in, generated output is not yet settled in this jurisdiction, and it is worth noting that in other jurisdictions it increasingly looks like this may not be possible, as there is no sufficient personhood in which the creativity vests, and the rights are therefore not assignable. The Working Group on AI has raised the possibility that, on this basis, companies may use a sham business transaction to gain access to a target company’s private information via the due diligence process, isolating the AI-generated items for their own purposes without any obligation to conclude the transaction. The official response at the time was reportedly that accommodating this possibility at such an early stage would cause prohibitive delays to AI’s roll-out.



Finally, ask questions and get advice. It is always better to be standing on the “can we do this?” side of the fence than the “help, we have done this!” side.

For further information contact Elaine McGrath

Author: Elaine McGrath