European Parliament passes AI Act: world’s first AI law


Written by Peter Sandkuijl, VP, EMEA Engineering and Evangelist, Check Point Software Technologies.

The rapid proliferation of AI, particularly Generative AI, has brought immense opportunity as well as significant risks.

The new EU AI Act aims to establish controls and risk gradations for AI usage; practices such as automatically recognising every face in a room and analysing people's emotions, facial expressions and ethnic origin are precisely the kind of concern it addresses.

It is not about stifling innovation, but rather about creating a legal framework that aligns with democratic values and safeguards the rights of EU citizens.

This is the first global law attempting to address the risks that AI may introduce and to mitigate the danger of AI applications infringing upon human rights or perpetuating biases. Whether it is CV scanning with inherent gender bias, pervasive surveillance of public spaces with AI-powered cameras, or invasive medical data analysis affecting your health insurance, the EU AI Act seeks to set clear boundaries for AI deployment, so that vendors and developers have guidelines and guardrails. With that in place, the “good guys” will be able to see the demarcation line, and authorities will have the tools to prosecute those who cross it.

Transparency is a central tenet of the EU’s approach, especially concerning Generative AI. By mandating transparency in the AI training process, the legislation aims to expose potential bias and AI mistakes before they are accepted as truth. Let us not forget that AI is not always correct; on the contrary, it makes more mistakes than we would tolerate from virtually any other technology today, and transparency therefore becomes a critical tool in mitigating its shortcomings.

Initial attention will fall on the hefty fines the Act imposes, but those should not be the main focus: as the law takes effect, it will still be tested in the courts, setting precedents for future offenders. We need to understand that this will take time to materialise, which may actually be helpful, though it is not an end goal in itself.

The EU AI Act has several cybersecurity implications, both directly and indirectly affecting the landscape:

Stricter Development and Deployment Guidelines: AI developers and deployers will need to adhere to strict guidelines, ensuring that AI systems are built with security by design. This means incorporating cybersecurity measures from the ground up, focusing on secure coding practices, and ensuring AI systems are resilient against attacks.

Increased Transparency: The Act mandates transparency in AI operations, especially for high-risk AI applications. This could mean more detailed disclosures about the data used for training AI systems, the decision-making processes of AI, and the measures taken to ensure privacy and security. Transparency aids in identifying vulnerabilities and mitigating potential threats.

Enhanced Data Protection: Given that AI systems often rely on vast datasets, the Act’s emphasis on data governance will necessitate enhanced data protection measures. This includes ensuring the integrity and confidentiality of personal data, a core aspect of cybersecurity.

Accountability for AI Security Incidents: The Act’s provisions likely extend to holding organisations accountable for security breaches involving AI systems. This could mean more rigorous incident response protocols and the necessity for AI systems to have robust mechanisms to detect and respond to cybersecurity incidents.

Mitigation of Bias and Discrimination: By addressing the risks of bias and discrimination in AI systems, the Act indirectly contributes to cybersecurity. Systems that are fair and unbiased are less likely to be exploited through their vulnerabilities. Ensuring AI systems are trained on diverse, representative datasets can reduce the risk of attacks that exploit biased decision-making processes.

Certification and Compliance Audits: High-risk AI systems will need to undergo rigorous testing and certification, ensuring they meet the EU’s standards for safety, including cybersecurity. Compliance audits will further ensure that AI systems continuously adhere to these standards throughout their lifecycle.

Prevention of Malicious AI Use: The Act aims to prevent the use of AI for malicious purposes, such as creating deepfakes or automating cyberattacks. By regulating certain uses of AI, the Act contributes to a broader cybersecurity strategy that mitigates the risk of AI being used as a tool in cyber warfare and crime.

Research and Collaboration: The Act could spur research and collaboration in the field of AI and cybersecurity, encouraging the development of new technologies and strategies to secure AI systems against emerging threats.

The rapid pace of AI adoption demonstrates that legislation alone cannot keep up, and the technology is so powerful that it can and may gravely affect industries, economies and governments. My hope for the EU AI Act is that it will serve as a catalyst for broader societal discussions, prompting stakeholders to consider not only what the technology can achieve but also what its effects may be.

By establishing clear guidelines and fostering ongoing dialogue, it paves the way for a future where AI serves more as a force for good, underpinned by ethical considerations and societal consensus.
