
The International Security Ligue, through its Technology & Society Working Group, has issued a Statement on the Responsible Use of AI in Security Services, as well as practical implementation resources, including a deployment checklist.
The statement acknowledges both the power and promise of artificial intelligence and its potential for misuse.
AI increasingly powers beneficial security applications that serve as a catalyst for greater peace and safety in society. Fully unlocking AI’s transformative potential, however, requires driving innovation while also building and maintaining public trust. The Ligue recognises this imperative and the shared responsibility of governments and the security industry to manage it cooperatively.
Given the speed at which AI is developing, security firms need to establish ethical guardrails quickly in order to capitalise on AI’s benefits responsibly.
“AI is here to stay—and while global approaches still vary widely, this concise guide gives C-level leaders the core principles for using it responsibly, effectively, and with confidence,” said Stefan Huber, Director General of the Ligue.
Because AI use cases exist within a dynamic security ecosystem and carry varying degrees of risk, a flexible, risk-based approach is needed: one that empowers the security industry to advance peace and security while keeping AI deployments aligned with the highest standards of trustworthiness.
Developed by representatives of the world’s leading security companies, the Ligue’s Responsible Use principles are intended to help shape evolving legal frameworks so that they do not stymie innovation, and to ensure security providers can clearly understand their obligations and readily adapt their operations.
You can read the statement here.