Framework to manage AI responsibly

The Australian Government has published the Voluntary AI Safety Standard to help organisations develop and deploy AI systems safely and reliably. For Australian companies planning to trade internationally, where regulatory regimes are already affecting users of AI, demonstrating compliance with an internationally recognised standard will help assure global markets that they are responsible and ethical users of AI.

Asad Rathore, Head of Professional Services for cybersecurity and AI consulting at Excite Cyber, notes that while the Australian Government examines AI regulation frameworks, organisations rushing to integrate AI into their products and services face heightened risks if they don’t follow recognised standards, potentially exposing businesses and the public to significant vulnerabilities.

“ISO 42001, the global standard for Artificial Intelligence Management Systems, offers a clear framework to manage AI responsibly, securely and transparently. By adding ISO 42001 to AI governance frameworks, forward-thinking organisations can ensure they are well prepared for future strengthening of AI regulation in our country,” Asad advises.

“Australian organisations should act now to prepare to stay compliant and build trust and resilience in an increasingly AI-powered economy,” Asad says. “As more AI systems are built and deployed, it will become increasingly important for these systems to be built to a minimum acceptable standard to ensure the risk of bias is reduced and that information security is not compromised,” he warns.

While organisations can be audited against ISO standards, certification is not mandatory. Any Australian organisation, large or small, can adopt ISO 42001 to reduce the risk of security issues in the AI systems it uses.

Australian organisations are encouraged to undertake the following four steps:

1 – Prepare for regulation

Governments across the world are contemplating different regulatory approaches. The European Union is taking a strong lead in regulating AI, while the United States appears to be taking a more hands-off approach. The Australian Government will adopt standards to help organisations ensure stronger AI protections. Complying with ISO 42001 will help organisations prepare for tighter regulation when it comes.

2 – Mitigate risks before they escalate

AI is quickly becoming embedded in many day-to-day business activities. ISO 42001 helps to identify points of risk and recommends safeguards for them. It covers areas such as autonomous decision-making; decision-making that lacks transparency and explainability; AI that requires specialised administration and oversight beyond typical IT systems; data analysis, insight and machine learning; and situations where using AI in place of human-coded logic changes the way systems are developed, justified and deployed.
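To make the transparency and oversight point concrete, the sketch below shows one way an organisation might record automated decisions so they can be reviewed later. It is a minimal, hypothetical illustration rather than a control prescribed by ISO 42001; the `DecisionRecord` fields and the `log_decision` helper are assumptions made for this example.

```python
# Minimal sketch of an AI decision audit log, assuming a JSON-lines file
# is an acceptable store. ISO 42001 does not prescribe this format; the
# fields shown are illustrative of the traceability a safeguard might need.
import json
import hashlib
from dataclasses import dataclass, asdict
from datetime import datetime, timezone


@dataclass
class DecisionRecord:
    model_name: str          # which AI system produced the decision
    model_version: str       # exact version, so behaviour can be reproduced
    input_hash: str          # hash of the inputs (avoids storing raw personal data)
    decision: str            # the outcome the system returned
    rationale: str           # human-readable explanation of why
    reviewed_by: str | None  # set once a person checks the decision
    timestamp: str


def log_decision(path: str, model_name: str, model_version: str,
                 inputs: dict, decision: str, rationale: str) -> None:
    """Append one auditable record per automated decision."""
    record = DecisionRecord(
        model_name=model_name,
        model_version=model_version,
        input_hash=hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()).hexdigest(),
        decision=decision,
        rationale=rationale,
        reviewed_by=None,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")


# Example: record a hypothetical loan-screening decision for later review.
log_decision("decisions.jsonl", "loan-screening", "2.3.1",
             {"applicant_id": 42, "income": 85000},
             decision="refer_to_human",
             rationale="Confidence below 0.7 threshold")
```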

These controls are critical because the behaviour of AI systems can change over time. As they assimilate more data and their algorithms are refined, AI applications can deliver results that are not only unexpected but potentially false or fabricated.
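One practical way to watch for that kind of change is to compare the distribution of a model's recent outputs against a baseline captured at deployment. The sketch below does this with a population stability index (PSI); the PSI approach, the 0.2 alert threshold and the function names are illustrative assumptions, not requirements of the standard.

```python
# Minimal sketch of an output-drift check: compare the distribution of a
# model's recent scores against a baseline snapshot taken at deployment.
# The PSI formula and the 0.2 alert threshold are common conventions,
# not requirements of ISO 42001.
import math
from typing import Sequence


def population_stability_index(baseline: Sequence[float],
                               recent: Sequence[float],
                               bins: int = 10) -> float:
    """PSI between two score samples; higher means more drift."""
    lo = min(min(baseline), min(recent))
    hi = max(max(baseline), max(recent))
    width = (hi - lo) / bins or 1.0

    def bucket_shares(values: Sequence[float]) -> list[float]:
        counts = [0] * bins
        for v in values:
            idx = min(int((v - lo) / width), bins - 1)
            counts[idx] += 1
        # Small floor avoids log(0) for empty buckets.
        return [max(c / len(values), 1e-6) for c in counts]

    b, r = bucket_shares(baseline), bucket_shares(recent)
    return sum((ri - bi) * math.log(ri / bi) for bi, ri in zip(b, r))


# Example: scores logged at deployment vs. scores from the last week.
baseline_scores = [0.12, 0.35, 0.40, 0.55, 0.61, 0.72, 0.80, 0.88]
recent_scores = [0.45, 0.52, 0.66, 0.71, 0.79, 0.83, 0.90, 0.95]

psi = population_stability_index(baseline_scores, recent_scores)
if psi > 0.2:  # a commonly used alert threshold, assumed here
    print(f"Drift detected (PSI={psi:.2f}); trigger a model review.")
```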

3 – Ensure responsible and ethical AI use

There is an increased focus on the responsible use of AI, something ISO 42001 specifically addresses. For example, the standard recommends establishing guidelines and principles for the ethical use of AI covering matters such as the societal impacts of AI applications, so they align with ethical standards and values. This helps build trust among stakeholders and addresses concerns around the ethical implications of AI technologies.

4 – Build and maintain trust in AI use

While AI’s use has grown exponentially over the last couple of years, trust remains a concern. Organisations that adhere to ISO 42001 can build and maintain a positive reputation as the standard signals a commitment to responsible AI practices.

As Australian companies compete against international and domestic rivals, they continually seek new ways to leverage emerging technologies, but the ethical and wider impacts of those technologies should be addressed now, before it is too late. Adopting ISO 42001 will also assist Australian organisations seeking to deploy AI internationally in export markets where trust in AI governance is critical.

ISO 42001 provides a structured framework for AI innovation that encourages the exploration and implementation of AI technologies with guidelines that balance innovation and risk management. This is done through a systematic approach that helps organisations identify and leverage opportunities for improvement and advancement in their AI applications.
