Data and analytics company GlobalData has released an artificial intelligence (AI) governance framework to help companies implement AI responsibly and safely, saying companies that fail to adopt the highest standards of AI governance face substantial reputational and financial risk.
“Risk can originate from different sources and multiply as AI systems are implemented,” said GlobalData analyst Laura Petrone. “Companies investing in responsible AI early will have an advantage over their competitors. They can not only show that they are good corporate citizens but also actively prepare for upcoming regulations.”
GlobalData says AI risks range from copyright infringement and data privacy breaches to actual physical harm. Increased use of AI will also reinforce and exacerbate many of society’s biggest challenges, including bias, discrimination, misinformation, and other online harms.
There are currently no global regulatory standards for AI, so it can be difficult for CEOs to know what constitutes best-practice AI governance. Governments instead leave companies to voluntarily embed responsible AI values and practices into their AI strategies. Responsible AI is an approach to developing AI and managing AI-related risks from an ethical and legal perspective.
“While most corporate executives will outsource AI provision to tech vendors, often Big Tech, they must be mindful that their company’s reputation will suffer if something goes wrong,” said Petrone. “Therefore, if you are a senior executive deploying AI systems designed by a third-party tech vendor, the onus is on you to ensure that your business is using AI responsibly.”