With the formal passage of the EU AI Act earlier this year, coupled with Colorado’s SB 205 being signed into law just last week, AI governance has emerged as arguably the single most critical priority for any organization using AI today. These pieces of legislation establish the first significant global precedents for AI regulation, outlining standards for responsible AI usage that, if overlooked, could expose offenders to reputational harm and significant financial penalties.
At this critical moment in AI’s evolution—where concerns about the technology and emerging regulations are pushing against continued innovation—we sat down with globally renowned AI policy expert Kay Firth-Butterfield to get her view on the evolution of regulations and AI governance as a whole.
A leading advocate for the responsible and trustworthy design, development, and use of AI, Firth-Butterfield—a lawyer, professor, TIME Impact honoree, and AI ethics and governance expert—shared her thoughts on the Act, its approach to fostering responsible AI practices, and its far-reaching ramifications for businesses worldwide.
In the past, we’ve observed EU legislation, notably the GDPR, exert significant global influence, largely because companies worldwide had to prepare for GDPR compliance. Countries unable to devote extensive resources to privacy considerations often adopted variants of the GDPR or incorporated sections of it into their own regulations. While it remains uncertain whether the EU AI Act will yield a comparable impact, the potential for global reverberations is evident.
Additionally, no matter where they operate, multinational companies must comply with the EU AI Act if they are doing business with EU citizens. The Act is a significant indication that responsible AI can no longer be ignored or protested. This is a clarion call to everyone using AI: we must attend to AI governance. There’s no feigning ignorance from here on out.
Responsible AI means putting parameters around the use of AI. Just like product legislation mandates safety features such as seatbelts in cars, the EU AI Act establishes regulations to ensure AI systems operate safely and responsibly. It addresses various aspects of AI governance to promote responsible AI practices across different levels.
The Act categorizes AI applications into different levels, each necessitating corresponding measures. It outright prohibits highly risky applications and mandates stringent safety measures for others. Lower-risk applications entail fewer regulatory requirements. For example, employing AI in hiring processes would require more safeguards compared to its use in optimizing electricity usage.
Across all industries, there’s a pressing need to reassess AI utilization, whether in HR, marketing, or product development. Some sectors, like finance, insurance, and healthcare, predominantly face high-risk scenarios where biased AI tools could profoundly impact individuals’ lives and well-being. However, even seemingly low-risk applications carry weight; a malfunctioning customer service chatbot, for instance, could spark a lawsuit.
I think if you’re a CEO, you must be vigilant. Responsibility for AI governance extends beyond legal and risk departments; it’s a collective responsibility of the C-suite. Boards, too, are increasingly required to exercise sound oversight of AI use (see, for example, ESMA’s statement). Starting with an assessment of current AI usage against the EU AI Act is crucial. Any misalignment needs swift action and strategic adjustments.
In the coming years, I think that the demand for AI governance solutions will skyrocket. If this isn’t the case, something is going very, very wrong with companies’ risk assessment strategies. Non-compliance with the EU AI Act entails hefty fines and risks damaging brand reputation. Companies must weigh the consequences: either accept the financial and reputational risks or implement necessary tools and strategic AI deployment practices to ensure compliance and mitigate penalties.
To hear more from Firth-Butterfield, alongside Prove AI's CEO Mrinal Manohar and IBM Consulting Executive Partner Shyam Nagarajan, register for our upcoming webinar on complying with AI regulations.