On January 20, Donald Trump was sworn in as the 47th President of the United States. His inauguration has prompted a raft of speculation about what it will mean for the rapidly growing artificial intelligence industry, and whether we should expect more (or less) federal regulation of AI from the United States government.
The consensus has long been that a Trump-led administration is unlikely to push new AI legislation through Congress – a point underscored by its repeal of the Biden executive order on AI safety within its first 24 hours in office. It has been equally clear, however, that the administration sees continued AI innovation in the U.S. and beyond as a strategic priority. That point was punctuated by its subsequent announcement of Stargate, a $500 billion fund focused on building out artificial intelligence infrastructure in the U.S., led by OpenAI, SoftBank, Oracle and MGX.
Whatever the merit of the speculation that has since arisen around the extent of the fund's working capital, the announcement is a powerful signal that this administration wants to accelerate AI development, and that it is already creating a strong incentive structure for companies focused on AI infrastructure.
In the absence of clear regulation governing AI best practices, where can companies turn for guidance?
With the administration unlikely to take the legislative route, it's increasingly clear that a standards-based approach is key to realizing effective and lasting AI solutions. As with the earlier rise of major, transformative technologies – cloud computing, database technology and even the internet itself among them – the AI industry must align around broadly accepted standards to achieve meaningful impact at scale.
When it comes to AI standards, one clearly rises above the rest in substance, scope and influence: ISO 42001. Published by the widely respected International Organization for Standardization (ISO), ISO 42001 specifies requirements for "establishing, implementing, maintaining and continually improving an Artificial Intelligence Management System within organizations." Its language has informed nearly all of the most substantive AI legislation passed or under review to date, including the EU AI Act and the host of state-led bills currently under consideration in the U.S.
Beyond the public sector, ISO 42001 is also being embraced by some of the most significant actors in the AI industry, including Anthropic, Cognizant and AWS, which in November 2024 became the first major cloud service provider to achieve ISO 42001 accreditation for its AI services.
For companies looking to scale and manage AI systems responsibly, complying with the ISO 42001 standard is a critically important step. The framework addresses issues such as ethical AI deployment, bias mitigation and human oversight, providing a clear roadmap for organizations navigating this complex space.
As the administration signals its intent to foster innovation without heavy-handed regulation, businesses must take proactive steps to align with globally accepted standards. Those who act quickly to adopt ISO 42001 will not only enhance their operational resilience but also position themselves as leaders in a rapidly evolving market.
Responsible innovation is key to realizing optimal AI outcomes – and the industry is aligning around this reality. It's less about stymieing progress and more about providing the "seatbelts" needed to mitigate the damage when accidents inevitably occur during development.
If your organization is ready to navigate AI standards and deploy responsible AI systems, we’re here to help. Whether you need guidance on achieving ISO 42001 compliance or support in building scalable AI solutions, let’s talk.