In an interview with SiliconANGLE, Casper Labs Chief Executive Mrinal Manohar said that starting in May of last year, Casper saw a growing need for convergence between AI and blockchain technology. That's especially important, he said, as more companies have discovered the need to govern their model behavior and comply with regulatory standards being set by government bodies.
“There’s been a lot of cynical approaches we think like sprinkle some AI ‘fairy dust’ on blockchain with AI chatbots to understand the ecosystem and so on,” Manohar said. “But really the two big insights were, one, AI governance will become a really big thing because of government and regulatory needs and everyone feels AI must become more responsible. And second, this is one of the rare instances where an industry is emerging and the tech stack isn’t well-defined.”
To make this work, Casper Labs began working with IBM to build a software-as-a-service product that combines IBM's watsonx.governance with the Casper blockchain platform, giving companies an auditable solution for AI model control.
Blockchain technology provides a data tracking layer using distributed ledgers and cryptography that can create a tamper-proof historical record of changes to an AI model. Every time a modification is made to a model version, a transaction can be added to the blockchain record, and that record can serve as a "reset point" if anything goes wrong with the model later.
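The idea of recording each model change as a linked, tamper-evident entry can be sketched in miniature. This is not the Casper or watsonx.governance API; it is a minimal, hypothetical illustration of hash-chaining change records so that altering any earlier entry breaks the chain:

```python
import hashlib
import json
import time

def record_change(ledger, model_id, change):
    """Append a model-change record whose hash links to the previous entry."""
    prev_hash = ledger[-1]["hash"] if ledger else "0" * 64
    entry = {
        "model_id": model_id,
        "change": change,
        "timestamp": time.time(),
        "prev_hash": prev_hash,
    }
    # Hash the entry contents; any later tampering changes this digest
    # and invalidates every subsequent prev_hash link.
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    ledger.append(entry)
    return entry

ledger = []
record_change(ledger, "model-a", "trained on dataset v1")
record_change(ledger, "model-a", "fine-tuned on dataset v2")

# Each entry points back at the previous one's hash.
assert ledger[1]["prev_hash"] == ledger[0]["hash"]
```

A real blockchain adds distributed consensus and replication on top of this linking, which is what makes the record tamper-resistant rather than merely tamper-evident.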
“Given that blockchain is tamper-resistant, completely time-synched and can be completely automated, AI governance can be done right, be highly automatable and highly certifiable right out of the gate,” said Manohar.
IBM's watsonx.governance toolkit delivers software that helps organizations automate and manage their AI models, addressing the regulatory requirements of training and running models along with the ethical and legal concerns around generative AI and machine learning. By combining those capabilities with a tamper-proof historical record of every change in a model's lifetime, it's possible to reduce the risks and issues that might arise.
Given that AI models can drift as new data is trained into them from one version to the next, it becomes even more important to pinpoint transparently when an event happened and what change caused it. Better still, it's possible to go back to the exact moment before the problem began.
“It gives you real version control,” said Manohar. “Since your blockchain gives you state at every moment in time, it’s very easy to just reboot or revert back to a previous state.”
This is especially important for anomalies such as hallucinations, in which an AI model begins to confidently make false statements far more often than before, a failure mode that conversational chatbots and large language models are prone to. The same could be done if developers discovered that the model had been trained on sensitive data, such as personally identifiable information or copyrighted works, that needs to be excised. Reverting to the version just before that data was added could be as simple as identifying the transaction that introduced it.
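The revert process described above can be sketched as a search over the change ledger for the last good entry before an offending one. Again, this is a hypothetical illustration, not the actual product's interface:

```python
def revert_point(ledger, is_offending):
    """Return the index of the last entry before the first offending change."""
    for i, entry in enumerate(ledger):
        if is_offending(entry):
            return i - 1  # state just before the problem was introduced
    return len(ledger) - 1  # nothing offending found; latest state is fine

ledger = [
    {"version": 1, "change": "base training"},
    {"version": 2, "change": "added dataset containing PII"},
    {"version": 3, "change": "fine-tuned on support transcripts"},
]

# Find where the sensitive data entered, then restore the state before it.
idx = revert_point(ledger, lambda e: "PII" in e["change"])
assert idx == 0  # the state recorded at version 1 is the rollback target
```

Because the ledger records state at every moment in time, the rollback target is unambiguous, which is the "real version control" Manohar describes.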
Using the same blockchain technology, big organizations can also implement access controls and permissions on who can edit or train AI models. This will allow users to set which individuals or groups of individuals must participate before training data can be input, which Manohar said will be key for large organizations to control sensitive datasets for AI.
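The multi-party approval requirement can be illustrated with a small gate that refuses training data until every required approver has signed off. The class and approver names here are hypothetical, not part of the Casper or IBM offering:

```python
class TrainingGate:
    """Require sign-off from every listed approver before data is accepted."""

    def __init__(self, required_approvers):
        self.required = set(required_approvers)
        self.approvals = set()

    def approve(self, user):
        # Only approvals from designated users count toward the quorum.
        if user in self.required:
            self.approvals.add(user)

    def can_ingest(self):
        # Training data may be input only once all required parties approve.
        return self.required <= self.approvals

gate = TrainingGate({"data-steward", "compliance-officer"})
gate.approve("data-steward")
assert not gate.can_ingest()   # still waiting on compliance sign-off
gate.approve("compliance-officer")
assert gate.can_ingest()       # all required parties have participated
```

On a blockchain, each approval would itself be a recorded transaction, so the audit trail shows not just that data was ingested but who authorized it.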
Manohar said that the first customers for this product would probably come from a variety of highly regulated and sensitive industries, especially financial services, insurance and similar fields where regulations are strict and legal liability from hallucinations would be an issue. The same goes for supply chain companies and any industry using AI for "track and trace," where aberrations need a solid audit trail to explain why a particular model output was relied upon.
“An AI system’s efficacy is ultimately as good as an organization’s ability to govern it,” said Shyam Nagarajan, global partner, blockchain and responsible AI leader at IBM Consulting. “Companies need solutions that foster trust, enhance explainability, and mitigate risk. We’re proud to bring IBM Consulting and technology to support Casper Labs in creating a new solution offering an important layer to drive transparency and risk mitigation for companies deploying AI at scale.”
The service is currently in active beta, and Casper Labs is using the program to gather feedback from enterprise users to build the most useful solution possible. The company said the service is on track to launch in the third quarter of this year.