3 Key Takeaways | Building Responsible AI Systems

Written by Matt Coolidge | Nov 9, 2023

On November 2, we co-hosted a webinar with leaders from IBM Consulting and watsonx. The session focused on building responsible AI systems – and how to navigate the hurdles that have prevented their widespread adoption to date. It also marked the first public preview of Brave.AI, a solution we're developing with IBM.

As the dust from the frenzied hype cycle that has defined AI for the past year-plus begins to settle, a few key realities are emerging:

  • Generative AI is by far the most popular application of AI today, and it drives the majority of current and near-term AI use cases.
  • Today's GenAI systems are a black box when it comes to data. Their non-deterministic approach relies on ingesting a vast amount of inputs, synthesizing them and producing outputs with no context as to why or how the system reached a given conclusion.
  • AI systems are consuming a massive and ever-growing amount of data and GPU compute. Training and re-parameterizing AI systems under the current approach is not going to be sustainable for most organizations over the long run, as AI margins inevitably tighten.

It’s with these factors in mind that we teamed up with IBM to build Brave.AI, a first-of-its-kind AI governance tool on watsonx. If you missed the webinar (which you can replay here), or if you simply prefer the tl;dr version, here are 3 key takeaways:

  1. Version control is not only possible, but critical, for AI systems. Its efficacy rests on the ability to accurately track what data goes in and out – something that's made possible by a tamper-proof and fully upgradable decentralized ledger. As AI economics settle into a more rational, less frenzied pattern, versioning will be a key step in significantly reducing the compute energy required to run AI systems.
  2. To tamper-proof data, look to automation and serialization. To trust an AI system is to trust its underlying training data. That's difficult today, given the highly opaque nature of AI training data sets. Trust can be considerably improved if those data sets are made more transparent and tamper-proof. The most cost-effective and secure way to do both is to ensure that all data entering the system is serialized, time-stamped and fully auditable – a task for which blockchains are uniquely suited.
  3. Hybrid blockchains will lead the way. Running every single data transaction through a blockchain – or any other database – would be prohibitively expensive. However, it's possible to store a transactional hash – i.e., a receipt confirming that a given transaction occurred when and where it claimed to – in a fully transparent, public blockchain environment, while storing extraneous and/or sensitive data in a private environment. In this way, you deliver a tamper-proof transactional history in a fully transparent manner, while meeting key economic and security constraints by keeping most of the data off-chain (see the sketch after this list). This is the same hybrid playbook that drove a massive spike in cloud adoption a decade ago.
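
To make the hybrid pattern concrete, here's a minimal Python sketch of the receipt idea from takeaways 2 and 3: a data record is serialized and hashed, the hash and a timestamp are anchored in a public ledger, and the full payload stays in a private store. Note that `public_ledger` and `private_store` are hypothetical in-memory stand-ins for illustration only – not Brave.AI or watsonx APIs; a real deployment would write the receipt to an actual blockchain.

```python
import hashlib
import json
import time

# Hypothetical stand-ins for the two environments described above: a public,
# append-only ledger that holds only hashed "receipts", and a private store
# that holds the full, potentially sensitive records off-chain.
public_ledger = []   # list of {"hash", "timestamp"} receipts
private_store = {}   # hash -> full serialized record

def anchor_record(record: dict) -> str:
    """Serialize a data record, hash it, and anchor the hash publicly."""
    serialized = json.dumps(record, sort_keys=True)  # canonical serialization
    digest = hashlib.sha256(serialized.encode()).hexdigest()
    public_ledger.append({"hash": digest, "timestamp": time.time()})
    private_store[digest] = serialized  # sensitive payload stays off-chain
    return digest

def verify_record(digest: str) -> bool:
    """Audit step: recompute the hash of the privately stored record and
    confirm it matches a receipt on the public ledger."""
    serialized = private_store.get(digest)
    if serialized is None:
        return False
    recomputed = hashlib.sha256(serialized.encode()).hexdigest()
    return any(r["hash"] == recomputed for r in public_ledger)

# Example: anchor one training-data record, then audit it.
h = anchor_record({"source": "dataset-v1", "rows": 10_000})
assert verify_record(h)  # tampering with the stored record would fail here
```

The same content hashes double as version identifiers: if a training set changes, its digest changes, which is the versioning mechanism described in takeaway 1.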

Here’s to a Brave new world ahead.

If you'd like to join the Brave.AI early access program and become one of its first users, register here.