
Everything You Need to Know from Nvidia GTC ‘24

Abir “Bandy” Bandyopadhyay

SVP of Product Marketing

Last week at the San Jose Convention Center, leading researchers, engineers, and developers converged at NVIDIA’s GPU Technology Conference (GTC) to explore the latest in computing and AI. The event welcomed attendees both online and in person, and hosted over 900 sessions, 300 exhibits, and 20 technical workshops. Discussion topics ranged from accelerated computing to video analytics to large language model (LLM) customization.

There was a lot covered over the four days, but a few highlights stuck out to us on the ground.

The world’s most powerful chip is here 

If you’ve played a video game or used an application that leverages machine learning, chances are you’ve already crossed paths with one of NVIDIA’s cutting-edge graphics processing units (GPUs). Thanks to parallel processing, GPUs can break complex problems down into thousands of smaller tasks and work on them all at once. Now, these capabilities will be amplified with stronger computing and better energy efficiency.
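
To make that data-parallel idea concrete, here is a minimal sketch in Python, using NumPy on a CPU as a stand-in for GPU-style execution. The array size and timing approach are our own illustration; a real GPU workload would use CUDA or a GPU array library instead.

```python
import time
import numpy as np

# A toy workload: apply the same arithmetic to 10 million values.
# On a GPU, each element could be handled by a separate thread;
# NumPy's vectorized form expresses the same "one small task per
# element" decomposition, just executed on the CPU here.
values = np.random.rand(10_000_000)

# Sequential version: one value at a time.
start = time.perf_counter()
sequential = [v * 2.0 + 1.0 for v in values]
print(f"loop:       {time.perf_counter() - start:.2f}s")

# Data-parallel version: one operation over the whole array.
start = time.perf_counter()
vectorized = values * 2.0 + 1.0
print(f"vectorized: {time.perf_counter() - start:.2f}s")
```

The vectorized form is exactly the shape of problem GPUs accelerate: thousands of identical, independent operations that can be spread across many cores at once.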

During his keynote presentation, NVIDIA CEO Jensen Huang announced the launch of the Blackwell platform, built around what he dubbed the “world’s most powerful chip.” The scale is massive: in its rack-scale GB200 NVL72 configuration, 36 Grace CPUs are paired with 72 Blackwell GPUs, and those systems can be used alone or connected in groups of hundreds, or even thousands, of units, all operating as a single, unified system. Its computing power is unprecedented: NVIDIA says Blackwell delivers up to four times the training performance of its predecessor, Hopper.

Major cloud service providers like AWS, Microsoft Azure, and Google Cloud have already announced plans to offer Blackwell-based infrastructure for deep learning and other cloud computing services. Because of this, more companies will likely be experimenting with (or doubling down on) AI.

Companies can now fine-tune real-world workflows in virtual worlds

NVIDIA also announced new capabilities for its Omniverse platform, which companies can use to accelerate the creation of digital worlds. By connecting and simulating across a host of 3D tools and applications, users can even replicate real-world workflows in a virtual environment to experiment with new ideas or processes.

This brings us to the topic of Synthetic Data Generation (SDG), which Huang referenced in his keynote as he demoed NVIDIA’s Omniverse platform. He walked through a digital twin of a 100,000-square-foot warehouse, a simulation environment created to test out sensor stacks and autonomous mobile robots (AMRs) for warehouse operations. As a practical alternative to collecting data in real-world scenarios, SDG enables researchers to create artificial datasets to train AI.

Imagine having to train the algorithms of a self-driving car, for example. AI systems would have to be exposed to dangerous driving situations like extreme weather events, complex intersections, or unpredictable behaviors of other drivers. With SDG, however, researchers can generate scenarios that accurately emulate real-world conditions while ensuring safety and scalability.
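
As a minimal illustration of that idea (and nothing close to what Omniverse actually renders), the sketch below randomly samples driving-scenario parameters into a small labeled dataset. The weather categories, field names, and risk rule are all invented for this example.

```python
import csv
import random

# Hypothetical scenario parameters; a real SDG pipeline would render
# full sensor data (camera, lidar, radar), not just a parameter table.
WEATHER = ["clear", "rain", "fog", "snow"]
INTERSECTIONS = ["4-way", "roundabout", "T-junction", "highway merge"]

def sample_scenario(rng: random.Random) -> dict:
    """Draw one synthetic driving scenario."""
    weather = rng.choice(WEATHER)
    return {
        "weather": weather,
        "intersection": rng.choice(INTERSECTIONS),
        "n_vehicles": rng.randint(0, 30),
        "n_pedestrians": rng.randint(0, 15),
        # Crude difficulty label so the dataset can oversample rare,
        # dangerous conditions that are hard to capture on real roads.
        "high_risk": weather in ("fog", "snow") or rng.random() < 0.1,
    }

rng = random.Random(42)
scenarios = [sample_scenario(rng) for _ in range(1_000)]

with open("synthetic_scenarios.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=scenarios[0].keys())
    writer.writeheader()
    writer.writerows(scenarios)

print(f"high-risk share: {sum(s['high_risk'] for s in scenarios) / len(scenarios):.0%}")
```

A real SDG pipeline would generate full sensor streams from a physics-based simulator, but the principle is the same: rare or dangerous conditions can be sampled as often as the training set needs them.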

From talking cars to screening cancer, AI is emerging in unexpected places

Engineers and developers from around the world came together on the exhibit floor to talk about the hottest AI trends and innovations. Among this diverse group, brands like Ford, Jaguar Land Rover, and Mercedes-Benz joined discussions about how the technology is being used in the automotive industry.

Bryan Goodman, Ford’s Executive Director of AI, spoke about how AI will soon shape interactions between drivers and their cars. Right now, he said, the company is building a tool that will give Ford customers the ability to ask questions about how to take care of their car and receive informed guidance from an AI-enabled voice assistant inside the vehicle. The assistant will generate responses within seconds, sparing car owners the hassle of having to dust off an old manual to find their answer. 
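
Ford did not share implementation details, but conceptually an assistant like this pairs a language model with retrieval over the owner’s manual. The toy sketch below shows only a crude retrieval step; the manual snippets and scoring are invented, and a production system would hand the retrieved passage to an LLM and a voice interface.

```python
# Hypothetical manual snippets; a real assistant would index the full
# owner's manual and pass the best passages to a language model.
MANUAL = {
    "tire pressure": "Inflate tires to the PSI listed on the driver-side door jamb.",
    "oil change": "Change the oil at the interval shown on the maintenance schedule screen.",
    "wiper blades": "Lift the wiper arm, press the release tab, and slide the blade off.",
}

def answer(question: str) -> str:
    """Pick the manual entry whose topic words overlap the question most."""
    q_words = set(question.lower().split())
    best_topic = max(MANUAL, key=lambda topic: len(q_words & set(topic.split())))
    if not q_words & set(best_topic.split()):
        return "Sorry, I couldn't find that in the manual."
    return MANUAL[best_topic]

print(answer("What tire pressure should I use?"))
```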

The exhibit floor was also well attended by medical researchers and companies in the healthcare sector. An EU representative shared some surprising data about how AI can be used to improve cancer diagnoses. Today, doctors who treat melanoma, a common form of skin cancer, average a diagnostic accuracy rate of 70%, while AI tools have proven to be nearly 100% accurate. Medical device maker DermaSensor is one such example. Recently cleared for use by the Food and Drug Administration (FDA), its handheld device uses AI-powered spectroscopy to evaluate skin lesions on patients, with algorithms trained on data from more than 4,000 lesions.

Ushering in a new era of AI with increased trust and transparency

With the EU AI Act freshly approved by the European Parliament, AI governance remains a priority for many leaders across both the public and private sectors. In a GTC session featuring several prominent government officials, including Gerard de Graaf, head of the EU’s new office in San Francisco; Elham Tabassi of the U.S. National Institute of Standards and Technology; and Ted W. Lieu, who represents California’s 36th district in the U.S. House of Representatives, innovators and policymakers discussed how to deploy trustworthy AI.

The session hit on a couple of key points. First, the problem of AI governance won’t be solved with just one piece of legislation or an academic paper—it will be a long, iterative process to truly understand and control AI. Second, AI systems are neither inherently good nor bad; they reflect the values of those who train them. The success of the technology depends on how we imprint “good” values at the development stage. 

While AI comes with game-changing benefits, companies know it also carries outsized risks. Amid rising consumer concerns about data privacy, leaders shared that over half of Americans today hold negative views of AI. Regardless of industry, businesses are under pressure to exercise more nuanced control over what data is fed into their AI systems and to set operational guardrails that catch instances of bias or error.
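
One common building block for that kind of guardrail is an append-only, hash-chained log of the data entering an AI system, so a later finding of bias or error can be traced back to a specific input. The sketch below is a generic illustration of that pattern; the record fields are invented, and it is not a description of any particular vendor’s product.

```python
import hashlib
import json
from datetime import datetime, timezone

def append_record(log: list[dict], dataset_id: str, note: str) -> dict:
    """Append a data-intake record whose hash chains to the previous entry."""
    prev_hash = log[-1]["hash"] if log else "genesis"
    body = {
        "dataset_id": dataset_id,
        "note": note,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prev_hash": prev_hash,
    }
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append(body)
    return body

def verify(log: list[dict]) -> bool:
    """Recompute every hash; any tampered record breaks the chain."""
    prev = "genesis"
    for rec in log:
        body = {k: v for k, v in rec.items() if k != "hash"}
        if body["prev_hash"] != prev:
            return False
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != rec["hash"]:
            return False
        prev = rec["hash"]
    return True

log: list[dict] = []
append_record(log, "customer-feedback-2024-03", "ingested for fine-tuning")
append_record(log, "support-tickets-q1", "ingested for evaluation")
print(verify(log))          # True
log[0]["note"] = "edited"   # simulate tampering
print(verify(log))          # False
```

Because each record’s hash folds in the previous record’s hash, editing any earlier entry breaks every hash that follows it, which is what makes the trail tamper-evident.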


About Us

Casper Labs provides a tamper-proof, highly serialized lens to evaluate the data being fed into AI systems. We offer verification for the specific point at which a given issue impacted the system’s performance, which allows for more effective troubleshooting and remediation, and ensures a more positive end user experience.

Our solutions provide a thorough real-time audit trail within a secure and controlled environment while also leveraging a flexible and easy-to-use multi-party system. As AI-centric regulations continue to develop across the world, businesses must ensure they are employing artificial intelligence solutions responsibly.