News

Nvidia plays its ACE at Gamescom

See it, hear it, speak with it—it’s almost real.

Jon Peddie

Nvidia has used the ACE acronym for several technologies over the years, including AI Accelerator Cards, Adaptive Contrast Enhancement, and, most recently, the Avatar Cloud Engine. Alongside these, it has developed Riva for natural language processing and Audio2Face for dynamic facial animation. Together, these technologies enable more realistic character simulation and interaction. Nvidia’s latest ACE upgrade, Nemotron-4, uses generative AI to bring game characters to life with improved efficiency and lower latency, and the company continues to fine-tune its pipeline for more immersive experiences.

AI human image
You can speak with me. (Source: Nvidia)

Nvidia has had a lot of ACEs. The company introduced its AI Accelerator Cards (ACE) in 2019. Then there was the AI Compute Engine (ACE); I couldn’t pin down a release date for that one, but it likely grew out of the company’s AI-focused initiatives that began around 2017–2019.

Nvidia introduced Adaptive Contrast Enhancement (ACE) in 2004, a graphics technology aimed at improving contrast and image quality in games and other graphics applications.

However, the latest and most interesting is Nvidia’s ACE (Avatar Cloud Engine), introduced for games in 2023 and aimed at creating more realistic and lifelike characters in games and simulations. This ACE uses AI and machine learning to enhance character models, animations, and interactions, making them more believable and engaging.

ACE is designed to help developers create more realistic and immersive experiences, with features like:

  • Advanced character physics and dynamics
  • Realistic skin and tissue simulations
  • Intelligent character interactions and behaviors
  • Dynamic environments and lighting

ACE is part of Nvidia’s efforts to push the boundaries of computer graphics and simulation, enabling more realistic and engaging experiences in gaming, film, and other industries.

Nvidia has been steadily updating and upgrading ACE, and now, digital humans are fully interactive and conversational.

In 2020, Nvidia introduced Riva, a deep learning-based platform for advanced natural language processing (NLP), including speech recognition, language understanding, and text-to-speech synthesis. Riva is what allows NPCs to understand spoken language.

By integrating Riva with NPC systems, developers can create more realistic and interactive characters.

The technology has the potential to revolutionize the way NPCs interact with players in games, simulations, and other interactive applications, making experiences more immersive and engaging.
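To make that integration concrete, here is a minimal sketch of how a game back end might call Riva for the speech legs of an NPC conversation. It assumes the nvidia-riva-client Python package and a Riva server running at localhost:50051; the class, method, and voice names follow the public Python client from memory and may differ between SDK versions.

```python
# Minimal sketch: speech-to-text and text-to-speech through a Riva server.
# Assumes `pip install nvidia-riva-client` and a server at localhost:50051;
# API names follow the public Python client and may vary by version.
import riva.client

auth = riva.client.Auth(uri="localhost:50051")
asr = riva.client.ASRService(auth)
tts = riva.client.SpeechSynthesisService(auth)

def player_speech_to_text(wav_bytes: bytes) -> str:
    """Transcribe the player's spoken line so the NPC's dialogue logic can react to it."""
    config = riva.client.RecognitionConfig(
        encoding=riva.client.AudioEncoding.LINEAR_PCM,
        sample_rate_hertz=16000,
        language_code="en-US",
        max_alternatives=1,
        enable_automatic_punctuation=True,
    )
    response = asr.offline_recognize(wav_bytes, config)
    return response.results[0].alternatives[0].transcript if response.results else ""

def npc_text_to_speech(reply: str) -> bytes:
    """Turn the NPC's reply (scripted or LLM-generated) into audio for playback."""
    response = tts.synthesize(
        reply,
        voice_name="English-US.Female-1",  # assumed voice name; depends on the deployed models
        language_code="en-US",
        sample_rate_hz=44100,
    )
    return response.audio
```

In a game, the transcript from the first call would feed the NPC’s dialogue system (an LLM in the ACE pipeline), and the reply would go back out through the second call.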

In 2022, Nvidia presented Audio2Face, a technology that generates dynamic facial animation in real time, synchronized with audio input. This innovation enables realistic, spontaneous facial expressions without pre-scripted or manually animated sequences.

Audio2Face uses deep learning to analyze an audio signal and generate the corresponding facial movements, creating a more natural and engaging experience across a range of applications.
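As a toy illustration of the idea (and emphatically not Nvidia’s algorithm), the sketch below maps per-frame audio loudness to a single jaw-open blendshape weight; Audio2Face replaces a crude heuristic like this with a trained neural network that predicts a full set of facial poses from the audio.

```python
# Toy audio-driven animation: per-frame loudness -> jaw-open blendshape weight in [0, 1].
# This heuristic only illustrates the concept; Audio2Face uses a learned model instead.
import numpy as np

def jaw_open_curve(samples: np.ndarray, sample_rate: int, fps: int = 30) -> np.ndarray:
    """Return one jaw-open weight per animation frame for the given mono audio."""
    frame_len = sample_rate // fps
    n_frames = len(samples) // frame_len
    frames = samples[: n_frames * frame_len].reshape(n_frames, frame_len)
    rms = np.sqrt((frames.astype(np.float64) ** 2).mean(axis=1))  # loudness per frame
    peak = rms.max()
    return rms / peak if peak > 0 else np.zeros(n_frames)

# Example: one second of a 220Hz tone at 16kHz yields 30 frames of weights.
t = np.linspace(0.0, 1.0, 16000, endpoint=False)
print(jaw_open_curve(np.sin(2 * np.pi * 220 * t), 16000).shape)  # (30,)
```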

James NPC
Meet James, the conversational NPC. (Source: Nvidia)

Along with Audio2Face, the company introduced James, an AI-powered virtual character: a highly realistic, interactive digital human built on Nvidia’s AI and graphics technologies, Audio2Face among them.

James was showcased as a demonstration of Nvidia’s capabilities in creating lifelike virtual characters.

At Gamescom, Nvidia introduced its latest ACE upgrade, Nemotron-4.

ACE pipeline
The latest ACE pipeline could bring game characters to life with generative AI. (Source: Nvidia)

Nemotron-4 is distilled from a larger model; it gives up a little accuracy in exchange for a much smaller memory footprint, needing only 2GB of VRAM.
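As a rough sanity check on that figure (assuming a model in the 4-billion-parameter class quantized to roughly 4-bit weights, which is an assumption on my part rather than a published spec): 4 × 10⁹ parameters × 0.5 bytes per weight ≈ 2GB, which lines up with the stated VRAM budget before activations and overhead.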

So now, Nvidia has the whole pipeline and is fine-tuning it to be more efficient and faster. There is some latency between a query to an LLM-based AI NPC and its response, and that is off-putting; it’s the verbal equivalent of the uncanny valley. But we still live in the world of Moore’s law, and in a year’s time, that latency will disappear, and we’ll forget we ever saw it.