Nvidia CEO Jensen Huang offered an intriguing glimpse of gaming and artificial intelligence (AI) converging at the recent Computex 2023 event in Taipei. The centerpiece of the show was a graphically spectacular recreation of a cyberpunk ramen shop where players can converse with the virtual proprietor in real time.
Unlike standard dialogue trees, Nvidia’s approach lets players speak with their own voice, simply holding down a button to receive spoken replies from in-game characters. The company dubs this immersive experience “the future of games.”
However, some critics say the actual dialogue shown in the presentation leaves room for improvement, suggesting that more sophisticated AI models such as GPT-4 or Sudowrite could raise the quality of the interactions. Nonetheless, the AI’s ability to interpret and respond to natural speech input is impressive.
The demo, created in partnership with Convai, serves as a showcase for the technologies behind it. Nvidia highlights ACE (Avatar Cloud Engine) for Games, a suite of middleware that can run both locally and in the cloud. Among other components, the ACE suite includes Nvidia’s NeMo tools for deploying large language models (LLMs) and Riva’s speech-to-text and text-to-speech technologies.
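The pipeline this middleware describes — player speech in, a language model in the middle, synthesized speech out — can be sketched as a simple push-to-talk loop. The function names below (`speech_to_text`, `generate_reply`, `text_to_speech`, `npc_turn`) are illustrative stand-ins, not actual Riva or NeMo APIs, and the stubs merely echo text so the flow is visible:

```python
# Illustrative sketch of an ACE-style NPC dialogue loop.
# The three stages mirror Riva (speech recognition), a NeMo-hosted LLM,
# and Riva again (speech synthesis), but these stubs are hypothetical —
# they are not real Nvidia API calls.

def speech_to_text(audio: bytes) -> str:
    """Stand-in for an ASR service (e.g., Riva speech-to-text)."""
    # A real implementation would stream microphone audio to a model.
    return audio.decode("utf-8")  # stub: treat the "audio" as text

def generate_reply(npc_persona: str, player_line: str) -> str:
    """Stand-in for an LLM call (e.g., a NeMo-deployed model)."""
    # A real implementation would prompt the model with the NPC's
    # persona plus the conversation history.
    return f"[{npc_persona}] You said: '{player_line}'"

def text_to_speech(text: str) -> bytes:
    """Stand-in for a TTS service (e.g., Riva text-to-speech)."""
    return text.encode("utf-8")  # stub: return "audio" as raw bytes

def npc_turn(npc_persona: str, mic_audio: bytes) -> bytes:
    """One push-to-talk turn: player audio in, NPC reply audio out."""
    player_line = speech_to_text(mic_audio)
    reply = generate_reply(npc_persona, player_line)
    return text_to_speech(reply)
```

In a real deployment each stage would be a network call to a local or cloud-hosted service, which is why Nvidia packages them as interchangeable middleware rather than a single engine feature.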
The demo, built in Unreal Engine 5 with enhanced ray-tracing capabilities, delivers visual quality so striking that it overshadows the chatbot itself. While the dialogue may seem pedestrian next to more sophisticated chatbot systems, it marks a significant advance in natural-language interaction within gaming.
Nvidia’s VP of GeForce Platform, Jason Paul, said during a Computex pre-briefing that the technology could potentially scale to conversations with multiple characters and even allow non-player characters (NPCs) to talk to each other. However, he admitted that such scenarios have not yet been thoroughly tested.
How broadly developers will adopt the full Nvidia ACE toolkit remains unknown. However, S.T.A.L.K.E.R. 2: Heart of Chernobyl and Fort Solis have already committed to using “Omniverse Audio2Face,” a component of Nvidia ACE that synchronizes a 3D character’s facial motion with a voice actor’s speech.
While the featured dialogue may not have shown Nvidia’s gaming and AI integration at its best, the prospect of more immersive, dynamic interactions in future games is undeniably fascinating. Nvidia devotees are eagerly awaiting a public release of the demo so they can test the technology firsthand and explore the full range of possible outcomes.