NVIDIA Introduces DRIVE AGX Orin
NVIDIA today introduced NVIDIA DRIVE AGX Orin™, an advanced software-defined platform for autonomous vehicles and robots.
The platform is based on the new Orin system-on-chip (SoC), which packs 17 billion transistors and is the result of four years of development and billions of dollars in investment. The Orin SoC combines NVIDIA's next-generation GPU architecture and Arm Hercules CPU cores with new deep learning and computer vision accelerators, which together deliver 200 trillion operations per second (TOPS), nearly seven times the performance of the previous-generation NVIDIA Xavier SoC.
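For context, NVIDIA's published figure for the previous-generation Xavier SoC is roughly 30 TOPS, so the "nearly seven times" claim follows directly. The calculation below is only a sanity check of that arithmetic:

```python
# Peak throughput in trillions of operations per second (TOPS).
orin_tops = 200    # Orin SoC, per this announcement
xavier_tops = 30   # previous-generation Xavier SoC (NVIDIA's published figure)

speedup = orin_tops / xavier_tops
print(f"Orin vs. Xavier: {speedup:.1f}x")  # ~6.7x, i.e. "nearly seven times"
```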
The Orin SoC is designed to handle the large number of deep learning applications and networks that run simultaneously in autonomous vehicles and robots, while meeting systematic safety standards such as ISO 26262 ASIL-D.
As a software-defined platform, DRIVE AGX Orin is built to enable architecturally compatible platforms that scale from Level 2 up to Level 5 (full self-driving), allowing OEMs to develop large-scale families of software products. Because both Orin and Xavier are programmable through open CUDA and TensorRT APIs and libraries, developers can leverage their work across multiple product generations.
NVIDIA is giving the transportation industry access to its deep neural networks to build autonomous vehicles.
NVIDIA today announced that it is making its NVIDIA DRIVE™ deep neural networks (DNNs) for autonomous vehicle development available to the transportation industry on the NVIDIA GPU Cloud (NGC) container registry.
NVIDIA DRIVE has become the de facto standard in autonomous vehicle development and is widely used by car, truck and robotaxi manufacturers, software developers and universities. NVIDIA is now making its trained AI models and training code available to developers. Using NVIDIA AI tools, developers can customize and add complexity to these models to increase the reliability and capabilities of their self-driving systems.
The era of conversational AI begins with the release of new NVIDIA inference software
NVIDIA today introduced inference software that enables developers to build conversational AI applications delivering fully interactive experiences.
NVIDIA TensorRT™ 7, the seventh generation of the company's inference software development kit, opens the door to more natural human-AI interactions and enables real-time applications such as voice agents, chatbots and recommendation engines. According to Juniper Research, 3.25 billion digital voice assistants are in use worldwide, and by 2023 that number is expected to reach 8 billion, more than the entire population of the Earth.
TensorRT 7 includes a new deep learning compiler that automatically optimizes and accelerates the increasingly complex recurrent and transformer networks needed for conversational AI. It speeds up conversational AI components by more than 10 times compared to CPUs, driving latency below the 300-millisecond threshold considered necessary for real-time interaction.
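To illustrate why the 300 ms threshold matters end-to-end, the sketch below totals per-stage latencies for a hypothetical voice-agent pipeline. The stage names and numbers here are purely illustrative assumptions, not measured TensorRT 7 results; the point is that every component's latency draws from one shared real-time budget:

```python
# Hypothetical per-stage latencies (ms) for a voice-agent pipeline;
# illustrative assumptions only, not benchmark data.
pipeline_ms = {
    "speech recognition (ASR)": 60.0,
    "language understanding (NLU)": 80.0,
    "speech synthesis (TTS)": 110.0,
}

REAL_TIME_BUDGET_MS = 300.0  # threshold cited for real-time interaction

total = sum(pipeline_ms.values())
status = "within" if total <= REAL_TIME_BUDGET_MS else "over"
print(f"end-to-end latency: {total:.0f} ms ({status} the 300 ms budget)")
```

A more than 10x slowdown in any one stage (as with CPU inference) would blow this budget on its own, which is why per-component acceleration is framed as the enabler of real-time conversational AI.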