Nvidia brings generative AI compatibility to robotics platforms

It likely won’t surprise you to learn that generative AI has been a white-hot topic in the world of robotics. There are a number of ideas floating around about the best ways to embrace the emerging technology, from natural-language commands to robot design. I put the question of generative AI to Deepu Talla, Nvidia’s vice president and general manager of Embedded & Edge Computing, during a recent visit to the company’s South Bay headquarters.

“I think it speaks in the results. You can already see the productivity improvement,” the executive told me. “It can compose an email for me. It’s not exactly right, but I don’t have to start from zero. It’s giving me 70%. There are obvious things you can already see that are definitely a step function better than how things were before. Summarizing something’s not perfect. I’m not going to let it read and summarize for me. So, you can already see some signs of productivity improvements.”

Turns out Nvidia was only a couple of weeks away from announcing news of its own on the topic. The ROSCon announcement arrives alongside several other bits of news connected to the company’s various robotics offerings, including the general availability of the Nvidia Isaac ROS 2.0 and Nvidia Isaac Sim 2023 platforms.

The systems are embracing generative AI, which should go a long way toward accelerating the technology’s adoption among roboticists. After all, as Nvidia notes, some 1.2 million developers have interfaced with the Nvidia AI and Jetson platforms. That includes some big-name clients like AWS, Cisco and John Deere.

One of the more interesting bits here is the Jetson Generative AI Lab, which gives developers access to open-source large language models. The company writes:

The NVIDIA Jetson Generative AI Lab provides developers access to optimized tools and tutorials for deploying open-source LLMs, diffusion models to generate stunning images interactively, vision language models (VLMs) and vision transformers (ViTs) that combine vision AI and natural language processing to provide comprehensive understanding of the scene.

The arrival of these sorts of models can go a long way toward helping systems determine a course of action in circumstances they weren’t trained on (on its own, simulation only goes so far). After all, while settings like warehouses and factory floors are more structured than, say, a freeway, there are still innumerable variables to contend with. The idea is both to let systems adjust on the fly and to offer a more natural language interface for them.
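To make that natural-language-interface idea concrete, here is a minimal, purely illustrative sketch of how a robot controller might map an LLM’s free-text reply onto a small set of safe, structured actions. None of these names come from Nvidia’s Isaac or Jetson APIs; `RobotAction`, `KNOWN_VERBS` and `parse_command` are hypothetical stand-ins for the kind of glue code a developer would write.

```python
# Illustrative sketch only: a hypothetical layer that turns an LLM's
# free-text reply into a structured robot action. These names are
# invented for this example and are not part of Nvidia's Isaac/Jetson APIs.
from dataclasses import dataclass
from typing import Optional


@dataclass
class RobotAction:
    verb: str               # e.g. "pick", "move", "stop"
    target: Optional[str]   # e.g. "the red bin", or None


# A deliberately small whitelist: anything the model says outside
# this set falls back to a safe no-op.
KNOWN_VERBS = {"pick", "place", "move", "stop"}


def parse_command(llm_reply: str) -> RobotAction:
    """Map a short natural-language reply to a structured action,
    defaulting to a safe 'stop' when the verb isn't recognized."""
    words = llm_reply.lower().strip().rstrip(".").split()
    if not words or words[0] not in KNOWN_VERBS:
        return RobotAction(verb="stop", target=None)
    target = " ".join(words[1:]) or None
    return RobotAction(verb=words[0], target=target)
```

The whitelist-plus-fallback shape is the point of the sketch: a generative model can phrase things unpredictably, so the deployment layer constrains its output to actions the robot is actually certified to perform.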

“Generative AI will significantly accelerate deployments of AI at the edge with better generalization, ease of use and higher accuracy than previously possible,” Talla said in a statement tied to today’s news. “This largest-ever software expansion of our Metropolis and Isaac frameworks on Jetson, combined with the power of transformer models and generative AI, addresses this need.”

The latest versions of the platforms also bring improvements to perception and simulation.
