Nvidia’s Expansion: Advancing Microservices and 3D Model Creation

  • Nvidia expands its Nvidia Inference Microservices (NIM) library to include support for physical environments and advanced visual modeling.
  • Integration of Hugging Face Inc.’s inference-as-a-service on Nvidia’s cloud platform, enhancing access and performance for 4 million developers.
  • Introduction of new NIM microservices, including Fast Voxel Database (FVDB) for improved 3D model generation and support for deep learning frameworks.
  • Launch of three microservices—USD Code, USD Search, and USD Validate—for creating and managing 3D scenes using Universal Scene Description (USD) format.
  • Advances in generative physical AI through Metropolis reference workflow and new tools for training physical machines.
  • Nvidia’s collaboration with Getty Images Holdings Inc. and Shutterstock Inc. to offer image and 3D asset generation via NIMs using Nvidia Edify.
  • Ongoing investments in OpenUSD and partnerships with Apple Inc. to enhance hybrid rendering capabilities for industrial applications.

Main AI News:

Nvidia Corp. unveiled a significant expansion of its Nvidia Inference Microservices (NIM) library at the Siggraph conference in Denver, enhancing its capabilities in physical environments, advanced visual modeling, and diverse vertical applications. The update includes the integration of Hugging Face Inc.’s inference-as-a-service on the Nvidia cloud, alongside expanded support for 3D training and inferencing.

NIM, part of the Nvidia AI Enterprise suite, provides containerized microservices that accelerate and simplify AI model deployment. These optimized inference engines cater to various hardware setups, reducing latency and operational costs while enhancing performance and scalability. Developers can leverage NIMs to swiftly deploy AI applications, fine-tune models with proprietary data, and minimize the need for extensive customization.
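
To make the deployment model concrete, the sketch below shows how a client application might query a NIM container once it is running. NIM services generally expose an OpenAI-compatible HTTP API, but the host, port, and model name used here are illustrative placeholders rather than details taken from the announcement.

```python
# Minimal sketch: querying a locally deployed NIM container over its
# OpenAI-compatible HTTP API. The URL and model name below are illustrative
# placeholders, not values from Nvidia's announcement.
import requests

NIM_URL = "http://localhost:8000/v1/chat/completions"  # assumed local deployment

payload = {
    "model": "meta/llama3-8b-instruct",  # hypothetical model identifier
    "messages": [
        {"role": "user", "content": "Summarize the benefits of containerized inference."}
    ],
    "max_tokens": 128,
}

response = requests.post(NIM_URL, json=payload, timeout=60)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```

Because the interface follows the OpenAI convention, existing client code can typically be pointed at a NIM endpoint with little more than a URL change.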

Through the partnership with Hugging Face, inference-as-a-service is now offered on Nvidia's DGX Cloud, giving Hugging Face's 4 million developers faster performance and seamless serverless inference. Hugging Face's platform supports natural language processing (NLP) and machine learning, offering a library of pre-trained models for tasks such as text classification, translation, and question answering, as well as a comprehensive dataset repository optimized for the Transformers library.
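
For reference, the snippet below runs one of the tasks mentioned above (text classification) locally with the Transformers pipeline API; the hosted inference-as-a-service replaces this local execution with remote, serverless calls, and the checkpoint name is simply a commonly used public model, not one tied to the DGX Cloud offering.

```python
# Minimal sketch: running a pre-trained Hugging Face model locally with the
# Transformers library for text classification. The checkpoint is a commonly
# used public example, not one specific to the DGX Cloud service.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

print(classifier("Serverless inferencing makes model deployment much simpler."))
# Example output: [{'label': 'POSITIVE', 'score': 0.99...}]
```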

Nvidia also introduced advancements in generative physical AI with its Metropolis reference workflow. Metropolis comprises tools and workflows designed to build, deploy, and scale generative AI applications across various hardware platforms. Additionally, new NIM microservices will aid developers in training physical machines for complex tasks.

The announcement also includes three new NIM microservices built on Fast Voxel Database (FVDB), a deep learning framework for 3D worlds. FVDB, built on OpenVDB, provides four times the spatial scale and 3.5 times the performance of previous frameworks, along with access to an extensive library of real-world datasets, and the new services streamline workflows by consolidating multiple deep learning libraries into one.
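
The article does not detail FVDB's programming interface, so the toy sketch below only illustrates the sparse-voxel idea that VDB-style frameworks build on: storing just the occupied voxels, keyed by integer coordinates, so memory grows with occupancy rather than with the full bounding volume. It is a conceptual illustration, not FVDB code.

```python
# Conceptual toy: sparse voxel storage in the spirit of VDB-style frameworks.
# Only occupied voxels are stored, keyed by integer (i, j, k) coordinates.
from collections import defaultdict


class SparseVoxelGrid:
    def __init__(self, voxel_size: float = 1.0):
        self.voxel_size = voxel_size
        self.voxels: dict[tuple[int, int, int], float] = defaultdict(float)

    def world_to_index(self, x: float, y: float, z: float) -> tuple[int, int, int]:
        s = self.voxel_size
        return (int(x // s), int(y // s), int(z // s))

    def add_point(self, x: float, y: float, z: float, density: float = 1.0) -> None:
        self.voxels[self.world_to_index(x, y, z)] += density

    def occupied_count(self) -> int:
        return len(self.voxels)


grid = SparseVoxelGrid(voxel_size=0.5)
grid.add_point(1.2, 0.3, -4.7)
grid.add_point(1.3, 0.4, -4.6)   # lands in the same voxel as the first point
print(grid.occupied_count())     # -> 1: only one voxel is actually stored
```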

Three new microservices — USD Code, USD Search, and USD Validate — leverage the Universal Scene Description (USD) format for creating diverse 3D scenes. USD Code generates Python code and answers OpenUSD knowledge queries, USD Search offers natural language access to vast libraries of OpenUSD 3D and image data, and USD Validate checks files for compatibility with OpenUSD versions and generates rendered images using Omniverse cloud APIs.
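
For readers unfamiliar with OpenUSD, the snippet below authors a trivial scene with the open-source `pxr` Python bindings, which is the kind of Python code that USD Code is described as generating; it calls the USD library directly and does not involve the NIM services themselves.

```python
# Minimal sketch: authoring a simple OpenUSD scene with the open-source `pxr`
# Python bindings. This uses the USD library directly, not the NIM microservices.
from pxr import Usd, UsdGeom, Gf

stage = Usd.Stage.CreateNew("simple_scene.usda")        # new USD layer on disk
world = UsdGeom.Xform.Define(stage, "/World")           # root transform prim
cube = UsdGeom.Cube.Define(stage, "/World/Cube")        # unit cube under /World
cube.GetSizeAttr().Set(2.0)                             # edge length of 2 units
UsdGeom.XformCommonAPI(cube.GetPrim()).SetTranslate(Gf.Vec3d(0, 1, 0))

stage.SetDefaultPrim(world.GetPrim())
stage.Save()
print(stage.GetRootLayer().ExportToString())            # inspect the authored .usda
```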

Nvidia highlighted its physical AI support, which spans speech and translation, vision, and realistic animation capabilities. New generative AI models known as vision language models combine image and text understanding to improve decision-making, accuracy, and interactivity. Nvidia AI and DGX supercomputers are used to train physical AI models, while Omniverse running on OVX supercomputers lets machines refine their skills in simulated digital twins. The suite of services includes NIM microservices for robot simulation and learning, the OSMO orchestration service for robotics workloads, and an AI-enabled teleoperation workflow that reduces the amount of human demonstration data needed to train robots.
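
As an illustration of how a vision language model is typically queried, the sketch below sends an image together with a text prompt through an OpenAI-style chat API; the endpoint URL, file name, and model identifier are assumptions for illustration, not details from Nvidia's announcement.

```python
# Sketch: sending an image plus a text prompt to a vision language model through
# an OpenAI-style chat API. Endpoint, file name, and model name are assumptions.
import base64
import requests

VLM_URL = "http://localhost:8000/v1/chat/completions"  # assumed local VLM service

with open("factory_floor.jpg", "rb") as f:              # hypothetical input image
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

payload = {
    "model": "example/vision-language-model",  # hypothetical identifier
    "messages": [{
        "role": "user",
        "content": [
            {"type": "text", "text": "Is the walkway in this image clear of obstacles?"},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
        ],
    }],
    "max_tokens": 64,
}

print(requests.post(VLM_URL, json=payload, timeout=120).json())
```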

Nvidia is also collaborating with Getty Images Holdings Inc. and Shutterstock Inc. to offer 4K image generation and 3D asset creation via NIMs, leveraging Nvidia Edify for multimodal visual generative AI. Nvidia's ongoing investment in OpenUSD and its collaboration with Apple Inc. on a hybrid rendering pipeline enable content to be streamed from Nvidia's Graphics Delivery Network to devices such as the Apple Vision Pro.

Developers can use NIM microservices and Omniverse Replicator to build generative AI-enabled synthetic data pipelines, addressing the shortage of real-world data that often limits model training. Upcoming NIMs include USD Layout, USD Smart Material, and FVDB Mesh Generation, which generates OpenUSD-based meshes rendered via Omniverse APIs.
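
The sketch below is patterned after Nvidia's published Omniverse Replicator examples and shows the general shape of a synthetic data pipeline: place assets, randomize their poses on each frame, and write out annotated renders. It only runs inside an Omniverse Python environment, and exact function signatures can vary between Replicator versions, so treat it as a sketch rather than a drop-in script.

```python
# Sketch of a Replicator-style synthetic data pipeline (runs only inside an
# Omniverse/Isaac Sim Python environment). Signatures may differ by version.
import omni.replicator.core as rep

with rep.new_layer():
    # Scene: a camera, a light, and a few simple props with semantic labels.
    camera = rep.create.camera(position=(0, 0, 1000))
    render_product = rep.create.render_product(camera, (1024, 1024))
    rep.create.light(light_type="distant", rotation=(315, 0, 0), intensity=3000)

    cube = rep.create.cube(semantics=[("class", "cube")], position=(0, -200, 100))
    sphere = rep.create.sphere(semantics=[("class", "sphere")], position=(200, 0, 100))

    # Randomize object poses on every generated frame.
    with rep.trigger.on_frame(num_frames=20):
        with rep.create.group([cube, sphere]):
            rep.modify.pose(
                position=rep.distribution.uniform((-300, -300, 0), (300, 300, 300)),
                scale=rep.distribution.uniform(0.5, 2.0),
            )

    # Write RGB images and 2D bounding boxes for downstream model training.
    writer = rep.WriterRegistry.get("BasicWriter")
    writer.initialize(output_dir="_synthetic_out", rgb=True, bounding_box_2d_tight=True)
    writer.attach([render_product])
```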

Conclusion:

Nvidia’s latest expansion of its microservices library and 3D modeling capabilities represents a significant advancement in the AI and visualization sectors. By integrating Hugging Face’s services and enhancing its NIM offerings, Nvidia is positioning itself as a leader in AI infrastructure, particularly for developers and industrial applications. The introduction of advanced tools for 3D modeling and physical AI training underscores Nvidia’s commitment to addressing the evolving needs of AI-driven industries. This move not only strengthens Nvidia’s competitive edge but also accelerates innovation in generative AI and robotics, potentially setting new standards in performance and application versatility.
