TL;DR:
- Advances in computing and cloud technology are driving the AI revolution.
- Microsoft’s partnership with NVIDIA plays a crucial role in AI infrastructure and innovation.
- Wayve is developing an AI system for autonomous driving, leveraging machine learning and training on Azure.
- Microsoft offers AI infrastructure on Azure, providing optimal configurations for AI workloads.
- Microsoft and NVIDIA collaborate to make AI accessible to companies of all sizes.
- The ND H100 v5 series VMs with NVIDIA H100 Tensor Core GPUs enable scalable AI models.
- NVIDIA Omniverse Cloud and DGX Cloud are new offerings hosted on Microsoft Azure.
- Omniverse connects real-time sensor data to digital replicas for industrial applications.
- Omniverse integration with the Microsoft productivity suite enhances collaboration.
- Omniverse Cloud and DGX Cloud empower enterprises with dedicated AI supercomputing clusters on Azure.
Main AI News:
Advancements in robust computing and cloud technology are driving the AI revolution, enabling companies to surpass previous limitations on model sizes and fostering innovative breakthroughs poised to transform the world. This revolution is accessible to organizations of all sizes thanks to the cost-effectiveness and versatility of cloud computing.
During NVIDIA’s 2023 GTC event, held from March 20 to 23, Microsoft’s leaders took center stage, highlighting the indispensable role of AI infrastructure in driving AI innovation. The event featured keynotes, demos, and panel discussions where customers and partners delved into their pioneering adoption of AI, revolutionizing their respective industries. Microsoft also seized the opportunity to make exclusive company announcements and showcase the recently launched Azure OpenAI Service.
Furthermore, they unveiled the NVIDIA H100 GPU-powered VM series for Generative AI, the cutting-edge NVIDIA Omniverse Cloud on Azure, and the seamless integration with Microsoft 365, among other notable advancements.
One noteworthy session, titled “Accelerate AI Innovation with Unmatched Cloud Scale and Performance,” brought together luminaries such as Alex Kendall from Wayve, Nidhi Chappell from Microsoft, and NVIDIA’s own Manuvir Das. The discussion centered on the pivotal role of AI infrastructure in propelling AI innovation, the remarkable strides made in Azure AI infrastructure powered by NVIDIA, and other key insights.
Alex Kendall, the CEO of Wayve, opened the session, shedding light on the company’s work to bring autonomous driving to market. The advent of autonomous driving is widely recognized as a game-changer, revolutionizing transportation for both people and goods within urban landscapes. However, Kendall emphasized that unlocking the full potential of autonomous driving hinges on an AI system capable of making safe and reliable decisions amidst the intricacies of urban environments.
Wayve has embarked on building an artificial intelligence system that learns to drive by processing visual input, utilizing onboard intelligence to make real-time decisions without the need for predefined rules or maps. This groundbreaking approach brings AI into the physical world, necessitating state-of-the-art advancements in machine learning and a large-scale foundational neural network. To tackle the complexities of this task, Wayve leverages self-supervised learning to train its neural networks with vast and diverse sets of data.
One of the most challenging aspects is dealing with the sheer volume of video data. Each vehicle captures terabytes of data every minute from its array of cameras and radar systems. Wayve’s training infrastructure on Azure serves as the bedrock for processing this massive influx of data, enabling the training of neural networks with billions of parameters.
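Wayve did not share implementation details during the session, but the self-supervised idea described above, where the data itself provides the training signal and no manual labels are needed, can be illustrated with a minimal PyTorch sketch in which a small network learns to predict the next camera frame. The model, sizes, and data here are placeholders, not Wayve’s actual system.

```python
# Minimal sketch of self-supervised "predict the next camera frame" pretraining.
# Everything here (architecture, sizes, data) is illustrative, not Wayve's system.
import torch
import torch.nn as nn

class NextFramePredictor(nn.Module):
    """Tiny conv encoder-decoder that predicts frame t+1 from frame t."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, kernel_size=4, stride=2, padding=1),
        )

    def forward(self, frame):
        return self.decoder(self.encoder(frame))

model = NextFramePredictor()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# Stand-ins for consecutive camera frames (batch, channels, height, width).
frames_t = torch.rand(8, 3, 128, 128)
frames_t_plus_1 = torch.rand(8, 3, 128, 128)

# The "label" is simply the next frame, so no human annotation is required.
prediction = model(frames_t)
loss = loss_fn(prediction, frames_t_plus_1)
loss.backward()
optimizer.step()
```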
“This scale is unprecedented in the realm of autonomous driving, and we aspire to push these boundaries further,” remarked Kendall. He went on to highlight the invaluable partnership with Microsoft, where they have been collaborating on supercomputing technologies that make such achievements feasible, both from a software infrastructure standpoint and by creating the requisite data and compute nodes to facilitate training at this scale.
Kendall emphasized that as the volume of data and compute resources continues to expand, the current successes achieved by Wayve will only multiply. By leveraging novel generative AI methods, they can generate, re-simulate, or fabricate experiences to enhance training.
Additionally, by scaling their simulation platform, they can effectively utilize the collective experience contributed by their fleet partners’ vehicles. Azure’s flexible infrastructure allows Wayve to create synthetic training data, train multi-agent reinforcement learning systems with new experiences, and validate the system’s performance before deploying it on public roads.
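The session does not describe Wayve’s simulation stack, so the following is only a hypothetical sketch of the general pattern it implies: generate synthetic scenarios, roll a driving policy through them, and flag failures for review before any road deployment. The Scenario and policy interfaces below are invented placeholders.

```python
# Hypothetical sketch: replaying synthetic scenarios through a driving policy
# before deployment. The Scenario/policy interfaces are placeholders, not Wayve's APIs.
from dataclasses import dataclass
import random

@dataclass
class Scenario:
    description: str
    steps: int

def generate_synthetic_scenarios(n: int) -> list[Scenario]:
    """Stand-in for a generative model that fabricates rare driving situations."""
    templates = ["pedestrian crossing", "occluded cyclist", "sudden lane closure"]
    return [Scenario(random.choice(templates), steps=random.randint(50, 200))
            for _ in range(n)]

def rollout_is_safe(scenario: Scenario) -> bool:
    """Stand-in for rolling the driving policy through a simulated scenario."""
    return random.random() > 0.05  # pretend 95% of rollouts pass

scenarios = generate_synthetic_scenarios(1000)
failures = [s for s in scenarios if not rollout_is_safe(s)]
print(f"{len(failures)} / {len(scenarios)} synthetic scenarios flagged for review")
```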
Azure Infrastructure Takes a Leap Forward
In the ever-evolving landscape of AI, meeting the escalating compute demand for AI training while ensuring exceptional performance at scale has become a pressing concern. Nidhi Chappell, General Manager of HPC, AI, SAP, and Confidential Computing at Microsoft, unveiled exciting news about Microsoft’s AI initiatives aimed at tackling these challenges head-on. This includes providing an optimal infrastructure stack to empower organizations of all sizes with cutting-edge AI capabilities.
Chappell emphasized Microsoft’s vision to infuse AI across their entire product portfolio, underscoring the need for a robust infrastructure that can support diverse AI workloads effectively. However, for many companies, constructing an on-premises AI-optimized solution stack can prove impractical and cost-prohibitive.
To address this, Microsoft has forged a strategic partnership with NVIDIA, leveraging their accelerated computing and networking technologies to power Azure AI infrastructure. This collaboration ensures that customers, regardless of their scale, can access an AI solution that is both powerful and cost-effective.
“Our design philosophy revolves around providing an optimal AI infrastructure configuration, leveraging the next-generation CPU, compute GPU, and storage,” Chappell explained. Furthermore, Microsoft’s utilization of NVIDIA solutions seamlessly interconnected with the groundbreaking NVIDIA Quantum InfiniBand enables unparalleled scalability in the cloud while being supported by best-in-class AI platform services.
Crucially, Microsoft stands out as the sole provider offering a pay-as-you-go infrastructure stack. This flexible payment model ensures that customers only pay for the resources they require, delivering optimal performance at the scale of their choosing. This approach empowers organizations to adapt their AI infrastructure to evolving needs while optimizing cost efficiency.
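As a concrete illustration of the pay-as-you-go model, a GPU VM can be started just before a training run and deallocated afterwards so that compute charges stop accruing. A minimal sketch using the Azure Python SDK (azure-identity and azure-mgmt-compute) follows; the subscription, resource group, and VM names are placeholders.

```python
# Minimal sketch of pay-as-you-go GPU capacity with the Azure Python SDK
# (pip install azure-identity azure-mgmt-compute).
# The subscription, resource group, and VM names below are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

credential = DefaultAzureCredential()
compute = ComputeManagementClient(credential, subscription_id="<subscription-id>")

RESOURCE_GROUP = "ai-training-rg"
VM_NAME = "nd-h100-node-01"

# Start the GPU VM only when a training run is about to begin...
compute.virtual_machines.begin_start(RESOURCE_GROUP, VM_NAME).result()

# ...and deallocate it afterwards so compute charges stop accruing.
compute.virtual_machines.begin_deallocate(RESOURCE_GROUP, VM_NAME).result()
```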
The partnership between NVIDIA and Microsoft is revolutionizing the accessibility of AI for companies of all sizes. Manuvir Das, Vice President of Enterprise Compute at NVIDIA, shed light on the critical role their collaboration plays in the evolution of Azure, Microsoft’s cloud computing platform.
NVIDIA, renowned as a trailblazer in accelerated computing, has made significant strides in AI, data science, and data processing. As more and more customers flock to Azure for their cloud computing needs, the partnership between Microsoft and NVIDIA focuses on seamlessly integrating this new accelerated computing model into Azure, where companies prefer to conduct their work.
The deep integration of NVIDIA’s AI software and hardware into Azure has placed high-speed AI capabilities within reach of a growing number of companies, regardless of where they stand on the AI timeline. This collaboration caters to enterprises that may not be engaged in large-scale AI projects, meaning they may lack extensive engineering teams or the need for the most powerful AI platform.
However, to effectively operationalize AI, they require more power than off-the-shelf solutions can provide. NVIDIA AI Enterprise addresses this need by packaging AI software for a wide range of use cases and seamlessly integrating them into Azure, offering enhanced capabilities without sacrificing simplicity.
To delve deeper into these discussions, the complete NVIDIA GTC session, “Accelerate AI Innovation with Unmatched Cloud Scale and Performance,” is available on demand. The session features Alex Kendall from Wayve, Nidhi Chappell from Microsoft, and Manuvir Das from NVIDIA, providing valuable insights into the transformative potential of AI.
With the NVIDIA and Microsoft partnership driving innovation, AI is no longer confined to a select few. Instead, it becomes a powerful tool that empowers companies of all sizes to leverage the potential of AI, amplifying their capabilities and propelling them toward a future fueled by intelligent technologies.
Microsoft also unveiled the latest addition to its virtual machine lineup: the ND H100 v5 series. These powerful and scalable VMs are equipped with NVIDIA H100 Tensor Core GPUs, enabling companies to scale their models from eight to thousands of interconnected NVIDIA H100 GPUs on a single fabric, supported by NVIDIA Quantum-2 InfiniBand networking.
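The session itself shows no code, but the standard way to spread a training job across many interconnected GPUs of this kind is data parallelism over NCCL, which uses the InfiniBand fabric when it is available. A minimal PyTorch DistributedDataParallel sketch, with a deliberately trivial model and synthetic data, looks like this:

```python
# Minimal multi-GPU data-parallel sketch with PyTorch DDP over NCCL.
# Launch with: torchrun --nproc_per_node=8 --nnodes=<N> train.py
# The model and data are trivial placeholders; NCCL uses InfiniBand when present.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group(backend="nccl")      # one process per GPU
    local_rank = int(os.environ["LOCAL_RANK"])   # set by torchrun
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(1024, 1024).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for _ in range(10):
        x = torch.randn(32, 1024, device=f"cuda:{local_rank}")
        loss = model(x).pow(2).mean()
        loss.backward()                          # gradients all-reduced across GPUs
        optimizer.step()
        optimizer.zero_grad()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```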
Nidhi Chappell of Microsoft highlighted the significant performance improvements these new VMs bring to OpenAI models compared with the previous generation of the ND series, which used NVIDIA A100 GPUs. This technology opens the door to groundbreaking research in generative AI applications, empowering companies in both training and inference.
The collaboration between Microsoft and NVIDIA went beyond just hardware integration. The entire architecture was updated to seamlessly incorporate these new GPUs, ensuring that the developer community can fully leverage the benefits of the latest advancements. Regardless of whether developers are working on training or inference tasks, they can access these powerful resources through a unified programming model, unlocking a myriad of use cases and possibilities.
For a comprehensive overview of the latest updates regarding Azure’s ND series powered by NVIDIA H100 GPUs, viewers can tune in to the NVIDIA GTC session titled “Azure’s Purpose-Built AI Infrastructure using the Latest NVIDIA GPU Accelerators,” featuring Matt Vegas, Principal Product Manager at Microsoft. This session provides valuable insights into the capabilities and potential of these cutting-edge technologies.
Industrial Metaverse and AI Supercomputing Services Collaboration
During the conference, Microsoft and NVIDIA showcased their collaborative efforts in action, introducing NVIDIA Omniverse Cloud as a new platform-as-a-service hosted on Microsoft Azure. This offering provides instant access to a full-stack environment that facilitates the design, development, deployment, and management of industrial metaverse applications.
Additionally, NVIDIA DGX Cloud, an AI supercomputing service, enables enterprises to quickly access the necessary infrastructure and software for training advanced models in generative AI and other groundbreaking applications.
By connecting Omniverse to Azure Digital Twins and Internet of Things (IoT) capabilities, companies can seamlessly link real-time data from physical world sensors to digital replicas that automatically respond to changes in their physical environments. Azure’s robust cloud infrastructure ensures the seamless deployment of enterprise services at scale, offering essential features such as security, identity management, and storage.
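On the Azure side, the sensor-to-digital-replica link described above can be sketched with the azure-digitaltwins-core Python SDK, which updates a twin’s properties from live telemetry. The instance URL, twin ID, and property name below are hypothetical, and the Omniverse connection layers on top of this.

```python
# Minimal sketch: pushing a sensor reading into an Azure Digital Twins instance
# (pip install azure-identity azure-digitaltwins-core).
# The instance URL, twin ID, and property name are hypothetical placeholders.
from azure.identity import DefaultAzureCredential
from azure.digitaltwins.core import DigitalTwinsClient

client = DigitalTwinsClient(
    "https://<your-instance>.api.<region>.digitaltwins.azure.net",
    DefaultAzureCredential(),
)

# JSON Patch that mirrors a live sensor value onto the digital replica.
patch = [{"op": "replace", "path": "/Temperature", "value": 73.4}]
client.update_digital_twin("factory-robot-arm-01", patch)

# Downstream consumers (such as an Omniverse scene) can read the twin back.
twin = client.get_digital_twin("factory-robot-arm-01")
print(twin["Temperature"])
```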
Furthermore, Omniverse will be integrated into the Microsoft productivity suite, starting with popular applications like Microsoft Teams, SharePoint, and OneDrive. Powered by Microsoft Azure, this integration will give Microsoft 365 and Azure users access to their very own Omniverse, enabling enhanced collaboration and productivity.
Omniverse Cloud, powered by NVIDIA OVX computing systems, is slated to launch on Azure in the second half of the year, and DGX Cloud is expected to be available in Azure starting next quarter, giving enterprises dedicated clusters of NVIDIA DGX AI supercomputing and software on a monthly rental basis. These advancements cement the partnership between Microsoft and NVIDIA and provide organizations with the tools they need to unlock the full potential of industrial metaverse applications and advanced AI models.
Conclusion:
The advancements showcased in the partnership between Microsoft and NVIDIA have significant implications for the market. The availability of powerful AI infrastructure, such as the ND H100 v5 series VMs and NVIDIA Omniverse Cloud on Microsoft Azure, opens up new possibilities for businesses of all sizes. This technology allows companies to leverage the potential of AI, drive innovation, and unlock valuable insights from their data.
With the seamless integration of AI into Azure and the provision of scalable AI solutions, organizations can accelerate their AI initiatives, enhance productivity, and gain a competitive edge in today’s data-driven market. The collaboration between Microsoft and NVIDIA signifies a major milestone in democratizing AI and shaping the future of industries across the globe.