- OpenAI partners with Microsoft and Oracle to integrate Azure AI with Oracle Cloud Infrastructure (OCI).
- OCI’s advanced AI infrastructure supports OpenAI’s scaling efforts for services like ChatGPT.
- Larry Ellison highlights OCI’s superior AI infrastructure, attracting visionary organizations like OpenAI.
- OCI Supercluster hosts numerous AI innovators, including Adept, Modal, and NVIDIA.
- OCI offers purpose-built AI capabilities, enabling faster and more reliable model development and training.
- OCI Supercluster scales up to 64k NVIDIA Blackwell GPUs for training large language models.
- OCI Compute virtual machines and bare metal NVIDIA GPU instances support various AI applications.
Main AI News:
In a strategic collaboration, Oracle, Microsoft, and OpenAI are teaming up to extend the Microsoft Azure AI platform to Oracle Cloud Infrastructure (OCI). The move aims to provide additional capacity for OpenAI, the AI research and development company behind the widely used ChatGPT service.
Sam Altman, Chief Executive Officer of OpenAI, expressed enthusiasm about the partnership, stating, “We are thrilled to collaborate with Microsoft and Oracle. By leveraging OCI, we can extend the reach of Azure’s platform, empowering OpenAI to scale its operations effectively.”
Larry Ellison, Oracle’s Chairman and Chief Technology Officer, emphasized the increasing demand for advanced AI infrastructure, remarking, “The competition to develop the most advanced large language model is fierce, driving a surge in demand for Oracle’s Gen2 AI infrastructure. Visionary organizations like OpenAI are selecting OCI for its unparalleled speed and cost-efficiency in AI infrastructure.”
OCI’s state-of-the-art AI infrastructure is at the forefront of AI innovation. OpenAI joins a thriving community of AI innovators worldwide who rely on OCI to power their operations. Notable companies such as Adept, Modal, MosaicML, NVIDIA, Reka, Suno, Together AI, Twelve Labs, and xAI have already harnessed the capabilities of OCI Supercluster to develop and deploy cutting-edge AI models.
The specialized AI features offered by OCI empower both startups and enterprises to accelerate the development and training of AI models across Oracle’s distributed cloud infrastructure. For training large language models (LLMs), OCI Supercluster offers unparalleled scalability, supporting up to 64k NVIDIA Blackwell GPUs or GB200 Grace Blackwell Superchips. This infrastructure is complemented by ultra-low-latency RDMA cluster networking and a selection of HPC storage options. Additionally, OCI Compute virtual machines and bare metal NVIDIA GPU instances cater to a wide range of AI applications, including generative AI, computer vision, natural language processing, and recommendation systems.
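For readers curious how such infrastructure is typically consumed, the sketch below shows one plausible way to launch a bare metal NVIDIA GPU instance programmatically with the OCI Python SDK. It is a minimal illustration only: the shape name, OCIDs, and availability domain are placeholder assumptions, not values from this announcement.

```python
# Minimal sketch: launching a bare metal NVIDIA GPU instance via the OCI Python SDK.
# All OCIDs, the availability domain, and the GPU shape name are illustrative placeholders.
import oci

# Load credentials from the default config file (~/.oci/config, DEFAULT profile).
config = oci.config.from_file()
compute = oci.core.ComputeClient(config)

launch_details = oci.core.models.LaunchInstanceDetails(
    compartment_id="ocid1.compartment.oc1..example",   # placeholder compartment OCID
    availability_domain="Uocm:PHX-AD-1",                # placeholder availability domain
    display_name="llm-training-node",
    shape="BM.GPU.H100.8",                              # placeholder bare metal GPU shape
    source_details=oci.core.models.InstanceSourceViaImageDetails(
        image_id="ocid1.image.oc1..example"             # placeholder GPU-enabled image OCID
    ),
    create_vnic_details=oci.core.models.CreateVnicDetails(
        subnet_id="ocid1.subnet.oc1..example"           # placeholder subnet OCID
    ),
)

response = compute.launch_instance(launch_details)
print("Launched instance:", response.data.id)
```

In practice, large-scale training clusters of the kind described above would be provisioned with cluster networking and shared HPC storage rather than a single instance; the example simply shows the basic compute API surface.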
Conclusion:
The collaboration between OpenAI, Microsoft, and Oracle marks a significant advancement in AI infrastructure. With OCI’s robust capabilities, organizations can accelerate AI innovation and meet the growing market demand for advanced AI services. The partnership underscores the importance of efficient and scalable AI infrastructure in driving technological innovation and competitiveness.