TL;DR:
- Meta CEO Mark Zuckerberg is merging the company’s AI research team with its generative AI product team to build the technology directly into Meta’s products.
- Meta plans to deploy 350,000 Nvidia H100 GPUs by year-end, and roughly 600,000 GPUs in total counting chips from other suppliers.
- The company shifts focus from AI research to integrating AI into social media products and AR/VR hardware.
- Meta introduces a commercial Llama language model, image-generating ad tools, and Meta AI chatbot.
- Zuckerberg says Meta is training the third Llama model and stresses the importance of new devices, such as glasses, for interacting with AI.
Main AI News:
Meta CEO Mark Zuckerberg announced on Thursday that the company is merging its AI research team with its business-focused generative AI team, underscoring a commitment to building the technology directly into Meta’s products. The reorganization marks a significant step in the social media giant’s push to put artificial intelligence at the center of its business.
To support that effort, Meta is rapidly expanding its computing infrastructure. By the end of the year, the company plans to deploy approximately 350,000 H100 GPUs designed by Nvidia. Counting comparable chips sourced from other suppliers, Meta expects to field a total of roughly 600,000 GPUs by year-end, which would make its fleet one of the largest computing systems in the technology industry.
For comparison, Amazon recently disclosed plans to build a system incorporating 100,000 of its Trainium2 chips, while Oracle announced a system featuring 32,000 Nvidia H100 GPUs. Meta has not named all of its GPU suppliers, but it has publicly said it will also use chips from AMD, and it is reported to be developing a GPU-like chip of its own.
The buildout marks a departure from Meta’s earlier approach, in which AI work was concentrated in its FAIR research team and only lightly integrated into its social media platforms and AR/VR hardware. The breakout success of OpenAI’s ChatGPT chatbot in late 2022 proved a turning point, prompting Meta to establish a dedicated “GenAI” team.
Since then, Meta has released a commercially licensed version of its Llama large language model, rolled out ad tools that generate image backgrounds from text prompts, and launched its “Meta AI” chatbot, which is accessible directly through Ray-Ban smart glasses, a sign of the company’s push to weave AI into everyday experiences.
Zuckerberg also said that Meta is now training a third version of the Llama model. The AI investments dovetail with Meta’s longer-term vision of a metaverse built on augmented and virtual reality, the vision that prompted the company to rebrand as Meta in 2021. Zuckerberg stressed that “new devices,” such as glasses, will be needed for people to interact with AI in that emerging era.
Conclusion:
Meta’s merger of its AI teams and its substantial investment in GPU infrastructure signal a concerted push to put AI at the center of the business. Building the technology into its core products positions the company to compete at the front of the field and could reset market expectations for AI-powered social media and AR/VR experiences.