TL;DR:
- Big Tech companies, including Microsoft and Google, face difficulties in making AI products like ChatGPT profitable.
- Operating large AI models demands substantial investments in powerful servers and energy-consuming chips.
- The cost of running AI models, like ChatGPT, is high, with some services leading to operational losses.
- AI computations differ from traditional software, making flat-fee models risky due to increased usage driving up costs.
- Companies employ strategies like introducing expensive AI-backed upgrades or exploring cost-effective alternatives.
- Microsoft’s GitHub Copilot operates at a loss despite its popularity, with some users costing the company up to $80 per month.
- The preference for powerful AI models, such as GPT-4, contributes to the high costs, but advancements in AI hardware may alleviate this.
- The industry may shift from enthusiasm and experimental budgets to a focus on AI models’ contribution to company profitability.
Main AI News:
Companies such as Microsoft and Google have channeled significant investments into AI technologies, like ChatGPT, with the aspiration of transforming them into lucrative ventures. However, the operation of advanced AI models has emerged as a formidable obstacle, with services like Microsoft’s GitHub Copilot incurring substantial operational losses.
The expenditure associated with operating generative AI models, particularly large language models (LLMs) like those powering ChatGPT, cannot be overstated. These models demand formidable servers equipped with power-hungry, high-end chips. For instance, a recent analysis, as cited in a Reuters report, suggests that each ChatGPT query may cost up to 4 cents to execute. Consequently, corporate customers are expressing discontent with the elevated operational expenses of these AI models, as noted by Adam Selipsky, CEO of Amazon Web Services.
The prevailing cost challenge stems from the nature of AI computation: each query typically requires fresh, compute-intensive inference, whereas conventional software can serve additional users at near-zero marginal cost and so benefits from economies of scale. This makes flat-fee pricing for AI services precarious, since heavier customer usage directly drives up operating costs and can turn into financial losses for the provider.
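To make that risk concrete, here is a back-of-the-envelope sketch. The roughly 4-cent-per-query figure comes from the estimate cited above; the flat fee and the usage levels are hypothetical, chosen only to illustrate how the margin flips sign as usage grows:

```python
# Illustrative flat-fee economics for an AI service.
# COST_PER_QUERY reflects the ~4-cent estimate cited above;
# the flat fee and query volumes are hypothetical.
COST_PER_QUERY = 0.04   # dollars per query (cited estimate)
FLAT_FEE = 10.00        # dollars per user per month (hypothetical)

def monthly_margin(queries_per_month: int) -> float:
    """Revenue minus compute cost for one flat-fee user."""
    return FLAT_FEE - COST_PER_QUERY * queries_per_month

# A light user is profitable...
print(monthly_margin(100))   # $10 fee - $4 compute = $6 margin
# ...but a heavy user drives the margin negative.
print(monthly_margin(500))   # $10 fee - $20 compute = -$10 margin
```

Under usage-based pricing the provider's margin would stay roughly constant per query; under a flat fee, every additional query eats into it, which is exactly the exposure described above.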
In response to this conundrum, some companies are striving to mitigate costs, while others are doubling down on their investments in AI technology. Microsoft and Google have introduced pricier AI-backed enhancements to their existing software services. Meanwhile, Zoom has reportedly sought cost reductions by occasionally employing a simpler in-house AI model for certain tasks. Adobe is addressing the issue by imposing activity caps and charging based on usage, while Microsoft and Google predominantly adhere to flat-fee structures.
Chris Young, Microsoft’s Head of Corporate Strategy, believes that realizing a return on AI investments will necessitate additional time as organizations navigate the optimal ways to harness their potential. “We’re clearly at a place where now we’ve got to translate the excitement and the interest level into true adoption,” he expressed to the outlet.
Notably, the WSJ report reveals that Microsoft’s GitHub Copilot, designed to assist app developers by generating code, has been operating at a loss despite attracting over 1.5 million users and integrating into nearly half of their coding projects. While users pay a flat fee of $10 per month for the service, the actual cost to Microsoft averages over $20 per user per month, according to an insider. In some instances, individual power users have burdened the company with costs as high as $80 per month.
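Plugging the reported Copilot figures into the same kind of arithmetic shows the scale of the gap. Only the $10 fee, the $20-plus average cost, the $80 power-user cost, and the 1.5 million user count come from the report; treating $20 as the exact average is a simplifying assumption:

```python
# Per-user Copilot economics using the figures in the report.
# Assumption: the "over $20" average cost is taken as exactly $20.
FLAT_FEE = 10      # dollars per user per month (reported price)
AVG_COST = 20      # reported average cost per user per month
POWER_COST = 80    # reported cost for the heaviest users
USERS = 1_500_000  # reported user count

avg_loss = AVG_COST - FLAT_FEE      # $10 lost on a typical user
power_loss = POWER_COST - FLAT_FEE  # $70 lost on a power user

# At a $10 average shortfall across 1.5M users, the implied
# monthly compute loss would be on the order of $15M.
implied_monthly_shortfall = USERS * avg_loss
print(avg_loss, power_loss, implied_monthly_shortfall)
```

This is a rough extrapolation from the reported numbers, not a figure from the WSJ itself, but it illustrates why a popular flat-fee AI product can still lose money.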
One of the key reasons behind the high costs of AI services lies in some companies' preference for the most powerful models available. For instance, Microsoft employs OpenAI's GPT-4, one of the largest and most expensive AI models to operate, for numerous AI functionalities, and it demands substantial computational resources. The WSJ humorously likened employing this model for rudimentary tasks like email summarization to "getting a Lamborghini to deliver a pizza," underscoring that overly capable AI models can be overkill for simple functions.
In light of this, Microsoft has begun exploring more cost-effective alternatives for its Bing Chat search engine assistant, including Meta’s Llama 2 language model. However, with advancements in AI acceleration hardware over time, the operational costs of these intricate models are likely to decrease. Whether these advancements can align with the current fervor surrounding AI remains uncertain.
While enthusiasm for the AI sector persists, the WSJ suggests that we may be approaching a zenith before a dose of reality sets in. Some experts anticipate a more fiscally prudent approach in the foreseeable future. May Habib, CEO of generative AI firm Writer, opined, “Next year, I think, is the year that the slush fund for generative AI goes away,” hinting at a shift from enthusiasm and experimental budgets to a phase where the focus is squarely on the contribution of AI models to company profitability.
Conclusion:
The profitability challenge faced by Big Tech in the AI space reflects the tension between innovation and financial sustainability. As companies grapple with escalating operational costs, they must strike a delicate balance between offering cutting-edge AI services and ensuring profitability. This challenge underscores the need for continuous technological advancements and strategic pricing models in the evolving AI market.