Source: Reuters

OpenAI is working with Broadcom and TSMC to develop its first customised chip designed to support its artificial intelligence systems, while adding AMD chips alongside Nvidia's to meet its growing infrastructure needs, according to people who spoke to Reuters.

OpenAI, the rapidly expanding company behind ChatGPT, has explored a range of options to lower costs and diversify its chip supply. It considered building everything in-house and raising capital for a costly plan to construct a network of chip-manufacturing factories known as "foundries". According to sources, who asked to remain anonymous because they were not authorised to discuss private matters, the company has abandoned those ambitious foundry plans for now owing to the time and expense required to build such a network, and intends instead to concentrate on its in-house chip design efforts.


The company's strategy, detailed here for the first time, shows how the Silicon Valley startup is using industry partnerships and a mix of internal and external approaches to secure chip supply and manage costs, much like its larger rivals Amazon, Meta, Google, and Microsoft. As one of the biggest buyers of chips, OpenAI's decision to source from a diverse range of chipmakers while developing its customised chip could have wider ramifications for the tech sector.

Following the report, Broadcom's stock surged, closing Tuesday's trading up more than 4.5%. AMD's stock extended its morning gains to end the day up 3.7%. OpenAI, TSMC, and AMD all declined to comment, while Broadcom did not immediately respond to a request for comment.

OpenAI, which helped commercialise generative AI that produces human-like responses to queries, requires a significant amount of computing power to train and operate its systems. As one of the largest buyers of Nvidia's GPUs, OpenAI uses AI chips both to train models, where the AI learns from data, and for inference, applying AI to make predictions or decisions based on new information.

Reuters previously reported on OpenAI's chip design efforts, and The Information covered its talks with Broadcom and others. According to reports, OpenAI and Broadcom have been working for months on the company's first AI chip, which is focused on inference. Demand for training chips is currently greater, but analysts have forecast that the need for inference chips could surpass it as more AI applications are deployed.

Companies such as Alphabet unit Google rely on Broadcom to help them refine chip designs for manufacturing. Broadcom also supplies design components that help move information on and off chips quickly, a capability that is crucial in AI systems, where tens of thousands of chips are connected to work in unison.

According to two of the people, OpenAI is still deciding whether to develop or acquire other components for its chip design and may work with additional partners. The company has assembled a chip team of about 20 people, led by top engineers such as Thomas Norrie and Richard Ho, who previously designed Tensor Processing Units (TPUs) at Google.

According to sources, OpenAI has secured manufacturing capacity with Taiwan Semiconductor Manufacturing Company (TSMC) through Broadcom to produce its first bespoke chip in 2026, though they noted the timeline could change. Nvidia's GPUs currently account for more than 80% of the market, but shortages and rising costs have led major customers such as Microsoft, Meta, and now OpenAI to seek in-house or external alternatives.

OpenAI's planned use of AMD chips through Microsoft's Azure, first reported here, shows how AMD's new MI300X chips are attempting to capture a slice of the market dominated by Nvidia. AMD has forecast $4.5 billion in AI chip sales in 2024, after launching the chip in the fourth quarter of 2023.

Running ChatGPT and training AI models is expensive. OpenAI has projected a $5 billion loss this year on $3.7 billion in revenue, according to sources. Compute costs, covering the hardware, electricity, and cloud services needed to process large datasets and develop models, are the company's biggest expense, which has driven its efforts to diversify suppliers and maximise utilisation.
