Google to get surprising new partner to help design its seventh-gen AI chip

Google has been working with Broadcom to design the Alphabet subsidiary's AI accelerator chips, known as Tensor Processing Units (TPUs). Keep in mind that these are not the same as the Tensor Gx application processors that power Pixel devices. According to a fresh report, Google might replace Broadcom as its design partner on the TPU team. The report says that Taiwanese chip designer MediaTek will step in to work on the new TPUs, which will be Google's seventh-generation AI chips.
Despite this report, Google is reportedly not cutting ties with Broadcom, although there are solid reasons for it to select MediaTek. One is that MediaTek has a strong relationship with Taiwan-based TSMC, the world's largest chip foundry, which would allow MediaTek to charge Google less per chip than Broadcom does. Google is believed to have spent between $6 billion and $9 billion on TPUs last year, according to Omdia.
Google designed the TPU AI accelerators to lower its reliance on Nvidia's GPUs, the chips most widely used to train AI models. The TPUs are customized for AI workloads and are used for Google's internal workloads as well as by Google Cloud customers. As a result, Google is not as reliant on Nvidia as other major AI players are. Rivals such as OpenAI and Meta Platforms remain heavily dependent on Nvidia, which can backfire when a shortage exists.

Google TPU for Machine Learning. | Image credit-Google
For example, OpenAI CEO Sam Altman said at the end of last month that his company had run out of Nvidia GPUs, forcing OpenAI to stagger the release of its new GPT-4.5 model. In case you've ever wondered why AI models use GPU chips instead of CPU chips, here's the answer.
GPUs were built to render graphics and images on devices like smartphones and are known for their ability to process large amounts of data simultaneously. That parallelism is a good match for the matrix math at the heart of AI models. CPUs, by contrast, process data largely sequentially, which makes them less efficient for AI workloads.
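To make the GPU-vs-CPU point concrete, here is a small illustrative sketch (not from the article) in Python with NumPy. The same matrix multiply that sits at the core of a neural-network layer can be written as one bulk operation, full of independent multiply-adds that a GPU or TPU can run in parallel, or as a naive element-by-element loop, which mirrors the sequential style a CPU follows. The `matmul_sequential` helper is a hypothetical name used only for this demonstration.

```python
import numpy as np

# Toy matrices standing in for a neural-network layer's weights and inputs.
rng = np.random.default_rng(0)
a = rng.standard_normal((32, 32))
b = rng.standard_normal((32, 32))

# "Parallel-friendly" form: one bulk matrix product. Every output cell is an
# independent dot product, so the work maps naturally onto thousands of
# GPU/TPU lanes running simultaneously.
c_bulk = a @ b

def matmul_sequential(x, y):
    """Same arithmetic, one scalar multiply-add at a time -- the sequential
    style a plain CPU loop would follow (hypothetical helper for illustration)."""
    n, k = x.shape
    _, m = y.shape
    out = np.zeros((n, m))
    for i in range(n):          # one row at a time...
        for j in range(m):      # ...one column at a time...
            s = 0.0
            for p in range(k):  # ...one multiply-add at a time
                s += x[i, p] * y[p, j]
            out[i, j] = s
    return out

c_loop = matmul_sequential(a, b)
```

Both paths compute identical results; the difference is that the bulk form exposes all the independent multiply-adds at once, which is exactly the shape of work accelerators like GPUs and TPUs are built to exploit.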