# How are Intel and Google Collaborating to Enhance AI Cloud Infrastructure?
Intel and Google are significantly expanding their collaboration to support the advancement of next-generation AI cloud infrastructure. This multiyear agreement aims to increase the deployment of Intel's Xeon processors within Google Cloud. Alongside this, both companies will work together to create custom application-specific integrated circuit (ASIC)-based infrastructure processing units (IPUs). These IPUs are designed to enhance performance, efficiency, and scalability specifically for artificial intelligence workloads.
This strategic partnership pairs general-purpose central processing units (CPUs) with purpose-built infrastructure accelerators, creating a robust, scalable system for large-scale AI training, inference, and other computing tasks, and enabling enterprises and developers to innovate more effectively within the rapidly expanding AI ecosystem.
Amin Vahdat, Google’s Senior Vice President and Chief Technologist of AI Infrastructure, stated that AI infrastructure significantly relies on CPUs and accelerators throughout all deployment phases. He emphasized that Intel’s commitment to developing the Xeon family of processors is vital for Google to handle the escalating demands of its workloads.
As a testament to the partnership's potential impact, Intel's shares rose notably at market opening today, bringing the stock's year-to-date gain to around 62%, as reported by Yahoo Finance.