Global Economic Brief: Intel Partners With Google as Amazon Cloud AI Revenue Surges
- Google and Intel announced an expanded multiyear partnership on April 9, 2026, focused on Google Cloud's continued use of Intel AI infrastructure and the joint development of advanced processors.
- Under the agreement, Google Cloud will continue to deploy Intel Xeon processors across its workload-optimized instances.
- A central component of the partnership is the expanded co-development of custom infrastructure processing units (IPUs).
Google and Intel announced an expanded multiyear partnership on April 9, 2026, focused on Google Cloud's continued use of Intel AI infrastructure and the joint development of advanced processors. The collaboration aims to address the increasing demand for balanced AI infrastructure as the industry moves beyond a primary reliance on accelerators.
Under the agreement, Google Cloud will continue to deploy Intel Xeon processors across its workload-optimized instances. This includes the integration of the latest Intel Xeon 6 processors, which are currently powering C4 and N4 instances. These platforms are designed to support a wide range of tasks, including general-purpose computing, latency-sensitive inference, and the coordination of large-scale AI training.
Expanding Custom Infrastructure Processing Units
A central component of the partnership is the expanded co-development of custom infrastructure processing units (IPUs). This specific collaboration, which began in 2021, focuses on custom ASIC-based IPUs. These programmable accelerators are designed to offload networking, security, and storage functions from the host CPUs.

By offloading these data center tasks, IPUs improve overall system utilization and allow for more predictable performance within hyperscale AI environments. This architectural approach is intended to increase efficiency across Google’s global operations.
"AI is reshaping how infrastructure is built and scaled. Scaling AI requires more than accelerators; it requires balanced systems. CPUs and IPUs are central to delivering the performance, efficiency and flexibility modern AI workloads demand," said Lip-Bu Tan, CEO of Intel.
The Role of CPUs in AI Infrastructure
While graphics processing units (GPUs) are primarily used for the development and training of AI models, central processing units (CPUs) remain critical for running those models and maintaining general AI infrastructure. The industry has seen a growing shortage of CPUs in recent months, leading more companies to refocus their hardware strategies.
This shortage has prompted other industry players to enter the market. Arm Holdings, owned by SoftBank, recently announced the Arm AGI CPU, marking the first chip produced directly by the semiconductor giant in response to the worldwide CPU crunch.
Amin Vahdat, SVP and Chief Technologist of AI Infrastructure at Google, stated that Intel has been a trusted partner for nearly two decades. He noted that Intel’s Xeon roadmap provides the confidence necessary to meet increasing demands for efficiency and performance.
Market Impact and Strategic Momentum
The announcement contributed to a significant rally for Intel Corporation (NASDAQ:INTC). On April 9, 2026, Intel shares rose 1%, extending a five-day rally that saw the stock surge by more than 33%. During this period, Intel’s stock price surpassed $60, fueled by foundry momentum, analyst upgrades, and the Google partnership.
The recovery is also linked to Intel’s involvement in the Terafab project. Intel joined this chip production initiative alongside Tesla, xAI, and SpaceX earlier in the week, further positioning the company as a focal point in the buildout of AI infrastructure.
This strategic shift occurs amid broader trends of massive AI investment across the cloud sector. For example, Alphabet raised its 2025 capital spending forecast by $10 billion to $85 billion to meet cloud demand and maintain competitiveness in the AI race. Similarly, Amazon’s cloud division has reported annual revenues exceeding $15 billion from its AI offerings.
As Google Cloud continues to integrate Intel’s latest hardware, the focus remains on creating a balanced system where CPUs and IPUs complement accelerators to handle the diverse requirements of modern AI workloads.
