Google Cloud Delivers Customized Silicon Powered by Arm Neoverse for General-Purpose Compute and AI Inference Workloads
By Mohamed Awad, SVP and GM of the Infrastructure Business, Arm
Cloud providers are choosing Arm Neoverse to optimize their full stack, from silicon to software. Today, Google Cloud introduced custom Google Axion Processors, based on Neoverse V2, for general-purpose compute and AI inference workloads. Axion will power instances that deliver up to 60% better energy efficiency and up to 50% more performance than comparable current-generation x86-based instances.
Arm Neoverse performance efficiency enables new heights of innovation
Custom silicon can be designed to meet the specific requirements of demanding modern workloads and increasingly optimized data center infrastructure. Arm engages closely with partners to evolve our architecture and CPU designs to target their key workloads. Google is driving custom silicon innovation with its new Arm Neoverse-based Axion CPU, delivering higher performance, lower power consumption and greater scalability than legacy, off-the-shelf processors.
Built on the Armv9-based Neoverse V2, Axion is a significant step as Google Cloud rearchitects its data centers for the age of AI and embraces more customization to drive better performance efficiency, extending the capabilities of its general-purpose compute fleet. Axion CPUs also continue Google Cloud’s custom silicon efforts and are specifically designed to bring more workload performance and energy efficiency to its customers.
Google’s fleet of services, including Google Earth Engine and YouTube Ads, is already running on Arm-based servers, with plans to scale deployments on Axion. Customer workloads will be deployed and scaled over time, including CPU-based AI training and inference alongside general-purpose workloads such as web and application servers, containerized microservices, open-source databases, in-memory caches, data analytics engines, and media processing.
Google Cloud chose Arm for its performance, efficiency and flexibility to innovate. Combined with a robust software ecosystem, widespread industry adoption and cross-platform compatibility, this greatly simplifies integration with existing applications and tools. By building on Arm, Google Cloud gains access to the tens of thousands of cloud customers who have already deployed their workloads on Arm. Through our collaboration on initiatives like the SystemReady Virtual Environment (VE) certification and OpenXLA, and Google’s long history of optimizing Android, Kubernetes, and TensorFlow for the Arm architecture, time to value for Arm workloads on Google Cloud will shorten and customer confidence will grow.
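To illustrate that cross-platform compatibility, the short sketch below (a hypothetical example, not code from Google or Arm) checks the host architecture and runs a tiny TensorFlow inference step; the same Python source runs unchanged on an aarch64 (Arm) instance and on an x86-64 instance, provided the packages are installed for that architecture.

```python
# Hypothetical sketch: the same inference code runs unchanged on Arm and x86.
import platform
import numpy as np
import tensorflow as tf

print(f"Host architecture: {platform.machine()}")  # 'aarch64' on an Arm-based VM

# Tiny stand-in model; a real workload would load a trained SavedModel instead.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(32,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(10),
])

batch = np.random.rand(8, 32).astype("float32")
logits = model(batch)   # identical call on either architecture
print(logits.shape)     # (8, 10)
```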
Delivering more in the AI era
The world is internalizing the profound changes that AI can bring to societies, particularly as hundreds of millions of users experience generative AI in the real world. Today, cloud providers are moving quickly to accommodate the rapidly increasing demand for AI.
Arm Neoverse is fundamental to this transition, enabling more computation per watt of power consumed. AI developers can run trained models on the CPU using a fraction of the energy and in one-third the time compared to previous systems based on legacy architectures. Developers can also accelerate inference performance significantly, which lowers operating costs and makes more efficient use of computing resources. Across the industry, Neoverse also provides unmatched flexibility for on-chip or chip-to-chip integration of compute acceleration engines such as NPUs or GPUs, enabling more efficient designs for generative AI.
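As a rough illustration of the CPU-based inference described above, the sketch below times repeated inference calls through ONNX Runtime's CPU execution provider. The model path, input name, shapes and thread count are placeholders chosen for the example; nothing here is specific to Axion or measured on Neoverse hardware.

```python
# Rough sketch of timing CPU inference with ONNX Runtime.
# "model.onnx", the input name "input", the shapes and the thread count
# are placeholders for illustration only.
import time
import numpy as np
import onnxruntime as ort

opts = ort.SessionOptions()
opts.intra_op_num_threads = 8          # typically matched to the instance's vCPUs

session = ort.InferenceSession(
    "model.onnx",
    sess_options=opts,
    providers=["CPUExecutionProvider"],
)

x = np.random.rand(1, 3, 224, 224).astype(np.float32)

start = time.perf_counter()
for _ in range(100):
    session.run(None, {"input": x})
elapsed = time.perf_counter() - start
print(f"Mean latency: {elapsed / 100 * 1000:.2f} ms per request")
```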
Together, Google Cloud and Arm are delivering Arm Neoverse-based Axion CPUs to offer more value and choice to customers. The collaboration on Axion underscores continued innovation on the cloud-native frontier, driving further transformation across computing infrastructure, an infrastructure that runs on Arm.