This document explains how to run Arm workloads on Google Kubernetes Engine (GKE). You can run Arm workloads in the following ways:
- GKE Autopilot mode: on the Autopilot container-optimized compute platform, explicitly request the Arm architecture and the `autopilot-arm` ComputeClass for general-purpose workloads. To request specific hardware, use the `Performance` or `Scale-Out` compute classes.
- GKE Standard mode: using the C4A, N4A, or Tau T2A machine series.
You can run single-architecture Arm images or multi-architecture (multi-arch) images compatible with both x86 and Arm processors. To learn about the benefits of Arm, see Arm VMs on Compute.
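As an illustration, here is a hedged sketch of an Autopilot Pod spec that requests both the Arm architecture and the `autopilot-arm` ComputeClass, assuming the class is requested through the `cloud.google.com/compute-class` node selector as with other Autopilot compute classes (the Pod name and image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: arm-pod                                    # placeholder name
spec:
  nodeSelector:
    cloud.google.com/compute-class: autopilot-arm  # assumed selector key for the class
    kubernetes.io/arch: arm64                      # standard Kubernetes architecture label
  containers:
  - name: app
    image: example.com/app:latest                  # placeholder Arm or multi-arch image
```

Both selectors matter: the compute class places the Pod on the intended platform, and the architecture label ensures the Pod only lands on Arm nodes.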
Run Arm workloads on GKE
See the following for more information about choosing workloads to deploy on Arm and preparing those workloads for deployment:
Choosing workloads to run on Arm: Consider the benefits of the following options:
- Autopilot container-optimized compute platform: Recommended for general-purpose Arm workloads in Autopilot clusters, providing Pod-based billing and elasticity without requiring you to manage specific machine types.
Specific machine families: For workloads requiring specific hardware characteristics, consider the following machine types. For more information, see the table in General-purpose machine family for Compute Engine:
- C4A nodes provide Arm-based compute that delivers consistently high performance for your most performance-sensitive Arm-based workloads.
- N4A nodes provide Arm-based compute that balances price and performance.
- T2A nodes are appropriate for more flexible workloads, or workloads that rely on horizontal scale-out.
Deploying across architectures: With GKE, you can use multi-arch images to deploy one image manifest across nodes with different architectures, including Arm.
- To ensure that your container image is Arm-compatible and can run on your targeted architectures, see Build multi-architecture images for Arm workloads.
- To follow a tutorial for using multi-arch images to deploy across architectures, see Migrate x86 application on GKE to multi-arch with Arm.
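Because a multi-arch image manifest resolves to a per-architecture image at pull time, it can help to know the architecture names that registries use (`amd64`, `arm64`) versus what the host reports (`x86_64`, `aarch64`). A small illustrative Python sketch; the `ARCH_ALIASES` mapping and `image_arch` helper are hypothetical, not part of any GKE or container-registry API:

```python
import platform

# Map common host machine identifiers to container-image architecture names.
# The mappings below are the usual ones, but the exact string your platform
# reports can vary (this helper is illustrative, not a real API).
ARCH_ALIASES = {
    "x86_64": "amd64",
    "amd64": "amd64",
    "aarch64": "arm64",
    "arm64": "arm64",
}

def image_arch() -> str:
    """Return the container-image architecture name for this host."""
    machine = platform.machine().lower()
    return ARCH_ALIASES.get(machine, machine)

print(image_arch())
```

A workload can use a check like this at startup to log which architecture variant of a multi-arch image it is actually running on.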
Preparing Arm workloads for deployment: Once you have an Arm-compatible image, use node affinity rules and node selectors to make sure your workload is scheduled to nodes with a compatible architecture type.
- Autopilot clusters: see Deploy Autopilot workloads on Arm architecture.
- Standard clusters: see Prepare an Arm workload for deployment.
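For Standard clusters, the scheduling constraint is typically a node selector on the standard `kubernetes.io/arch` label. A minimal Deployment sketch, assuming that label is set on your Arm nodes (the names and image are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: arm-app                       # placeholder name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: arm-app
  template:
    metadata:
      labels:
        app: arm-app
    spec:
      nodeSelector:
        kubernetes.io/arch: arm64     # schedule only onto Arm nodes
      containers:
      - name: app
        image: example.com/arm-app:latest   # placeholder Arm or multi-arch image
```

A node affinity rule with `requiredDuringSchedulingIgnoredDuringExecution` on the same label achieves the same result with more expressive matching.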
Requirements and limitations
- Arm nodes are available in Google Cloud locations that support Arm architecture. For details, see Available regions and zones.
- The `general-purpose-arm` pod family and `autopilot-arm` compute class are only available in the following regions: `us-east1`, `us-west1`, and `europe-west1`.
- Config Connector and Config Controller are not supported on clusters with Arm node pools.
- See the following requirements and limitations for the C4A virtual machines (VMs) and the `c4a-highmem-96-metal` bare metal instance (Preview), respectively:
  - C4A VMs:
    - To create a cluster that uses Autopilot mode, cluster autoscaling, node auto-provisioning, or ComputeClasses that auto-create node pools, use the following versions or later:
      - 1.30.7-gke.1136000
      - 1.31.3-gke.1056000
    - To create a Standard cluster, use 1.30.4-gke.1213000 or later.
    - To use Local SSDs, use the following versions or later:
      - 1.30.12-gke.1033000
      - 1.31.8-gke.1045000
      - 1.32.1-gke.1357000
    - GKE doesn't support the following features with C4A VMs:
  - The C4A bare metal instance, `c4a-highmem-96-metal` (Preview):
    - To create a cluster that uses Autopilot mode, cluster autoscaling, node auto-provisioning, or ComputeClasses that auto-create node pools, use version 1.35.3-gke.1389000 or later. The following also applies:
      - To configure auto-created node pools that use `c4a-highmem-96-metal`, you must explicitly specify the machine type. If you specify only the C4A machine series, and not the machine type, GKE provisions C4A VMs, not bare metal instances. This behavior applies to both node auto-provisioning and ComputeClasses that auto-create node pools.
    - To create a Standard cluster, use version 1.35.0-gke.2232000 or later.
    - In addition to the limitations of the C4A VMs, GKE doesn't support the following features with `c4a-highmem-96-metal` (Preview):
      - Local SSDs.
      - Provisioning the bare metal instance through optimizing Autopilot Pod performance by choosing a machine series, because you can't explicitly select the machine type.
      - Live migration. For more information, see Manage disruption to GKE nodes that don't live migrate.
See the following requirements and limitations for N4A:
- To create a cluster with N4A nodes that uses Autopilot mode, use GKE version 1.34.1-gke.3403001 or later.
GKE doesn't support the following features with N4A nodes:
- Local SSDs
- Confidential GKE Nodes
- GPUs
- Compact placement
- Simultaneous multi-threading (SMT)
- Persistent disks (use Hyperdisk instead; see Supported disk types for N4A)
- Nested virtualization
- 1 GB hugepages (only 2 MB hugepages supported)
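Because N4A nodes don't support Persistent Disk, stateful workloads on N4A need a Hyperdisk-backed StorageClass. A sketch, assuming the GKE Persistent Disk CSI driver (`pd.csi.storage.gke.io`) and the `hyperdisk-balanced` disk type; the StorageClass name is a placeholder:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hyperdisk-balanced-example    # placeholder name
provisioner: pd.csi.storage.gke.io    # GKE Persistent Disk CSI driver
parameters:
  type: hyperdisk-balanced            # assumed Hyperdisk disk type identifier
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
```

`WaitForFirstConsumer` delays volume provisioning until a Pod is scheduled, so the disk is created in the same zone as the N4A node.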
See the following requirements and limitations for T2A:
GKE doesn't support the following features with T2A nodes:
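The minimum-version requirements throughout this section follow a common pattern: within a listed minor release you need at least the listed patch, and any newer minor release qualifies. A hypothetical Python helper (not part of gcloud or any GKE API) that encodes this interpretation, assuming the standard `MAJOR.MINOR.PATCH-gke.N` version format:

```python
import re

def parse_gke_version(version: str) -> tuple:
    """Parse a GKE version like '1.30.12-gke.1033000' into a sortable tuple."""
    match = re.fullmatch(r"(\d+)\.(\d+)\.(\d+)-gke\.(\d+)", version)
    if not match:
        raise ValueError(f"unrecognized GKE version: {version!r}")
    return tuple(int(part) for part in match.groups())

def meets_minimum(version: str, minimums: list) -> bool:
    """Interpret 'use the following versions or later': within a listed minor
    release, the version must be at least the listed one; a minor release
    newer than all listed minimums also qualifies."""
    v = parse_gke_version(version)
    parsed = [parse_gke_version(m) for m in minimums]
    same_minor = [m for m in parsed if m[:2] == v[:2]]
    if same_minor:
        return v >= min(same_minor)
    return v[:2] > max(m[:2] for m in parsed)

# Example: the C4A VM minimums for autoscaling features from this page.
C4A_MINIMUMS = ["1.30.7-gke.1136000", "1.31.3-gke.1056000"]
print(meets_minimum("1.31.5-gke.100", C4A_MINIMUMS))  # True
```

Under this reading, 1.30.6 patch releases fail the C4A check while any 1.32+ release passes.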
What's next
- Create clusters and node pools with Arm nodes
- Build multi-architecture images for Arm workloads
- Prepare an Arm workload for deployment
- Migrate x86 application on GKE to multi-arch with Arm