Overview
GKE network isolation reduces exposure by keeping cluster nodes private and by controlling how administrators reach the control plane. Older guidance described this as a private cluster. Current GKE guidance frames this as network isolation, including private nodes, optional private control-plane access, and restricted public access where needed.
Rationale
Private nodes reduce the attack surface because nodes do not need public IP addresses. For sensitive environments, you should also review whether the control plane needs a public endpoint at all, whether authorized networks are enabled, and whether Cloud NAT or other egress paths are configured correctly.
Remediation guidance
Using Google Cloud Console
- Open Kubernetes Engine in the Google Cloud Console.
- Select the affected cluster.
- Open Networking.
- Enable private nodes for the cluster.
- If your security standard requires it, also disable public control-plane access or restrict it with authorized networks.
- Ensure outbound access is handled through Cloud NAT or another approved egress path.
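The Cloud NAT step above can be sketched with gcloud. This is a minimal sketch, assuming a VPC named my-vpc in us-central1; the router and NAT names are placeholders:

```shell
# Create a Cloud Router in the cluster's VPC and region
# (network, region, and resource names are illustrative assumptions).
gcloud compute routers create example-nat-router \
  --network my-vpc \
  --region us-central1

# Attach a NAT config so private nodes can reach the internet for egress.
gcloud compute routers nats create example-nat-config \
  --router example-nat-router \
  --region us-central1 \
  --auto-allocate-nat-external-ips \
  --nat-all-subnet-ip-ranges
```

If you need deterministic egress IPs for allowlisting, reserve static addresses and use --nat-external-ip-pool instead of auto-allocation.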
Using Command Line
For current GKE network isolation, enable private nodes. Private nodes require a VPC-native cluster, which is why --enable-ip-alias is included for clusters that are still route-based:
gcloud container clusters update [CLUSTER_NAME] --location [LOCATION] --enable-private-nodes --enable-ip-alias
If your standard requires the control plane to be internal only, create or update the cluster to use a private endpoint as well. Validate the resulting network isolation settings:
gcloud container clusters describe [CLUSTER_NAME] --location [LOCATION] --format='yaml(controlPlaneEndpointsConfig,privateClusterConfig,networkConfig)'
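If the control plane must be internal only, a new cluster can be created with a private endpoint from the start. This is a sketch under assumed values; the cluster name, location, CIDR ranges, and authorized network are illustrative, not prescriptive:

```shell
# Create a VPC-native cluster with private nodes and an internal-only
# control-plane endpoint (all names and ranges below are placeholders).
gcloud container clusters create example-private-cluster \
  --location us-central1 \
  --enable-ip-alias \
  --enable-private-nodes \
  --enable-private-endpoint \
  --master-ipv4-cidr 172.16.0.32/28 \
  --enable-master-authorized-networks \
  --master-authorized-networks 10.0.0.0/8
```

With --enable-private-endpoint, the control plane is reachable only from networks that can route to its internal address, so plan bastion, VPN, or Interconnect access before cutting over.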
Important rollout note
In current GKE, cluster-level updates to private-node settings might only apply to new node pools. Existing node pools can require migration or recreation. Review the exact cluster mode and version before assuming this is an in-place fix.
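Because the cluster-level setting may not reflect every node pool, it helps to check both levels after rollout. A minimal sketch, assuming the cluster name and location from the examples above (field paths follow the GKE v1 API):

```shell
# Cluster-level flag: is private-nodes enabled on the cluster itself?
gcloud container clusters describe example-private-cluster \
  --location us-central1 \
  --format='value(privateClusterConfig.enablePrivateNodes)'

# Per-node-pool view: older pools may still differ from the cluster setting.
gcloud container node-pools list \
  --cluster example-private-cluster \
  --location us-central1 \
  --format='table(name,networkConfig.enablePrivateNodes)'
```

Any node pool whose value disagrees with the cluster-level setting is a candidate for migration or recreation.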
Better platform fix
For regulated or internet-exposed environments, define a standard network-isolation baseline: private nodes by default, explicit control-plane access rules, Cloud NAT for egress, and authorized networks if IP-based endpoints remain enabled.
References
- https://cloud.google.com/kubernetes-engine/docs/concepts/network-isolation
- https://cloud.google.com/kubernetes-engine/docs/how-to/latest/network-isolation
- https://cloud.google.com/kubernetes-engine/docs/how-to/legacy/network-isolation
Service-wide remediation
Recommended when many resources are affected: remediate the cluster networking baseline once, not cluster by cluster.
Update landing-zone templates and GKE modules so new clusters use the approved network-isolation model by default.
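Before updating templates, it is useful to inventory which existing clusters fall outside the baseline. A hedged sketch, assuming a placeholder project ID:

```shell
# List clusters in a project where private nodes are not enabled
# (project ID is a placeholder; run per project or via asset inventory).
gcloud container clusters list \
  --project example-project \
  --filter='privateClusterConfig.enablePrivateNodes!=true' \
  --format='table(name,location,privateClusterConfig.enablePrivateNodes)'
```

The resulting list can drive the migration order described under Operational rollout.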
Operational rollout
- Confirm Shared VPC, Private Google Access, and Cloud NAT prerequisites first.
- Migrate non-production clusters before production clusters.
- Re-scan after the node-pool changes have completed.
Query logic
These are the stored checks tied to this control.
Check: Kubernetes Cluster is created with Private cluster enabled
Connectors: Google Cloud
Covered asset types: gkeClusters
Expected result: eq [] (the query should match no assets)
Query:
gkeClusters(where:{privateClusterConfig:null}){...AssetFragment}