Overview
A private cluster is a cluster whose master is inaccessible from the public internet. In a private cluster, nodes do not have public IP addresses, so your workloads run in an environment that is isolated from the internet. Nodes have addresses only in the private RFC 1918 address space. Nodes and masters communicate with each other privately using VPC peering.
Rationale
With a private cluster enabled, VPC network peering gives you several advantages over using external IP addresses or VPNs to connect networks, including:
- Network Latency: Public IP networking suffers higher latency than private networking.
- Network Security: Service owners do not need to expose their services to the public Internet and deal with the associated risks.
- Network Cost: GCP charges egress bandwidth pricing for networks that use external IPs to communicate, even if the traffic is within the same zone. If, however, the networks are peered, they can use internal IPs to communicate and save on those egress costs. Regular network pricing still applies to all traffic.
Remediation guidance
Using Console
- Go to the Kubernetes Engine page in the GCP Console by visiting https://console.cloud.google.com/kubernetes/list
- Click CREATE CLUSTER
- Choose the required name/value for the cluster fields
- Click More
- From the Private cluster drop-down menu, select Enabled
- Verify that VPC native (alias IP) is set to Enabled
- Set Master IP range to your required IP range
- Click Create
Using Command Line
To create cluster with Private cluster enabled, run the following command:
gcloud beta container clusters create [CLUSTER_NAME] --zone [COMPUTE_ZONE] --private-cluster --master-ipv4-cidr 172.16.0.16/28 --enable-ip-alias --create-subnetwork ""
NOTE: When you create a private cluster, you must specify a /28 CIDR range for the VMs that run the Kubernetes master components. You also need to enable Alias IPs. The range you specify for the masters must not overlap with any subnet in your cluster's VPC.
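To confirm the setting after creation, you can inspect the cluster's private-cluster configuration (a sketch; assumes the [CLUSTER_NAME] and [COMPUTE_ZONE] values used above):

```
# Print the private cluster configuration; a populated
# privateClusterConfig block indicates the setting is enabled.
gcloud container clusters describe [CLUSTER_NAME] --zone [COMPUTE_ZONE] \
    --format="yaml(privateClusterConfig)"
```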
Default Value
By default, Private cluster is disabled when you create a new cluster.
References
- https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters
Notes
This is a Beta release of private clusters. This feature is not covered by any SLA or deprecation policy and might be subject to backward-incompatible changes.
Multiple Remediation Paths
Google Cloud
SERVICE-WIDE (RECOMMENDED when many resources are affected): Enforce Organization Policies at org/folder level so new resources inherit secure defaults.
gcloud org-policies set-policy policy.yaml
ASSET-LEVEL: Use the product-specific remediation steps above for only the impacted project/resources.
PREVENTIVE: Use org policy constraints/custom constraints and enforce checks in deployment pipelines.
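The service-wide command above expects a policy file in the org-policy YAML format. A minimal sketch follows; ORG_ID and CONSTRAINT_NAME are placeholders for your organization ID and the constraint you choose to enforce, not real values:

```yaml
# policy.yaml -- minimal boolean-constraint policy (placeholder names)
name: organizations/ORG_ID/policies/CONSTRAINT_NAME
spec:
  rules:
  - enforce: true
```

Apply it with `gcloud org-policies set-policy policy.yaml` at the org or folder scope.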
References for Service-Wide Patterns
- GCP Organization Policy overview: https://cloud.google.com/resource-manager/docs/organization-policy/overview
- GCP Organization policy constraints catalog: https://cloud.google.com/resource-manager/docs/organization-policy/org-policy-constraints
- gcloud org-policies: https://cloud.google.com/sdk/gcloud/reference/org-policies
Operational Rollout Workflow
Use this sequence to reduce risk and avoid repeated drift.
1. Contain at Service-Wide Scope First (Recommended)
- Google Cloud: apply organization policy constraints at org/folder scope.
gcloud org-policies set-policy policy.yaml
2. Remediate Existing Affected Assets
- Execute the control-specific Console/CLI steps documented above for each flagged resource.
- Prioritize internet-exposed and production assets first.
3. Validate and Prevent Recurrence
- Re-scan after each remediation batch.
- Track exceptions with owner and expiry date.
- Add preventive checks in IaC/CI pipelines.
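As one example of a preventive pipeline check, a CI step could fail any Terraform plan that creates a GKE cluster without a private cluster configuration. This is a hypothetical sketch; it assumes terraform and jq are available and that the plan has been saved as tfplan:

```
# Fail (non-zero exit) if any planned google_container_cluster
# resource lacks a private_cluster_config block.
terraform show -json tfplan | jq -e '
  [.resource_changes[]?
   | select(.type == "google_container_cluster")
   | select((.change.after.private_cluster_config // []) | length == 0)
  ] | length == 0'
```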
Query logic
These are the stored checks tied to this control.
Kubernetes Cluster is created with Private cluster enabled
Connectors: Google Cloud
Covered asset types: gkeClusters
Expected check: eq []
gkeClusters(where:{privateClusterConfig:null}){...AssetFragment}