Overview
Access scopes are the legacy method of specifying permissions for your instance. Before IAM roles existed, access scopes were the only mechanism for granting permissions to service accounts. By default, your node service account is granted a set of default access scopes.
Rationale
If you are not creating a separate service account for your nodes, you should limit the scopes of the node service account to reduce the possibility of privilege escalation in an attack. This ensures that your default service account does not have permissions beyond those necessary to run your cluster. While the default scopes are limited, they may still exceed the minimum required to run your cluster.
Remediation guidance
Using Console
- Go to the Kubernetes Engine page in the GCP Console at https://console.cloud.google.com/kubernetes/list
- Click CREATE CLUSTER
- Choose the required name/value for the cluster fields
- Click More
- Under Access scopes, select Set access for each API and choose the minimal API access you require
- Click Create
Using Command Line
To create a cluster with least-privileged (custom) access scopes, run the following command:
gcloud container clusters create [CLUSTER_NAME] --zone [COMPUTE_ZONE] --scopes=[CUSTOM_SCOPES]
NOTE: The default scopes for the nodes in Kubernetes Engine are devstorage.read_only, logging.write, monitoring, service.management.readonly, servicecontrol, and trace.append. When setting scopes, these are specified as gke-default. If you are accessing private images in Google Container Registry, the minimally required scopes are only logging.write, monitoring, and devstorage.read_only.
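As a concrete sketch of the minimal-scope case described in the note above: the cluster name and zone below are placeholders, and the command is echoed as a dry run so nothing is created until you remove the `echo`.

```shell
# Minimal scopes per the note above: logging.write, monitoring, and
# devstorage.read_only (enough to pull private GCR images).
MINIMAL_SCOPES="https://www.googleapis.com/auth/logging.write,\
https://www.googleapis.com/auth/monitoring,\
https://www.googleapis.com/auth/devstorage.read_only"

# CLUSTER_NAME and COMPUTE_ZONE are placeholders; drop `echo` to execute.
echo gcloud container clusters create CLUSTER_NAME \
  --zone COMPUTE_ZONE \
  --scopes="$MINIMAL_SCOPES"
```

Note that the broad https://www.googleapis.com/auth/cloud-platform scope is deliberately absent from this list; granting it would defeat the purpose of scope restriction.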
Default Value
By default, 'Allow default access' is chosen under Access scopes when you create a new cluster.
References
- https://cloud.google.com/compute/docs/access/service-accounts?hl=en_US#the_default_service_account
Multiple Remediation Paths
Google Cloud
SERVICE-WIDE (RECOMMENDED when many resources are affected): Enforce Organization Policies at org/folder level so new resources inherit secure defaults.
gcloud org-policies set-policy policy.yaml
ASSET-LEVEL: Use the product-specific remediation steps above for only the impacted project/resources.
PREVENTIVE: Use org policy constraints/custom constraints and enforce checks in deployment pipelines.
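The policy.yaml passed to gcloud org-policies set-policy follows the org-policy v2 YAML schema. As an illustration only: the organization ID below is a placeholder, and the constraint shown (iam.disableServiceAccountKeyCreation, a real boolean constraint) is just an example; choose the constraints relevant to your controls from the constraints catalog.

```yaml
# Illustrative policy.yaml for `gcloud org-policies set-policy policy.yaml`.
# ORG_ID is a placeholder; the constraint is an example boolean constraint.
name: organizations/ORG_ID/policies/iam.disableServiceAccountKeyCreation
spec:
  rules:
  - enforce: true
```

Applying the policy at organization or folder scope means every project beneath it inherits the rule, which is what makes this path preferable when many resources are affected.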
References for Service-Wide Patterns
- GCP Organization Policy overview: https://cloud.google.com/resource-manager/docs/organization-policy/overview
- GCP Organization policy constraints catalog: https://cloud.google.com/resource-manager/docs/organization-policy/org-policy-constraints
- gcloud org-policies: https://cloud.google.com/sdk/gcloud/reference/org-policies
Operational Rollout Workflow
Use this sequence to reduce risk and avoid repeated drift.
1. Contain at Service-Wide Scope First (Recommended)
- Google Cloud: apply organization policy constraints at org/folder scope.
gcloud org-policies set-policy policy.yaml
2. Remediate Existing Affected Assets
- Execute the control-specific Console/CLI steps documented above for each flagged resource.
- Prioritize internet-exposed and production assets first.
3. Validate and Prevent Recurrence
- Re-scan after each remediation batch.
- Track exceptions with owner and expiry date.
- Add preventive checks in IaC/CI pipelines.
Query logic
These are the stored checks tied to this control.
Kubernetes Clusters are created with limited service account Access scopes for Project access
Connectors
- Google Cloud
Covered asset types
Expected check: eq []
{gkeClusters(where: {nodePools_SOME: {nodeConfig: {oauthScopes_INCLUDES: "https://www.googleapis.com/auth/cloud-platform"}}}) {...AssetFragment}}
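The stored query above matches any cluster whose node pools carry the broad cloud-platform scope, and the control passes when the result is empty (eq []). The same condition can be sketched locally with a small shell helper; the function name and the sample scope list are illustrative, not part of the stored check.

```shell
# has_broad_scope: succeeds when a comma-separated scope list contains the
# broad cloud-platform scope that the stored query flags (illustrative helper).
has_broad_scope() {
  printf '%s\n' "$1" | grep -q 'auth/cloud-platform'
}

# The gke-default scope set from the note above: no cloud-platform scope,
# so a cluster with these scopes would not be flagged.
GKE_DEFAULT="https://www.googleapis.com/auth/devstorage.read_only,\
https://www.googleapis.com/auth/logging.write,\
https://www.googleapis.com/auth/monitoring,\
https://www.googleapis.com/auth/service.management.readonly,\
https://www.googleapis.com/auth/servicecontrol,\
https://www.googleapis.com/auth/trace.append"

if has_broad_scope "$GKE_DEFAULT"; then
  echo "NON-COMPLIANT"
else
  echo "COMPLIANT"
fi
```

In practice you would feed the helper the oauthScopes of each node pool, for example from `gcloud container clusters describe`.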