Overview
A network policy is a specification of how groups of pods are allowed to communicate with each other and with other network endpoints. NetworkPolicy resources use labels to select pods and define rules that specify what traffic is allowed to the selected pods, giving cluster administrators fine-grained control over which pods may communicate.
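As an illustration, the following is a minimal NetworkPolicy sketch; the namespace, policy name, labels, and port are hypothetical. It allows ingress to pods labeled app: db only from pods labeled app: frontend, on TCP port 5432:

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-frontend-to-db   # hypothetical name
      namespace: example-ns        # hypothetical namespace
    spec:
      podSelector:
        matchLabels:
          app: db                  # the policy applies to pods carrying this label
      policyTypes:
      - Ingress
      ingress:
      - from:
        - podSelector:
            matchLabels:
              app: frontend        # only traffic from these pods is allowed
        ports:
        - protocol: TCP
          port: 5432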
Rationale
By default, pods are non-isolated; they accept traffic from any source. Pods become isolated by having a NetworkPolicy that selects them. Once there is any NetworkPolicy in a namespace selecting a particular pod, that pod will reject any connections that are not allowed by any NetworkPolicy. (Other pods in the namespace that are not selected by any NetworkPolicy will continue to accept all traffic.)
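For example, a minimal "default deny ingress" sketch (policy name and namespace are placeholders) isolates every pod in a namespace: the empty podSelector selects all pods, and because no ingress rules are listed, all inbound traffic to them is denied:

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: default-deny-ingress   # hypothetical name
      namespace: example-ns        # hypothetical namespace
    spec:
      podSelector: {}              # an empty selector matches every pod in the namespace
      policyTypes:
      - Ingress                    # no ingress rules follow, so all inbound traffic is denied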
Remediation guidance
Using Console
- Go to the Kubernetes Engine page in the GCP Console at https://console.cloud.google.com/kubernetes/list
- Select each Kubernetes cluster for which Network policy is disabled
- Click the EDIT button and, under the Cluster section, set "Network policy for master" and "Network policy for nodes" to Enabled
Using Command Line
To enable Network policy for an existing cluster, run the following command:
gcloud container clusters update [CLUSTER_NAME] --zone [COMPUTE_ZONE] --enable-network-policy
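Depending on the cluster's state, GKE may require enabling the Network Policy add-on before enforcement can be turned on. A sketch of the full sequence, using the same placeholders, followed by a verification step:

    # Step 1 (if required): enable the Network Policy add-on on the control plane.
    gcloud container clusters update [CLUSTER_NAME] --zone [COMPUTE_ZONE] --update-addons=NetworkPolicy=ENABLED
    # Step 2: enable network policy enforcement on the nodes (triggers a rolling node recreation).
    gcloud container clusters update [CLUSTER_NAME] --zone [COMPUTE_ZONE] --enable-network-policy
    # Verify: should print True once the update completes.
    gcloud container clusters describe [CLUSTER_NAME] --zone [COMPUTE_ZONE] --format="value(networkPolicy.enabled)"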
Impact
Enabling/Disabling Network Policy causes a rolling update of all cluster nodes, similar to performing a cluster upgrade. This operation is long-running and will block other operations on the cluster (including delete) until it has run to completion.
Default Value
By default, Network Policy is disabled when you create a new cluster.
References
- https://kubernetes.io/docs/concepts/services-networking/network-policies/#the-networkpolicy-resource
- https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.10/#networkpolicy-v1-networking
Notes
Google Kubernetes Engine has partnered with Tigera to provide Project Calico for enforcing network policies within your cluster, so the remediation steps above apply to clusters using this Calico-based enforcement.
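One way to confirm that the Calico enforcement components are running after remediation is to check for calico-node pods in kube-system; the label below is an assumption about GKE's Calico deployment:

    kubectl get pods -n kube-system -l k8s-app=calico-node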
Multiple Remediation Paths
Google Cloud
SERVICE-WIDE (RECOMMENDED when many resources are affected): Enforce Organization Policies at the org/folder level so new resources inherit secure defaults (a sample constraint sketch follows this list).
gcloud org-policies set-policy policy.yaml
ASSET-LEVEL: Use the product-specific remediation steps above for only the impacted project/resources.
PREVENTIVE: Use org policy constraints/custom constraints and enforce checks in deployment pipelines.
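To illustrate the service-wide and preventive paths, here is a hedged sketch of a custom organization policy constraint requiring network policy on new or updated GKE clusters. The constraint name, [ORGANIZATION_ID], and the condition field path are assumptions, not a predefined constraint:

    # custom-constraint.yaml (hypothetical)
    name: organizations/[ORGANIZATION_ID]/customConstraints/custom.gkeRequireNetworkPolicy
    resourceTypes:
    - container.googleapis.com/Cluster
    methodTypes:
    - CREATE
    - UPDATE
    condition: "resource.networkPolicy.enabled == true"   # assumed field path on the Cluster resource
    actionType: ALLOW
    displayName: Require network policy on GKE clusters

    # policy.yaml (hypothetical) - enforces the custom constraint at the organization level
    name: organizations/[ORGANIZATION_ID]/policies/custom.gkeRequireNetworkPolicy
    spec:
      rules:
      - enforce: true

Apply the constraint first, then the policy:

    gcloud org-policies set-custom-constraint custom-constraint.yaml
    gcloud org-policies set-policy policy.yaml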
References for Service-Wide Patterns
- GCP Organization Policy overview: https://cloud.google.com/resource-manager/docs/organization-policy/overview
- GCP Organization policy constraints catalog: https://cloud.google.com/resource-manager/docs/organization-policy/org-policy-constraints
- gcloud org-policies: https://cloud.google.com/sdk/gcloud/reference/org-policies
Operational Rollout Workflow
Use this sequence to reduce risk and avoid repeated drift.
1. Contain at Service-Wide Scope First (Recommended)
- Google Cloud: apply organization policy constraints at org/folder scope.
gcloud org-policies set-policy policy.yaml
2. Remediate Existing Affected Assets
- Execute the control-specific Console/CLI steps documented above for each flagged resource.
- Prioritize internet-exposed and production assets first.
3. Validate and Prevent Recurrence
- Re-scan after each remediation batch.
- Track exceptions with owner and expiry date.
- Add preventive checks in IaC/CI pipelines (see the CI gate sketch after this list).
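As one way to implement the preventive check in step 3, a minimal CI gate sketch follows. It assumes gcloud is authenticated against the target project, and the filter expression is an assumption about the fields exposed by the cluster list output:

    #!/usr/bin/env bash
    # ci-check-network-policy.sh (hypothetical): fail the pipeline if any GKE cluster
    # in the current project lacks network policy enforcement.
    set -euo pipefail

    VIOLATIONS=$(gcloud container clusters list \
      --format="value(name)" \
      --filter="NOT networkPolicy.enabled=true")

    if [[ -n "${VIOLATIONS}" ]]; then
      echo "Clusters without network policy enforcement:" >&2
      echo "${VIOLATIONS}" >&2
      exit 1
    fi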
Query logic
These are the stored checks tied to this control.
Check: Network policy is enabled on Kubernetes Engine Clusters
Connectors: Google Cloud
Covered asset types: gkeClusters
Query: gkeClusters(where:{networkPolicyEnabled:false}){...AssetFragment}
Expected check: eq [] (the query for non-compliant clusters must return an empty list)