Overview
Enable Endpoint Private Access to restrict access to the cluster's control plane to only an allowlist of authorized IPs.
Rationale
Authorized networks are a way of specifying a restricted range of IP addresses that are permitted to access your cluster's control plane. Amazon EKS uses both Transport Layer Security (TLS) and authentication to provide secure access to your cluster's control plane from the public internet. This gives you the flexibility to administer your cluster from anywhere; however, you might want to further restrict access to a set of IP addresses that you control. You can set this restriction by specifying an authorized network. Restricting access to an authorized network can provide additional security benefits for your container cluster, including:
• Better protection from outsider attacks: Authorized networks provide an additional layer of security by limiting external access to a specific set of addresses you designate, such as those that originate from your premises. This helps protect access to your cluster in the case of a vulnerability in the cluster's authentication or authorization mechanism.
• Better protection from insider attacks: Authorized networks help protect your cluster from accidental leaks of control plane certificates from your company's premises. Leaked certificates used from outside AWS and outside the authorized IP ranges (for example, from addresses outside your company) are still denied access.
Impact
When implementing Endpoint Private Access, be careful to ensure all desired networks are on the allowlist to prevent inadvertently blocking external access to your cluster's control plane.
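Before applying an allowlist, it can help to confirm that the IP you administer the cluster from actually falls inside a candidate CIDR. The following is a minimal sketch in plain Bash (no external tools); the IP and CIDR values are examples only, and the `ip_to_int`/`in_cidr` helper names are illustrative, not part of any AWS tooling:

```shell
# Convert a dotted-quad IPv4 address to a 32-bit integer.
ip_to_int() {
  local IFS=.
  set -- $1
  echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}

# Print "yes" if the IP falls inside the CIDR, "no" otherwise.
# usage: in_cidr <ip> <cidr>
in_cidr() {
  local ip base prefix mask
  ip=$(ip_to_int "$1")
  base=$(ip_to_int "${2%/*}")
  prefix=${2#*/}
  mask=$(( (0xFFFFFFFF << (32 - prefix)) & 0xFFFFFFFF ))
  if [ $(( ip & mask )) -eq $(( base & mask )) ]; then echo yes; else echo no; fi
}

in_cidr 203.0.113.25 203.0.113.0/24   # yes
in_cidr 198.51.100.7 203.0.113.0/24   # no
```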
Audit
Check that Endpoint Private Access is 'true':
export CLUSTER_NAME=<your cluster name>
aws eks describe-cluster --name ${CLUSTER_NAME} --query "cluster.resourcesVpcConfig.endpointPrivateAccess"
Check whether Endpoint Public Access is enabled:
aws eks describe-cluster --name ${CLUSTER_NAME} --query "cluster.resourcesVpcConfig.endpointPublicAccess"
If it is 'true', check that the public access CIDRs are not null and do not include "0.0.0.0/0":
aws eks describe-cluster --name ${CLUSTER_NAME} --query "cluster.resourcesVpcConfig.publicAccessCidrs"
Default value
By default, Endpoint Public Access is enabled and Endpoint Private Access is disabled.
Remediation guidance
By enabling private endpoint access to the Kubernetes API server, all communication between your nodes and the API server stays within your VPC. You can also limit the IP addresses that can access your API server from the internet, or completely disable internet access to the API server. Update your cluster using the AWS CLI to ensure that private endpoint access is enabled. If public endpoint access must remain enabled, configure a tightly scoped CIDR allowlist.
AWS CLI
Restrict public access to specific CIDRs while keeping private endpoint access enabled:
aws eks update-cluster-config \
--region {{asset.region}} \
--name {{asset.name}} \
--resources-vpc-config endpointPrivateAccess=true,endpointPublicAccess=true,publicAccessCidrs="{{manual.allowedCidrs}}"
If internet access to the control plane is not required, prefer disabling the public endpoint entirely:
aws eks update-cluster-config \
--region {{asset.region}} \
--name {{asset.name}} \
--resources-vpc-config endpointPrivateAccess=true,endpointPublicAccess=false
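Ordering matters when locking down the endpoint: enable private access and confirm node and kubectl connectivity first, then disable the public endpoint, or you risk locking yourself out. A hedged dry-run sketch that only prints the two commands in the safe order (cluster, region, and CIDR values are placeholders; `plan_lockdown` is an illustrative helper, not an AWS command):

```shell
# Print the two-phase lockdown commands in the safe order (dry run).
# usage: plan_lockdown <cluster> <region> <cidrs>
plan_lockdown() {
  local cluster="$1" region="$2" cidrs="$3"
  echo "aws eks update-cluster-config --region $region --name $cluster --resources-vpc-config endpointPrivateAccess=true,endpointPublicAccess=true,publicAccessCidrs=\"$cidrs\""
  echo "# verify node <-> API server connectivity and kubectl access, then:"
  echo "aws eks update-cluster-config --region $region --name $cluster --resources-vpc-config endpointPrivateAccess=true,endpointPublicAccess=false"
}

plan_lockdown my-cluster us-east-1 203.0.113.0/24
```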
Validation
aws eks describe-cluster \
--region {{asset.region}} \
--name {{asset.name}} \
--query 'cluster.resourcesVpcConfig.{Private:endpointPrivateAccess,Public:endpointPublicAccess,AllowedCidrs:publicAccessCidrs}'
Note: The CIDR blocks specified cannot include reserved addresses. There is a maximum number of CIDR blocks that you can specify. For more information, see the EKS Service Quotas link in the references section. For more detailed information, see the EKS Cluster Endpoint documentation link in the references section.
References
- https://docs.aws.amazon.com/eks/latest/userguide/cluster-endpoint.html
- https://docs.aws.amazon.com/cli/latest/reference/eks/update-cluster-config.html
Multiple Remediation Paths
AWS
SERVICE-WIDE (RECOMMENDED when many resources are affected): Deploy centralized guardrails and remediation using AWS Config Conformance Packs and (if applicable) AWS Organizations SCPs.
aws configservice put-organization-conformance-pack --organization-conformance-pack-name <pack-name> --template-s3-uri s3://<bucket>/<template>.yaml
ASSET-LEVEL: Apply the resource-specific remediation steps above to only the affected assets.
PREVENTIVE: Add CI/CD policy checks (CloudFormation/Terraform validation) before deployment to prevent recurrence.
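A minimal conformance pack template for this control might look like the following sketch. The resource and rule names are placeholders, and you should verify the managed rule identifier (EKS_ENDPOINT_NO_PUBLIC_ACCESS) against the current AWS Config managed rules list before deploying:

```yaml
Resources:
  EksEndpointNoPublicAccess:
    Type: AWS::Config::ConfigRule
    Properties:
      ConfigRuleName: eks-endpoint-no-public-access
      Source:
        Owner: AWS
        SourceIdentifier: EKS_ENDPOINT_NO_PUBLIC_ACCESS
```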
References for Service-Wide Patterns
- AWS Config Conformance Packs: https://docs.aws.amazon.com/config/latest/developerguide/conformance-packs.html
- AWS Organizations SCP examples: https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps_examples.html
Operational Rollout Workflow
Use this sequence to reduce risk and avoid repeated drift.
1. Contain at Service-Wide Scope First (Recommended)
- AWS: deploy/adjust organization conformance packs and policy guardrails.
aws configservice put-organization-conformance-pack --organization-conformance-pack-name <pack-name> --template-s3-uri s3://<bucket>/<template>.yaml
2. Remediate Existing Affected Assets
- Execute the control-specific Console/CLI steps documented above for each flagged resource.
- Prioritize internet-exposed and production assets first.
3. Validate and Prevent Recurrence
- Re-scan after each remediation batch.
- Track exceptions with owner and expiry date.
- Add preventive checks in IaC/CI pipelines.
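The exception-tracking step above can be automated with a small helper that flags entries past their expiry date. A sketch assuming a simple "name,owner,expiry" record format (the format and the `expired_exceptions` name are illustrative assumptions):

```shell
# Read "name,owner,expiry" lines on stdin; print entries whose ISO-8601
# expiry date is before the supplied "today" date. Plain string comparison
# is safe for YYYY-MM-DD dates.
# usage: expired_exceptions <today>
expired_exceptions() {
  local today="$1" name owner expiry
  while IFS=, read -r name owner expiry; do
    if [ "$expiry" \< "$today" ]; then
      echo "$name (owner: $owner) expired $expiry"
    fi
  done
}

printf '%s\n' \
  "eks-prod,alice,2024-01-31" \
  "eks-dev,bob,2030-12-31" |
  expired_exceptions 2025-06-01
# prints: eks-prod (owner: alice) expired 2024-01-31
```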
Query logic
These are the stored checks tied to this control.
EKS Clusters without restricted access to control plane endpoint
Expected check: eq []
{
eksClusters(
where: {
OR: [
{ vpcConfigEndpointPrivateAccess: false }
{ vpcConfigPublicAccessCIDRs: [] }
{ vpcConfigPublicAccessCIDRs_INCLUDES: "0.0.0.0/0" }
]
}
) {
...AssetFragment
}
}