The Open Source Way to Rightsize Kubernetes


Rightsizing resource requests is a growing challenge for teams using Kubernetes, and it becomes especially critical as environments scale. Overprovisioning CPU and memory leads to costly overspending, while underprovisioning risks CPU throttling and out-of-memory errors when requested resources aren’t sufficient. Dev and engineering teams that don’t thoroughly understand the live performance profile of their containers will usually play it safe and request far more CPU and memory than required, often wasting significant budget.

The open source Kubecost tool has long offered a Request Sizing dashboard to help Kubernetes users bring more cost efficiency to their resource requests. One of the tool’s most popular optimization features, the dashboard identifies over-requested resources, offers recommendations for appropriate per-container resource requests, and estimates the cost-savings impact of implementing those recommendations. The dashboard utilizes actual usage data from live containers to provide accurate recommendations. However, acting on those recommendations has involved some hurdles, requiring users to manually update YAML to align resource requests with Kubecost recommendations or to introduce integrations using a CD tool.

The newly released Kubecost v1.93 eliminates those hurdles by introducing 1-Click Request Sizing. With this feature added to the open source tool, dev and engineering teams can click a button to apply container request right-sizing recommendations automatically.

The following step-by-step example introduces overprovisioned Kubernetes workloads and uses 1-Click Request Sizing to bring those requests to an optimized size. Before we begin, you’ll need a Kubernetes cluster to work with. While this example uses Civo Kubernetes, Kubecost request sizing is available for any Kubernetes environment.

To create an example cluster (if needed), use the Civo CLI:

civo k3s create request-sizing-demo --region LON1
The cluster request-sizing-demo (84c6c595-505e-4e35-8e38-61364a1a80bc) has been created

Now, let’s get started.

1) Install Kubecost and Enable Cluster Controller

If you’re using an existing Kubecost installation, enable Cluster Controller using the Helm value shown below.

Kubecost ensures a transparent permission model by keeping all cluster modification capabilities in the separate Cluster Controller component. 1-Click Request Sizing APIs reside in Cluster Controller since Kubernetes API write permission is required to edit container requests.
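For an existing Kubecost install, a sketch of enabling the component in place might look like the following (assuming the release is named "kubecost" in the "kubecost" namespace; adjust to match your installation):

```shell
# Enable Cluster Controller on an existing release without changing other values.
# The release and namespace names here are assumptions -- adjust as needed.
# --reuse-values preserves the settings from your current installation.
helm upgrade kubecost kubecost/cost-analyzer \
  --namespace kubecost \
  --reuse-values \
  --set clusterController.enabled=true
```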

Here, we’ll install Kubecost and enable Cluster Controller:

helm repo add kubecost https://kubecost.github.io/cost-analyzer/
helm repo update
helm upgrade --install kubecost kubecost/cost-analyzer \
  --create-namespace \
  --namespace kubecost \
  --version "v1.94.0-rc.1" \
  --set clusterController.enabled=true

After waiting a few minutes for the containers to get up and running, check the Kubecost namespace:

→ kubectl get deployment -n kubecost
NAME                          READY   UP-TO-DATE   AVAILABLE   AGE
kubecost-cluster-controller   1/1     1            1           2m12s
kubecost-cost-analyzer        1/1     1            1           2m12s
kubecost-grafana              1/1     1            1           2m12s
kubecost-kube-state-metrics   1/1     1            1           2m12s
kubecost-prometheus-server    1/1     1            1           2m12s

Here we see that Kubecost is installed and running correctly.
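Rather than checking manually, you can also block until every deployment reports ready (a convenience step, not required by Kubecost):

```shell
# Wait up to 5 minutes for all deployments in the kubecost namespace
# to reach the Available condition.
kubectl wait --for=condition=Available deployment --all -n kubecost --timeout=300s
```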

2) Make a Sample Overprovisioned Workload

We’ll purposefully create a workload that requests more resources than it needs, enabling 1-Click Request Sizing to come to the rescue. The following bash creates an “rsizing” namespace holding a 2-replica NGINX deployment, with considerable container resource requests:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Namespace
metadata:
  name: rsizing
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: rsizing
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        resources:
          requests:
            cpu: 300m
            memory: 500Mi
EOF

We’ll check that this deployment is scheduled and running correctly:

→ kubectl get pod -n rsizing
NAME                               READY   STATUS    RESTARTS   AGE
nginx-deployment-bd6c697bf-qxtvk   1/1     Running   0          10s
nginx-deployment-bd6c697bf-b2zml   1/1     Running   0          11s

Next, we’ll use a JSONPath expression to check in on the running Pods and the requests of their containers:

→ kubectl get pod -n rsizing -o=jsonpath="{range .items[*]}{.metadata.name}{'\t'}{range .spec.containers[*]}{.name}{'\t'}{.resources.requests}{'\n'}{end}{'\n'}{end}"

nginx-deployment-bd6c697bf-qxtvk    nginx    {"cpu":"300m","memory":"500Mi"}

nginx-deployment-bd6c697bf-b2zml    nginx    {"cpu":"300m","memory":"500Mi"}

Just as we planned, the containers are making outsized resource requests. Next, we’ll fix those issues.

3) View Kubecost Recommendations and Put Them Into Action

Access Kubecost’s frontend with kubectl’s port-forward:

kubectl port-forward -n kubecost service/kubecost-cost-analyzer 9090

Allow Kubecost a few minutes to collect usage profiling data and prepare its recommendations for request sizing. Then go to the request sizing recommendation page at http://localhost:9090/request-sizing.html?filters=namespace%3Arsizing. Note that this link includes a filter to show only recommendations for the “rsizing” namespace. With Cluster Controller enabled, the “Automatically implement recommendations” button will be available on this page as well:
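The `%3A` in that URL is simply the URL-encoded colon in the filter `namespace:rsizing`. A minimal sketch of building the filtered URL in bash (the substitution below handles only the colon, not general URL encoding):

```shell
# Build the filtered Request Sizing URL; ':' must be percent-encoded as '%3A'.
filter="namespace:rsizing"
encoded="${filter//:/%3A}"   # bash substitution; covers only the ':' character
echo "http://localhost:9090/request-sizing.html?filters=${encoded}"
# → http://localhost:9090/request-sizing.html?filters=namespace%3Arsizing
```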

The NGINX deployment isn’t getting any traffic, causing it to be severely overprovisioned. Kubecost recognized this fact and has suggested shifting to a 10m CPU request and 20MiB RAM request. Click the “Automatically implement recommendations” button and you’ll get this message:

These recommendations are filtered to the “rsizing” namespace, so clicking the Yes option will apply recommendations for this filtered set.

Now check the status of the cluster:

→ kubectl get pod -n rsizing
NAME                                READY   STATUS        RESTARTS   AGE
nginx-deployment-574cd8ff7f-5czgz   1/1     Running       0          16s
nginx-deployment-574cd8ff7f-srt8j   1/1     Running       0          9s
nginx-deployment-bd6c697bf-qxtvk    0/1     Terminating   0          53m
nginx-deployment-bd6c697bf-b2zml    0/1     Terminating   0          53m
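Applying the recommendation edits the Deployment’s pod template, so Kubernetes performs a standard rolling update. If you’d rather wait for it to finish than poll manually, this optional command blocks until the rollout completes:

```shell
# Block until the rolling update triggered by the request change is done.
kubectl rollout status deployment/nginx-deployment -n rsizing
```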

After the old Pod versions have terminated, use the JSONPath expression again to check the new Pods:

→ kubectl get pod -n rsizing -o=jsonpath="{range .items[*]}{.metadata.name}{'\t'}{range .spec.containers[*]}{.name}{'\t'}{.resources.requests}{'\n'}{end}{'\n'}{end}"

nginx-deployment-574cd8ff7f-5czgz    nginx    {"cpu":"10m","memory":"20971520"}

nginx-deployment-574cd8ff7f-srt8j    nginx    {"cpu":"10m","memory":"20971520"}

Kubecost has successfully resized the container requests! The change applies at both the Pod level and in the NGINX Deployment spec:

→ kubectl get deploy -n rsizing nginx-deployment -o=jsonpath="{.spec.template.spec.containers[0].resources}" | jq
{
  "requests": {
    "cpu": "10m",
    "memory": "20971520"
  }
}
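Note that Kubecost wrote the memory request back in raw bytes rather than with the Mi suffix; the two forms are equivalent, as a quick check confirms:

```shell
# 20 MiB expressed in bytes matches the value Kubecost wrote into the spec.
echo $((20 * 1024 * 1024))
# → 20971520
```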

4) Remove the Demo Cluster

Don’t forget to clean up after this demonstration by removing the test cluster (avoiding any unnecessary costs):

→ civo k3s remove request-sizing-demo --region LON1

Discover More About Kubecost’s 1-Click Request Sizing

This example demonstrated how Kubernetes users can easily and automatically optimize their resource utilization with 1-Click Request Sizing from the open source Kubecost tool. To learn more, additional documentation is available here:

Kubernetes costs can easily spiral out of control at scale if not carefully monitored, and if unexpected cost centers or errors with the potential to spur on runaway expenses aren’t swiftly addressed and remediated. Teams using Kubernetes need the visibility to view the complete picture of their Kubernetes spending in real-time. That visibility must include the ability to zoom out to a holistic view that accounts for external cloud services and infrastructure costs, and to zoom in and assign costs to each specific deployment, service, and namespace. Teams then need the tools to take action and successfully pursue cost efficiency optimization across their Kubernetes implementations. In this vein, 1-Click Request Sizing adds a powerful tool to Kubernetes users’ arsenal, making it that much simpler to keep Kubernetes budgets in check.
