The 1001st K8s CPU limit article

Published October 12, 2022

If you are at all into Kubernetes, then you have probably read the previous 1000 articles about why you should not be setting CPU limits, so why should you bother with this short post? Well, I will do my best to cover the basics of how the CPU and memory work and how that affects Kubernetes resource limits.

CPU requests

To understand what CPU limits do, it is crucial to first understand what setting CPU requests does. When you set your CPU request to 1 CPU unit, it does not mean that 1 physical core (or 1 virtual core) gets reserved for that workload and no one in the whole universe can take it away. Why is that? A CPU core can only work on a single task at a time, but it switches between tasks so quickly that you do not notice the switching at all (the decision of which task to run next is made by the kernel's scheduling algorithm). It is obvious that if we dedicated a core to every workload, we would quickly run out of resources. What actually happens is that when you specify a CPU request for your workload, that workload is guaranteed to receive at least that much CPU time when there is contention; when the CPU is done with its current slice of work, it moves on to the next task and eventually circles back if the first one is not finished yet.

The main takeaway from the above is that guaranteed does not equal reserved: you cannot reserve part of the CPU for a single workload, because it also has to work on every other task in the system.
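To make this concrete, here is a minimal sketch of a Pod spec that sets only a CPU request (the Pod name and image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app            # placeholder name
spec:
  containers:
    - name: app
      image: my-app:latest   # placeholder image
      resources:
        requests:
          # Guarantees this container at least 1 CPU unit's worth of
          # CPU time under contention -- it does NOT reserve a core.
          cpu: "1"
```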

CPU limits

And now to the question of why setting CPU limits is bad. There are times when excess CPU time is sitting idle and could be allocated to workloads, and who would say no to that? But when you set a CPU limit and your workload's CPU usage hits it, the excess CPU time that the process could very well use is lost forever: the process gets throttled and its work is delayed by quite a bit, even though the CPU may have nothing better to do. Note that no process killing happens when the CPU limit is hit; throttling is the only consequence.
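For illustration, this is what adding a CPU limit looks like; the comments sketch how the limit maps to CFS bandwidth control on Linux, assuming the default 100ms enforcement period:

```yaml
resources:
  requests:
    cpu: "500m"
  limits:
    # 500m translates to a CFS quota of 50ms of CPU time per 100ms
    # period; once the container burns through its quota, it is
    # throttled (paused) for the rest of the period -- even if
    # other cores are sitting idle.
    cpu: "500m"
```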

Memory requests and limits

So, how is memory any different, and why is setting a memory limit considered okay? Memory is treated differently because once you assign some memory to a process, then, contrary to CPU time, it cannot be taken away without losing everything stored in it (a slight oversimplification). The memory request is used to place the Pod on a node with enough memory, but once the Pod exceeds its limit, the process in the Pod's containers that uses the most memory gets killed by the kernel. The memory limit is the safety measure that keeps a process inside the containers from gobbling (gobble gobble) up all the available memory in the system, and killing the process is the kill switch that enforces it.
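A minimal memory request and limit, with the behavior described above spelled out in comments (the values are placeholders):

```yaml
resources:
  requests:
    # Used by the scheduler to pick a node with enough free memory.
    memory: "256Mi"
  limits:
    # If the container's memory usage exceeds this, the kernel kills
    # the process using the most memory inside the container.
    memory: "256Mi"
```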


As you can see, the devil lies in the details of how the CPU and memory are designed to work in a system. The CPU is designed to work on multiple processes in quick succession, while memory is designed to stay with one program until it is done running.

The best practice regarding resource requests and limits is to always set memory requests and limits and always set CPU requests, but not limits, for your workloads.
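Putting that best practice together, a container's resources section would look roughly like this (the values are placeholders):

```yaml
resources:
  requests:
    cpu: "250m"        # always set a CPU request
    memory: "512Mi"    # always set a memory request
  limits:
    memory: "512Mi"    # always set a memory limit...
    # ...but deliberately no cpu limit, so the workload can soak up
    # idle CPU time without being throttled.
```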

Please note that there are valid use cases for setting CPU limits, especially when it comes to multi-tenant clusters.