Understanding Linux CPU Load
Are you having trouble understanding Linux CPU load? Not sure when a value is good or bad, or what those three numbers mean?
If you are a sysadmin, one of your most important tasks is monitoring performance, which includes a parameter called "Load Average" or "CPU Load". You might already be familiar with Linux load averages: they are the three numbers shown by the uptime and top commands.
What is Load Average?
It is a value that represents the system load over a specific period of time. It can also be thought of as the ratio of active tasks to the number of available CPUs.
load average: 0.10 (1 min), 0.08 (5 min), 0.01 (15 min)
In plain English, this is the average number of processes in the run queue (running, or blocked waiting for something before they can continue) over a certain period of time.
How can I check the CPU load?
You can use either the top or uptime command; both print the three load-average figures shown above.
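For example, a quick way to see them from a terminal (the sample output in the comment is illustrative; your numbers will differ):

```shell
# Print uptime, logged-in users, and the three load averages;
# the figures at the end of the line are the 1-, 5- and 15-minute averages.
uptime
# sample output (illustrative):
#  10:14:32 up 12 days,  3:05,  2 users,  load average: 0.10, 0.08, 0.01
```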
What do those three numbers mean?
These numbers show you the average load over the last one minute, the last five minutes, and the last fifteen minutes.
load average: 1.05, 0.70, 5.09
- load average over the last 1 minute: 1.05
- load average over the last 5 minutes: 0.70
- load average over the last 15 minutes: 5.09
Over the last 1 minute: The computer was overloaded by 5% on average. On average, 0.05 processes were waiting for the CPU. (1.05)
Over the last 5 minutes: The CPU idled for 30% of the time. (0.70)
Over the last 15 minutes: The computer was overloaded by 409% on average. On average, 4.09 processes were waiting for the CPU. (5.09)
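If you want the raw numbers without the rest of uptime's output, Linux also exposes them in /proc/loadavg. A minimal sketch:

```shell
# /proc/loadavg holds the 1-, 5- and 15-minute load averages as the
# first three fields (the remaining fields are scheduler details and
# the PID of the most recently created process).
read one five fifteen rest < /proc/loadavg
echo "1 min: $one  5 min: $five  15 min: $fifteen"
```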
When are load average values good or bad?
It depends on the number of physical CPUs or CPU cores in your system.
On a single-processor/single-core system, a load of 1.00 means 100% CPU utilization; on a dual-core system, a load of 2.00 means 100% CPU utilization.
Based on experience, 0.70 is a common threshold for deciding whether a system might be overloaded or suffering from some kind of I/O problem. If your load average stays above 0.70, it's time to investigate before things get worse. This is valid, of course, for single-processor systems.
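As a rough sketch, a check like this could flag that situation automatically (the 0.70 threshold and the messages are just the single-processor rule of thumb above, not a standard):

```shell
#!/bin/sh
# Warn if the 1-minute load average exceeds 0.70 on a single-CPU box.
load1=$(cut -d ' ' -f1 /proc/loadavg)

# awk handles the floating-point comparison that plain sh cannot do
if awk -v l="$load1" 'BEGIN { exit !(l > 0.70) }'; then
    echo "WARNING: load average $load1 is above 0.70 - time to investigate"
else
    echo "OK: load average $load1"
fi
```

You could run this from cron and mail the warning instead of echoing it.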
Understanding Linux CPU Load traffic analogy
One CPU is like a single lane of traffic. Imagine you are a bridge operator: sometimes your bridge is so busy that cars line up to cross. So what numbering system would you use to describe how busy it is?
- 0.00 means there’s no traffic. In fact, between 0.00 and 1.00 means there’s no backup, and an arriving car will just go right on.
- 1.00 means the bridge is exactly at capacity. All is still good, but if traffic gets a little heavier, things are going to slow down.
- over 1.00 means there's a backup. How much? Well,
- 2.00 means that there are two lanes' worth of cars total — one lane's worth on the bridge, and one lane's worth waiting.
- 3.00 means there are three lanes' worth total — one lane's worth on the bridge, and two lanes' worth waiting. Etc.
The problem with a load of 1.00 is that you have no headroom. In practice, many sysadmins draw the line at 0.70: if your load average stays above 0.70, it's time to investigate before things get worse.
If your load average stays above 1.00, find the problem and fix it now. Otherwise, you’re going to get woken up in the middle of the night, and it’s not going to be fun.
If your load average is above 5.00, you could be in serious trouble, your box is either hanging or slowing way down. Don’t let it get there.
Got a quad-processor system? It’s still healthy with a load of 3.00.
On a multi-processor system, the load is relative to the number of processor cores available. The "100% utilization" mark is 1.00 on a single-core system, 2.00 on a dual-core, 4.00 on a quad-core, etc.
If we go back to the bridge analogy, "1.00" really means "one lane's worth of traffic". On a one-lane bridge, that means it's filled up. On a two-lane bridge, a load of 1.00 means it's at 50% capacity — only one lane is full, so there's another whole lane that can be filled.
Same with CPUs: a load of 1.00 is 100% CPU utilization on a single-core box. On a dual-core box, a load of 2.00 is 100% CPU utilization.
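Putting that together, you can normalize the 1-minute load by the core count so that 1.00 always means "100% busy" regardless of how many cores the box has (a sketch, assuming nproc is available):

```shell
#!/bin/sh
# Divide the 1-minute load average by the number of cores:
# the result is comparable across machines of any size.
load1=$(cut -d ' ' -f1 /proc/loadavg)
cores=$(nproc)
awk -v l="$load1" -v c="$cores" \
    'BEGIN { printf "load per core: %.2f\n", l / c }'
```

For example, a load of 2.00 on a quad-core box normalizes to 0.50 — half capacity.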
Multicore vs. multiprocessor
While we’re on the topic, let’s talk about multicore vs. multiprocessor. For performance purposes, is a machine with a single dual-core processor basically equivalent to a machine with two processors with one core each? Yes. Roughly. There are lots of subtleties here concerning amount of cache, frequency of process hand-offs between processors, etc. Despite those finer points, for the purposes of sizing up the CPU load value, the total number of cores is what matters, regardless of how many physical processors those cores are spread across.
- “Number of cores = max load”: on a multicore system, your load should not exceed the number of cores available.
- “Cores is cores”: how the cores are spread out over CPUs doesn’t matter. Two quad-cores == four dual-cores == eight single-cores. It’s all eight cores for these purposes.
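To find the core count you should measure your load against, nproc prints the total number of processing units available, however they are spread across sockets:

```shell
# nproc prints the number of processing units available to the process;
# this is the "max load" figure for the rules of thumb above.
cores=$(nproc)
echo "cores available: $cores  (sustained load should stay below $cores)"
```

On systems with lscpu installed, it additionally breaks the count down by sockets and cores per socket.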