
The Tick Rate: HZ

The frequency of the system timer (the tick rate) is programmed on system boot based on a static preprocessor define, HZ. The value of HZ differs for each supported architecture. In fact, on some supported architectures, it even differs between machine types.

The kernel defines the value in <asm/param.h>. The tick rate has a frequency of HZ hertz and a period of 1/HZ seconds. For example, in include/asm-i386/param.h, the i386 architecture defines:

#define HZ 1000        /* internal kernel time frequency */

Therefore, the timer interrupt on i386 has a frequency of 1000 Hz and occurs 1,000 times per second (every one-thousandth of a second, that is, every millisecond). Most other architectures have a tick rate of 100. Table 10.1 is a complete listing of the supported architectures and their defined tick rates.

Table 10.1. Frequency of the Timer Interrupt

Architecture      Frequency (in Hertz)

alpha             1024
arm               100
cris              100
h8300             100
i386              1000
ia64              32 or 1024[2]
m68k              100
m68knommu         50, 100, or 1000
mips              100
mips64            100 or 1000
parisc            100 or 1000
ppc               1000
ppc64             1000
s390              100
sh                100 or 1000
sparc             100
sparc64           1000
um                100
v850              24, 100, or 122
x86-64            1000


[2] The IA-64 simulator has a tick rate of 32Hz. Real IA-64 machines have a tick rate of 1024Hz.

When writing kernel code, never assume that HZ has any given value. This is not a common mistake these days because so many architectures have varying tick rates. In the past, however, Alpha was the only architecture with a tick rate not equal to 100Hz, and it was common to see code incorrectly hard-code the value 100 when the HZ value should have been used. Examples of using HZ in kernel code are shown later.
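For instance, here is a minimal sketch, not taken from the kernel source, contrasting the hard-coded mistake with the correct HZ-relative form (the timeout variable is hypothetical; jiffies is the kernel's running count of ticks):

unsigned long timeout;

/* wrong: assumes a 100Hz tick, so this is half a second only on
   architectures where HZ happens to equal 100 */
timeout = jiffies + 50;

/* right: HZ ticks always span one second, so this is half a second
   regardless of the architecture's tick rate */
timeout = jiffies + HZ / 2;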

The frequency of the timer interrupt is rather important. As you already saw, the timer interrupt performs a lot of work. Indeed, the kernel's entire notion of time derives from the periodicity of the system timer. Picking the right value, like a successful relationship, is all about compromise.

The Ideal HZ Value

Starting with the initial version of Linux, the i386 architecture has had a timer interrupt frequency of 100 Hz. During the 2.5 development series, however, the frequency was raised to 1000 Hz and was (as such things are) controversial. Because so much of the system is dependent on the timer interrupt, changing its frequency has a reasonable impact on the system. Of course, there are pros and cons to larger versus smaller HZ values.

Increasing the tick rate means the timer interrupt runs more frequently. Consequently, the work it performs occurs more often. This has the following benefits:

  • The timer interrupt has a higher resolution and, consequently, all timed events have a higher resolution.

  • The accuracy of timed events improves.

The resolution increases by the same factor as the tick rate. For example, the granularity of timers with HZ=100 is 10 milliseconds. In other words, all periodic events occur on the timer interrupt's 10-millisecond boundary and no finer precision[3] is guaranteed. With HZ=1000, however, the resolution is 1 millisecond, ten times finer. Although kernel code can create timers with 1-millisecond resolution, there is no guarantee the precision afforded by HZ=100 is sufficient to execute the timer on anything better than 10-millisecond intervals.

[3] Precision is used here in the computer sense, not the scientific one. Precision in science is a statistical measurement of repeatability. In computers, precision is the number of significant figures used to represent a value.

Likewise, accuracy improves in the same manner. Assuming the kernel starts timers at random times, the average timer is off by half the period of the timer interrupt, because timers might expire at any time but are executed only on occurrences of the timer interrupt. For example, with HZ=100, the average event occurs +/- 5 milliseconds off from the desired time; thus, the error is 5 milliseconds on average. With HZ=1000, the average error drops to 0.5 milliseconds, a tenfold improvement.
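Both figures fall directly out of HZ. A small illustrative calculation (not kernel code; the variable names are made up):

/* tick period in microseconds: 10,000 at HZ=100, 1,000 at HZ=1000 */
unsigned long tick_period_us = 1000000 / HZ;

/* average timer error is half the tick period:
   5,000us (5ms) at HZ=100, 500us (0.5ms) at HZ=1000 */
unsigned long avg_error_us = tick_period_us / 2;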

This higher resolution and greater accuracy provide multiple advantages:

  • Kernel timers execute with finer resolution and increased accuracy (this provides a large number of improvements, one of which is the following).

  • System calls such as poll() and select() that optionally employ a timeout value execute with improved precision.

  • Measurements, such as resource usage or the system uptime, are recorded with a finer resolution.

  • Process preemption occurs more accurately.

Some of the most readily noticeable performance benefits come from the improved precision of poll() and select() timeouts. The improvement might be quite large; an application that makes heavy use of these system calls might waste a great deal of time waiting for the timer interrupt, when, in fact, the timeout has actually expired. Remember, the average error (that is, potentially wasted time) is half the period of the timer interrupt.
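To make the effect concrete, here is an ordinary user-space sketch (not kernel code; the function name is made up) issuing a 1-millisecond poll() timeout:

#include <poll.h>

/* wait up to 1ms for readable data on fd; with HZ=100 the kernel can
   expire the timeout only on a 10ms tick boundary, so the call may
   block roughly ten times longer than requested, whereas with HZ=1000
   the millisecond timeout is honored much more closely */
int wait_for_data(int fd)
{
        struct pollfd pfd = { .fd = fd, .events = POLLIN };

        return poll(&pfd, 1, 1);    /* timeout given in milliseconds */
}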

Another benefit of a higher tick rate is the greater accuracy in process preemption, which results in decreased scheduling latency. Recall from Chapter 4 that the timer interrupt is responsible for decrementing the running process's timeslice count. When the count reaches zero, need_resched is set and the kernel runs the scheduler as soon as possible. Now assume a given process is running and has 2 milliseconds of its timeslice remaining. In 2 milliseconds, the scheduler should preempt the running process and begin executing a new process. Unfortunately, this event does not occur until the next timer interrupt, which might not be in 2 milliseconds. In fact, at worst the next timer interrupt might be 1/HZ of a second away! With HZ=100, a process can get nearly ten extra milliseconds to run. Of course, this all balances out and fairness is preserved, because all tasks receive the same imprecision in scheduling, but that is not the issue. The problem stems from the latency created by the delayed preemption. If the to-be-scheduled task had something time-sensitive to do, such as refill an audio buffer, the delay might not be acceptable. Increasing the tick rate to 1000Hz lowers the worst-case scheduling overrun to just 1 millisecond, and the average-case overrun to just 0.5 milliseconds.
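The mechanism can be sketched as follows; this is deliberately simplified illustrative code, not the kernel's actual tick handler, and the structure and field names are invented:

struct task_sketch {
        int time_slice;         /* remaining timeslice, in ticks */
        int need_resched;       /* set when a reschedule is wanted */
};

/* called once per timer interrupt; because the timeslice is checked
   only here, preemption can lag the true expiration by up to one full
   tick, or 1/HZ seconds */
static void tick_sketch(struct task_sketch *curr)
{
        if (curr->time_slice > 0 && --curr->time_slice == 0)
                curr->need_resched = 1;  /* reschedule as soon as possible */
}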

Now, there must be some downside to increasing the tick rate or it would have been 1000Hz (or even higher) to start. Indeed, there is one large issue: A higher tick rate implies more frequent timer interrupts, which implies higher overhead, because the processor must spend more time executing the timer interrupt handler. The higher the tick rate, the more time the processor spends executing the timer interrupt. This adds up to not just less processor time available for other work, but also a more frequent thrashing of the processor's cache. The issue of the overhead's impact is debatable. A move from HZ=100 to HZ=1000 clearly brings with it ten times greater overhead. However, how substantial is the overhead to begin with? The final agreement is that, at least on modern systems, HZ=1000 does not create unacceptable overhead and the move to a 1000Hz timer has not hurt performance too much. Nevertheless, it is possible in 2.6 to compile the kernel with a different value for HZ[4].

[4] Because of architectural and NTP-related issues, however, not just any value is acceptable for HZ. On x86, the values 100, 500, and 1000 all work fine.

A Tickless OS?

You might wonder whether an operating system even needs a fixed timer interrupt. Is it possible to design an OS without ticks? Yes, it is possible, but it might not be pretty.

There is no absolute need for a fixed timer interrupt. Instead, the kernel can use a dynamically programmed timer for each pending event. This quickly adds a lot of timer overhead, so a better idea is to have just one timer, and program it to occur when the next earliest event is due. When that timer executes, create a timer for the next event and repeat. With this approach, there is no periodic timer interrupt and no HZ value.
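In outline, the approach looks something like this sketch; every function named here is hypothetical, because Linux does not implement this scheme:

/* on each hardware timer expiry, run whatever is due, then program a
   one-shot interrupt for the soonest remaining event, rather than
   firing unconditionally every 1/HZ seconds */
static void oneshot_timer_expired(void)
{
        run_due_events();                           /* hypothetical */
        program_hw_oneshot(next_pending_event());   /* hypothetical */
}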

Two issues need to be overcome with this approach. The first is how to manage some concept of ticks, at least so the kernel can keep track of relative time. This is not too hard to solve. Solving the second issue, how to overcome the overhead of managing all the dynamic timers even with an optimized approach, is a bit harder. The overhead and complexity are high enough that the Linux kernel does not take this approach. Nonetheless, people have tried, and the results are interesting; search online archives if interested.

