
Locking

Now, let's consider a more complicated race condition that requires a more complicated solution. Assume you have a queue of requests that need to be serviced. How you implement the queue is irrelevant, but you can assume it is a linked list, where each node represents a request. Two functions manipulate the queue. One function adds a new request to the tail of the queue. Another function removes a request from the head of the queue and does something useful with the request. Various parts of the kernel invoke these two functions; thus, requests are continually being added, removed, and serviced. Manipulating the request queue certainly requires multiple instructions. If one thread attempts to read from the queue while another is in the middle of manipulating it, the reading thread will find the queue in an inconsistent state. It should be apparent what sort of damage could occur if access to the queue could occur concurrently. Often, when the shared resource is a complex data structure, the result of a race condition is corruption of the data structure.

The previous scenario, at first, might not have a clear solution. How can you prevent one processor from reading from the queue while another processor is updating it? Although it is feasible for a particular architecture to implement simple instructions, such as arithmetic and comparison, atomically, it is ludicrous for architectures to provide instructions to support the indefinitely sized critical regions that would exist in the previous example. What is needed is a way of making sure that only one thread manipulates the data structure at a time; that is, a way of locking access to it while another thread of execution is in the marked region.

A lock provides such a mechanism; it works much like a lock on a door. Imagine the room beyond the door as the critical region. Inside the room, only one thread of execution can be present at a given time. When a thread enters the room, it locks the door behind it. When the thread is finished manipulating the shared data, it leaves the room and unlocks the door. If another thread reaches the door while it is locked, it must wait for the thread inside to exit the room and unlock the door before it can enter. Threads hold locks; locks protect data.

In the previous request queue example, a single lock could have been used to protect the queue. Whenever there was a new request to add to the queue, the thread would first obtain the lock. Then it could safely add the request to the queue and ultimately release the lock. When a thread wanted to remove a request from the queue, it too would obtain the lock. Then it could read the request and remove it from the queue. Finally, it would release the lock. Any other access to the queue would similarly need to obtain the lock. Because the lock can be held by only one thread at a time, only a single thread can manipulate the queue at a time. If a thread comes along while another thread is already updating it, the second thread has to wait for the first to release the lock before it can continue. The lock prevents concurrency and protects the queue from race conditions.
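
For concreteness, here is one way the fictional request queue might be protected. This sketch uses the kernel's linked list and spin lock interfaces, both of which the next chapter covers in detail; the structure and function names are invented for this example:

/* The fictional request queue, protected by a single spin lock.
 * struct my_request and the two functions are made up for this
 * example; the list and spin lock interfaces are the kernel's. */
#include <linux/list.h>
#include <linux/spinlock.h>

struct my_request {
        struct list_head list;
        /* ... the request itself ... */
};

static LIST_HEAD(request_queue);
static DEFINE_SPINLOCK(request_lock);

/* Add a new request to the tail of the queue. */
void enqueue_request(struct my_request *req)
{
        spin_lock(&request_lock);       /* enter the critical region */
        list_add_tail(&req->list, &request_queue);
        spin_unlock(&request_lock);     /* leave the critical region */
}

/* Remove the request at the head of the queue, or return NULL
 * if the queue is empty. */
struct my_request *dequeue_request(void)
{
        struct my_request *req = NULL;

        spin_lock(&request_lock);
        if (!list_empty(&request_queue)) {
                req = list_entry(request_queue.next, struct my_request, list);
                list_del(&req->list);
        }
        spin_unlock(&request_lock);

        return req;
}

Note that the emptiness check and the removal happen under the same lock acquisition; testing and then removing under two separate acquisitions would reintroduce a race between the test and the removal.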

Any code that accesses the queue first needs to obtain the relevant lock. If another thread of execution comes along, the lock prevents concurrency:

Thread 1                         Thread 2
try to lock the queue            try to lock the queue
succeeded: acquired lock         failed: waiting...
access queue...                  waiting...
unlock the queue                 waiting...
...                              succeeded: acquired lock
                                 access queue...
                                 unlock the queue


Notice that locks are advisory and voluntary. Locks are entirely a programming construct that the programmer must take advantage of. Nothing prevents you from writing code that manipulates the fictional queue without the appropriate lock. Such a practice, of course, would eventually result in a race condition and corruption.

Locks come in various shapes and sizes; Linux alone implements a handful of different locking mechanisms. The most significant difference between the various mechanisms is the behavior when the lock is contended (already in use): some locks simply busy wait[2], whereas other locks put the current task to sleep until the lock becomes available. The next chapter discusses the behavior of the different locks in Linux and their interfaces.

[2] That is, spin in a tight loop, waiting for the lock to become available.

Astute readers are now screaming. The lock does not solve the problem; it simply shrinks the critical region down to just the lock and unlock code: probably much smaller, sure, but still a potential race! Fortunately, locks are implemented using atomic operations that ensure no race exists. A single instruction can verify whether the key is taken and, if not, seize it. How this is done is architecture-specific, but almost all processors implement an atomic test-and-set instruction that tests the value of an integer and sets it to a new value only if it is zero. A value of zero means unlocked.
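
To see how a lock can be built atop test-and-set, consider this minimal user-space sketch using C11's atomic_flag, which compiles down to such an instruction (or an equivalent atomic operation) on most architectures. It illustrates the principle only; it is not the kernel's actual lock implementation:

/* A toy spin lock built on atomic test-and-set. Illustrative
 * only; real implementations also worry about fairness and
 * contention behavior. */
#include <stdatomic.h>

typedef struct {
        atomic_flag locked;     /* clear (zero) means unlocked */
} toy_spinlock_t;

#define TOY_SPINLOCK_INIT { ATOMIC_FLAG_INIT }

static void toy_spin_lock(toy_spinlock_t *lock)
{
        /* Atomically set the flag and observe its old value in a
         * single operation. If it was already set, another thread
         * holds the lock: busy wait and try again. */
        while (atomic_flag_test_and_set(&lock->locked))
                ;       /* spin */
}

static void toy_spin_unlock(toy_spinlock_t *lock)
{
        atomic_flag_clear(&lock->locked);
}

Because the test and the set happen in one indivisible step, no window exists in which two threads can both observe the lock as free and both seize it.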

What Causes Concurrency, Anyway?

In user-space, the need for synchronization stems from the fact that programs are scheduled preemptively by the will of the scheduler. Because a process can be preempted at any time and another process can be scheduled onto the processor, it is possible for a process to be involuntarily preempted in the middle of accessing a critical region. If the newly scheduled process then enters the same critical region (say, if the two processes are threads and they access the same shared memory), a race can occur. The same problem can occur with multiple single-threaded processes sharing files, or within a single program with signals, because signals can occur asynchronously. This type of concurrency, where two things do not actually happen at the same time but interleave with each other such that they might as well, is called pseudo-concurrency.

If you have a symmetrical multiprocessing machine, two processes can actually be executed in a critical region at the exact same time. That is called true concurrency. Although the causes and semantics of true versus pseudo concurrency are different, they both result in the same race conditions and require the same sort of protection.
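
Both flavors are easy to provoke from user-space. In this sketch, invented for illustration, two POSIX threads perform a nonatomic read-modify-write on shared data; updates are lost whether the threads interleave on one processor or run simultaneously on two:

/* Two threads race on an unprotected counter. The counter and
 * worker function are invented for this example. */
#include <pthread.h>
#include <stdio.h>

static long counter;    /* shared and unprotected: a race */

static void *worker(void *unused)
{
        long i;

        for (i = 0; i < 1000000; i++)
                counter++;      /* load, increment, store: not atomic */
        return NULL;
}

int main(void)
{
        pthread_t a, b;

        pthread_create(&a, NULL, worker, NULL);
        pthread_create(&b, NULL, worker, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);

        /* Almost certainly prints less than the expected 2000000. */
        printf("counter = %ld\n", counter);
        return 0;
}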

The kernel has similar causes of concurrency. They are

  • Interrupts: An interrupt can occur asynchronously at almost any time, interrupting the currently executing code.

  • Softirqs and tasklets: The kernel can raise or schedule a softirq or tasklet at almost any time, interrupting the currently executing code.

  • Kernel preemption: Because the kernel is preemptive, one task in the kernel can preempt another.

  • Sleeping and synchronization with user-space: A task in the kernel can sleep and thus invoke the scheduler, resulting in the running of a new process.

  • Symmetrical multiprocessing: Two or more processors can be executing kernel code at exactly the same time.

It is important that all kernel developers understand and prepare for these causes of concurrency. It is a major bug if an interrupt occurs in the middle of code that is manipulating a resource and the interrupt handler can access the same resource. Similarly, it is a bug if kernel code can be preempted while it is accessing a shared resource. Likewise, it is a bug if code in the kernel sleeps while in the middle of a critical section. Finally, two processors should never be able to simultaneously access the same shared data. With a clear picture of what data needs protection, it is not hard to provide the locking to keep the world safe. The hard part is identifying these conditions and realizing that to prevent concurrency, you need some form of protection. Let us reiterate this point, because it is quite important. Implementing the actual locking in your code to protect shared data is not hard, especially when done early on while designing the code. The tricky part is identifying the actual shared data and the corresponding critical sections. This is why designing locking into your code from the get-go, and not as an afterthought, is of paramount importance. It can be very hard to go in after the fact, identify what needs locking, and retrofit locking into the existing code. The result is usually not pretty, either. The moral: always design proper locking into your code from the beginning.
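
To make the first of these bugs concrete, consider data shared between process context and an interrupt handler. This sketch uses the interrupt-disabling spin lock variant introduced in the next chapter; the device, handler, and counter are hypothetical:

/* A hypothetical counter updated by an interrupt handler and
 * read from process context. The process-context side must
 * disable local interrupts while holding the lock; otherwise,
 * the handler could fire on this processor and spin forever on
 * a lock that this processor already holds. */
#include <linux/interrupt.h>
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(stats_lock);
static unsigned long packets_received;

/* The (hypothetical) device's interrupt handler. */
static irqreturn_t my_dev_isr(int irq, void *dev_id)
{
        spin_lock(&stats_lock);
        packets_received++;
        spin_unlock(&stats_lock);
        return IRQ_HANDLED;
}

/* Called from process context. */
unsigned long read_packet_count(void)
{
        unsigned long flags, count;

        spin_lock_irqsave(&stats_lock, flags);
        count = packets_received;
        spin_unlock_irqrestore(&stats_lock, flags);

        return count;
}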

Code that is safe from concurrent access from an interrupt handler is said to be interrupt-safe. Code that is safe from concurrency on symmetrical multiprocessing machines is SMP-safe. Code that is safe from concurrency with kernel preemption is preempt-safe[3]. The actual mechanisms used to provide synchronization and protect against race conditions in all these cases are covered in the next chapter.

[3] You will also see that, barring a few exceptions, being SMP-safe implies being preempt-safe.

So, How Do I Know What Needs Protecting?

Identifying what data specifically needs protection is vital. Because any data that can be accessed concurrently probably needs protection, it is often easier to identify what data does not need protection and work from there. Obviously, any data that is local to one particular thread of execution does not need protection, because only that thread can access the data. For example, local automatic variables (and dynamically allocated data structures whose address is stored only on the stack) do not need any sort of locking because they exist solely on the stack of the executing thread. Likewise, data that is accessed by only a specific task does not require locking (because a process can execute on only one processor at a time).

What does need locking? Most global kernel data structures do. A good rule of thumb is that if another thread of execution can access the data, the data needs some sort of locking; if anyone else can see it, lock it. Remember to lock data, not code.
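
One way to honor "lock data, not code" is to embed each lock in the structure it protects and document exactly what it covers, so the association is with the data rather than with any particular function. A hypothetical sketch:

/* Hypothetical: the lock lives alongside the data it protects,
 * and a comment records its exact coverage. */
#include <linux/list.h>
#include <linux/spinlock.h>

struct inflight_queue {
        spinlock_t lock;                /* protects 'list' and 'count' */
        struct list_head list;
        unsigned int count;
};

Anyone touching list or count must hold lock; no particular region of code is special.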

CONFIG Options: SMP Versus UP

Because the Linux kernel is configurable at compile-time, it makes sense that you can tailor the kernel specifically for a given machine. Most importantly, the CONFIG_SMP configure option controls whether the kernel supports SMP. Many locking issues disappear on uniprocessor machines; consequently, when CONFIG_SMP is unset, unnecessary code is not compiled into the kernel image. For example, such configuration enables uniprocessor machines to forego the overhead of spin locks. The same trick applies to CONFIG_PREEMPT (the configure option enabling kernel preemption). This was an excellent design decision: the kernel maintains one clean source base, and the various locking mechanisms are used as needed. Different combinations of CONFIG_SMP and CONFIG_PREEMPT on different architectures compile in varying lock support.

In your code, provide appropriate protection for the most pessimistic case, SMP with kernel preemption, and all scenarios will be covered.
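
The effect, in heavily simplified and purely illustrative form (the macro and function names below are invented; the kernel's real spin lock headers are more involved but follow the same idea):

/* Illustrative only: compile-time options select lock behavior. */
#if defined(CONFIG_SMP)
# define my_lock(l)     real_smp_lock(l)  /* hypothetical: true locking */
#elif defined(CONFIG_PREEMPT)
# define my_lock(l)     preempt_disable() /* UP: only bar preemption */
#else
# define my_lock(l)     do { } while (0)  /* UP, no preemption: no-op */
#endif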


Whenever you write kernel code, you should ask yourself these questions:

  • Is the data global? Can a thread of execution other than the current one access it?

  • Is the data shared between process context and interrupt context? Is it shared between two different interrupt handlers?

  • If a process is preempted while accessing this data, can the newly scheduled process access the same data?

  • Can the current process sleep (block) on anything? If it does, in what state does that leave any shared data?

  • What prevents the data from being freed out from under me?

  • What happens if this function is called again on another processor?

  • What am I going to do about it?

In short, nearly all global and shared data in the kernel requires some form of the synchronization methods discussed in the next chapter.
