Modern CPUs have multiple cores that can execute several instructions simultaneously, which can make programs run faster overall. One of the main reasons for implementing a multithreaded application is to utilize more than one core while maintaining synchronized access to shared state between those threads. If there is no need to share any state between the threads, you would probably be better off running several instances of the same program as different processes, perhaps with different inputs.
A multithreaded application does not always make a program run faster. It is a mistake to optimize prematurely and introduce multiple threads before an analysis guides you to do so. A multithreaded program is usually more complex to write and debug. Since threads coordinate and “communicate” by accessing shared memory, data races (formally defined below) may occur without proper synchronization, potentially crashing the program.
Consider, for example, a program that is not CPU-bound: the CPU is not what slows down the process, so it is not the bottleneck we need to eliminate. In this case, neither upgrading the CPU nor splitting up the computing work by adding another thread would yield better throughput.
On the other hand, many programs do gain from running multiple threads. For example, suppose a blocking I/O operation slows down our application. In this case, introducing another thread to handle the I/O may speed up the process. There are also applications where multithreading is a must: just consider a modern operating system that must manage concurrent access to multiple resources. For an excellent discussion of when applications should use multiple threads and when they should not, see Ansel Sermersheim’s CppCon 2017 talk: “Multithreading is the answer. What is the question?”.
This post will discuss multithreaded programming in modern C++. It will begin by laying the foundations of multithreaded programming and later explore more advanced topics, including data structures and algorithms that are Wait-Free and Lock-Free.
Before C++11
C++ has been around since 1985, but it wasn’t until C++11 that the standard specification included a concept of threads. The standard binds the implementors of C++ compilers and the standard library, so we C++ developers can rely on what it guarantees when writing code.
Without threads in the spec, we had to look elsewhere to create multithreaded programs.
We used to depend on external libraries to develop multithreaded programs in C++. The POSIX thread library (pthreads), for instance, provided a standards-based thread API for C and C++. With pthreads, we had to observe two separate specifications to create a multithreaded application: the C++ specification for the general code and the POSIX specification for the multithreaded parts.
However, even when following those two specifications, we were somewhat limited when using standard library containers and functions.
Just a small example: let’s assume that in our pre-C++11 code, two threads were calling two const member functions of the same standard library container. Was it thread-safe? Pre-C++11, the implementation of those two const member functions could potentially have changed a mutable data member, resulting in Undefined Behavior due to a data race. We could have delved into the implementation details of the container, or trusted a colleague that such concurrent calls were thread-safe, but we certainly could not write library-agnostic code that was thread-safe. It all changed with C++11.
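To illustrate the concern, here is a hypothetical sketch (written in modern C++ for brevity; the class, its members, and the computation are invented for illustration): a const member function may still legally modify a mutable data member, for example to cache a computed result:

#include <cstddef>
#include <iostream>
#include <thread>

// Hypothetical container: size() is const, yet it lazily
// caches its result in mutable members.
class Container {
public:
    std::size_t size() const {
        if (!cached_) {                 // two threads may both see false...
            cached_size_ = compute();   // ...and race on these writes
            cached_ = true;
        }
        return cached_size_;
    }
private:
    std::size_t compute() const { return 42; }  // stand-in for real work
    mutable bool cached_ = false;
    mutable std::size_t cached_size_ = 0;
};

int main() {
    Container c;
    std::thread t1([&c] { c.size(); });  // both calls look read-only...
    std::thread t2([&c] { c.size(); });  // ...but may write to the mutable members
    t1.join();
    t2.join();
    std::cout << c.size() << '\n';
}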
Since C++11
C++11 finally made it possible to write cross-platform, library-agnostic, thread-safe code while consulting only the C++ specification. It (1) introduced types such as std::thread, std::mutex, std::atomic<>, std::future<>, and more; and (2) defined a memory model and set clear guarantees that allow the creation of thread-safe implementations.
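As a small taste of these primitives (a minimal sketch, not a pattern recommendation; compute is an invented stand-in function):

#include <atomic>
#include <future>
#include <mutex>
#include <thread>

std::mutex m;                 // mutual-exclusion primitive
std::atomic<int> counter{0};  // integer with atomic operations

int compute() { return 42; }  // invented stand-in for real work

int main() {
    // Run compute on another thread and get its result through a future:
    std::future<int> f = std::async(std::launch::async, compute);
    {
        std::lock_guard<std::mutex> lock(m);  // RAII locking of the mutex
        ++counter;
    }
    return f.get() == 42 ? 0 : 1;
}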
This is just one example of a specification guarantee that allows the use of std containers in thread-safe code:
“A C++ standard library function shall not directly or indirectly modify objects (6.9.2) accessible by threads other than the current thread unless the objects are accessed directly or indirectly via the function’s non-const arguments, including this.” (from: https://isocpp.org/files/papers/N4860.pdf)
We will first briefly explore a few key concepts required to define the basic multithreading guarantees in C++. From now on, whenever referring to the C++ standard, we mean what is available from C++11 all the way through C++20; whenever referring to changes introduced only after C++11, I will mention that explicitly. Let’s begin with the key concepts:
Threads
The standard defines a thread of execution as a single flow of control within a program, including the initial invocation of a specific function, and recursively including every function invocation subsequently executed by the thread. Note that one thread can create another.
Memory Location
A memory location is either an object of scalar type (arithmetic type, pointer type, enumeration type, or std::nullptr_t) or a maximal sequence of adjacent bit-fields all having nonzero width (see the sketch below).
The C++ standard defines that two or more threads of execution can access separate memory locations without interfering with each other.
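Here is a minimal sketch of the bit-field case:

struct Flags {
    int a : 4;  // a and b are adjacent bit-fields of nonzero width:
    int b : 4;  //   they share a single memory location
    int   : 0;  // a zero-width bit-field ends the sequence
    int c : 4;  // c starts a new, separate memory location
};

In this sketch, one thread writing to a and another writing to b would touch the same memory location (a potential data race), while concurrent writes to a and c would not interfere.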
Note that in the memory model, the standard defines a byte as the smallest addressable unit of memory. Every byte has a unique address and must support at least 256 different values. In practice, a byte nowadays will typically consist of 8 bits.
Data Race
A data race (often loosely called a race condition) occurs when at least two threads access the same memory location simultaneously and at least one of them performs a write operation. Multiple simultaneous reads are perfectly fine and thread-safe. When one thread changes the value stored in a memory location, the value observed by another thread may, per the specification, be any value, making the program’s behavior undefined. We should not assume anything about the observed value, even if we think we understand the architecture. We will discuss this further in the future and refer to specific architectures and possible outcomes of such data races.
Following the definition of those, let’s inspect a simple piece of code:
#include <iostream>
#include <thread>

void increment(int* a_ptr) {
    ++*a_ptr;  // unsynchronized read-modify-write: a data race
}

int main() {
    int a = 0;
    std::thread th1(increment, &a);
    std::thread th2(increment, &a);
    th1.join();
    th2.join();
    std::cout << "a = " << a << '\n';
}
How many threads are in the above program?
Three – the main thread and the two created by the std::thread objects: th1 and th2.
Each of those two is given a function to run (increment) and an argument for that invocation: the address of a, i.e., &a. Once constructed, each std::thread invokes increment with a copy of the arguments (note that thread arguments are copied/moved so that they are passed by value; we will discuss this further in another blog).
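For example, because of this copy, passing an argument by reference requires std::ref (a small sketch; set_to_one is an invented helper):

#include <functional>
#include <thread>

void set_to_one(int& x) { x = 1; }  // invented helper taking a reference

int main() {
    int v = 0;
    // std::thread th(set_to_one, v);        // would not compile: v would be copied
    std::thread th(set_to_one, std::ref(v)); // std::ref passes a reference explicitly
    th.join();
    return v == 1 ? 0 : 1;  // v was modified by the thread
}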
After creating the two threads, we call join() on both thread objects:
th1.join();
th2.join();
th1.join() waits for th1 to finish executing increment and join the main thread; then th2.join() waits for th2 to join. After the first line, we are guaranteed that th1 has finished, and after the second line, we are guaranteed that both have finished. Note that the function executed by th2 may have finished before th1’s did. We will get back to join(), detach(), and friends in the future.
What will be the printed value?
The program increments “a” twice, so the result can be:
a = 2
However, take a look at what we have seen after running this code example a thousand times:
http://coliru.stacked-crooked.com/a/b417d145ea99907b
998 times we got:
a = 2
And twice:
a = 1
Why did we get a = 1?
The ++*a_ptr expression is a read-modify-write and actually requires three machine instructions. For example, on a typical architecture such as x86, it would be (sketched in code after the list):
1. Read the value at the memory location of “a” into a CPU register
2. Modify the value in that register to the value + 1
3. Write the value + 1 back into the same memory location of “a”
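Expressed as C++-level steps, this decomposition looks roughly like the following (a conceptual sketch, not actual generated machine code):

#include <iostream>

int main() {
    int a = 0;
    int* a_ptr = &a;
    // What ++*a_ptr conceptually breaks down into:
    int tmp = *a_ptr;  // (1) read the current value into a "register"
    tmp = tmp + 1;     // (2) modify the value in the "register"
    *a_ptr = tmp;      // (3) write the new value back to the memory location
    std::cout << a << '\n';  // prints 1
}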
On most modern CPUs, these read and write operations go through the cache hierarchy rather than directly to main memory.
There is a possibility that the read operation (1) is performed by both threads before either of them completes the write operation (3). Without loss of generality, let’s assume th1 executes (1) first; then the following sequence would end with a value of 1 in *a_ptr:
1. th1 executes (1) and reads the value 0 into a register
2. th1 executes (2) and modifies the value in its register to 1
3. th2 executes (1) and reads the value 0 into a register
4. th1 executes (3) and writes the value 1 into *a_ptr
5. th2 executes (2) and modifies the value in its register to 1
6. th2 executes (3) and writes the value 1 into *a_ptr
We would get the same result if step 3 were executed before step 2 or at the exact same time, and even if steps 5 and 6 were executed before or concurrently with step 2. There are quite a few execution orders that yield the same a = 1 result. However, if one of the threads had fully completed its write (3) before the other executed its read (1), then the value stored in a at the end of the run would be 2.
To ensure the program has a deterministic output, we need to make this read-modify-write operation atomic, meaning that while one thread executes ++*a_ptr, which requires three machine instructions, no other thread can read from or write to *a_ptr.
One way to do so is to declare the memory location “a” as std::atomic<int> instead of int. This means that while one thread writes to this memory location, any other thread that observes the value of a will see it either after the writing thread has fully completed the operation or before the operation has even started, but never in any intermediate stage; in other words, before or after the entire read-modify-write operation.
It is important to remember that in almost every modern architecture, the value is read and written from/to the cache system and not directly from/to main memory, making this kind of disruption even more likely. In any case, whenever there is a potential data race in our code, we should treat it as Undefined Behavior, as per the standard specification. We should not assume anything about the value observed by a thread if another thread performs a write. Without proper synchronization, such multithreaded programs may crash or, perhaps worse, produce a bug that does not crash the program.
Recall that running this piece of code 1000 times yielded a = 2 in 99.8% of the runs; why was the chance of getting a = 1 so small in this example?
Because the time it takes to run the increment function is very short relative to the time it takes to create a new thread of execution and perhaps schedule it on a different core, the two threads rarely overlap.
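To make the race far more likely, we can have each thread increment in a loop so that their executions actually overlap (a sketch; the iteration count is arbitrary):

#include <iostream>
#include <thread>

void increment_many(int* a_ptr) {
    for (int i = 0; i < 100000; ++i) {  // long enough for the two threads to overlap
        ++*a_ptr;                       // still an unsynchronized data race
    }
}

int main() {
    int a = 0;
    std::thread th1(increment_many, &a);
    std::thread th2(increment_many, &a);
    th1.join();
    th2.join();
    std::cout << "a = " << a << '\n';  // almost always prints less than 200000
}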
Conclusion
This example demonstrates that even if many runs of a multithreaded program produce the correct result, there may still be a chance of failure. The result depends on the actual order in which instructions are executed, which may change from one run to another. Therefore, to ensure our code is correct, we need proper logical reasoning, and testing must be more intensive than for a single-threaded program.
When writing a multithreaded program, we need to make sure not to introduce any data races. We will see other ways to fix the code in the future, but for now, making this int atomic by using std::atomic<int> will do the trick:
#include <iostream>
#include <thread>
#include <atomic>

void increment(std::atomic<int>* a_ptr) {
    ++*a_ptr;  // an atomic read-modify-write: no data race
}

int main() {
    std::atomic<int> a{0};  // brace-initialization; "= 0" only compiles from C++17 on
    std::thread th1(increment, &a);
    std::thread th2(increment, &a);
    th1.join();
    th2.join();
    std::cout << "a = " << a << '\n';  // now always prints a = 2
}
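Note that ++*a_ptr on a std::atomic<int> performs a single atomic read-modify-write (equivalent to a_ptr->fetch_add(1) plus returning the new value), so no other thread can observe an intermediate state of the operation.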
For the simple code example we reviewed in this blog, we have seen several different actual orders of instruction execution, and (in the non-atomic version) not all of them produced the same program result. The actual order of instruction execution may differ from the source code we wrote. It can be the result of compiler optimizations, but the CPU can also reorder our reads and writes through prefetching, speculation, and more. The cache system can also cause such reordering, because of write buffers, private/shared cache mechanisms, and others. The end result is the same: an instruction we wrote after another may execute before it.
We will get back to those in the future. Still, the good news is that for the purpose of proving code correct, the source of these possible reorderings should not matter to us, as any such reordering can be thought of as a reordering of C++ source code expressions. We can therefore simplify the issue by bearing in mind that our lines of code may be reordered and interleaved between threads. If loads (reads) and stores (writes) do not necessarily execute in the order we wrote them, we must explore the guarantees that data synchronization primitives such as atomics and mutexes provide.
Luckily, they provide guarantees strong enough that we can reason about the correctness of our program even when multiple instructions execute simultaneously. In a future post, we will also see how we can use memory_order to gain more flexibility than the default behavior guarantees, and thereby potentially improve performance without giving up correctness. In case you would like to delve into those right now, this is the exact moment Herb Sutter discusses the order of execution in his excellent talk: C++ and Beyond 2012: Herb Sutter – atomic<> Weapons.
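As a small glimpse of what that looks like (a sketch only; the semantics of relaxed ordering are deferred to that future post):

#include <atomic>

std::atomic<int> counter{0};

void hit() {
    counter.fetch_add(1);                             // default: sequentially consistent ordering
    counter.fetch_add(1, std::memory_order_relaxed);  // atomicity only, weaker ordering guarantees
}

int main() {
    hit();
}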