Async Rust: Cooperative vs Preemptive scheduling
Understand how Rust cooperates with developers
Threads were designed to parallelize compute-intensive tasks. However, these days, a lot of applications (such as a network scanner) are I/O (Input / Output) intensive.
Thus, threads have two significant problems:
- They use a lot of memory (compared to other solutions).
- Launching them and switching between them has a cost that becomes noticeable when a lot of threads (in the tens of thousands) are running.
In practice, it means that by using threads, our apps would spend a lot of time waiting for network requests to complete and use way more resources than necessary.
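To make the thread-per-task model concrete, here is a minimal sketch in Rust; the `sleep` stands in for a blocking network call, and the function names and numbers are illustrative, not taken from a real application:

```rust
use std::thread;
use std::time::Duration;

// Thread-per-task model: every concurrent "request" gets its own OS
// thread, which spends most of its life blocked waiting on (simulated) I/O.
fn fetch_all(n: usize) -> Vec<String> {
    let handles: Vec<_> = (0..n)
        .map(|i| {
            thread::spawn(move || {
                // Stand-in for a blocking network call.
                thread::sleep(Duration::from_millis(10));
                format!("response {i}")
            })
        })
        .collect();

    // While blocked, each thread still holds its stack (often megabytes of
    // reserved address space), and waking it up costs a context switch.
    handles.into_iter().map(|h| h.join().unwrap()).collect()
}

fn main() {
    for response in fetch_all(4) {
        println!("{response}");
    }
}
```

With four requests this is fine; with tens of thousands, the stacks and context switches add up.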
Please welcome async-await.
The problem with Threads
From a programmer’s perspective, async/await provides the same things as threads: concurrency, better hardware utilization, and improved speed, but with dramatically better performance and lower resource usage for I/O bound workloads.
What is an I/O bound workload? Those are tasks that spend most of their time waiting for network or disk operations to complete instead of being limited by the computing power of the processor.
Threads were designed a long time ago, when most computing was not network (web) related, and thus they are not suitable for handling very many concurrent I/O tasks.
| Operation | async | thread |
| --- | --- | --- |
| Creation | 0.3 microseconds | 17 microseconds |
| Context switch | 0.2 microseconds | 1.7 microseconds |
As we can see in these measurements made by Jim Blandy, context switching is roughly 8.5 times faster with async than with Linux threads, and async tasks use approximately 20 times less memory.
In the programming language world, there are mainly two ways to deal with I/O tasks: preemptive scheduling and cooperative scheduling.
Preemptive Scheduling
Preemptive scheduling is when the scheduling of tasks is out of the developer's control and entirely managed by a runtime. Whether the programmer is launching a sync or an async task, there is no difference in the code.
For example, the Go programming language relies on preemptive scheduling.
It has the advantage of being easier to learn: for the developers, there is no difference between sync and async code. Also, it is almost impossible to misuse: the runtime takes care of everything.
Here is an example of making an HTTP request in Go:
```go
resp, err := http.Get("https://kerkour.com")
```
Just by looking at this snippet, we can’t tell if http.Get is I/O intensive or compute intensive.
The disadvantages are:
- Speed, which is limited by the cleverness of the runtime.
- Hard-to-debug bugs: if the runtime itself has a bug, it may be extremely hard to track down, as the runtime is treated as dark magic by developers.
Cooperative Scheduling
On the other hand, with cooperative scheduling, the developer is responsible for telling the runtime when a task is expected to spend some time waiting for I/O. Waiting, you said? Yes, you get it. It’s the exact purpose of the await keyword. It’s an indication for the runtime (and compiler) that the task will take some time waiting for an operation to complete, and thus the computing resources can be used for another task in the meantime.
It has the advantage of being extremely fast. Basically, the developer and the runtime are working together, in harmony, to make the most of the computing power at disposition.
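To see cooperative scheduling in action without any external crates, here is a minimal sketch of a round-robin executor built on the standard library's Future and Wake traits. The YieldOnce future and the run_demo and make_task names are mine, invented for illustration, not part of any real runtime. Each task hands control back to the executor at its .await point, so the two tasks make progress interleaved:

```rust
use std::cell::RefCell;
use std::future::Future;
use std::pin::Pin;
use std::rc::Rc;
use std::sync::Arc;
use std::task::{Context, Poll, Wake, Waker};

// A future that returns Pending once, handing control back to the
// executor, the way a real I/O future does while data is not ready yet.
struct YieldOnce {
    yielded: bool,
}

impl Future for YieldOnce {
    type Output = ();
    fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<()> {
        if self.yielded {
            Poll::Ready(())
        } else {
            self.yielded = true;
            cx.waker().wake_by_ref(); // ask to be polled again
            Poll::Pending
        }
    }
}

// A waker that does nothing: our toy executor re-polls everything anyway.
struct NoopWaker;
impl Wake for NoopWaker {
    fn wake(self: Arc<Self>) {}
}

// Run two tasks on a naive round-robin executor and record the order
// in which they make progress.
fn run_demo() -> Vec<String> {
    let log = Rc::new(RefCell::new(Vec::new()));

    let make_task = |name: &'static str| {
        let log = Rc::clone(&log);
        Box::pin(async move {
            log.borrow_mut().push(format!("{name}: start"));
            // Cooperative yield point: this `.await` is where the task
            // tells the runtime "I'm waiting, run someone else".
            YieldOnce { yielded: false }.await;
            log.borrow_mut().push(format!("{name}: resumed"));
        }) as Pin<Box<dyn Future<Output = ()>>>
    };

    let mut tasks = vec![make_task("task 1"), make_task("task 2")];

    let waker = Waker::from(Arc::new(NoopWaker));
    let mut cx = Context::from_waker(&waker);

    // Poll every task in turn, dropping the ones that complete.
    while !tasks.is_empty() {
        tasks.retain_mut(|task| task.as_mut().poll(&mut cx).is_pending());
    }

    Rc::try_unwrap(log).unwrap().into_inner()
}

fn main() {
    for line in run_demo() {
        println!("{line}");
    }
}
```

Both tasks start before either resumes: that interleaving on a single thread, with no blocking, is the whole point of the cooperation between tasks and executor.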
The principal disadvantage of cooperative scheduling is that it’s easier to misuse: if an await is forgotten (fortunately, the Rust compiler issues warnings), or if the event loop is blocked (what is an event loop? Continue reading to learn about it) for more than a few microseconds, it can have a disastrous impact on the performance of the system.
The corollary is that an async program should handle compute-intensive operations with extreme care.
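One common answer, sketched here with plain standard-library threads (async runtimes such as tokio package this pattern behind an API called spawn_blocking), is to push CPU-bound work onto a dedicated thread so the event loop stays responsive. The fib function is just a hypothetical stand-in for any expensive computation:

```rust
use std::sync::mpsc;
use std::thread;

// A deliberately slow, CPU-bound computation (naive recursion).
fn fib(n: u64) -> u64 {
    if n < 2 { n } else { fib(n - 1) + fib(n - 2) }
}

fn main() {
    let (tx, rx) = mpsc::channel();

    // Offload the CPU-bound work instead of running it on the event
    // loop thread, which would stall every other task for its duration.
    thread::spawn(move || {
        tx.send(fib(30)).unwrap();
    });

    // The event loop thread stays free to process other tasks and
    // picks up the result once it is ready.
    let result = rx.recv().unwrap();
    println!("fib(30) = {result}");
}
```

The design choice is the same whatever the runtime: anything that computes for more than a few microseconds belongs on a separate thread (or thread pool), not on the event loop.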
Here is an example of making an HTTP request in Rust:
```rust
let res = reqwest::get("https://www.rust-lang.org").await?;
```
The .await keyword tells us that the reqwest::get function is expected to take some time to complete.
Runtimes
What is a runtime? How do runtimes work under the hood? These questions will be answered in the next post. See you there!