The concept of event loops is not new in programming, but I think it is still a really important technique to learn. They allow a program to send off a request for something and carry on working rather than sitting idle waiting for the result. It is a great way of multi-tasking in a single thread, but there are some gotchas to be aware of.
In this blog post I’ll give a short intro to event loops and how to get the most out of them.
At their core they are exactly what they sound like: a piece of code, usually executed in a single thread off the main thread, which loops waiting for events to fire. It is done in such a way that it places almost no burden on the CPU while waiting. When those events fire they typically trigger a callback in the application to notify it. Think of it a little like software interrupts, or a much more advanced version of OS signal handling.
The key thing to remember when programming event loops is that when a callback is triggered you will want to return control back to the loop ASAP. If you continue expensive processing in that callback then the entire loop is held up until you return, which could have performance implications for your application.
It is essentially asynchronous programming. In a world where you typically first learn to program linearly, and then with threads, this can be a little difficult to grasp at first.
So why use an event loop? The simple answer is "performance". For example, instead of handling one TCP/IP connection per thread you could handle many in a single thread, and then multi-thread with multiple event loops. This is a great help when trying to solve what was once called the C10K problem (and its later scaled-up version, the C10M problem).
Practically every OS has asynchronous calls to watch for changes on multiple sockets simultaneously in a very high-performance way. This is also called "non-blocking" because it does not block the execution of the program while waiting for something to happen. In most programming languages there are libraries that make these easier to use.
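To give a rough idea of what these OS primitives look like underneath the libraries, here is a minimal sketch (not code from this post) using Linux's epoll interface. The listen_fd socket is assumed to already exist and be non-blocking, and error handling is left out for brevity.

// Minimal sketch of Linux's epoll interface, the kind of OS primitive that
// event libraries wrap. listen_fd is assumed to be an existing non-blocking
// socket; error handling is omitted for brevity.
#include <sys/epoll.h>

void watch_socket(int listen_fd)
{
    int epfd = epoll_create1(0);

    struct epoll_event ev;
    ev.events = EPOLLIN;      // tell us when the socket becomes readable
    ev.data.fd = listen_fd;
    epoll_ctl(epfd, EPOLL_CTL_ADD, listen_fd, &ev);

    struct epoll_event events[64];
    for (;;)
    {
        // Sleeps (using no CPU) until at least one watched socket has activity
        int n = epoll_wait(epfd, events, 64, -1);
        for (int i = 0; i < n; i++)
        {
            // events[i].data.fd is ready; handle it and get back to waiting
        }
    }
}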
In C there are a few very popular libraries. The first is libevent, which has been around for many years and has an extensive feature set. A lighter-weight library that is also popular is libev, whose implementation is much more focused. My favourite, libuv, is close to libev but with a nicer API and the features that I typically need.
In this example (which I have called timer.c) a timer event is created that fires one second after the event loop starts running and then every 100ms after that, until it has fired 20 times.
// SPDX-License-Identifier: BSD-3-Clause
#include <uv.h>
#include <stdio.h>

static void timer_cb(uv_timer_t *handle)
{
    // The void* data pointer has our counter
    int *counter = (int*) handle->data;

    printf("Timer event fired!\n");

    // Increment the counter
    (*counter)++;

    // When the counter hits 20 fires this stops the timer, which lets the
    // event loop (and therefore the application) exit
    if (*counter >= 20)
    {
        uv_timer_stop(handle);
    }
}

int main()
{
    // Create a timer handle and a counter to be passed
    uv_timer_t timer_handle;
    int counter = 0;

    // libuv creates a default loop, you can use this or initialize a new one.
    // This call initializes the timer handle and adds it to the main loop.
    uv_timer_init(uv_default_loop(), &timer_handle);

    // Give the counter as callback data. This is a void* pointer for your
    // application to use.
    timer_handle.data = &counter;

    // Start the timer, note that it doesn't start at this point because the
    // main loop isn't running yet. This triggers after the first 1000ms and
    // every 100ms after that.
    uv_timer_start(&timer_handle, timer_cb, 1000, 100);

    // Run the event loop and block until all events are finished. In our case
    // the timer has a repeat so this will block until the timer is stopped.
    uv_run(uv_default_loop(), UV_RUN_DEFAULT);

    // Cleanup and exit
    uv_loop_close(uv_default_loop());
    return 0;
}
For this to work you will need libuv-dev (or libuv-devel) installed on your Linux operating system, and you can compile it with:
gcc -o timer timer.c -luv
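If everything is working, running ./timer should print "Timer event fired!" twenty times, the first message appearing a second after launch and the rest at 100ms intervals, and then exit.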
We could have opted to run the loop just until one event fired, or just once even with no event firing, which can be useful if your application needs to check for events before continuing execution. You can also easily add and remove events from an event loop whilst the loop is running.
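As a quick sketch of how that might look (do_application_work() here is just a hypothetical placeholder for your own processing), libuv exposes these choices through uv_run's run mode:

#include <uv.h>

// Sketch of the alternative run modes: rather than blocking in UV_RUN_DEFAULT,
// the application does its own work and only polls the loop in between.
// do_application_work() is a hypothetical function.
void main_loop(void)
{
    for (;;)
    {
        do_application_work();

        // Process any events that are already pending without blocking;
        // UV_RUN_ONCE would instead block until at least one event fires
        uv_run(uv_default_loop(), UV_RUN_NOWAIT);
    }
}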
The real power of this comes when we are dealing with many things at the same time. I could expand this to have many timers running simultaneously triggering different callbacks all in the same thread. I could also (as I have done in the past) have an event loop on a pool of SQL connections using an asynchronous SQL API watching for network events on them all simultaneously and acting accordingly.
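For instance, adding a second timer to the same loop is only a couple of extra calls. This is just a sketch, with fast_cb and slow_cb standing in for whatever callbacks you need:

#include <uv.h>

// Sketch: two independent timers sharing the same loop and the same thread.
// fast_cb and slow_cb are hypothetical callbacks of type uv_timer_cb.
void start_two_timers(void)
{
    static uv_timer_t fast_timer, slow_timer;

    uv_timer_init(uv_default_loop(), &fast_timer);
    uv_timer_init(uv_default_loop(), &slow_timer);

    uv_timer_start(&fast_timer, fast_cb, 0, 100);   // fires every 100ms
    uv_timer_start(&slow_timer, slow_cb, 0, 1000);  // fires every second

    // Both callbacks are dispatched from this single thread
    uv_run(uv_default_loop(), UV_RUN_DEFAULT);
}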
This approach works for pretty much anything an application would typically have to wait for: keyboard input, file IO, network traffic, etc. libuv even supports OS signal handling events using this method.
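As a small sketch of the signal handling case (again, not part of the original example), catching SIGINT becomes just another event on the loop:

// Sketch: handling SIGINT as an ordinary loop event via libuv's uv_signal API.
#include <uv.h>
#include <stdio.h>
#include <signal.h>

static void sigint_cb(uv_signal_t *handle, int signum)
{
    printf("Caught signal %d, shutting down\n", signum);
    uv_signal_stop(handle);
}

int main()
{
    uv_signal_t sig_handle;

    uv_signal_init(uv_default_loop(), &sig_handle);
    uv_signal_start(&sig_handle, sigint_cb, SIGINT);

    uv_run(uv_default_loop(), UV_RUN_DEFAULT);
    uv_loop_close(uv_default_loop());
    return 0;
}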
libuv also includes a portable threading library (it works on both POSIX and Windows), so it would be easy to spawn a thread to do something when an event is fired.
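As a sketch of how that might look, libuv's uv_queue_work pushes a job onto its built-in thread pool and calls you back on the loop thread when it is done, which also ties back to the earlier advice about keeping callbacks short. expensive_work() here is a hypothetical function:

// Sketch: pushing expensive work onto libuv's thread pool so the event loop
// thread is never blocked. expensive_work() is a hypothetical function.
#include <uv.h>
#include <stdlib.h>

static void work_cb(uv_work_t *req)
{
    // Runs on a worker thread, so it is fine to block or take a long time here
    expensive_work(req->data);
}

static void after_work_cb(uv_work_t *req, int status)
{
    // Runs back on the event loop thread once the worker has finished
    free(req);
}

static void some_event_cb(uv_timer_t *handle)
{
    // Hand the heavy lifting off and return to the loop immediately
    uv_work_t *req = malloc(sizeof(*req));
    req->data = handle->data;
    uv_queue_work(uv_default_loop(), req, work_cb, after_work_cb);
}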
Event loops are an extremely powerful tool and whilst I haven’t demonstrated their full power here you should now have a better picture of what they are and how they work.
Image credit: Spirals and loops by Benny Mazur used under a CC BY 2.0 license.
One perhaps important detail about spending as little time as possible is that it applies to every single system you have to interact with, regardless of whether it is asynchronous or synchronous.
I know you likely know this, but it's an area I see come up again and again.
Even in the extreme micro-services world of Functions-as-a-Service and message queues, taking advantage of swathes of computers, it pays compounding dividends to shave off precious microseconds, milliseconds, seconds and minutes.
Even if you supply a callback so you can close a request upon receipt and deliver the result some time later after transmission (freeing things up by deferring the work elsewhere), you still pay for time on the machines used, so reducing that time can be of tangible benefit.
Thanks for the comment. Completely agree. I touched on that a little in the post and it is something I've hit in a big way before. Especially with Python green threads and an external C library with blocking network calls (to the extent a watchdog fired).
Micro-services and low-latency message systems are where this really fits well, but there are many places where this way of thinking would bring a much more responsive application.