
What is eBPF?!

The origins of eBPF trace back to a 1992 whitepaper by Steven McCanne and Van Jacobson that described a pseudo-machine capable of running filter programs, which decided whether to accept or reject an incoming network packet. This technology later became known as the Berkeley Packet Filter (BPF), and it was introduced to the Linux kernel in 1997.

eBPF came to life in 2014, when the extended version landed in the Linux kernel and brought a set of groundbreaking additions: helper functions, the verifier, eBPF maps and the bpf() system call, which lets you interact with the kernel from the comfort of user space.


Figure 1: Random number generator helper function

eBPF can essentially be described as a native kernel technology that lets users run mini-programs in response to kernel events, so if you are already familiar with event-driven technologies, it is going to be fairly easy to understand. These programs are attached to so-called hook points, which can be kernel functions, user-space functions, system calls, tracepoints and even network devices (both physical and virtual!). Whenever the event that a program is attached to occurs, the program gets called and can do almost anything we want (well, not completely, of course).
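To make this more concrete, here is a minimal sketch of what such a program can look like in libbpf-style C. It attaches to the execve system-call tracepoint and calls two helper functions, including the random number generator helper from Figure 1; the file, function and section names are illustrative, not taken from any real project.

```c
// minimal_probe.bpf.c -- a minimal sketch of an eBPF program (libbpf-style C).
// It attaches to the sys_enter_execve tracepoint; names are illustrative.
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

SEC("tracepoint/syscalls/sys_enter_execve")
int trace_execve(void *ctx)
{
	// Helper functions are the sanctioned way to talk to the kernel:
	// bpf_get_prandom_u32() returns a pseudo-random number (cf. Figure 1),
	// bpf_printk() writes a line to the kernel trace pipe.
	__u32 tag = bpf_get_prandom_u32();
	bpf_printk("execve observed, random tag: %u", tag);
	return 0;
}

char LICENSE[] SEC("license") = "GPL";
```

Such a program would typically be compiled with clang and then loaded by a user-space loader (for example via libbpf or bpftool), which issues the bpf() system call under the hood.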

At this point eBPF might sound like a disaster waiting to happen, given the ability to run code inside kernel space, but the verifier is there to prevent exactly that. Any eBPF program submitted to the kernel has to pass through the verifier first, which can be thought of as static analysis for eBPF code. It makes sure that the program doesn't access arbitrary kernel memory, that it always terminates (so no infinite loops, for example) and that it stays within acceptable limits of complexity and size.
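As a rough illustration (my own sketch, not code from a real project), the following program passes verification on recent kernels because its loop has a fixed bound, while the commented-out variant would be rejected because the verifier cannot prove it terminates:

```c
// verifier_demo.bpf.c -- sketch of what the verifier accepts and rejects.
// The kprobe target is illustrative.
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

SEC("kprobe/do_sys_openat2")
int verifier_demo(void *ctx)
{
	int sum = 0;

	// Accepted: the verifier can prove this loop terminates (fixed bound).
	for (int i = 0; i < 16; i++)
		sum += i;

	// Rejected: an unbounded loop such as
	//     while (bpf_get_prandom_u32()) { }
	// fails verification because termination cannot be proven.

	bpf_printk("bounded loop result: %d", sum);
	return 0;
}

char LICENSE[] SEC("license") = "GPL";
```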

Why eBPF is going to change everything

Legacy security tools have a hard time interpreting containerization technologies and Linux primitives, which means they can misattribute where exactly an incident originated. Introducing observability into an infrastructure stack can also be difficult, and a lot more difficult in a microservice-based infrastructure, where it would traditionally mean code changes in every single application. With eBPF-based tools we can cover these observability needs without touching application code, because we have access to almost every event happening on the host, including events inside containers: since containers share the host's kernel, an incident inside a container can easily be distinguished from an incident on the host itself or inside another container.

Tetragon is an eBPF-based runtime security tool that lets us accomplish basically everything described above. It is Kubernetes-aware: it attaches additional context to kernel events, so if a user tries to do something malicious inside a Kubernetes Pod, we know in which Namespace and in which Pod that action was attempted. Tetragon is configured through Tracing Policies, which can even trigger enforcement actions, for example killing a process that tries to write to a file described in the policy.

eBPF also has a great impact in the networking space, mainly through its ability to attach programs directly to network devices, and Katran is one of the projects worth mentioning here. It is an eBPF-based layer 4 load balancer, and since 2017 every packet sent to Facebook has passed through eBPF. One of the strengths of eBPF for networking is that actions we previously performed in user space can now happen inside the kernel, saving a lot of processing power and context switches.
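Attaching a program to a network device is typically done via XDP (eXpress Data Path). The sketch below, which is purely illustrative and has nothing to do with Katran's actual code, drops ICMP packets right at the driver level and passes everything else on, without the packet ever having to leave the kernel:

```c
// xdp_drop_icmp.bpf.c -- illustrative XDP program attached to a network device:
// drop ICMP packets, pass everything else.
#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <linux/ip.h>
#include <linux/in.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

SEC("xdp")
int drop_icmp(struct xdp_md *ctx)
{
	void *data     = (void *)(long)ctx->data;
	void *data_end = (void *)(long)ctx->data_end;

	struct ethhdr *eth = data;
	if ((void *)(eth + 1) > data_end)
		return XDP_PASS;                /* packet too short, let it through */
	if (eth->h_proto != bpf_htons(ETH_P_IP))
		return XDP_PASS;                /* not IPv4 */

	struct iphdr *ip = (void *)(eth + 1);
	if ((void *)(ip + 1) > data_end)
		return XDP_PASS;

	if (ip->protocol == IPPROTO_ICMP)
		return XDP_DROP;                /* decision made entirely in the kernel */
	return XDP_PASS;
}

char LICENSE[] SEC("license") = "GPL";
```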

Why eBPF won’t change everything

Every up-and-coming technology has downsides, of course, and eBPF is no exception; one of the bigger ones is complex state management. eBPF maps let us store data that user space can read later, but they were not built to handle the volume required by certain use cases, such as storing endpoints for circuit breaking or buffering requests when implementing retries. Implementing a service mesh with eBPF is also a hot topic, but that likewise requires state management, and TLS termination would be problematic to implement right now for various reasons.
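For context, a map is declared inside the eBPF program itself and can then be read from user space via the bpf() system call. The sketch below (illustrative names, not from a real project) counts openat() calls per process in a hash map; this works well for counters and small lookup tables, but it is exactly the kind of structure that gets awkward once you need large, frequently changing state:

```c
// request_counter.bpf.c -- sketch showing eBPF maps as shared state;
// map and hook names are illustrative.
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct {
	__uint(type, BPF_MAP_TYPE_HASH);
	__uint(max_entries, 1024);
	__type(key, __u32);      /* a PID */
	__type(value, __u64);    /* number of events seen for that PID */
} events_per_pid SEC(".maps");

SEC("tracepoint/syscalls/sys_enter_openat")
int count_openat(void *ctx)
{
	__u32 pid = bpf_get_current_pid_tgid() >> 32;
	__u64 one = 1;

	__u64 *count = bpf_map_lookup_elem(&events_per_pid, &pid);
	if (count)
		__sync_fetch_and_add(count, 1);   /* atomic in-place update */
	else
		bpf_map_update_elem(&events_per_pid, &pid, &one, BPF_ANY);

	return 0;
}

char LICENSE[] SEC("license") = "GPL";
```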

Conclusion

eBPF is already a great technology with a big impact at Layers 3 and 4; things get a little more complicated once we reach Layer 7, but I'm hopeful it will reach new heights in the future, and who knows, maybe one day we will even have Doom running inside the kernel.


Author Dominik Táskai

Dominik Táskai is a tech enthusiast currently working as a DevOps Engineer and studying everything DevOps- and Cloud-Native-related, with a particular focus on Kubernetes and Go.

Category:

Architecture


