🧠 Humans of Cyber | Thomas Graf
Cilium turns eBPF into a cloud-native data plane for networking, security, and observability, scaling Kubernetes beyond iptables.
Cloud-native security stopped being a perimeter problem the moment workloads became ephemeral. In Kubernetes, IP addresses churn, services scale in seconds, and “the network” is no longer a stable boundary. That is the environment Cilium was built for: a programmable, identity-aware data plane that lives where the truth lives, inside the Linux kernel.
Cilium is best understood as a platform that collapses three concerns into one system:
Networking (CNI and service connectivity)
Security (policy enforcement tied to workload identity)
Observability (flow-level visibility without sidecars everywhere)
The unifying mechanism is eBPF, which makes the kernel itself programmable in tightly constrained, performance-safe ways.
Where Cilium Came From, and Why That Matters
The Cilium story starts with eBPF becoming viable as a modern kernel primitive: Thomas Graf pushed an early commit in December 2015, and the project launched in 2016, built by the team that went on to found Isovalent. The target was specific: replace brittle, iptables-heavy container networking with a datapath designed for dynamic systems.
Cilium’s CNCF trajectory is part of why it became infrastructure-grade:
CNCF incubation: October 13, 2021
CNCF graduation: October 11, 2023
That path matters because it typically correlates with hardened releases, clearer governance, and broad production adoption.
On the industry side, the project’s momentum attracted major backing and eventually consolidation:
Series A (2020) and Series B (2022) funding rounds expanded the platform’s scope
Cisco’s acquisition of Isovalent (announced Dec 21, 2023; later closed) marked eBPF’s shift from “promising” to “strategic pillar” in mainstream networking and security
What Cilium Actually Is (Beyond “a CNI”)
Cilium began as a high-performance Kubernetes networking layer, but it matured into a broader dataplane platform:
Cilium CNI: pod networking and policy enforcement
eBPF kube-proxy replacement: service load balancing without iptables scaling pain
Hubble: flow visibility built directly on the datapath
Tetragon: runtime security and enforcement at syscall boundaries
Cilium Service Mesh: a sidecar-reduced, dataplane-forward approach to mesh functions
If you only think of Cilium as “that fast CNI,” you miss the point. Cilium is an attempt to make networking and security behave like modern systems engineering: programmable, observable, and driven by identity rather than address.
How Cilium Uses eBPF Without Becoming a Science Project
eBPF is the enabling layer, not the product. Cilium uses eBPF programs attached to kernel hook points so it can observe and control traffic without shuttling packets up to user space.
The practical implication: fewer context switches, less overhead, and enforcement where packets are actually processed.
Key hook points:
XDP (driver-level): earliest interception, useful for high-performance drops and routing decisions
TC ingress/egress: deeper processing, header manipulation, policy decisions
socket and trace hooks: richer context for visibility and security signals
This “kernel-first” approach is the root of Cilium’s performance profile and its ability to offer observability that isn’t bolted on as an afterthought.
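A toy Python model can make the "earliest interception" point concrete (illustrative only; the real hooks are eBPF programs running in the kernel, not Python): a DROP verdict at the XDP stage means the deeper TC and socket stages never run, so unwanted traffic is discarded before the kernel does any further work.

```python
# Illustrative model (not Cilium code): hooks run in kernel order, and a
# DROP at the earliest hook skips all later processing stages.

DROP, PASS = "DROP", "PASS"

def xdp_hook(pkt):
    # Driver-level: drop denylisted sources before anything else runs.
    # (The denylisted address here is a made-up example.)
    return DROP if pkt["src"] == "198.51.100.7" else PASS

def tc_hook(pkt):
    return PASS  # policy decisions / header rewriting would happen here

def socket_hook(pkt):
    return PASS  # per-socket visibility and security signals

def traverse(pkt):
    """Run hooks in order; stop at the first DROP. Returns (verdict, stages run)."""
    stages = 0
    for hook in (xdp_hook, tc_hook, socket_hook):
        stages += 1
        if hook(pkt) == DROP:
            return DROP, stages
    return PASS, stages

print(traverse({"src": "198.51.100.7"}))  # dropped after 1 stage
print(traverse({"src": "10.0.0.5"}))      # passes all 3 stages
```

The earlier the verdict, the less work per unwanted packet, which is why XDP is the hook of choice for high-volume drops.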
Identity Beats IP: The Core Security Model
Kubernetes breaks IP-centric security. Pods are short-lived and their addresses are recycled. Static rulesets tied to IPs become operational debt.
Cilium replaces that model with workload identities derived from Kubernetes labels. Each workload gets a security identity; enforcement uses fast kernel maps to translate packet context into identity context.
This is the mental shift:
Old world: “allow 10.0.0.12 to talk to 10.0.0.20 on 443”
Cilium world: “allow service=payments to talk to service=ledger on 443”
That change makes policy survivable in environments where the underlying addresses are not.
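The shift is easy to sketch in a few lines of Python (an illustrative model, not Cilium's implementation; the label-to-identity derivation here is invented for the example). Because the policy is keyed on a label-derived identity rather than an address, pod IPs can churn freely without invalidating a single rule.

```python
# Illustrative sketch (not Cilium's datapath): policy keyed on a
# label-derived identity survives IP churn; IP-keyed rules would not.

def identity_for(labels):
    # Hypothetical derivation: a stable identity from the label set.
    return frozenset(labels.items())

# Policy in identity terms: payments may talk to ledger on 443.
ALLOWED = {
    (identity_for({"service": "payments"}),
     identity_for({"service": "ledger"}),
     443),
}

def allowed(src_labels, dst_labels, port):
    """Verdict depends only on labels and port, never on pod IPs."""
    return (identity_for(src_labels), identity_for(dst_labels), port) in ALLOWED

payments, ledger = {"service": "payments"}, {"service": "ledger"}
print(allowed(payments, ledger, 443))  # True
print(allowed(payments, ledger, 80))   # False
```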
Routing Models: Three Ways to Fit Real Networks
Cilium supports multiple routing approaches, mainly because cloud and on-prem networks have different constraints.
1) Encapsulation (VXLAN/Geneve overlay)
Best when the underlay cannot or should not route pod CIDRs.
Lower setup friction
Some packet overhead from encapsulation
Convenient metadata propagation in tunnel headers
2) Native routing
Best when the underlay can route pod IPs.
Highest throughput potential
No encapsulation overhead
Requires routing awareness in the network
3) Cloud-optimized modes
Provider-native approaches such as AWS ENI-style behavior (and equivalents) align pod addressing and routing with cloud primitives.
This flexibility is a big reason Cilium shows up in very different environments, from tightly controlled enterprise networks to hyperscaler-managed Kubernetes.
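To put a number on the encapsulation trade-off in mode 1: VXLAN over IPv4 wraps each inner frame in roughly 50 bytes of extra headers (inner Ethernet, outer IPv4, UDP, and VXLAN), which is why overlay pods typically run with a reduced MTU. A quick back-of-the-envelope check:

```python
# Back-of-the-envelope for the "packet overhead" bullet: VXLAN over IPv4
# adds inner Ethernet (14) + outer IPv4 (20) + UDP (8) + VXLAN (8) bytes.

VXLAN_OVERHEAD = 14 + 20 + 8 + 8  # = 50 bytes per packet

def effective_mtu(underlay_mtu):
    """Largest inner IP packet that fits without fragmentation."""
    return underlay_mtu - VXLAN_OVERHEAD

print(effective_mtu(1500))  # 1450 on a standard Ethernet underlay
print(effective_mtu(9000))  # 8950 with jumbo frames
```

On a jumbo-frame underlay the overhead becomes a rounding error, which is one reason the encapsulation penalty matters less in some data centers than the raw percentage suggests.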
The kube-proxy Replacement: Why Scale Changes Everything
iptables-based service routing degrades as the cluster grows: rules are evaluated sequentially, so lookup cost scales roughly linearly with the number of services and endpoints, and every update means rewriting ever-larger rule chains.
Cilium’s eBPF load balancer replaces that behavior with kernel maps and constant-time lookups. In practice, this reduces the blast radius of growth:
more services
more endpoints
more churn
The service routing stays stable rather than becoming the hidden tax in cluster scale-ups.
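The contrast is easy to model in Python (an analogy, not the real datapath: the dict stands in for an eBPF hash map, the list scan for sequential iptables-style rule evaluation):

```python
# Illustrative contrast: a sequential rule scan touches more rules as the
# cluster grows; a hash-map lookup (the eBPF-map analogue) does not.

def linear_lookup(rules, vip):
    """Scan rules in order; return (backend, number of rules examined)."""
    for examined, (rule_vip, backend) in enumerate(rules, start=1):
        if rule_vip == vip:
            return backend, examined
    return None, len(rules)

# Hypothetical cluster with 200 services.
services = [(f"10.96.0.{n}", f"backend-{n}") for n in range(1, 201)]
service_map = dict(services)  # constant-time map, like an eBPF hash map

backend, examined = linear_lookup(services, "10.96.0.200")
print(examined)                   # 200 rules touched for the last service
print(service_map["10.96.0.200"])  # one hashed lookup: backend-200
```

With the map, adding the 201st service does not make the 200th slower; with the scan, every addition taxes everything behind it.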
Service Mesh Without Sidecars Everywhere
Traditional meshes commonly deploy a sidecar proxy per pod, which adds operational and resource overhead. Cilium’s approach is dataplane-heavy:
use eBPF for the fast path (L4 enforcement, identity, visibility)
offload complex L7 handling to per-node Envoy, not per-pod proxies
This is not “mesh without Envoy”; it is “mesh without per-pod proxy sprawl.” The operational win is fewer moving parts at the application boundary.
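The resource math behind "sidecar-reduced" is simple but stark (the fleet size and per-proxy footprint below are hypothetical numbers, chosen only to illustrate the scaling difference):

```python
# Rough resource math: a per-pod sidecar model runs one proxy per pod,
# the per-node model one proxy per node. All numbers are hypothetical.

PODS, NODES = 2000, 50
MB_PER_PROXY = 60  # assumed per-proxy memory footprint

sidecar_proxies = PODS    # one proxy per pod
per_node_proxies = NODES  # one proxy per node

print(sidecar_proxies * MB_PER_PROXY)   # 120000 MB across the fleet
print(per_node_proxies * MB_PER_PROXY)  # 3000 MB across the fleet
```

Proxy count tracks node count rather than pod count, so the overhead curve flattens as workloads scale.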
Hubble: Observability That Comes From the Datapath
Hubble is powerful because it does not need to “guess” the network. It reads what the datapath already knows.
That enables:
service dependency graphs
flow-based troubleshooting (including why a policy dropped a packet)
protocol-aware visibility for common L7 signals in supported cases
For teams operating zero-trust policies, this shortens the path from “it broke” to “this exact rule blocked this exact flow.”
Tetragon: Runtime Visibility and Enforcement at the Kernel Boundary
Cilium solves the network boundary; Tetragon extends the same kernel-native model to runtime behavior.
The key differentiator is synchronous enforcement via kernel hooks (including BPF-LSM where applicable): block or kill at the moment a violating action occurs, rather than reacting after user-space processing.
This matters because “detect then respond” can be too late in fast-moving runtime attacks.
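A toy model of that ordering difference (illustrative only, not Tetragon's mechanics): in an asynchronous pipeline the side effect lands before any response fires, while a synchronous hook renders its verdict before the action runs at all.

```python
# Toy model: async detection observes an action after it completed;
# a synchronous hook can veto the action before it happens.

events = []

def dangerous_write():
    events.append("file written")

def async_detect_then_respond():
    dangerous_write()              # side effect happens first...
    events.append("alert raised")  # ...the response comes too late

def sync_hook(allowed):
    if not allowed:
        events.append("blocked")   # verdict before the action runs
        return
    dangerous_write()

async_detect_then_respond()
print(events)   # the write already landed before the alert
events.clear()
sync_hook(allowed=False)
print(events)   # the write never happened
```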
Where Cilium Shows Up in the Real World
Cilium is widely deployed across:
managed Kubernetes platforms
multi-cloud and hybrid environments
on-prem clusters that need strict policy without the iptables tax
ClusterMesh extends the identity model and service discovery across clusters, supporting global architectures that need consistent enforcement across regions.
Governance and Community Shape the Trajectory
Cilium’s sustainability is not just kernel engineering; it is also governance structure and contributor diversity. A defined contributor ladder and multi-vendor participation reduce the risk of single-vendor capture, which is critical for infrastructure components that become “too central to replace.”
Cisco’s acquisition introduced a new dynamic: larger distribution and strategic alignment, paired with the responsibility to preserve open governance expectations. The commitment to keep projects open source and the presence of external advisory structures are signals the community watches closely.
Credit: The People Behind the Platform
Cilium’s origin and technical direction are closely associated with Thomas Graf and the founding team that built the project’s eBPF-first approach through Isovalent, alongside the broader maintainer and contributor community that scaled it into a CNCF-graduated standard.
The project’s strength comes from that combination: a clear founding vision, sustained engineering discipline, and an ecosystem that treated eBPF not as a gimmick, but as the future substrate of cloud-native networking and security.
Subscribe and Comment.
Copyright © 2026 911Cyber. All Rights Reserved.