Security breaches will continue to increase in sophistication. Microsegmentation addresses this by enabling granular control over network traffic, enforcing intent-based, workload-aware policies at the application layer. This ensures that only the necessary communication between services can happen. Think least privilege for network communications.
In this post, I’ll walk through implementing a microsegmentation solution using Kubernetes and Cilium. It’s a deep dive into setting up a secure cluster, enforcing traffic controls, and observing policies in action.
What Makes Microsegmentation Different from Firewalls?
While firewalls operate at the perimeter or enforce rules based on IP addresses and ports, microsegmentation focuses on securing east-west traffic within your infrastructure. With tools like Cilium, you get:
- Application-Aware Policies: Policies are tied to workloads, not IPs, making them resilient in dynamic environments like Kubernetes.
- Protocol-Level Insights: Cilium goes beyond TCP/UDP and supports Layer 7 (e.g., HTTP, DNS), enabling fine-grained controls such as restricting specific API endpoints.
- Identity-Based Security: Policies are based on pod labels, namespaces, and identities rather than static IPs, aligning with DevOps workflows.
- Real-Time Observability: With tools like Hubble, you can trace every flow in your cluster to validate policies and detect anomalies.
This approach ensures that your security posture is adaptable and scalable for modern applications.
The Objective
The goal was to implement a secure, observable microsegmentation solution using Cilium with the following capabilities:
- Enforce Pod Level Isolation: Allow only specific pods to communicate with each other.
- Inspect Traffic Flows: Use Hubble to monitor and debug network policies.
- Leverage Kubernetes Labels: Dynamically apply policies based on workload metadata.
Setup Overview
- Environment: Ubuntu 20.04
- Kubernetes: v1.31.3
- Cilium: Installed via Helm
- Tools: kubectl, helm, hubble
Steps to Implementation
Installing Kubernetes
Install Kubernetes Dependencies
These commands worked for me on Ubuntu 20.04:
- Add the GPG key for Kubernetes packages:
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.31/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
- Add the repository with the signed GPG key:
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.31/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
- Install Kubernetes components:
sudo apt update && sudo apt install -y kubelet kubeadm kubectl
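Before initializing the cluster, two prerequisites from the kubeadm documentation are worth applying here as well (a suggestion, adapt to your environment): hold the package versions so unattended upgrades don't skew the cluster, and disable swap, which kubelet requires by default.
sudo apt-mark hold kubelet kubeadm kubectl
# Disable swap now; also comment out any swap entry in /etc/fstab to persist it
sudo swapoff -a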
Initialize the Cluster
sudo kubeadm init --pod-network-cidr=10.97.0.0/16
Set Up kubeconfig
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Verify Cluster
kubectl get nodes
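The node will typically report NotReady at this point because no CNI plugin is running yet; that resolves once Cilium is installed below. Illustrative output (the hostname is a placeholder, not from my cluster):
NAME       STATUS     ROLES           AGE   VERSION
k8s-node   NotReady   control-plane   1m    v1.31.3
If you're running a single-node lab, you may also need to remove the control-plane taint so workloads can schedule:
kubectl taint nodes --all node-role.kubernetes.io/control-plane-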
Installing Cilium
Install Helm
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
Add Cilium Helm Repository
helm repo add cilium https://helm.cilium.io/
helm repo update
Deploy Cilium
helm install cilium cilium/cilium --namespace kube-system \
  --set kubeProxyReplacement=strict \
  --set hubble.enabled=true \
  --set hubble.relay.enabled=true \
  --set hubble.ui.enabled=true
Verify Installation
kubectl get pods -n kube-system
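Since the Cilium agent runs as a DaemonSet and Hubble Relay as a Deployment, you can also wait on the rollouts explicitly with standard kubectl:
kubectl -n kube-system rollout status ds/cilium
kubectl -n kube-system rollout status deploy/hubble-relay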
Setting Up Hubble
Install Hubble CLI
wget https://github.com/cilium/hubble/releases/download/v0.11.6/hubble-linux-amd64.tar.gz
tar -xvf hubble-linux-amd64.tar.gz
sudo mv hubble /usr/local/bin/
Adding NodePort Configuration for Hubble Relay
To ensure that the Hubble Relay service is always accessible on a specific IP and port without requiring manual port forwarding, you can edit its Kubernetes service configuration and set it to use a NodePort.
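For context, the manual alternative this avoids is a foreground port-forward that has to be kept running; a sketch using the service's default port mapping:
kubectl -n kube-system port-forward svc/hubble-relay 4245:80
hubble config set relay-address localhost:4245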
Step 1: Edit the Hubble Relay Service
First, edit the Hubble Relay service to configure it as a NodePort service.
kubectl -n kube-system edit svc hubble-relay
In the editor, modify the spec section of the service as follows:
spec:
  type: NodePort
  ports:
  - name: grpc
    port: 80
    targetPort: 80
    nodePort: 4245
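Two caveats here. Kubernetes only accepts NodePort values inside the API server's --service-node-port-range (30000-32767 by default), so if the edit is rejected, pick a port in that range or widen the range first. And if you prefer a non-interactive change over kubectl edit, a merge patch achieves the same thing; a sketch:
kubectl -n kube-system patch svc hubble-relay --type merge \
  -p '{"spec":{"type":"NodePort","ports":[{"name":"grpc","port":80,"targetPort":80,"nodePort":4245}]}}'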
Step 2: Verify the Configuration
Ensure the service reflects the changes:
kubectl -n kube-system get svc hubble-relay
You should see something like this:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hubble-relay NodePort 10.97.0.100 <none> 80:4245/TCP 10m
Step 3: Access Hubble Relay
Now, Hubble Relay is accessible on your node’s IP (e.g., 192.168.1.222) and the specified NodePort (4245):
hubble config set relay-address 192.168.1.222:4245
This ensures persistent access to the Hubble Relay service.
Verify Hubble Observability
Test the setup by observing traffic:
hubble status
hubble observe
Observe Traffic
hubble observe --from-pod default/busybox --to-pod default/test-app
Writing and Applying Policies
Basic Allow Policy
Allow traffic from busybox to test-app:
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: allow-test-app
  namespace: default
spec:
  endpointSelector:
    matchLabels:
      app: test-app
  ingress:
  - fromEndpoints:
    - matchLabels:
        app: busybox
Apply the policy:
kubectl apply -f allow-test-app.yaml
This CiliumNetworkPolicy allows ingress traffic to pods labeled app: test-app in the default namespace, but only from pods labeled app: busybox. By specifying endpointSelector, it targets test-app, while the fromEndpoints rule limits traffic sources to busybox pods. Traffic from any other source is denied by default, ensuring precise communication control and enhancing security within the cluster.
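For anyone reproducing this, the policy assumes a test-app pod serving HTTP and a busybox client in the default namespace. A minimal sketch for creating them (the nginx image and the Service are my assumptions, not part of the original setup):
kubectl run test-app --image=nginx --labels="app=test-app" --port=80
kubectl expose pod test-app --port=80
kubectl run busybox --image=busybox --labels="app=busybox" --command -- sleep 3600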
Testing Policies
- Test access from busybox:
kubectl exec -it busybox -- wget -qO- http://10.97.103.97
- Observe traffic in Hubble:
hubble observe --from-pod default/busybox --to-pod default/test-app
Deny All Traffic Policy
To test stricter isolation:
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: deny-all
  namespace: default
spec:
  endpointSelector:
    matchLabels:
      app: test-app
  ingress: []
Apply the deny-all policy:
kubectl apply -f deny-all.yaml
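With deny-all in place, the earlier wget test should now fail. A sketch of what to expect (-T caps busybox wget's wait; exact output varies):
kubectl exec -it busybox -- wget -qO- -T 5 http://10.97.103.97
# wget: download timed out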

Debugging Tips
If issues arise, use these commands to debug:
- Check Policy Enforcement Mode:
kubectl -n kube-system exec -it ds/cilium -- cilium config | grep PolicyEnforcement
- Set to always if policies aren’t enforced:
kubectl -n kube-system exec -it ds/cilium -- cilium config PolicyEnforcement=always
- Monitor Traffic in Real-Time:
hubble observe --from-pod default/busybox --to-pod default/test-app
- Inspect Logs:
kubectl -n kube-system logs ds/cilium
- List Applied Policies:
kubectl get ciliumnetworkpolicy -n default
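For the full spec of a specific policy, describe works too (cnp is the short name for ciliumnetworkpolicies):
kubectl describe cnp allow-test-app -n default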
Adding Advanced Policies for Microsegmentation
After getting the basics of microsegmentation working, the next step is exploring advanced use cases. These include Layer 7 policies for HTTP-based traffic and namespace isolation for securing communication boundaries. Below is an example of each and how they can enhance security and control in your cluster.
Advanced Use Case 1: Namespace Isolation
In this scenario, I want to restrict communication between workloads in different namespaces. For example, only allow traffic from a frontend namespace to a backend namespace while denying all other cross-namespace traffic.
Policy: Namespace Isolation
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: namespace-isolation
  namespace: backend
spec:
  endpointSelector:
    matchLabels:
      app: backend-app
  ingress:
  - fromEntities:
    - host
  - fromEndpoints:
    - matchLabels:
        k8s:io.kubernetes.pod.namespace: frontend
Explanation:
- endpointSelector ensures this policy applies only to pods labeled app=backend-app in the backend namespace.
- Traffic is allowed from:
  - The host entity (fromEntities: host), which covers kubelet health checks. Note that the broader cluster entity would admit every endpoint in the cluster and defeat the isolation.
  - Pods in the frontend namespace, matched via Cilium’s reserved k8s:io.kubernetes.pod.namespace label; an ordinary pod label like namespace: frontend would not match the namespace itself.
Testing:
- Deploy a pod in the frontend namespace and verify access:
kubectl exec -n frontend -it busybox -- wget -qO- http://<backend-pod-ip>
- Deploy a pod in another namespace (e.g., test) and ensure access is denied.
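To stand up that test quickly, a sketch of the namespaces and pods involved (images and names are my choices for illustration):
kubectl create namespace frontend
kubectl create namespace backend
kubectl create namespace test
kubectl run backend-app -n backend --image=nginx --labels="app=backend-app" --port=80
kubectl run busybox -n frontend --image=busybox --command -- sleep 3600
kubectl run busybox -n test --image=busybox --command -- sleep 3600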
Advanced Use Case 2: HTTP Filtering with Layer 7 Policies
With Cilium, you can enforce Layer 7 policies to control HTTP traffic at the API level. For example, allow only GET requests to an /api endpoint and POST requests to a /submit endpoint on test-app while denying all other HTTP methods and paths.
Policy: HTTP Filtering
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: http-filtering
  namespace: default
spec:
  endpointSelector:
    matchLabels:
      app: test-app
  ingress:
  - fromEndpoints:
    - matchLabels:
        app: busybox
    toPorts:
    - ports:
      - port: "80"
        protocol: TCP
      rules:
        http:
        - method: "GET"
          path: "/api"
        - method: "POST"
          path: "/submit"
Explanation:
- endpointSelector applies the policy to test-app.
- Traffic is allowed only from busybox to port 80 with:
  - GET requests to /api.
  - POST requests to /submit.
- All other traffic (e.g., DELETE, PUT, or access to other paths) is denied by default.
Testing:
- Test allowed traffic:
kubectl exec -it busybox -- wget -qO- http://<test-app-ip>/api
- Test denied traffic:
kubectl exec -it busybox -- wget -qO- http://<test-app-ip>/delete
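A note on behavior: with L7 rules, a request denied by the HTTP policy is typically rejected by Cilium’s proxy with a 403 response rather than dropped at the network layer. Also, busybox’s wget can only issue GET (or POST via --post-data), so to exercise a disallowed method you can run a curl-capable pod; a sketch (image and pod name are my choices, and the app=busybox label is needed so the request passes the L3/L4 part of the policy and gets rejected at L7):
kubectl run curl --rm -it --restart=Never --image=curlimages/curl \
  --labels="app=busybox" --command -- \
  curl -s -o /dev/null -w "%{http_code}\n" -X DELETE http://<test-app-ip>/api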
Hubble Observability for Advanced Policies
Use Hubble to verify Layer 7 policies in action:
hubble observe --from-pod default/busybox --to-pod default/test-app
- Allowed traffic will show Policy verdict: ALLOWED with L7 metadata (e.g., HTTP methods and paths).
- Denied traffic will show Policy verdict: DENIED with details about the blocked request.
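Illustrative output (the exact format varies by Hubble version; this is a sketch, not captured from this cluster):
default/busybox:41528 -> default/test-app:80 http-request FORWARDED (HTTP/1.1 GET http://test-app/api)
default/busybox:41532 -> default/test-app:80 http-request DROPPED (HTTP/1.1 DELETE http://test-app/api)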
Final Thoughts
Implementing microsegmentation with Cilium wasn’t what I would call fun, but it was a rewarding experience. It’s more than just adding firewalls: it lets you enforce security at the application layer with precision, adapting to modern dynamic environments.
These examples serve as a foundation for more complex setups, such as multi-cluster microsegmentation or integrating with external identity providers for zero-trust networking.