
52. #EveryoneCanContribute Cafe: Learned at KubeCon EU, feat. Cilium Tetragon first try



Insights

Michael F., Niclas, and Michael A. talked about the KubeCon EU summary in the opsindev.news June issue and looked into the various KubeCon EU YouTube playlists.

At first, Michael shared insights from eBPF day and highlighted that Tetragon is now open source. Niclas mentioned that they use Cilium in production.

Isovalent open-sourced Tetragon as a new Cilium component that enables real-time eBPF security observability and runtime enforcement. We recommend watching the eBPF day keynote at KubeCon EU, where Thomas Graf also explains the basics and future of eBPF in Cloud Native.

Spontaneous demo: let’s try Tetragon

From talking about Tetragon, it was only a small step to use the Civo Kubernetes cluster that was already running and deploy Tetragon into it.

# create a Civo Kubernetes cluster and point kubectl at it
civo kubernetes create ecc-kubeconeu
civo kubernetes config ecc-kubeconeu --save
kubectl config use-context ecc-kubeconeu
kubectl get node

# install Tetragon from the Cilium Helm repository and wait for the daemonset
helm repo add cilium https://helm.cilium.io
helm repo update
helm install tetragon cilium/tetragon -n kube-system
kubectl rollout status -n kube-system ds/tetragon -w

# deploy the Cilium Star Wars demo application
kubectl create -f https://raw.githubusercontent.com/cilium/cilium/v1.11/examples/minikube/http-sw-app.yaml
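
The manifest creates the Star Wars demo pods (a deathstar deployment plus tiefighter and xwing); a quick check that everything is running:

kubectl get pods -o wide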

After inspecting the raw JSON logs, Michael used jq for better formatting. We continued using the tetragon-cli binary on macOS to observe and filter more event types.

kubectl logs -n kube-system ds/tetragon -c export-stdout -f | jq

# download the tetragon CLI for macOS and make the extracted binary executable
wget https://github.com/cilium/tetragon/releases/download/tetragon-cli/tetragon-darwin-amd64.tar.gz
tar xzf tetragon-darwin-amd64.tar.gz
chmod +x tetragon

kubectl logs -n kube-system ds/tetragon -c export-stdout -f | ./tetragon observe

We wondered about a Homebrew formula for the CLI and raised a feature proposal.

One way to generate events to observe is to execute commands inside a container.

# open a shell in the tiefighter pod
kubectl exec -ti tiefighter -- /bin/bash

whoami

# figure out which distribution and package manager
cat /etc/os-release

# install curl and trigger an outgoing HTTP request
apk update
apk add curl
curl https://everyonecancontribute.cafe
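
In a second terminal, the process events generated by these commands can be filtered down to the tiefighter pod, following the same pattern used for xwing later on:

kubectl logs -n kube-system ds/tetragon -c export-stdout -f | ./tetragon observe --namespace default --pod tiefighter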

Besides the default demo cases, Michael deployed and showed his KubeCon EU demo, a C++ application that leaks memory when DNS resolution fails. Together with kube-prometheus, we inspected the Prometheus graph interface, querying for container_memory_rss{container=~"cpp-dns-leaker-service.*"}.

git clone https://gitlab.com/everyonecancontribute/observability/cpp-dns-leaker.git && cd cpp-dns-leaker

kubectl apply -f https://gitlab.com/everyonecancontribute/observability/cpp-dns-leaker/-/raw/main/manifests/cpp-dns-leaker-service.yml

kubectl logs -f deployment.apps/cpp-dns-leaker-service-o11y
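
To open the Prometheus graph interface, a port-forward is enough; a minimal sketch, assuming the kube-prometheus defaults (namespace monitoring, service prometheus-k8s):

# assumes kube-prometheus defaults: namespace "monitoring", service "prometheus-k8s"
kubectl -n monitoring port-forward svc/prometheus-k8s 9090

Then browse http://localhost:9090/graph and run the container_memory_rss query from above.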

We briefly talked about the application’s code: an endless loop that allocates 1MB of memory and frees it after its operations. The DNS error handling function continues on error but does not free the buffer, which creates the memory leak to observe.
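
With the port-forward in place, the growing memory usage can also be watched from the command line through the Prometheus HTTP API; a sketch, assuming Prometheus is reachable on localhost:9090:

# query the same container_memory_rss metric via the Prometheus HTTP API
curl -sG 'http://localhost:9090/api/v1/query' \
  --data-urlencode 'query=container_memory_rss{container=~"cpp-dns-leaker-service.*"}' | jq '.data.result[].value'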

We simulated chaos by scaling the CoreDNS replicas to zero in the running Kubernetes cluster. Alternatively, deploy Chaos Mesh and inject DNS failures (see the install sketch after the commands below).

# break DNS resolution, then restore it
kubectl scale --replicas 0 deploy/coredns -n kube-system
kubectl scale --replicas 2 deploy/coredns -n kube-system
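
For the Chaos Mesh alternative, a minimal install sketch; note that DNSChaos experiments require the chaos DNS server (dnsServer.create=true), and defining the actual DNSChaos resource is left out here:

# install Chaos Mesh; dnsServer.create=true is required for DNSChaos experiments
helm repo add chaos-mesh https://charts.chaos-mesh.org
helm repo update
helm install chaos-mesh chaos-mesh/chaos-mesh -n chaos-mesh --create-namespace --set dnsServer.create=true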

TCP connection observability was next.

kubectl apply -f https://raw.githubusercontent.com/cilium/tetragon/main/crds/examples/tcp-connect.yaml
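
With the policy applied, outgoing connections should show up as TCP connect/close events; for example, curl from the tiefighter pod again and watch the observe output:

kubectl exec -ti tiefighter -- curl -s https://everyonecancontribute.cafe

kubectl logs -n kube-system ds/tetragon -c export-stdout -f | ./tetragon observe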

We could not make the file handle demo work, probably due to a kernel-specific limitation in Civo. We will research this asynchronously.

kubectl apply -f https://raw.githubusercontent.com/cilium/tetragon/main/crds/examples/sys_write_follow_fd_prefix.yaml

kubectl logs -n kube-system ds/tetragon -c export-stdout -f | ./tetragon observe --namespace default --pod xwing

kubectl exec -it xwing -- /bin/bash

vi /etc/passwd

Even so, this was an impressive first demo of Tetragon. We will continue to evaluate demo cases and how it fits into production observability, maybe at Kubernetes Community Days Berlin in two weeks, where Michael is giving a talk.

Learned at KubeCon: More Updates

OpenTelemetry

OpenTelemetry announced GA for metrics at KubeCon EU, which means the APIs are stable and we can look into the collector, auto-instrumentation, and much more. A deep dive into OpenTelemetry metrics touches on the getting-started questions and walks through the architecture and the tools/frameworks to use. Fantastic article!

The KubeCon EU community vote in TAG Observability is very interesting: add profiling as an OpenTelemetry-supported event type.

Jaeger Tracing can now accept the OpenTelemetry protocol directly, allowing trace data to be sent directly: “With this new capability, it is no longer necessary … to run the OpenTelemetry Collector in front of the Jaeger backend.”
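
A quick sketch for trying this locally, assuming the Jaeger all-in-one image from v1.35, where native OTLP ingestion sits behind the COLLECTOR_OTLP_ENABLED flag and uses the standard OTLP ports 4317 (gRPC) and 4318 (HTTP):

# Jaeger v1.35+ all-in-one with native OTLP ingestion enabled
docker run --rm -e COLLECTOR_OTLP_ENABLED=true \
  -p 16686:16686 -p 4317:4317 -p 4318:4318 \
  jaegertracing/all-in-one:1.35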

eBPF

Bumblebee also brings in a new perspective, helping to build, run, and distribute eBPF programs using OCI images. Another great example is Parca for profiling: the eBPF day talk at KubeCon EU about the change from C to Rust for more program safety was super interesting.

Niclas mentioned the service mesh vs. eBPF topic and blog post.

Chaos engineering

Michael shared insights into the story behind his KubeCon EU talk about developers learning Observability (slides).

We talked about using podtato-head as a demo application for deployment testing, and Michael opened the Kubernetes Monitoring Workshop slides to share exercises.

In between, Michael shared the background story of the DevOps Twins and how https://devops-twins.com/ came to life, and why GitLab DevRels wear the same shoes. More KubeCon stories can be found in Michael’s blog post.

News

The next meetup happens on July 12, 2022.

We will meet on the second Tuesday at 9am PT.


Date published: June 14, 2022

Tags: CI/CD, Containers, Dev, DevSecOps, Kubernetes, Cloud Native, Security, KubeCon