Very good talks at this pre-FOSDEM 2020 Meetup in an unusual industrial location.
So You’re Using Kubernetes! A Practical Guide for Application Security
Phil Estes | @estesp | IBM
Talk summary
In this talk we’ll cover the options and best practices at each layer for deploying and running applications in a secure way. We will also look at the ever-growing ecosystem of tooling, spanning both open-source and vendor-specific offerings, that can help developers move their organizations towards production-ready, secure applications in Kubernetes!
BUT…Container Security is Hard!!
- Use a cloud provider
- Use publicly available recommended guides and profiles (CIS, NIST/NVD, Docker Bench, etc.)
- Try out emerging tooling
- Generate seccomp profiles by running your application with BPF tracing: github.com/containers/oci-seccomp-bpf-hook
SECure COMPuting (seccomp)
seccomp is a computer security facility in the Linux kernel. seccomp allows a process to make a one-way transition into a “secure” state where it cannot make any system calls except exit, sigreturn, read and write to already-open file descriptors.
- LWN.net: A seccomp overview
- SECure COMPuting with filters (kernel.org)
- Seccomp in Kubernetes — Part I: 7 things you should know before you even start!
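The original strict mode described above can be demonstrated from userspace. A minimal sketch in Python (Linux-only; the `prctl` constants are copied from the kernel headers): a child process enters strict mode, writes to an already-open file descriptor, and is then killed with SIGKILL on its first disallowed syscall.

```python
import subprocess
import sys

# Constants from <linux/prctl.h> and <linux/seccomp.h> (Linux-only).
child_code = r"""
import ctypes, os
PR_SET_SECCOMP = 22
SECCOMP_MODE_STRICT = 1
libc = ctypes.CDLL(None)
libc.prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT, 0, 0, 0)
os.write(1, b"write is still allowed\n")   # read/write on open fds: OK
os.open("/etc/hostname", os.O_RDONLY)      # any other syscall: SIGKILL
"""

proc = subprocess.run([sys.executable, "-c", child_code])
# The kernel delivers SIGKILL on the disallowed open(), so the child's
# return code is -9 (subprocess reports death-by-signal as a negative number).
print("child exit code:", proc.returncode)
```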
Github - oci-seccomp-bpf-hook
OCI hook to trace syscalls and generate a seccomp profile
An OCI hook that generates seccomp profiles by tracing the syscalls made by a container. The generated profile allows all the syscalls that were made and blocks every other syscall.
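A generated profile has roughly the shape below, following the OCI runtime seccomp format (`defaultAction` rejects everything not explicitly listed); the syscall names here are illustrative, not actual hook output:

```json
{
  "defaultAction": "SCMP_ACT_ERRNO",
  "syscalls": [
    {
      "names": ["accept4", "bind", "close", "read", "write"],
      "action": "SCMP_ACT_ALLOW"
    }
  ]
}
```

Such a profile can then be handed back to the runtime, e.g. via `--security-opt seccomp=profile.json` with podman or Docker.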
Syscalls are traced by launching a binary via the prestart OCI hook. That binary spawns a child process which attaches the function enter_trace to the raw_syscalls:sys_enter tracepoint using eBPF. The function inspects every syscall made on the system and writes those originating from the container’s PID namespace to a perf buffer. A userspace process reads the perf buffer and generates the seccomp profile when the container exits.
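The PID-namespace comparison at the heart of that filter can be illustrated from userspace: each process exposes its namespace membership as a symlink under /proc/&lt;pid&gt;/ns/, and two processes are in the same PID namespace exactly when the links match. A small sketch (the hook itself does the equivalent check in kernel space via eBPF):

```python
import os

def pid_namespace(pid="self"):
    # The link target looks like "pid:[4026531836]"; the number is the
    # namespace's inode, which uniquely identifies it on this host.
    return os.readlink(f"/proc/{pid}/ns/pid")

# A process trivially shares a PID namespace with itself; a containerized
# process would show a different inode than the host's init.
print(pid_namespace())
print(pid_namespace() == pid_namespace(os.getpid()))
```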
github.com/containers/oci-seccomp-bpf-hook
eBPF
eBPF should stand for something meaningful, like Virtual Kernel Instruction Set (VKIS), but due to its origins it stands for extended Berkeley Packet Filter.
An eBPF program is “attached” to a designated code path in the kernel. When the code path is traversed, any attached eBPF programs are executed. Given its origin, eBPF is especially suited to writing network programs and it’s possible to write programs that attach to a network socket to filter traffic, to classify traffic, and to run network classifier actions. It’s even possible to modify the settings of an established network socket with an eBPF program. The XDP project, in particular, uses eBPF to do high-performance packet processing by running eBPF programs at the lowest level of the network stack, immediately after a packet is received.
Another type of filtering performed by the kernel is restricting which system calls a process can use. This is done with seccomp BPF.
eBPF is also useful for debugging the kernel and carrying out performance analysis; programs can be attached to tracepoints, kprobes, and perf events. Because eBPF programs can access kernel data structures, developers can write and test new debugging code without having to recompile the kernel. The implications are obvious for busy engineers debugging issues on live, running systems. It’s even possible to use eBPF to debug user-space programs by using Userland Statically Defined Tracepoints.
The power of eBPF flows from two advantages: it’s fast and it’s safe. To fully appreciate it, you need to understand how it works.
- LWN.net: A thorough introduction to eBPF
- YouTube: BPF: Tracing and More (46:17) by Brendan Gregg, linux.conf.au 2017
- Brendan Gregg’s Blog:
- Learn eBPF Tracing: Tutorial and Examples
- Linux Extended BPF (eBPF) Tracing Tools
Demos
The demos are located at github.com/estesp/playground
How to Train Your Red Team (for Cloud-Native)
Andrew Martin | @sublimino and @controlplaneio
Talk summary
How do we safely introduce Cloud Native software without opening unexpected security holes? By understanding risk, modelling threats, and attacking our own systems.
“Simulation” (i.e. playing hacking games on production-like infrastructure) is rising to prominence as a comprehensive training method for penetration testers, Red Teams, and infrastructure engineers. It safely demonstrates the risks an organisation or platform may face by using a controlled environment that looks and feels like production — but is only a clone.
In this highly technical talk we:
- cover the challenges faced introducing Cloud Native to financial organisations
- show the steps taken to threat model Kubernetes
- build and automate attack trees and kill chains for likely (and perversely difficult) compromise scenarios
- demonstrate an open-source Kubernetes CTF platform
Attack Trees
Once Attack Trees are created one can use them to:
- Map security controls and standards onto the tree in order to understand their coverage
- Automate security regression tests
- Enable the SOC to develop a protective monitoring policy and provide situational awareness in the event of a suspected breach
- Add colour to discussions between security and project teams
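The first two bullets can be sketched concretely. A toy example, with node names and controls invented for illustration (not taken from the CNCF threat model): represent the tree as nested goal/children pairs, map controls onto the leaves they mitigate, and walk the tree to find unmitigated attack paths.

```python
# A toy attack tree: each node is (goal, [children]); leaves are concrete
# attacker actions.
attack_tree = ("compromise container", [
    ("exploit app vulnerability", [
        ("inject code via untrusted input", []),
        ("exploit known CVE in dependency", []),
    ]),
    ("escape to host", [
        ("abuse privileged container", []),
        ("mount host filesystem", []),
    ]),
])

# Security controls mapped onto the leaves they mitigate.
controls = {
    "exploit known CVE in dependency": ["image scanning in CI"],
    "abuse privileged container": ["PodSecurity 'restricted' profile"],
    "mount host filesystem": ["PodSecurity 'restricted' profile"],
}

def uncovered_leaves(node):
    """Return the leaf actions no control currently mitigates."""
    goal, children = node
    if not children:
        return [] if goal in controls else [goal]
    return [leaf for child in children for leaf in uncovered_leaves(child)]

print(uncovered_leaves(attack_tree))
# -> ['inject code via untrusted input']
```

The same walk, inverted, gives the coverage report for the mapped controls; scripting it is what makes the regression testing in the second bullet repeatable.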
Attackers think in Graphs, Defenders think in Lists
Full article from @JohnLaTwC: “Defenders think in lists. Attackers think in graphs. As long as this is true, attackers win.”
Attack Tree sample of a compromised container:
kubesim.io
kubesim.io is a K8S Hacking and Hardening Simulator
- infrastructure deployment
- cluster provisioning and workload configuration
- scenario runner with challenges, hints, and scoring
- raw command line experience
- open source core* at github.com/kubernetes-simulator/simulator
How to train your red team
- Hosted: kubesim.io
- Open Source: github.com/kubernetes-simulator/simulator
- Attack Tree: github.com/cncf/financial-user-group/tree/master/projects/k8s-threat-model
- Training: control-plane.io/
Runtime Security with Kubernetes - Let’s hack a cluster
Kris Nova | @kris-nova
Bio
Kris Nova is Chief Open Source Advocate at Sysdig, focusing on security, intrusion detection, and the Linux kernel with Kubernetes and eBPF. An active advocate for open source, Nova is an ambassador for the CNCF and the creator of kubicorn, a successful Kubernetes infrastructure management tool. Nova joined Sysdig from Heptio/VMware, where she was a Senior Developer Advocate. Prior to VMware, Nova was at Deis/Microsoft, where she was a developer advocate and an engineer on Kubernetes. Nova has a deep technical background in the Go programming language and has authored many successful open-source tools in Go. She has organized many special interest groups in Kubernetes and is a leader in the community. She understands the frustration of running cloud-native infrastructure and authored an O’Reilly book on the topic, Cloud Native Infrastructure. Nova lives in Seattle and spends her free time climbing mountains.
Talk summary
Kubernetes is complex, and extremely vulnerable. In 2019 we explored the complexity of the Kubernetes codebase and the antipatterns therein. This year we want to understand how we observe our cluster at runtime. Let’s live-code some C and C++ and explore the libraries that bring Wireshark, Falco, and Sysdig to life. We concretely demonstrate how we can audit a Kubernetes system by taking advantage of the kernel’s syscall information while enriching this data with metadata from Kubernetes.
We start off by presenting the problem of Kubernetes security at runtime. We discuss concerns with namespace and privilege escalation in a Kubernetes environment. We discover how auditing the kernel gives us visibility into both the container layer, as well as the underlying system layer.
We look at building an eBPF probe, or kernel module to begin auditing syscall metrics. We discover how we are able to pull those out of the kernel into userspace, and start exploring powerful patterns for using these metrics to secure a Kubernetes cluster.
The audience walks away understanding how the kernel treats containers, and how we are able to easily make sense of them. The audience also walks away equipped with an OSS toolkit for understanding, observing, and securing a Kubernetes environment.
This session was repeated and recorded at FOSDEM. See the video from FOSDEM
Slides: github.com/kris-nova/public-speaking
Falco
Falco efficiently leverages Extended Berkeley Packet Filter (eBPF), a secure mechanism, to capture system calls and gain deep visibility. By adding Kubernetes application context and Kubernetes API audit events, teams can understand who did what.
Falco, the open source cloud-native runtime security project, is the de facto Kubernetes threat detection engine. Falco detects unexpected application behavior and alerts on threats at runtime.
Falco is the first runtime security project to join the CNCF Incubating stage.
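As an illustration of how that detection is expressed, a Falco rule pairs a filter condition over syscall events with an alert output. A sketch in the style of Falco’s default ruleset (the macros and field names follow Falco’s documentation, but this particular rule is illustrative):

```yaml
- rule: Terminal shell in container
  desc: A shell with a terminal attached was spawned inside a container
  condition: spawned_process and container and proc.name = bash and proc.tty != 0
  output: "Shell spawned in container (user=%user.name container=%container.id)"
  priority: WARNING
```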
- Falco.org
- github.com/falcosecurity/falco
- SELinux, Seccomp, Sysdig Falco, and you: A technical discussion
Open Policy Agent (OPA)
The Open Policy Agent (OPA, pronounced “oh-pa”) is an open source, general-purpose policy engine that unifies policy enforcement across the stack. OPA provides a high-level declarative language that lets you specify policy as code, and simple APIs to offload policy decision-making from your software. You can use OPA to enforce policies in microservices, Kubernetes, CI/CD pipelines, API gateways, and more.
OPA decouples policy decision-making from policy enforcement. When your software needs to make policy decisions it queries OPA and supplies structured data (e.g., JSON) as input. OPA accepts arbitrary structured data as input.
OPA has a flexible architecture: it can run as a daemon or be embedded as a library in your application.
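In real deployments the policy is written in OPA’s Rego language and queried over its REST or Go API. As a language-agnostic sketch of the pattern itself (the function and field names below are invented for illustration, not OPA’s API): the enforcement point hands structured JSON input to a decision function and acts on the returned decision, so the rule logic never lives in the service code.

```python
import json

# Toy stand-in for a policy engine: the enforcement point supplies
# structured input and receives a structured decision back.
def decide(policy, input_doc):
    return {"allow": policy(input_doc)}

# Example policy: only admit pods that do not request privileged mode.
def no_privileged_pods(input_doc):
    containers = input_doc["request"]["object"]["spec"]["containers"]
    return all(
        not c.get("securityContext", {}).get("privileged", False)
        for c in containers
    )

# A pared-down admission-review-style input document.
admission_review = json.loads("""
{"request": {"object": {"spec": {"containers": [
    {"name": "web", "securityContext": {"privileged": true}}
]}}}}
""")

print(decide(no_privileged_pods, admission_review))
# -> {'allow': False}
```

Swapping the policy function without touching the caller is the decoupling OPA provides, with the added benefit that Rego policies can be updated independently of the services that query them.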
Applying Policy Throughout The Application Lifecycle with Open Policy Agent by Gareth Rushgrove at Snyk
Links
- kubernetes security audit
- Kris Nova on twitter
- Kris Nova on github
- get involved with falco
- More FOSDEM security talks this weekend