Falco

Falco taps into the Linux kernel, capturing system calls made by applications running in the cluster. This is achieved through two main mechanisms: a kernel module and extended Berkeley Packet Filter (eBPF) technology, giving it a versatile approach to monitoring system behavior across different environments.

  • Falco libraries: The Falco libraries, or "libs," collect the data the sensor will process; they also manage state and provide multiple layers of enrichment for the collected data.
  • Plugins: Plugins extend the sensor with additional data sources. For example, plugins make it possible for Falco to use AWS CloudTrail and Kubernetes audit logs as data sources.
  • Falco: The main sensor executable, including the rule engine.
  • Falcosidekick: Routes notifications and connects the sensor to the external world.
  • Falco-Talon: The response engine.
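The kernel module vs. eBPF choice mentioned above is configurable; when deploying with the Helm chart, the capture driver is selected via the chart's `driver` values (a sketch assuming the falcosecurity/falco chart; verify against your chart version):

```yaml
# Excerpt for k8s/helm/falco.yaml -- selects the capture driver.
driver:
  enabled: true
  kind: modern_ebpf   # alternatives: kmod (kernel module), ebpf (classic eBPF probe)
```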

Redis usage:

  • falco → falcosidekick → falcosidekick-ui → redis (stores logs when the web UI output is enabled)
  • falco → falcosidekick → redis

Install

Install on k8s

The Falco container must have access to /var/run/docker.sock and /proc; mount them into the container. See the Kind example.

Deploy Falco, Falcosidekick and Falcosidekick UI

Falcosidekick has a separate chart, but it should be installed through the Falco chart; don't install it separately.
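Concretely, Falcosidekick and its UI are enabled from the Falco chart's values (a sketch; these keys exist in the falcosecurity/falco chart, but check your chart version):

```yaml
# Excerpt for k8s/helm/falco.yaml -- enables Falcosidekick + UI via the Falco chart.
falcosidekick:
  enabled: true
  webui:
    enabled: true   # deploys Falcosidekick UI, backed by Redis
```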

helm repo add falcosecurity https://falcosecurity.github.io/charts

helm upgrade -i falco falcosecurity/falco \
  --namespace falco --create-namespace \
  --version 6.4.0 \
  -f k8s/helm/falco.yaml

UI: http://falcosidekick-ui-192.168.0.100.nip.io

Rule

Rules Structure

- rule: ""
  desc: ""
  condition: "<use sysdig tools to see proc event names>"
  output: ""
  priority: WARNING
  tags: []
falco --list [-v]  # prints full list of fields
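As a concrete illustration of this structure, a minimal custom rule might look like the following (hypothetical rule; the field names used in `condition` and `output` can be checked with `falco --list`):

```yaml
# Hypothetical custom rule: alert on files opened for writing under /etc
- rule: Unexpected Write Under Etc
  desc: Detect a file being opened for writing under /etc
  condition: evt.type in (open, openat) and evt.is_open_write=true and fd.name startswith /etc/
  output: "File opened for writing under /etc (user=%user.name file=%fd.name command=%proc.cmdline)"
  priority: WARNING
  tags: [filesystem, custom]
```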

Sysdig captures system calls:

# Create the trace file
sudo sysdig -w testfile.scap

# You will likely want to use a filter to keep the file size under control
sudo sysdig proc.name=cat -w testfile.scap
# do: cat /etc/hosts
# close sysdig with Ctrl-C

# Read it
sysdig -r testfile.scap

# Process the trace file with Falco
falco -e testfile.scap

# You can also use filters when reading trace files:
sysdig -r testfile.scap proc.name=cat
  • libscap: library for system capture
  • libsinsp: library for system inspection

Test App

# Trigger a rule
kubectl create deployment nginx --image=nginx
kubectl exec -it $(kubectl get pods --selector=app=nginx -o name) -- cat /etc/shadow

# Check falco logs
kubectl logs -l app.kubernetes.io/name=falco -n falco -c falco | grep Warning

Test Custom Rule

kubectl exec -it $(kubectl get pods --selector=app=nginx -o name) -- touch /etc/test_file_for_falco_rule
kubectl logs -l app.kubernetes.io/name=falco -n falco -c falco | grep Warning

See sample events generators
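One such generator is falcosecurity/event-generator, which can be run inside the cluster to produce suspicious activity that exercises the syscall rules (a sketch, assuming the upstream `falcosecurity/event-generator` image and its `run syscall` subcommand):

```shell
# Run the event generator once as a pod to trigger syscall-based rules,
# then check the Falco logs as above.
kubectl run event-generator --restart=Never \
  --image=falcosecurity/event-generator -- run syscall
```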

Plugins

Install falcoctl (readme)

How to get versions:

sudo falcoctl index add falcosecurity https://falcosecurity.github.io/falcoctl/index.yaml
sudo falcoctl index list
sudo falcoctl index update falcosecurity

# search for plugin and rules
sudo falcoctl artifact search kubernetes

# list tags for artifact
sudo falcoctl artifact info falco-rules
sudo falcoctl artifact info k8saudit
sudo falcoctl artifact info k8saudit-rules

# if no tag is found, as below, pin the artifact to `0`, e.g. k8saudit:0

# check latest version
sudo falcoctl artifact config k8saudit
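Once the right tag is known, the plugin and its rules can be installed with falcoctl (sketch; `artifact install` is a real falcoctl subcommand, but install paths depend on your setup):

```shell
# Install the k8saudit plugin and its rules at a pinned tag
sudo falcoctl artifact install k8saudit:0
sudo falcoctl artifact install k8saudit-rules:0
```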

k8saudit

Kind

Kind runs the kube-apiserver pod with hostNetwork: true inside the kind-control-plane container (IP: 172.18.0.X), so its default DNS server is 172.18.0.1:53, which cannot resolve in-cluster service names:

Post "http://falco-k8saudit-webhook.falco.svc:9765/k8s-audit?timeout=30s": dial tcp: lookup falco-k8saudit-webhook.falco.svc on 172.18.0.1:53: no such host

Add the record below to /etc/hosts on your laptop:

<ClusterIP falco-k8saudit-webhook>  falco-k8saudit-webhook.falco.svc
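The ClusterIP for that record can be looked up with kubectl (assuming the service lives in the falco namespace, as in the error above):

```shell
# Print the ClusterIP of the k8saudit webhook service
kubectl get svc falco-k8saudit-webhook -n falco \
  -o jsonpath='{.spec.clusterIP}'
```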

or add below port mapping in kind config like cilium.yaml

- role: control-plane
  extraPortMappings:
  - containerPort: 30007  # Audit backend for Falco webhook service
    hostPort: 30007
    protocol: TCP
    listenAddress: 192.168.0.100

Then, in the audit webhook backend file, `server: http://192.168.0.100:30007/k8s-audit` will work.
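The audit webhook backend file mentioned here is a kubeconfig-format file passed to kube-apiserver via `--audit-webhook-config-file` (a sketch using the port mapping above):

```yaml
# Audit webhook backend config (kubeconfig format) for kube-apiserver
apiVersion: v1
kind: Config
clusters:
- name: falco
  cluster:
    server: http://192.168.0.100:30007/k8s-audit
contexts:
- name: default
  context:
    cluster: falco
    user: ""
current-context: default
```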

Metadata collectors

Metadata is also logged via the audit policy logs with k8saudit. TODO: verify this and check for duplicate logs.

REFERENCE
