Status: Under development
A command line tool that can be used to build the Falco kernel module and eBPF probe.
When you meet `kernelversion`, that refers to the version you get by executing `uname -v`. For example, below, the version is the `59` after the hash:

```shell
uname -v
#59-Ubuntu SMP Wed Dec 4 10:02:00 UTC 2019
```
When you meet `kernelrelease`, that refers to the kernel release you get by executing `uname -r`:

```shell
uname -r
4.15.0-1057-aws
```
```shell
driverkit kubernetes --output-module /tmp/falco.ko --kernelversion=81 --kernelrelease=4.15.0-72-generic --driverversion=master --target=ubuntu-generic
driverkit docker --output-module /tmp/falco.ko --kernelversion=81 --kernelrelease=4.15.0-72-generic --driverversion=master --target=ubuntu-generic
```
Create a file named `ubuntu-aws.yaml` containing the following content:

```yaml
kernelrelease: 4.15.0-1057-aws
kernelversion: 59
target: ubuntu-aws
output:
  module: /tmp/falco-ubuntu-aws.ko
  probe: /tmp/falco-ubuntu-aws.o
driverversion: master
```

Now run driverkit using the configuration file:

```shell
driverkit docker -c ubuntu-aws.yaml
```
It is possible to customize the name of the kernel module produced by driverkit with the `moduledevicename` and `moduledrivername` options. In this context, the device name is the prefix used for the devices in `/dev/`, while the driver name is the kernel module name as reported by `modinfo` or `lsmod` once the module is loaded.
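For instance, a configuration using these options could look like the following sketch (the `mydev` and `mydriver` values are illustrative, and we assume the two options are set as top-level keys in the configuration file):

```yaml
kernelrelease: 4.15.0-72-generic
kernelversion: 81
target: ubuntu-generic
output:
  module: /tmp/custom.ko
moduledevicename: mydev
moduledrivername: mydriver
driverversion: master
```

With such a configuration, the devices would be created as `/dev/mydev*` and `modinfo` would report the module name as `mydriver`.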
At the moment, driverkit supports:

- amd64 (x86_64)
- arm64 (aarch64)

The architecture is taken from the runtime environment, but it can be overridden through the `architecture` config option.
Driverkit also supports cross-building for arm64 using qemu from an x86_64 host.

Note: the architecture cannot always be fetched automatically from the kernel release, because some kernel release names do not include the `-$arch` suffix, namely Ubuntu ones.
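For example, to force an arm64 build from an x86_64 host, you can set the `architecture` key in a configuration file (the kernel release and version below are illustrative):

```yaml
kernelrelease: 5.15.0-1004-aws
kernelversion: 5
target: ubuntu-aws
architecture: arm64
output:
  module: /tmp/falco-ubuntu-aws-arm64.ko
driverversion: master
```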
Example configuration file to build both the kernel module and eBPF probe for Ubuntu generic:

```yaml
kernelrelease: 4.15.0-72-generic
kernelversion: 81
target: ubuntu-generic
output:
  module: /tmp/falco-ubuntu-generic.ko
  probe: /tmp/falco-ubuntu-generic.o
driverversion: master
```
Example configuration file to build both the kernel module and eBPF probe for Ubuntu AWS:

```yaml
kernelrelease: 4.15.0-1057-aws
kernelversion: 59
target: ubuntu-aws
output:
  module: /tmp/falco-ubuntu-aws.ko
  probe: /tmp/falco-ubuntu-aws.o
driverversion: master
```
Example configuration file to build the kernel module for CentOS 6:

```yaml
kernelrelease: 2.6.32-754.14.2.el6.x86_64
kernelversion: 1
target: centos
output:
  module: /tmp/falco-centos6.ko
driverversion: master
```
Example configuration file to build the kernel module for CentOS 7:

```yaml
kernelrelease: 3.10.0-957.12.2.el7.x86_64
kernelversion: 1
target: centos
output:
  module: /tmp/falco-centos7.ko
driverversion: master
```
Example configuration file to build the kernel module for CentOS 8:

```yaml
kernelrelease: 4.18.0-147.5.1.el8_1.x86_64
kernelversion: 1
target: centos
output:
  module: /tmp/falco-centos8.ko
driverversion: master
```
Example configuration file to build the kernel module for Amazon Linux:

```yaml
kernelrelease: 4.14.26-46.32.amzn1.x86_64
target: amazonlinux
output:
  module: /tmp/falco_amazonlinux_4.14.26-46.32.amzn1.x86_64.ko
driverversion: master
```
Example configuration file to build both the kernel module and eBPF probe for Amazon Linux 2:

```yaml
kernelrelease: 4.14.171-136.231.amzn2.x86_64
target: amazonlinux2
output:
  module: /tmp/falco_amazonlinux2_4.14.171-136.231.amzn2.x86_64.ko
  probe: /tmp/falco_amazonlinux2_4.14.171-136.231.amzn2.x86_64.o
driverversion: master
```
Example configuration file to build both the kernel module and eBPF probe for Amazon Linux 2022:

```yaml
kernelrelease: 5.10.96-90.460.amzn2022.x86_64
target: amazonlinux2022
output:
  module: /tmp/falco_amazonlinux2022_5.10.96-90.460.amzn2022.x86_64.ko
  probe: /tmp/falco_amazonlinux2022_5.10.96-90.460.amzn2022.x86_64.o
driverversion: master
```
Example configuration file to build both the kernel module and eBPF probe for Debian:

```yaml
kernelrelease: 4.19.0-6-amd64
kernelversion: 1
output:
  module: /tmp/falco-debian.ko
  probe: /tmp/falco-debian.o
target: debian
driverversion: master
```
Example configuration file to build both the kernel module and eBPF probe for Flatcar. The Flatcar release version needs to be provided in the `kernelrelease` field instead of the kernel version:

```yaml
kernelrelease: 3185.0.0
target: flatcar
output:
  module: /tmp/falco-flatcar-3185.0.0.ko
  probe: /tmp/falco-flatcar-3185.0.0.o
driverversion: master
```
Example configuration file to build the kernel module for RHEL 7:

```yaml
kernelrelease: 3.10.0-1160.66.1.el7.x86_64
target: redhat
output:
  module: /tmp/falco-redhat7.ko
driverversion: master
builderimage: registry.redhat.io/rhel7:rhel7_driverkit
```

The image used for this build was created with the following command:

```shell
docker build --build-arg rh_username=<username> --build-arg rh_password=<password> -t registry.redhat.io/rhel7:rhel7_driverkit -f Dockerfile.rhel7 .
```
> Note: to avoid passing the credentials through `--build-arg`, consider using Docker's `--secret` option!
and `Dockerfile.rhel7`:

```dockerfile
FROM registry.redhat.io/rhel7
ARG rh_username
ARG rh_password
RUN subscription-manager register --username $rh_username --password $rh_password --auto-attach
RUN yum install gcc elfutils-libelf-devel make -y
```
> Note: pulling from `registry.redhat.io` requires authenticating first with `docker login registry.redhat.io`.
Example configuration file to build both the kernel module and eBPF probe for RHEL 8:

```yaml
kernelrelease: 4.18.0-372.9.1.el8.x86_64
target: redhat
output:
  module: /tmp/falco-redhat8.ko
  probe: /tmp/falco-redhat8.o
driverversion: master
builderimage: redhat/ubi8:rhel8_driverkit
```

The image used for this build was created with the following command:

```shell
docker build --build-arg rh_username=<username> --build-arg rh_password=<password> -t redhat/ubi8:rhel8_driverkit -f Dockerfile.rhel8 .
```
> Note: to avoid passing the credentials through `--build-arg`, consider using Docker's `--secret` option!
and `Dockerfile.rhel8`:

```dockerfile
FROM redhat/ubi8
ARG rh_username
ARG rh_password
RUN subscription-manager register --username $rh_username --password $rh_password --auto-attach
RUN yum install gcc curl elfutils-libelf-devel kmod make \
    llvm-toolset-0:12.0.1-1.module+el8.5.0+11871+08d0eab5.x86_64 cpio -y
```
Example configuration file to build both the kernel module and eBPF probe for RHEL 9:

```yaml
kernelrelease: 5.14.0-70.13.1.el9_0.x86_64
target: redhat
output:
  module: /tmp/falco-redhat9.ko
  probe: /tmp/falco-redhat9.o
driverversion: master
builderimage: docker.io/redhat/ubi9:rhel9_driverkit
```

The image used for this build was created with the following command:

```shell
docker build -t docker.io/redhat/ubi9:rhel9_driverkit -f Dockerfile.rhel9 .
```

and `Dockerfile.rhel9`:

```dockerfile
FROM docker.io/redhat/ubi9
RUN yum install gcc elfutils-libelf-devel kmod make cpio llvm-toolset -y
```
> ❗ `subscription-manager` does not work in RHEL 9 containers: the host must have a valid RHEL subscription.
In the case of vanilla, you also need to pass the kernel config data in base64 format. On most systems you can get the `kernelconfigdata` by reading `/proc/config.gz`.
```yaml
kernelrelease: 5.5.2
kernelversion: 1
target: vanilla
output:
  module: /tmp/falco-vanilla.ko
  probe: /tmp/falco-vanilla.o
driverversion: 0de226085cc4603c45ebb6883ca4cacae0bd25b2
```
Now you can add the `kernelconfigdata` to the configuration file. To do so:

```shell
zcat /proc/config.gz | base64 -w0 | awk '{print "kernelconfigdata: " $1;}' >> /tmp/vanilla.yaml
```

The command above assumes that you saved the configuration file at `/tmp/vanilla.yaml`.
Usually, building for a `vanilla` target requires more time, so we suggest increasing the `driverkit` timeout (it defaults to `60` seconds):

```shell
driverkit docker -c /tmp/vanilla.yaml --timeout=300
```
- Have a package that can build the Falco kernel module in k8s
- Have a package that can build the Falco kernel module in docker
- Have a package that can build the Falco eBPF probe in k8s
- Have a package that can build the Falco eBPF probe in docker
- Support the top distributions in our Survey and the Vanilla Kernel:
  - Ubuntu (`ubuntu-aws`, `ubuntu-generic`)
  - CentOS 8
  - CentOS 7
  - CentOS 6
  - AmazonLinux (`amazonlinux`, `amazonlinux2`)
  - Debian
  - Vanilla kernel (`vanilla`)
We are conducting a survey to learn which set of operating systems we should support first in driverkit. You can find the results of the survey here.
You probably came here because you want to tell the Falco Drivers Build Grid to build drivers for a specific distro you care about. If that distribution is not supported by driverkit, the Falco Drivers Build Grid will not be able to build it as it does for other distros.

To add a new supported distribution, you need to create a specific file implementing the `builder.Builder` interface. You can find the existing distribution files in the `pkg/driverbuilder/builder` folder; here's the Ubuntu one for reference. Following this simple set of instructions should help you while you implement a new `builder.Builder`.
Create a file in the `pkg/driverbuilder/builder` folder, named after the distro you want to add:

```shell
touch pkg/driverbuilder/builder/archlinux.go
```
Your builder will need a constant for the target it implements. Usually that constant can just be the name of the distribution you are implementing. A builder can implement more than one target at a time; for example, the Ubuntu builder implements both `ubuntu-generic` and `ubuntu-aws` to reflect the organization of the distro itself. Once you have the constant, you will need to add it to the `BuilderByTarget` map. Open your file; you will need something like this:
```go
// TargetTypeArchLinux identifies the Arch Linux target.
const TargetTypeArchLinux Type = "archlinux"

type archLinux struct {
}

func init() {
	BuilderByTarget[TargetTypeArchLinux] = &archLinux{}
}
```
Now you can implement the `builder.Builder` interface for the `archLinux` struct you just registered. Here's a very minimalistic example:
```go
func (v archLinux) Script(c Config) (string, error) {
	return "echo 'hello world'", nil
}
```
Essentially, the `Script` function that you are implementing needs to return a string containing a bash script that will be executed by driverkit at build time. Under the `pkg/driverbuilder/builder/templates` folder, you can find the template scripts for all the supported builders. Adding a new template there and using `go:embed` to include it in your builder keeps the code leaner, without mixing up templates and builder logic. For example:
```go
//go:embed templates/archlinux.sh
var archlinuxTemplate string
```
Depending on how the distro works, the script will need to fetch the kernel headers for the specific kernel version specified in the `Config` struct at `c.Build.KernelVersion`. Once you have those, based on what that kernel can do and on what was configured by the user, you will need to build the kernel module driver and/or the eBPF probe driver.

How does this work? If the user specifies:

- `c.Build.ModuleFilePath`: you will need to build the kernel module and save it in `/tmp/driver/falco.ko`
- `c.Build.ProbeFilePath`: you will need to build the eBPF probe and save it in `/tmp/driver/probe.o`

The `/tmp/driver` path MUST be interpolated from the `DriverDirectory` constant in `builders.go`.
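Putting these pieces together, a `Script` implementation typically renders an embedded template with values derived from the `Config`. The standalone sketch below uses simplified stand-in types (`Build`, `Config`) and an inline template string; in driverkit the real `Config` lives in `pkg/driverbuilder/builder` and the template would come from `//go:embed`, so treat the names and template contents here as illustrative only.

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// Simplified stand-ins for driverkit's builder types (illustrative only).
type Build struct {
	KernelRelease  string
	ModuleFilePath string
	ProbeFilePath  string
}

type Config struct{ Build Build }

// In a real builder this string would come from //go:embed templates/archlinux.sh.
const archlinuxTemplate = `#!/bin/bash
set -euo pipefail
# fetch headers for kernel {{ .KernelRelease }}
{{ if .BuildModule }}make -C /tmp/driver{{ end }}
{{ if .BuildProbe }}make -C /tmp/driver/bpf{{ end }}
`

// script renders the template with values derived from the Config,
// mirroring what a builder's Script method does.
func script(c Config) (string, error) {
	tmpl, err := template.New("archlinux").Parse(archlinuxTemplate)
	if err != nil {
		return "", err
	}
	data := map[string]interface{}{
		"KernelRelease": c.Build.KernelRelease,
		// only build what the user actually asked for
		"BuildModule": c.Build.ModuleFilePath != "",
		"BuildProbe":  c.Build.ProbeFilePath != "",
	}
	var buf bytes.Buffer
	if err := tmpl.Execute(&buf, data); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	s, _ := script(Config{Build: Build{KernelRelease: "5.5.2", ModuleFilePath: "/tmp/falco.ko"}})
	fmt.Println(s)
}
```

With only `ModuleFilePath` set, the rendered script contains the module build step but not the probe one, which is the conditional behavior described above.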
If you look at the various builders implemented, you will see that the task of creating a new builder can be easy or difficult depending on how the distribution ships its artifacts.
The driverkit builder image supports 4 GCC versions:

- GCC-8
- GCC-6.3.0
- GCC-5.5.0
- GCC-4.8.4

You can dynamically choose the one you prefer, most likely switching on the kernel version. For an example, check out the Ubuntu builder, namely `ubuntuGCCVersionFromKernelRelease`.
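As a sketch of that pattern, the hypothetical helper below picks a GCC version by parsing the `major.minor` prefix of the kernel release; the version boundaries are illustrative, not the exact ones the Ubuntu builder uses. The same approach applies to choosing an LLVM version.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// gccVersionFromKernelRelease is a hypothetical helper in the spirit of
// ubuntuGCCVersionFromKernelRelease: it picks a GCC version by parsing the
// "major.minor" prefix of a kernel release string such as "4.15.0-72-generic".
// The version boundaries below are illustrative only.
func gccVersionFromKernelRelease(kr string) string {
	parts := strings.SplitN(kr, ".", 3)
	major, _ := strconv.Atoi(parts[0])
	minor := 0
	if len(parts) > 1 {
		minor, _ = strconv.Atoi(parts[1])
	}
	switch {
	case major >= 5 || (major == 4 && minor >= 15):
		return "8"
	case major == 4:
		return "6.3.0"
	case major == 3:
		return "5.5.0"
	default:
		return "4.8.4"
	}
}

func main() {
	// a recent kernel gets a recent compiler, an old one an old compiler
	fmt.Println(gccVersionFromKernelRelease("4.15.0-72-generic"))          // 8
	fmt.Println(gccVersionFromKernelRelease("2.6.32-754.14.2.el6.x86_64")) // 4.8.4
}
```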
The driverkit builder image supports 2 LLVM versions:

- llvm-7
- llvm-12

You can dynamically choose the one you prefer, most likely switching on the kernel version. For an example, check out the Debian builder, namely `debianLLVMVersionFromKernelRelease`.
When creating a new builder, it is recommended to check that kernel-crawler can also support collecting the new builder's kernel versions and header package URLs. This will make sure that the latest drivers for the new builder are automatically built by test-infra. If required, open a feature request for the new builder on the kernel-crawler repository.