Why do I see errors like 'unknown capability "CAP_AUDIT_READ"' when I try to run a Concourse task? - concourse

When running a Concourse worker and web UI on my Linux distribution of choice, I see the following when I try to run the example hello world pipeline:
runc start: exit status 1: unknown capability "CAP_AUDIT_READ"
What's going on?

Concourse uses Garden + runC as its container-management and OCI containerization backend. To achieve containerization, certain kernel capabilities are required on the host OS running the worker process.
If you are seeing errors when running a task such as unknown capability "CAP_AUDIT_READ" or any other unknown capability errors, it is likely that your host machine's kernel version is not supported.
The version of Garden + runC Concourse relies on requires kernel version 3.19+, so you will need to run your worker on an OS which supports this kernel version, or update the kernel accordingly.
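To check whether a given worker host meets that requirement before installing anything, a quick shell sketch (assumes GNU coreutils, whose sort -V does version-aware ordering):

```shell
# Compare the running kernel against the 3.19 minimum stated above.
required="3.19"
current="$(uname -r | cut -d- -f1)"   # strip distro suffix, e.g. "5.4.0-100-generic" -> "5.4.0"

# sort -V orders version strings numerically; if the required version
# sorts first (or equal), the running kernel is new enough.
if [ "$(printf '%s\n%s\n' "$required" "$current" | sort -V | head -n1)" = "$required" ]; then
  echo "kernel $current is supported"
else
  echo "kernel $current is too old; upgrade to 3.19 or newer"
fi
```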

Related

Spiffe error while deploying client-agent pods

I am following this guide to deploy SPIFFE on a Kubernetes cluster: "https://spiffe.io/docs/latest/try/getting-started-k8s/"
One of the steps in this process is running the command "kubectl apply -f client-deployment.yaml", which deploys the SPIFFE client agent.
But the pods keep going into an error state:
Error: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "sleep": executable file not found in $PATH: unknown
Image used : ghcr.io/spiffe/spire-agent:1.5.1
It seems connected to this PR from 3 days ago (there is no longer a "sleep" executable in the image).
SPIRE is moving away from the alpine Docker release images in favor of scratch images that contain only the release binary to minimize the size of the images and include only the software that is necessary to run in the container.
You should report the issue and use
gcr.io/spiffe-io/spire-agent:1.2.3
(the last image they used) in the meantime.
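If the Deployment is already applied, you can switch it to the older image without editing the YAML; a sketch (the spire namespace and the spire-agent deployment/container names are assumptions here — check your client-deployment.yaml for the actual names):

```shell
# Pin the agent Deployment to the last image tag that still shipped
# the binaries the manifest expects. Names below are hypothetical.
kubectl -n spire set image deployment/spire-agent \
  spire-agent=gcr.io/spiffe-io/spire-agent:1.2.3

# Wait for the new pods to roll out.
kubectl -n spire rollout status deployment/spire-agent
```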

GitLab CI for systemd service

I have built a deb package for a systemd service, which I would like to test after building it with GitLab CI.
The image I use is based on debian:stable (i.e. buster at the time of this writing).
I am doing a basic smoke test like this:
test:
  stage: test
  script:
    - dpkg -i myservice.deb
    - systemctl start myservice
This fails because systemctl is not found. If I install it as part of the test, it still fails because systemd is not the first process (PID 1) on the system.
How can I test a systemctl service on GitLab CI? Is there a Debian image which runs systemd?
systemd does not work out of the box in Docker, and is normally not needed: the Docker paradigm is one service per container, so a service-control framework does not make much sense there. Unless you specifically rely on systemd for tests…
I found an instruction to run systemd inside a Docker container at https://developers.redhat.com/blog/2014/05/05/running-systemd-within-docker-container/ (and, as a follow-up, https://developers.redhat.com/blog/2016/09/13/running-systemd-in-a-non-privileged-container/). It is based on Redhat but the principles of it should apply to other distributions as well (and it gives some more background info on systemd/docker compatibility).
However, at this point I am wondering if tests conducted on such a setup are still meaningful. It might be better to rely on a local VM for tests.
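Following the Red Hat posts above, a sketch of what such a setup looks like from a Docker host (myservice-test is a hypothetical image with systemd installed; on GitLab CI this requires a runner configured to start privileged containers):

```shell
# Start a privileged container with systemd as PID 1, with the cgroup
# filesystem mounted read-only, as described in the Red Hat blog posts.
# "myservice-test" is a hypothetical image name.
docker run -d --name systemd-test --privileged \
  -v /sys/fs/cgroup:/sys/fs/cgroup:ro \
  myservice-test /sbin/init

# Copy in the package, install it, and exercise the unit.
docker cp myservice.deb systemd-test:/tmp/
docker exec systemd-test dpkg -i /tmp/myservice.deb
docker exec systemd-test systemctl start myservice
docker exec systemd-test systemctl is-active myservice

# Clean up.
docker rm -f systemd-test
```

As the answer notes, whether a service tested under a privileged container behaves the same as on a real host is debatable; a VM avoids that doubt.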

Installing Kubernetes with less RAM

Is it possible to install Kubernetes with the kubeadm init command on a system that has less than 1GB of RAM? I have tried, but the installation failed at the kubeadm init command.
As mentioned in the installation steps to be taken before you begin, you need to have:
a Linux-compatible system for the master and nodes
2GB or more of RAM per machine
network connectivity
swap disabled on every node
But going back to your question: it may be possible to run the installation process, but the cluster will not be usable afterwards. This configuration will not be stable.
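If you only want to experiment on a throwaway machine regardless, kubeadm can be told to skip individual preflight checks; the memory check is named Mem (a sketch, not a recommended configuration):

```shell
# Skip the minimum-RAM preflight check. The resulting cluster is
# unsupported and, as noted above, will not be stable.
sudo kubeadm init --ignore-preflight-errors=Mem
```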

Linux kernel tune in Google Container Engine

I deployed a redis container to Google Container Engine and got the following warning.
10:M 01 Mar 05:01:46.140 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
I know that to correct the warning I need to execute
echo never > /sys/kernel/mm/transparent_hugepage/enabled
I tried that inside the container, but it does not help.
How to solve this warning in Google Container Engine?
As I understand it, my pods are running on a node, and the node is a VM private to me only? So should I ssh into the node and modify the kernel setting directly?
Yes, you own the nodes and can ssh into them and modify them as you need.

How to detect the container kernel version in Docker

The application stack I want to dockerize is supposed to run in a CentOS container. The installation procedure verifies the kernel version to ensure the application requirements are met; currently it is detected using "uname ...".
However, the application now detects the host kernel version, which is "UBUNTU ..." and not "CentOS ...".
Is it possible to detect the container's kernel version?
Thanks.
In fact, the kernel is the same in the host and in the container. That's the very principle of containerization: the kernel is shared (because actually, a container is a collection of processes running on top of the host kernel, with special isolation properties).
Does that pose a problem for your application?
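A quick way to see this for yourself, assuming a local Docker daemon (centos:7 is just an example image):

```shell
# The kernel version reported inside a container is exactly the host's,
# because containers share the host kernel.
host_kernel="$(uname -r)"
container_kernel="$(docker run --rm centos:7 uname -r)"

echo "host:      $host_kernel"
echo "container: $container_kernel"
# The two lines always match. To detect "CentOS", inspect the container's
# userland instead, e.g. read /etc/os-release inside the container.
```

If the installer really needs a distribution check, reading /etc/os-release inside the container is the usual approach, since uname can only ever report the shared kernel.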