Calico - nf_conntrack_proto_sctp - project-calico

I noticed this error in the calico log.
calico-node [INFO][2355687] felix/int_dataplane.go 1660: attempted to modprobe nf_conntrack_proto_sctp error=exit status 1 output=""
It attempts to load the nf_conntrack_proto_sctp kernel module, but the operating system kernel does not include this module.
Is it possible to disable the use of this module in Calico?
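For diagnosis, a minimal sketch for checking whether the host kernel ships the module at all (module paths vary by distribution):

# Dry-run modprobe: reports whether the module can be resolved without loading it
modprobe -n -v nf_conntrack_proto_sctp
# Search the running kernel's module tree for the module file
find /lib/modules/"$(uname -r)" -name 'nf_conntrack_proto_sctp*'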

Related

How to log kernel messages to /var/log/messages on a Yocto system with systemd?

We have a fairly standard Yocto system with Busybox and systemd. We have noticed that kernel messages (visible with dmesg etc.) are not logged to /var/log/messages, but are only visible with "journalctl".
The system runs all of these:
/sbin/klogd -n
/sbin/syslogd -n
/lib/systemd/systemd-journald
I have noticed that kernel messages are actually logged to /var/log/messages if journald is stopped. That service provides little value to us, so disabling or removing it altogether could be a solution, but how can this be done?
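Not a Yocto-verified answer, but one sketch is to mask journald outright (note that units logging only via the journal sockets will lose their output):

# Mask journald and its sockets so klogd/syslogd handle logging alone
systemctl mask systemd-journald.service systemd-journald.socket systemd-journald-dev-log.socket
systemctl stop systemd-journald.service

Alternatively, keep journald and set ForwardToSyslog=yes in the [Journal] section of /etc/systemd/journald.conf so it forwards entries (including kernel messages) to the classic syslog socket; this only helps if your syslog daemon listens on /run/systemd/journal/syslog, which Busybox syslogd does not do by default.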

kubelet Error while processing event /sys/fs/cgroup/memory/libcontainer_10010_systemd_test_default.slice

I have set up a Kubernetes 1.15.3 cluster on CentOS 7 using the systemd cgroup driver. On all my nodes, syslog started logging this message frequently.
How do I fix this error message?
kubelet: W0907 watcher.go:87 Error while processing event ("/sys/fs/cgroup/memory/libcontainer_10010_systemd_test_default.slice": 0x40000100 == IN_CREATE|IN_ISDIR): readdirent: no such file or directory
Thanks
It's a known issue caused by a bad interaction with runc; someone observed it is actually triggered by a repeated etcd health check, but that wasn't my experience on Ubuntu, which exhibits the same behavior on every Node.
They allege that updating the runc binary on your hosts will make the problem go away, but I haven't tried that myself.
I had exactly the same problem with the same Kubernetes version and in the same context, that is, changing cgroups to systemd. A GitHub ticket for this error has been created here.
After changing the container runtime's cgroup driver to systemd, as described in this tutorial, errors started popping up in the kubelet service log. (A sketch of that change is shown below.)
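For reference, the cgroup-driver change in question typically looks like this on a Docker host (a sketch, not the tutorial's exact steps; back up any existing daemon.json first and run as root):

# Tell Docker to use the systemd cgroup driver instead of cgroupfs
cat <<'EOF' > /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl restart docker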
What worked for me was to update docker and containerd to the following versions:
docker: v19.03.5
containerd: v1.2.10
I assume that any version higher than these will fix the problem as well.
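For completeness, a sketch of the check and update on CentOS 7 (package names and available builds depend on the Docker repository you have configured):

# Inspect the currently installed versions
docker version --format '{{.Server.Version}}'
containerd --version
# List available builds, then install a matching 19.03.5 package set
yum list docker-ce --showduplicates | sort -r
yum install -y docker-ce-19.03.5 docker-ce-cli-19.03.5 containerd.io
systemctl restart docker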

Why do I see errors like 'unknown capability "CAP_AUDIT_READ"' when I try to run a Concourse task?

When running a Concourse worker and web UI on my Linux distribution of choice, I see the following when I try to run the example hello world pipeline:
runc start: exit status 1: unknown capability "CAP_AUDIT_READ"
What's going on?
Concourse uses Garden + runC as its container management and OCI containerization backend. To achieve containerization, certain kernel capabilities are required on the host OS running the worker process.
If you are seeing errors when running a task such as unknown capability "CAP_AUDIT_READ" or any other unknown capability errors, it is likely that your host machine's kernel version is not supported.
The version of Garden + runC that Concourse relies on requires kernel version 3.19+, so you will need to run your worker on an OS that ships such a kernel, or update the kernel accordingly.
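A quick sanity check on the worker host (capsh ships with the libcap package):

# Kernel must be 3.19 or newer per the requirement above
uname -r
# On a new-enough kernel, the capability appears in the bounding set
capsh --print | grep -o cap_audit_read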

Linux kernel tuning in Google Container Engine

I deployed a Redis container to Google Container Engine and got the following warning.
10:M 01 Mar 05:01:46.140 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
I know that to correct the warning I need to execute
echo never > /sys/kernel/mm/transparent_hugepage/enabled
I tried that in the container, but it does not help.
How do I resolve this warning in Google Container Engine?
As I understand it, my pods are running on a node, and the node is a VM private to me only? So should I SSH into the node and modify the kernel setting directly?
Yes, you own the nodes and can SSH into them and modify them as needed.
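A minimal sketch, assuming a node name taken from kubectl get nodes (the <node-name> below is a placeholder):

# SSH into the GKE node backing your pod
gcloud compute ssh <node-name>
# On the node, disable THP as the Redis warning suggests
sudo sh -c 'echo never > /sys/kernel/mm/transparent_hugepage/enabled'

Keep in mind the setting is lost if the node is rebooted or recreated, so it has to be reapplied, for example from a startup mechanism of your choosing.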

How to detect the container kernel version in Docker

The application stack I want to dockerize shall run within a CentOS container. The installation procedure verifies the kernel version to ensure application requirements are met. Currently it is detected using "uname ...".
However, the application now detects the host kernel version, which reports "UBUNTU ..." rather than "CentOS ...".
Is it possible to detect the container's kernel version?
Thanks.
In fact, the kernel is the same in the host and in the container. That is the very principle of containerization: the kernel is shared, because a container is simply a collection of processes running on top of the host kernel with special isolation properties.
Does that pose a problem for your application?
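You can demonstrate the shared kernel directly (the centos:7 image here is just an example):

# Host and container report the same kernel release
uname -r
docker run --rm centos:7 uname -r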