SELinux not working under containerd with selinux-enable=true - Kubernetes

I have two k8s clusters, one using Docker and the other using containerd directly, both with SELinux enabled.
However, I found that SELinux is not actually working on the containerd one, even though the two clusters have the same versions of containerd and runc.
Did I miss some setting with containerd?
Docker: the file label is container_file_t and the process runs as container_t; SELinux works fine.
K8s version: 1.17
Docker version: 19.03.6
Containerd version: 1.2.10
SELinux is enabled by adding "selinux-enabled": true to /etc/docker/daemon.json.
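For reference, a minimal /etc/docker/daemon.json carrying that flag would look something like this (merge it with whatever other daemon options you already use):
{
  "selinux-enabled": true
}
Restart the Docker daemon afterwards, e.g. systemctl restart docker.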
// Create a pod using the official Tomcat image, then check the process and file labels
# kubectl exec tomcat -it -- ps -eZ
LABEL PID TTY TIME CMD
system_u:system_r:container_t:s0:c655,c743 1 ? 00:00:00 java
# ls -Z /usr/local/openjdk-8/bin/java
system_u:object_r:container_file_t:s0:c655,c743 /usr/local/openjdk-8/bin/java
containerd: the file label is container_var_lib_t and the process runs as spc_t; SELinux has no effect.
K8s version: 1.15
Containerd version: 1.2.10
SELinux is enabled by setting enable_selinux = true in /etc/containerd/config.toml.
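For containerd 1.2.x the CRI plugin settings live under [plugins.cri], so the relevant fragment of /etc/containerd/config.toml would look roughly like this (newer containerd releases use the [plugins."io.containerd.grpc.v1.cri"] section name instead):
[plugins.cri]
  enable_selinux = true
Restart containerd afterwards, e.g. systemctl restart containerd.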
// Create a pod using the official Tomcat image, then check the process and file labels
# kubectl exec tomcat -it -- ps -eZ
LABEL PID TTY TIME CMD
system_u:system_r:spc_t:s0 1 ? 00:00:00 java
# ls -Z /usr/local/openjdk-8/bin/java
system_u:object_r:container_var_lib_t:s0 /usr/local/openjdk-8/bin/java
// Running as spc_t seems to be expected, given this type transition:
# sesearch -T -t container_var_lib_t | grep spc_t
type_transition container_runtime_t container_var_lib_t : process spc_t;

From this issue we can read:
Containerd includes minimal support for SELinux. More accurately, it
contains support to run ON systems using SELinux, but it does not make
use of SELinux to improve container security.
All containers run with the
system_u:system_r:container_runtime_t:s0 label, but no further
segmentation is made.
There is no full support for what you are doing with containerd. Your approach is correct; the problem is the lack of support for this functionality.

Related

Have to restart minikube to make my service requests succeed [duplicate]

I am trying to run minikube v0.22.1 and kubectl v1.7.5 on macOS with VirtualBox.
$ minikube start
Starting local Kubernetes v1.7.5 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
Kubectl is now configured to use the cluster.
$ minikube version
minikube version: v0.22.1
$ minikube status
minikube: Running
cluster: Running
kubectl: Correctly Configured: pointing to minikube-vm at 192.168.99.100
However, all kubectl commands fail with "connection refused - did you specify the right host or port?"
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.5", GitCommit:"17d7182a7ccbb167074be7a87f0a68bd00d58d97", GitTreeState:"clean", BuildDate:"2017-08-31T19:32:26Z", GoVersion:"go1.9", Compiler:"gc", Platform:"darwin/amd64"}
The connection to the server 192.168.99.100:8443 was refused - did you specify the right host or port?
The solution proposed here (sudo ifconfig vboxnet0 up) did not help; the vboxnet0 interface is already up.
Any ideas or suggestions are highly appreciated.
If you run
kubectl config get-contexts
Do you get the following?
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
* minikube minikube minikube
If not, that means your kubectl context is not correctly set up. To set up the context correctly, run this:
kubectl config use-context minikube
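If the switch succeeds, kubectl confirms it with output along these lines:
Switched to context "minikube".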
You may have it stopped or saved for some reason. Sometimes, after you enable/disable addons, you may need to restart it.
1) Restart the minikube VM; stop it first:
$ minikube stop
2) Start it again, making sure you assign enough CPU/memory (the following is just an example of how to pass the flags; adjust the values based on the resources available on your machine):
$ minikube start --memory=10000 --cpu 4
If this doesn't work, the following steps will help you learn more about the underlying cause of the problem:
Check minikube status and make sure the status is Running
$ minikube status
Or, check the minikube logs:
minikube logs
Finally, if you still can't fix it, you may need to delete the cluster and start it from scratch:
$ minikube delete && minikube start
Ref: https://github.com/kubernetes/minikube/issues/1498
I will just drop this here in case anyone finds this question.
As of right now I don't know the versions of the OP's setup, so I'm going to assume they have the latest version that was available when they posted, which was 0.22.1.
Description
I had a similar issue. The cluster was timing out irregularly. One moment I got answers using kubectl cluster-info dump, the next I didn't. Then it worked again, and then it didn't. I found a GitHub bug report with a solution.
Solution
Remove your VirtualBox VM.
Remove the ./minikube folder.
Remove the minikube executable.
Install version 0.19.0.
Verify that minikube is working with: kubectl
Versions
OS: Windows 10 (Home edition)
Minikube bugged version: 0.22.2
Minikube working version: 0.19.0
Kubectl (client): v1.7.0
Kubectl (server): v1.6.0
EDIT:
I kept having issues with minikube after I posted this original answer. I found something that fixed the issue completely.
It's related to the dynamic memory setting in Hyper-V.
Solution
1. Turn off the hyper-v minikube VM.
2. Go to the VM's settings.
3. Turn off dynamic memory allocation.
4. Assign a decent amount of memory.
5. Save and turn the VM on again.
This should work with any minikube version. See this GitHub issue for progress on an automated solution.
When debugging the minikube commands, e.g.
$ minikube dashboard --loglevel 0 --logtostderr
some proxy issues became visible and could be resolved.
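If the debug output points to a proxy problem, a common remedy (assuming the default VirtualBox network, where the VM gets an address such as 192.168.99.100) is to exclude that range from proxying before talking to the cluster:
$ export NO_PROXY=localhost,127.0.0.1,192.168.99.0/24
$ minikube start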
I ran into this situation this morning (another Monday!) on macOS 11.3 with minikube v1.19.0.
I ran minikube status and got the following:
E0503 14:15:43.912005 7308 status.go:412] kubeconfig endpoint: got: 127.0.0.1:64041, want: 127.0.0.1:56537
minikube
type: Control Plane
host: Running
kubelet: Stopped
apiserver: Stopped
kubeconfig: Misconfigured
WARNING: Your kubectl is pointing to stale minikube-vm.
To fix the kubectl context, run `minikube update-context`
That seemed like good advice, so I ran minikube update-context and got this:
šŸŽ‰ "minikube" context has been updated to point to 127.0.0.1:56537
šŸ’— Current context is "minikube"
After which everything worked like it did on Friday.
After Linux security OS patching and a reboot, we were unable to start the Kubernetes services and received the error below.
Error message: "The connection to the server 192.168.1.101:8443 was refused" while starting the Kubernetes services.
This issue happened because the systemd package got updated during the security patching.
So we took the actions below to bring the application back up, on each master node:
1. Update the /usr/lib/systemd/system/kubelet.service file by removing the two lines below:
ExecStartPost=/bin/bash -c 'umask 0022; pgrep -x kubelet > /run/kubelet.pid'
ExecStopPost=/bin/bash -c 'rm -f /run/kubelet.pid'
2. Update the /usr/lib/systemd/system/kube-proxy.service file by removing the two lines below:
ExecStartPost=/bin/bash -c 'umask 0022; pgrep -x kubelet > /run/kubelet.pid'
ExecStopPost=/bin/bash -c 'rm -f /run/kubelet.pid'
3. Run kube-restart.sh on the master nodes.
4. Run kube-restart.sh on the worker nodes (if you restart the services by hand instead, see the note below).
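For reference, if you edit the unit files by hand and do not have a wrapper script like kube-restart.sh, the usual sequence on each node would be something along these lines (assuming kubelet and kube-proxy are both managed as systemd services, as the unit paths above suggest); daemon-reload makes systemd pick up the edited unit files before the restart:
systemctl daemon-reload
systemctl restart kubelet kube-proxy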
Update: I am using minikube version: v1.25.2
The command mentioned in this thread did NOT work:
minikube start --memory=10000 --cpu 4 #this will FAIL
This, however, DID work (use --cpus instead; I also changed the values to show the minimum requirements for Docker):
minikube start --memory=1800 --cpus=2 # this will work
minikube start --memory=1800 --cpus 2 # this will also work
minikube delete && minikube start
sudo minikube start --vm-driver=none (start minikube again)
This solved my problem
minikube delete
minikube start
just restarted the container

Location of Kubernetes config directory with Docker Desktop on Windows

I am running a local Kubernetes cluster through Docker Desktop on Windows. I'm attempting to modify my kube-apiserver config, and all of the information I've found has said to modify /etc/kubernetes/manifests/kube-apiserver.yaml on the master. I haven't been able to find this file, and am not sure what the proper way is to do this. Is there a different process because the cluster is through Docker Desktop?
Is there a different process because the cluster is through Docker Desktop?
You can get access to kube-apiserver.yaml with a Kubernetes cluster running on Docker Desktop, but only in a "hacky" way. I've included the explanation below.
For setups that require such reconfiguration, I encourage you to use a different solution, for example minikube.
Minikube has a feature that allows you to pass additional options to the Kubernetes components. You can read more about the --extra-config ExtraOption in this documentation:
Minikube.sigs.k8s.io: Docs: Commands: Start
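For example, API server flags can be injected at start time like this (the flag shown is only an illustration; any component.flag=value pair is passed the same way):
minikube start --extra-config=apiserver.service-node-port-range=1-65535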
As for the reconfiguration of kube-apiserver.yaml with Docker Desktop, you need to run the following command:
docker run -it --privileged --pid=host debian nsenter -t 1 -m -u -n -i sh
The above command will allow you to run:
vi /etc/kubernetes/manifests/kube-apiserver.yaml
This lets you edit the API server configuration. The Pod running kube-apiserver will be restarted with the new parameters.
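To confirm the static Pod actually came back up with the change, a quick check from the host could look like this (on Docker Desktop the Pod is typically named kube-apiserver-docker-desktop, but verify with the first command):
kubectl -n kube-system get pods | grep kube-apiserver
kubectl -n kube-system describe pod kube-apiserver-docker-desktop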
You can check below StackOverflow answers for more reference:
Stackoverflow.com: Answer: Where are the Docker Desktop for Windows kubelet logs located?
Stackoverflow.com: Answer: How to change the default nodeport range on Mac (docker-desktop)?
I used this answer without the screen command and was able to reconfigure kube-apiserver on Docker Desktop on Windows.

Error starting Docker container (WSL, docker-ce, Ubuntu 16.04)

Microsoft Windows [Version 10.0.17134.285],
Ubuntu 16.04 (WSL),
docker-ce (stable)
I am following the instructions here - https://nickjanetakis.com/blog/setting-up-docker-for-windows-and-wsl-to-work-flawlessly. I opted for "stable" rather than "edge". I mounted the c drive mapping manually with
sudo mkdir /c
sudo mount --bind /mnt/c /c
rather than the WSL config file way, because I wasn't sure if I wanted it for ALL my WSL instances. Other than that, I followed the instructions.
I have started the Docker daemon with
sudo cgroupfs-mount
sudo dockerd -H tcp://0.0.0.0:2375 --tls=false
When I try to start my container with
docker run -d -p 27017:27017 --name testDB mongo:3.4
I get
docker: Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "process_linux.go:402: container init caused \"rootfs_linux.go:46: preparing rootfs caused \\\"invalid argument\\\"\"": unknown.
and I cannot connect to the MongoDB on the container using localhost:27017.
docker ps -a
shows
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
e115d1c409a3 mongo:3.4 "docker-entrypoint.sā€¦" 6 seconds ago Created 0.0.0.0:27017->27017/tcp testDB
and
docker info
shows
Containers: 1
Running: 0
Paused: 0
Stopped: 1
Images: 1
Server Version: 18.06.1-ce
Storage Driver: overlay2
Backing Filesystem: <unknown>
Supports d_type: true
Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 468a545b9edcd5932818eb9de8e72413e616e86e
runc version: 69663f0bd4b60df09991c08812a60108003fa340
init version: fec3683
Kernel Version: 4.4.0-17134-Microsoft
Operating System: Ubuntu 16.04.5 LTS
OSType: linux
Architecture: x86_64
CPUs: 8
Total Memory: 15.9GiB
Name: DESKTOP-4F100D9
ID: EFH2:O3RT:3OO4:27P5:ZNK7:N5JW:WE5M:4VSK:QREN:YCV4:GSYG:ZDTR
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
WARNING: No memory limit support
WARNING: No swap limit support
WARNING: No kernel memory limit support
WARNING: No oom kill disable support
WARNING: No cpu cfs quota support
WARNING: No cpu cfs period support
WARNING: No cpu shares support
WARNING: No cpuset support
Any ideas what I did wrong and how to fix it?
(I need to run Docker under Linux(WSL) - I cannot use Docker for Windows because we are using VirtualBox, and Hyper-V is disabled)
Currently, you cannot use the docker daemon directly from WSL. There are several issues, mostly with networking; it works only for simple images like hello-world (Reddit topic).
What you can do is connect from WSL to the docker daemon in Windows. So following the tutorial you mentioned is fine, but if you're running it with VirtualBox you have to either start the default machine or create and start a new one (see the example below); this docker machine will be your daemon.
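If no machine exists yet, you can create one with the VirtualBox driver first (the name default is just the conventional one; before the wrapper below is in place, call the Windows binary directly from WSL):
docker-machine.exe create --driver virtualbox default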
By default the docker-machine command does not work correctly in WSL, but you can make it work by putting this code into, e.g., your ~/.bashrc file:
# Ability to run the docker-machine command properly from WSL
function docker-machine()
{
    if [ "$1" == "env" ]; then
        docker-machine.exe $1 $2 --shell bash | sed 's/C:/\/c/' | sed 's/\\/\//g' | sed 's:#.*$::g' | sed 's/"//g'
        printf "# Run this command to configure your shell:\n"
        printf "# eval \"\$(docker-machine $1 $2)\"\n"
    else
        # Pass all arguments through to the Windows binary ("$@", not "$#")
        docker-machine.exe "$@"
    fi
}
export -f docker-machine
After running source ~/.bashrc or reopening the bash session you can run:
docker-machine start default - will start the machine
eval $(docker-machine env default) - will connect your bash session to the machine
and then you should be able to run all the docker stuff like
docker ps
docker run -it alpine sh
docker build
etc
The docker machine will run until you either stop it or shut down your PC. If you open a new bash session (window), you just have to run eval $(docker-machine env default) again to connect the new session to the machine.
Hope it helps. :)
A simpler solution is to use Docker for Windows from WSL instead.
Just add the following to your WSL .bashrc file:
export PATH="$HOME/bin:$HOME/.local/bin:$PATH"
export PATH="$PATH:/mnt/c/Program Files/Docker/Docker/resources/bin"
alias docker=docker.exe
alias docker-compose=docker-compose.exe
Reference: https://blog.jayway.com/2017/04/19/running-docker-on-bash-on-windows/

How to confirm minikube is using hyperkit

When trying to run minikube with hyperkit, I was getting errors about xhyve not being installed. I installed that and reran minikube start --vm-driver hyperkit with no issues.
I was under the impression that hyperkit was a replacement for xhyve, not a supplement to it.
When I run ps I see both com.docker.hyperkit and docker-machine-driver-xhyve running.
How can I confirm that minikube is correctly using hyperkit?
Docker for Mac has changed its virtualization layer a few times over the last years, and this can confuse users after environment updates.
If the process list shows both com.docker.hyperkit and xhyve processes, it is probably due to a docker-machine environment that was previously set up using docker-machine-driver-xhyve.
You may consider cleaning up the installation by
stopping Docker (from the command line or from the tray icon),
then removing the machines created by the docker-machine tool.
I can also suggest removing the current minikube installation with
minikube stop && minikube delete
and starting a fresh one with:
minikube start --v=10 --vm-driver=hyperkit
That will add additional verbose output while the minikube environment is being built.
This will give you the current driver for the current machine. Replace the second "minikube" with the name of your profile if you're using the --profile flag.
$ cat ~/.minikube/machines/minikube/config.json | grep DriverName
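If the cluster was created with the hyperkit driver, the output should look roughly like this:
"DriverName": "hyperkit",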
Strange, considering Hyperkit is supposed to replace xhyve eventually.
Make sure Hyperkit is built/installed and referenced by your PATH.
And that you are using the latest docker-ce for Mac.
Use this command to get a list of each hypervisor instance that's running with hyperkit:
$ ps -ef | grep hyperkit
If minikube is running in hyperkit then the name 'minikube' should show up in the output:
0 29305 1 0 Tue06PM ?? 515:01.32 /usr/local/bin/hyperkit -A -u -F /Users/me/.minikube/machines/minikube/hyperkit.pid -c 2 -m 2000M -s 0:0,...
The instance labeled as 'com.docker.hyperkit' is the process that's being used by Docker and is NOT the minikube instance.

CentOS with Systemd on Docker

I am currently working on automated testing of my playbooks with GitLab CI; Ubuntu is working very well, with no issues.
The problem I actually have is with CentOS and systemd. First of all, the playbook (installing Postgres 9.5 inside CentOS 7):
- name: Ensure PostgreSQL is running
  service:
    name: postgresql-9.5
    state: restarted
  ignore_errors: true
  when:
    - ansible_os_family == 'RedHat'
And this is what I get if I try to start Postgres inside the container:
Failed to get D-Bus connection: Operation not permitted\nFailed to get D-Bus connection: Operation not permitted\nFailed to get D-Bus connection: Operation not permitted\nFailed to get D-Bus connection: Operation not permitted\nFailed to get D-Bus connection: Operation not permitted\n
I already had to run the container in privileged mode, with cgroups and everything else. I have already tried different Docker containers, but nothing is working.
When using Docker, I think it would be better to just use postgres to start the server, with a command like:
postgres -D /opt/postgresql/data/ > /var/log/postgresql/pg_server.log 2>&1 &
When you use Docker, you don't have a fully functional systemd.
You can use the solution suggested by @KJocker to make a functional PostgreSQL container. Or instead you can configure systemd to work inside the container; here is a document to check.
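As a rough sketch of the second option, the commonly documented pattern for a systemd-enabled CentOS 7 container looks like this (the container name is just an example, and the image must have systemd installed):
docker run -d --privileged --name systemd-c7 -v /sys/fs/cgroup:/sys/fs/cgroup:ro centos:7 /usr/sbin/init
docker exec systemd-c7 systemctl is-system-running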
I had the same thing when using Ansible on a Docker container... and I have written a docker-systemctl-replacement for that. It works for PostgreSQL - no need to change the Ansible script; it can stay that way for a deployment on a real machine.
Edit the configuration of your gitlab-runner instance, /etc/gitlab-runner/config.toml.
From:
[runners.docker]
  privileged = false
  volumes = ["/cache"]
To:
[runners.docker]
  privileged = true
  volumes = ["/sys/fs/cgroup:/sys/fs/cgroup:ro", "/cache"]
Add:
[runners.docker.tmpfs]
  "/run" = "rw"
  "/tmp" = "rw"
[runners.docker.services_tmpfs]
  "/run" = "rw"
  "/tmp" = "rw"
Restart gitlab-runner.
On your Docker image, edit the getty tty1 service to permit autologin of the root user after systemd boots up:
sed -e 's|/sbin/agetty |/sbin/agetty -a root |g' -i /etc/systemd/system/getty.target.wants/getty@tty1.service
Use that Docker image in the image name section of .gitlab-ci.yml and add the following to start systemd. Do not edit the entrypoint:
script:
  - /lib/systemd/systemd --system --log-target=kmsg &
  - sleep 5
  - systemctl start postgresql-9.5