kubernetes allow privileged local testing cluster

I'm busy testing out Kubernetes on my local PC using https://github.com/kubernetes/kubernetes/blob/master/docs/getting-started-guides/docker.md
which launches a dockerized single-node k8s cluster. I need to run a privileged container inside k8s (it runs Docker in order to build images from Dockerfiles). What I've done so far is add a security context with privileged: true to the pod config, which returns forbidden when trying to create the pod. I know that you have to enable privileged mode on the node with --allow-privileged=true, and I've done this by adding the argument to step two (running the master and worker node), but it still returns forbidden when creating the pod.
Anyone know how to enable privileged in this dockerized k8s for testing?
Here is how I run the k8s master:
docker run --privileged --net=host -d -v /var/run/docker.sock:/var/run/docker.sock gcr.io/google_containers/hyperkube:v1.0.1 /hyperkube kubelet --api-servers=http://localhost:8080 --v=2 --address=0.0.0.0 --allow-privileged=true --enable-server --hostname-override=127.0.0.1 --config=/etc/kubernetes/manifests

Update: Privileged mode is now enabled by default (both in the apiserver and in the kubelet) starting with the 1.1 release of Kubernetes.
To enable privileged containers, you need to pass the --allow-privileged flag to the Kubernetes apiserver in addition to the Kubelet when it starts up. The manifest file that you use to launch the Kubernetes apiserver in the single node docker example is bundled into the image (from master.json), but you can make a local copy of that file, add the --allow-privileged=true flag to the apiserver command line, and then change the --config flag you pass to the Kubelet in Step Two to a directory containing your modified file.
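For illustration, here is a minimal sketch of that workaround; it assumes the bundled manifest lives at /etc/kubernetes/manifests/master.json inside the hyperkube image, and the temporary container name and local paths are placeholders:
# copy the bundled manifest out of the image
mkdir -p ./manifests
docker create --name k8s-tmp gcr.io/google_containers/hyperkube:v1.0.1 /hyperkube
docker cp k8s-tmp:/etc/kubernetes/manifests/master.json ./manifests/master.json
docker rm k8s-tmp
# edit ./manifests/master.json and add --allow-privileged=true to the apiserver command line,
# then point the kubelet's --config flag at the local manifest directory instead of the bundled one:
docker run --privileged --net=host -d -v /var/run/docker.sock:/var/run/docker.sock -v $(pwd)/manifests:/etc/kubernetes/manifests-local gcr.io/google_containers/hyperkube:v1.0.1 /hyperkube kubelet --api-servers=http://localhost:8080 --v=2 --address=0.0.0.0 --allow-privileged=true --enable-server --hostname-override=127.0.0.1 --config=/etc/kubernetes/manifests-local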

Related

Kubernetes Alpha Debug command

I'm trying the debug CLI feature on the 1.18 release of Kubernetes, but I have an issue when I execute the debug command.
I have created a pod as shown below.
kubectl run ephemeral-demo --image=k8s.gcr.io/pause:3.1 --restart=Never
After that, when running this command: kubectl alpha debug -it ephemeral-demo --image=busybox --target=ephemeral-demo
Kubernetes hangs like this:
Defaulting debug container name to debugger-aaaa.
How Can I resolve that issue?
It seems that you need to enable the feature gate on all control plane components as well as on the kubelets. If the feature is enabled partially (for instance, only kube-apiserver and kube-scheduler), the resources will be created in the cluster, but no containers will be created, thus there will be nothing to attach to.
In addition to the answer posted by Konstl
To enable the EphemeralContainers feature gate correctly, edit the following manifests on the master nodes:
/etc/kubernetes/manifests/kube-apiserver.yaml
/etc/kubernetes/manifests/kube-controller-manager.yaml
/etc/kubernetes/manifests/kube-scheduler.yaml
and add the following line to the container command:
spec:
  containers:
  - command:
    - kube-apiserver # or kube-controller-manager/kube-scheduler
    - --feature-gates=EphemeralContainers=true # <-- add this line
Pods will restart immediately.
To enable the feature gate for the kubelet, add the following lines at the bottom of /var/lib/kubelet/config.yaml on all nodes:
featureGates:
  EphemeralContainers: true
Save the file and run the following command:
$ systemctl restart kubelet
This was enough in my case to be able to use kubectl alpha debug as explained in the documentation.
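As a quick sanity check (a sketch; the node name in the static pod name is a placeholder), you can confirm the flag was picked up:
# confirm the apiserver static pod restarted with the feature gate
kubectl -n kube-system get pod kube-apiserver-<master-node-name> -o yaml | grep feature-gates
# confirm the kubelet came back up cleanly after the config change
systemctl status kubelet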
Additional useful pages:
Ephemeral Containers
Share Process Namespace between Containers in a Pod
I'm seeing similar behaviour, although I'm running k8s 1.17 with kubectl 1.18. I was under the impression that the feature was added to k8s in 1.16 and to kubectl in 1.18.

adding master to Kubernetes cluster: cluster doesn't have a stable controlPlaneEndpoint address

How can I add a second master to the control plane of an existing Kubernetes 1.14 cluster?
The available documentation apparently assumes that both masters (in stacked control plane and etcd nodes) are created at the same time. I created my first master a while ago with kubeadm init --pod-network-cidr=10.244.0.0/16, so I don't have a kubeadm-config.yaml as referred to by that documentation.
I have tried the following instead:
kubeadm join ... --token ... --discovery-token-ca-cert-hash ... \
--experimental-control-plane --certificate-key ...
The part kubeadm join ... --token ... --discovery-token-ca-cert-hash ... is what is suggested when running kubeadm token create --print-join-command on the first master; it normally serves for adding another worker. --experimental-control-plane is for adding another master instead. The key in --certificate-key ... is as suggested by running kubeadm init phase upload-certs --experimental-upload-certs on the first master.
I receive the following errors:
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver.
The recommended driver is "systemd". Please follow the guide at
https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
error execution phase preflight:
One or more conditions for hosting a new control plane instance is not satisfied.
unable to add a new control plane instance a cluster that doesn't have a stable
controlPlaneEndpoint address
Please ensure that:
* The cluster has a stable controlPlaneEndpoint address.
* The certificates that must be shared among control plane instances are provided.
What does it mean for my cluster not to have a stable controlPlaneEndpoint address? Could this be related to controlPlaneEndpoint in the output from kubectl -n kube-system get configmap kubeadm-config -o yaml currently being an empty string? How can I overcome this situation?
As per HA - Create load balancer for kube-apiserver:
In a cloud environment you should place your control plane nodes behind a TCP forwarding load balancer. This load balancer distributes traffic to all healthy control plane nodes in its target list. The health check for an apiserver is a TCP check on the port the kube-apiserver listens on (default value :6443).
The load balancer must be able to communicate with all control plane nodes on the apiserver port. It must also allow incoming traffic on its listening port.
Make sure the address of the load balancer always matches the address of kubeadm’s ControlPlaneEndpoint.
To set ControlPlaneEndpoint config, you should use kubeadm with the --config flag. Take a look here for a config file example:
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: stable
controlPlaneEndpoint: "LOAD_BALANCER_DNS:LOAD_BALANCER_PORT"
Kubeadm config file examples are scattered across many documentation sections. I recommend that you read the /apis/kubeadm/v1beta1 GoDoc, which has fully populated examples of the YAML files used by multiple kubeadm configuration types.
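Since the cluster already exists, one possible workaround (a sketch, not an official procedure; the load-balancer address is a placeholder) is to add controlPlaneEndpoint to the ClusterConfiguration stored in the kubeadm-config ConfigMap and then regenerate the join material:
# edit the ClusterConfiguration that kubeadm stores in the cluster and add, under ClusterConfiguration:
#   controlPlaneEndpoint: "LOAD_BALANCER_DNS:LOAD_BALANCER_PORT"
kubectl -n kube-system edit configmap kubeadm-config
# re-upload the shared certificates and print a fresh join command on the first master
kubeadm init phase upload-certs --experimental-upload-certs
kubeadm token create --print-join-command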
If you are configuring a self-hosted control-plane, consider using the kubeadm alpha selfhosting feature:
[..] key components such as the API server, controller manager, and
scheduler run as DaemonSet pods configured via the Kubernetes API
instead of static pods configured in the kubelet via static files.
This PR (#59371) may clarify the differences when using a self-hosted config.
You need to copy the certificates (etcd, API server, CA, etc.) from the existing master and place them on the second master.
Then run kubeadm init. Since the certs are already present, the cert creation step is skipped and the rest of the cluster initialization steps resume.
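For reference, a sketch of the certificate files that are typically shared between stacked control-plane nodes; the destination hostname and user are placeholders, and the files must end up under /etc/kubernetes/pki/ (and pki/etcd/) on the new master:
# run on the first master; second-master and USER are placeholders
USER=ubuntu
scp /etc/kubernetes/pki/ca.crt /etc/kubernetes/pki/ca.key ${USER}@second-master:
scp /etc/kubernetes/pki/sa.key /etc/kubernetes/pki/sa.pub ${USER}@second-master:
scp /etc/kubernetes/pki/front-proxy-ca.crt /etc/kubernetes/pki/front-proxy-ca.key ${USER}@second-master:
scp /etc/kubernetes/pki/etcd/ca.crt /etc/kubernetes/pki/etcd/ca.key ${USER}@second-master: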

how to enable kubelet logging verbosity

I want to set higher logging verbosity in my k8s setup. While I managed to enable verbosity for the API server and kubectl with the --v=4 argument, I am having difficulty finding a way to pass this flag to the kubelet.
I am using the kubeadm init method to launch a small-scale cluster, where the master is also untainted so it can serve as a worker node. Can you help with enabling kubelet logging verbosity?
1) SSH to the master node
2) Append --v=4 to the /var/lib/kubelet/kubeadm-flags.env file, i.e.:
KUBELET_KUBEADM_ARGS=--cgroup-driver=cgroupfs --network-plugin=cni --pod-infra-container-image=k8s.gcr.io/pause:3.1 --v=4
3) Restart the kubelet service:
sudo systemctl restart kubelet
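To confirm the flag took effect (a quick sketch; it assumes a systemd-based node), check the running kubelet process and watch its logs:
# verify the kubelet was restarted with the new verbosity flag
ps -ef | grep kubelet | grep -e '--v=4'
# follow the (now more verbose) kubelet logs
journalctl -u kubelet -f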

After K8s Master reboot, apiserver throws error “x509: certificate has expired or is not yet valid”

I have a multimaster Kubernetes cluster set up using Kubespray.
I ran an application on it using Helm, which increased the load on the master drastically and made the master almost inaccessible. After that I shut down the masters one by one and increased RAM and CPU on them. But after rebooting, both the apiserver and scheduler pods are failing to start; they are in "CreateContainerError" state.
The apiserver is logging a lot of errors with the message x509: certificate has expired or is not yet valid.
There are other threads about this error and most of them suggest fixing the apiserver or cluster certificates. But this is a newly set up cluster and the certificates are valid until 2020.
Here are some details of my cluster.
CentOS Linux release: 7.6.1810 (Core)
Docker version: 18.06.1-ce, build e68fc7a
Kubernetes Version
Client Version: v1.13.2
Server Version: v1.13.2
It is very much possible that during the shutdown/reboot, the Docker containers for the apiserver and scheduler exited with a non-zero exit status such as 255. I suggest you first remove all containers with a non-zero exit status using the docker rm command. Do this on all masters, and on worker nodes as well.
By default, Kubernetes starts new pods for all services (apiserver, scheduler, controller-manager, DNS, pod network, etc.) after a reboot. You can see the newly started containers for these services using docker commands, for example:
docker ps -a | grep "kube-apiserver"
docker ps -a | grep "kube-scheduler"
After removing the exited containers, I believe new pods for the apiserver and scheduler should run properly in the cluster and should be in "Running" status.
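As an illustration of the clean-up described above, a sketch of the commands (assuming the kubelet recreates the static pods once the stale containers are gone):
# list exited control-plane containers
docker ps -a --filter "status=exited" | grep -E "kube-apiserver|kube-scheduler"
# remove all exited containers; the kubelet will recreate the static pods
docker ps -a --filter "status=exited" -q | xargs -r docker rm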

Mount propagation in kubernetes

I am using the mount propagation feature of Kubernetes to check the health of mount points of a certain type. I create a DaemonSet and run a script which does a simple ls on these mount points. I noticed that new mount points are not getting listed from the pods. Is this the expected behaviour?
volumeMounts:
- mountPath: /host
  name: host-kubelet
  mountPropagation: HostToContainer
volumes:
- name: host-kubelet
  hostPath:
    path: /var/lib/kubelet
Related Issue : hostPath containing mounts do not update as they change on the host #44713
In brief, Mount propagation allows sharing volumes mounted by a Container to other Containers in the same Pod, or even to other Pods on the same node.
Mount propagation of a volume is controlled by the mountPropagation field in Container.volumeMounts. Its values are:
HostToContainer - one way propagation, from host to container. If you mount anything inside the volume, the Container will see it there.
Bidirectional - In addition to propagation from host to container, all volume mounts created by the Container will be propagated back to the host, so all Containers of all Pods that use the same volume will see it as well.
Based on the documentation, the mount propagation feature is in alpha state for v1.9 clusters and is going to be beta in v1.10.
I've reproduced your case on Kubernetes v1.9.2 and found that it completely ignores the mountPropagation configuration parameter. If you check the current state of the DaemonSet or Deployment, you'll see that this option is missing from the listed YAML configuration:
$ kubectl get daemonset --export -o yaml
If you run just a Docker container with the mount propagation option, you may see that it works as expected:
docker run -d -it -v /tmp/mnt:/tmp/mnt:rshared ubuntu
Comparing the Docker container configuration with the Kubernetes pod container in the volume mount section, you may see that the last flag (shared/rshared) is missing in the Kubernetes container.
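One way to make that comparison (a sketch; the container IDs are placeholders) is to inspect the Mounts section of both containers and compare the Propagation field:
# the plain Docker container started above
docker inspect --format '{{ json .Mounts }}' <docker-container-id>
# the container Kubernetes created for the DaemonSet pod
docker inspect --format '{{ json .Mounts }}' <k8s-pod-container-id>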
And that's why it happens in Google Kubernetes Engine clusters and may happen in clusters managed by other providers:
To ensure stability and production quality, normal Kubernetes Engine clusters only enable features that are beta or higher. Alpha features are not enabled on normal clusters because they are not production-ready or upgradeable.
Since Kubernetes Engine automatically upgrades the Kubernetes control plane, enabling alpha features in production could jeopardize the reliability of the cluster if there are breaking changes in a new version.
Alpha-level feature availability: committed to the main Kubernetes repo; appears in an official release; the feature is disabled by default, but may be enabled by a flag (in case you are able to set flags).
Before mount propagation can work properly on some deployments (CoreOS, RedHat/CentOS, Ubuntu), mount share must be configured correctly in Docker as shown below.
Edit your Docker’s systemd service file. Set MountFlags as follows:
MountFlags=shared
Or, remove MountFlags=slave if present. Then restart the Docker daemon:
$ sudo systemctl daemon-reload
$ sudo systemctl restart docker
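As an alternative to editing the packaged unit file, a drop-in override achieves the same thing (a sketch; the drop-in file name is arbitrary):
# create a drop-in override for the Docker service
sudo mkdir -p /etc/systemd/system/docker.service.d
sudo tee /etc/systemd/system/docker.service.d/mount-flags.conf <<'EOF'
[Service]
MountFlags=shared
EOF
# reload systemd and restart Docker so the change takes effect
sudo systemctl daemon-reload
sudo systemctl restart docker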