I'm trying the debug CLI feature on the 1.18 release of Kubernetes, but I run into an issue when I execute the debug command.
I have created a pod as shown below.
kubectl run ephemeral-demo --image=k8s.gcr.io/pause:3.1 --restart=Never
After that, when I run this command: kubectl alpha debug -it ephemeral-demo --image=busybox --target=ephemeral-demo
Kubernetes hangs like this:
Defaulting debug container name to debugger-aaaa.
How can I resolve this issue?
It seems that you need to enable the feature gate on all control plane components as well as on the kubelets. If the feature is enabled partially (for instance, only kube-apiserver and kube-scheduler), the resources will be created in the cluster, but no containers will be created, thus there will be nothing to attach to.
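If you want to confirm that this is what is happening, a quick check (a sketch using the standard Pod API fields) is to look at the pod: the ephemeral container will be present in the spec but never gets a status:
kubectl get pod ephemeral-demo -o jsonpath='{.spec.ephemeralContainers}'
kubectl get pod ephemeral-demo -o jsonpath='{.status.ephemeralContainerStatuses}'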
In addition to the answer posted by Konstl:
To enable the EphemeralContainers feature gate correctly, edit the following files on the master nodes:
/etc/kubernetes/manifests/kube-apiserver.yaml
/etc/kubernetes/manifests/kube-controller-manager.yaml
/etc/kubernetes/manifests/kube-scheduler.yaml
and add the following line to the container command:
spec:
  containers:
  - command:
    - kube-apiserver # or kube-controller-manager/kube-scheduler
    - --feature-gates=EphemeralContainers=true # <-- add this line
Pods will restart immediately.
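If you want to double-check that the flag was picked up, one option (assuming a kubeadm-style setup where the control-plane mirror pods carry the component label) is:
$ kubectl -n kube-system get pods -l component=kube-apiserver -o yaml | grep feature-gates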
To enable the feature gate for the kubelet, edit the following file on all nodes:
/var/lib/kubelet/config.yaml
and add the following lines at the bottom:
featureGates:
  EphemeralContainers: true
Save the file and run the following command:
$ systemctl restart kubelet
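To double-check that the kubelet picked the change up, you can for example grep the config file and confirm that the service came back up:
$ grep -A1 featureGates /var/lib/kubelet/config.yaml
$ systemctl status kubelet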
This was enough in my case to be able to use kubectl alpha debug as explained in the documentation.
Additional useful pages:
Ephemeral Containers
Share Process Namespace between Containers in a Pod
I'm seeing similar behaviour, although I'm running Kubernetes 1.17 with kubectl 1.18. I was under the impression that the feature was added to Kubernetes in 1.16 and to kubectl in 1.18.
Related
I am currently working in a Minikube cluster and looking to change some flags in the Kubernetes scheduler configuration, but I can't find it. The file looks something like:
apiVersion: kubescheduler.config.k8s.io/v1alpha1
kind: KubeSchedulerConfiguration
algorithmSource:
  provider: DefaultProvider
...
disablePreemption: true
What is its name and where can I find it?
Posting this answer as a community wiki to set a baseline and to provide additional resources/references rather than giving a definitive solution.
Feel free to edit and expand.
I haven't found the file that you are referencing (KubeSchedulerConfiguration) in minikube.
The minikube provisioning process does not create it, nor does it reference it in the configuration files (/etc/kubernetes/manifests/kube-scheduler.yaml and the --config=PATH parameter).
I reckon you could take a look at other Kubernetes solutions where you can configure how your cluster is created (how kube-scheduler is configured). Some of the options are:
Kubernetes.io: Docs: Setup: Production environment: Tools: Kubeadm: Create cluster and also:
Kubernetes.io: Docs: Setup: Production environment: Tools: Kubeadm: Control plane flags
Github.com: Kubernetes sigs: Kubespray
A side note!
Both kubespray and minikube use kubeadm as a bootstrapper!
I would also consider creating an additional scheduler that would be responsible for spawning your workload (by referencing it in the YAML manifests, as sketched below):
Kubernetes.io: Docs: Tasks: Extend Kubernetes: Configure multiple schedulers
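For reference, a workload selects such an additional scheduler via spec.schedulerName; a minimal sketch (the name my-custom-scheduler is only an example and has to match whatever scheduler you deploy):
apiVersion: v1
kind: Pod
metadata:
  name: scheduled-by-second-scheduler
spec:
  schedulerName: my-custom-scheduler # must match the name your additional scheduler runs with
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1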
I haven't tested it extensively or over the long term, but I've managed to include the YAML manifest that you are referencing for the kube-scheduler.
Disclaimers!
Please consider the example below a workaround!
The method described below is not persistent.
Steps:
Start your minikube instance with the --extra-config
Connect to your minikube instance and edit/add files:
/etc/kubernetes/manifests/kube-scheduler.yaml
newly created KubeSchedulerConfiguration
Delete the failing kube-scheduler Pod and wait for it to be recreated.
Start your minikube instance with the --extra-config
As mentioned previously, you can add additional parameters to $ minikube start that are passed down to the provisioning process.
In this setup you can either pass it with $ minikube start ... or do it manually later on.
$ minikube start --extra-config=scheduler.config="/etc/kubernetes/sched.yaml"
The above parameter will add - --config=/etc/kubernetes/sched.yaml to the command of your kube-scheduler, which will then look for the file at that location.
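In other words, after provisioning, the command section of /etc/kubernetes/manifests/kube-scheduler.yaml should roughly look like this (sketch):
spec:
  containers:
  - command:
    - kube-scheduler
    - --config=/etc/kubernetes/sched.yaml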
Connect to your minikube instance ($ minikube ssh) and edit/add files:
Your kube-scheduler will fail, as you've passed it a --config argument pointing to a file that does not exist yet. To work around this you will need to:
add: /etc/kubernetes/sched.yaml with your desired configuration (a sketch of its contents follows after this list)
modify: /etc/kubernetes/manifests/kube-scheduler.yaml:
add to volumeMounts:
- mountPath: /etc/kubernetes/sched.yaml
  name: scheduler
  readOnly: true
add to volumes:
- hostPath:
    path: /etc/kubernetes/sched.yaml
    type: FileOrCreate
  name: scheduler
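As for the contents of /etc/kubernetes/sched.yaml itself, a minimal sketch based on the snippet from the question (the clientConnection.kubeconfig path assumes the standard kubeadm location of the scheduler kubeconfig):
apiVersion: kubescheduler.config.k8s.io/v1alpha1
kind: KubeSchedulerConfiguration
clientConnection:
  kubeconfig: /etc/kubernetes/scheduler.conf
algorithmSource:
  provider: DefaultProvider
disablePreemption: true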
Delete the failing kube-scheduler Pod and wait for it to be recreated.
You will need to redeploy the modified scheduler to get it running with its new config:
$ kubectl delete pod -n kube-system kube-scheduler-minikube
After some time you should see your kube-scheduler in the Ready state.
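A quick way to verify that the recreated Pod actually runs with the extra flag (sketch):
$ kubectl -n kube-system get pod kube-scheduler-minikube -o yaml | grep -- --config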
Additional resources:
Kubernetes.io: Docs: Concepts: Scheduling eviction: Kube-scheduler
Kubernetes.io: Docs: Reference: Command line tools reference: Kube-scheduler
Looking at the documentation, installing Knative requires a Kubernetes cluster v1.11 or newer with the MutatingAdmissionWebhook admission controller enabled. Checking the documentation for this, I see the following command:
kube-apiserver -h | grep enable-admission-plugins
However, kube-apiserver is running inside a Docker container on the master. Logging in to the master as admin, I am not seeing this on the command line after installation. What steps do I need to take to run this command? It's probably a basic Docker question, but I don't see it documented anywhere in the Kubernetes documentation.
So what I really need to know is whether this command line is the best way to set these plugins, and also how exactly to enter the container to execute it.
Where is kube-apiserver located?
Should I enter the container? What is the name of the container and how do I enter it to execute the command?
I think the answer from @embik that you've pointed out in the initial question is quite decent, but I'll try to shed light on some aspects that may be useful to you.
As @embik mentioned in his answer, the kube-apiserver binary actually resides in a particular container within the K8s api-server Pod, so you are free to check it; just execute /bin/sh on that Pod:
kubectl exec -it $(kubectl get pods -n kube-system | grep kube-apiserver | awk '{print $1}') -n kube-system -- /bin/sh
You might be able to propagate the desired enable-admission-plugins through the kube-apiserver command inside this Pod; however, any modification will disappear once the api-server Pod re-spawns, e.g. on a master node reboot.
The essential api-server configuration is located in /etc/kubernetes/manifests/kube-apiserver.yaml. The node agent kubelet controls the kube-apiserver Pod at runtime, and whenever its health checks fail, kubelet re-creates the affected Pod from that primary kube-apiserver.yaml file.
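So the persistent way is to edit that manifest directly; the relevant fragment would look roughly like this (the plugin list below is only an example, append to whatever your cluster already sets):
spec:
  containers:
  - command:
    - kube-apiserver
    - --enable-admission-plugins=NodeRestriction,MutatingAdmissionWebhook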
This is old, but I'm posting it in case it benefits someone in need. @Nick_Kh's answer is good enough; I just want to extend it.
In case the api-server pod fails to give you the shell access, you may directly execute the command using kubectl exec like this:
kubectl exec -it kube-apiserver-rhino -n kube-system -- kube-apiserver -h | grep enable-admission-plugins
In this case, I wanted to know which admission plugins are enabled by default, and every time I tried accessing the pod's shell (bash, sh, etc.), I ended up with an error like this:
[root#rhino]# kubectl exec -it kube-apiserver-rhino -n kube-system -- /bin/sh
OCI runtime exec failed: exec failed: container_linux.go:367: starting container process caused: exec: "/bin/sh": stat /bin/sh: no such file or directory: unknown
command terminated with exit code 126
I am running Kubernetes inside 'Docker Desktop' on Mac OS High Sierra.
Is it possible to change the flags given to the kubernetes api-server with this setup?
I can see that the api-server is running.
I am able to exec into the api-server container. When I kill the api-server so that I can run it with my desired flags, the container is immediately killed.
Try this to find the name of apiserver deployment:
kubectl -n kube-system get deploy | grep apiserver
Grab the name of the deployment and edit its configuration:
kubectl -n kube-system edit deploy APISERVER_DEPLOY_NAME
When you do that the editor will open and from there you can change apiserver command line flags. After editing you should save and close editor, then your changes will be applied.
There is no Deployment for kube-apiserver, since those pods are static: they are created and managed directly by the kubelet.
The way to change kube-apiserver's parameters is as @hanx mentioned:
ssh into the master node (not a container);
update the relevant file under /etc/kubernetes/manifests/;
restart kubelet - systemctl restart kubelet;
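As a concrete sketch (paths assume a kubeadm-provisioned control plane):
# on the master node, not inside a container
sudo vi /etc/kubernetes/manifests/kube-apiserver.yaml   # adjust the flags in the "command:" list
sudo systemctl restart kubelet                          # kubelet re-creates the static pod with the new flags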
I am using the Mount propagation feature of Kubernetes to check the health of mount points of a certain type. I create a DaemonSet and run a script which does a simple ls on these mount points. I noticed that new mount points are not getting listed from the pods. Is this the expected behaviour?
volumeMounts:
- mountPath: /host
  name: host-kubelet
  mountPropagation: HostToContainer
volumes:
- name: host-kubelet
  hostPath:
    path: /var/lib/kubelet
Related issue: hostPath containing mounts do not update as they change on the host #44713
In brief, Mount propagation allows sharing volumes mounted by a Container to other Containers in the same Pod, or even to other Pods on the same node.
Mount propagation of a volume is controlled by mountPropagation field in Container.volumeMounts. Its values are:
HostToContainer - one-way propagation, from host to container. If you mount anything inside the volume, the Container will see it there.
Bidirectional - in addition to propagation from host to container, all volume mounts created by the Container will be propagated back to the host, so all Containers of all Pods that use the same volume will see it as well.
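For completeness, a Bidirectional mount would look something like the sketch below; note that Bidirectional propagation is only allowed for privileged containers:
volumeMounts:
- mountPath: /host
  name: host-kubelet
  mountPropagation: Bidirectional
securityContext:
  privileged: true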
Based on the documentation, the Mount propagation feature is in alpha state for v1.9 clusters and is going to be beta in v1.10.
I've reproduced your case on Kubernetes v1.9.2 and found that it completely ignores the mountPropagation configuration parameter. If you check the current state of the DaemonSet or Deployment, you'll see that this option is missing from the listed YAML configuration:
$ kubectl get daemonset --export -o yaml
If you run a plain Docker container with the mount propagation option, you can see that it works as expected:
docker run -d -it -v /tmp/mnt:/tmp/mnt:rshared ubuntu
Comparing the Docker container configuration with the Kubernetes pod container in the volume mount section, you may see that the last flag (shared/rshared) is missing in the Kubernetes container.
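One way to compare the two is to inspect the mount section of each container (sketch; <container-id> is a placeholder) and look at the Propagation field:
docker inspect -f '{{ json .Mounts }}' <container-id>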
That's why this happens in Google Kubernetes Engine clusters and may happen in clusters managed by other providers:
To ensure stability and production quality, normal Kubernetes Engine clusters only enable features that are beta or higher. Alpha features are not enabled on normal clusters because they are not production-ready or upgradeable.
Since Kubernetes Engine automatically upgrades the Kubernetes control plane, enabling alpha features in production could jeopardize the reliability of the cluster if there are breaking changes in a new version.
Alpha-level feature availability: committed to the main Kubernetes repo; appears in an official release; the feature is disabled by default, but may be enabled by a flag (in case you are able to set flags).
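If you run your own control plane and kubelets (rather than a managed offering), the gate from that release line could be switched on with the component flag, for example:
--feature-gates=MountPropagation=true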
Before mount propagation can work properly on some deployments (CoreOS, RedHat/CentOS, Ubuntu), mount share must be configured correctly in Docker, as shown below.
Edit your Docker’s systemd service file. Set MountFlags as follows:
MountFlags=shared
Or, remove MountFlags=slave if present. Then restart the Docker daemon:
$ sudo systemctl daemon-reload
$ sudo systemctl restart docker
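Afterwards you can verify the propagation mode of the root mount (or whichever mount you share), for example:
$ findmnt -o TARGET,PROPAGATION /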
I've been trying to shut down my Kubernetes cluster, but I couldn't manage to do it.
When I type
kubectl cluster-info
I can see that my cluster is still running.
I tried running the script
kube-down.sh
but it didn't work.
I deleted all pods. How can I shut it down?
The tear down section of the official documentation says:
To undo what kubeadm did, you should first drain the node and make sure that the node is empty before shutting it down.
Talking to the master with the appropriate credentials, run:
kubectl drain <node name> --delete-local-data --force --ignore-daemonsets
kubectl delete node <node name>
Then, on the node being removed, reset all kubeadm installed state:
kubeadm reset
You cannot use the kubectl stop command as it has been deprecated. If you have created pods using a YAML file, I suggest you use
kubectl delete -f <filename>.yml to stop any running pod.
You can also delete service associated with running pods by using the following command:
# Delete pods and services with same names "baz" and "foo"
kubectl delete pod,service baz foo
When using kube-down.sh you have to make sure that all the environment variables which were adjusted for kube-up.sh are also used during the shutdown.
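For example, if the cluster was brought up with cluster/kube-up.sh from the Kubernetes repository, the tear-down call should reuse the same variables (the value below is just a placeholder):
export KUBERNETES_PROVIDER=gce # the same provider you used for kube-up.sh
./cluster/kube-down.sh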