Can't run kube-apiserver --enable-admission-plugins=DefaultStorageClass - kubernetes

After installing a three-node cluster following
https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/
I entered the kube-apiserver container using
sudo docker exec --user root -it 1ea54fd4cd683 /bin/sh
and executed
kube-apiserver --enable-admission-plugins=DefaultStorageClass
but it writes
I0923 14:37:58.270848      90 server.go:703] external host was not specified, using 192.168.41.29
W0923 14:37:58.271386      90 authentication.go:378] AnonymousAuth is not allowed with the AlwaysAllow authorizer. Resetting AnonymousAuth to false. You should use a different authorizer
Error: --etcd-servers must be specified
Could someone explain why this happens and how to fix it?

First of all, I'm pretty sure that's not the recommended way to add flags to the apiserver.
Those changes will not persist.
You probably want to edit /etc/kubernetes/manifests/kube-apiserver.json on the master, kill the kube-apiserver pod, and wait for it to respawn.
I'm guessing here, but try adding --anonymous-auth=false ?
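If you go the manifest route, here is a minimal sketch of what that edit looks like (assuming a kubeadm static-pod setup; on newer kubeadm versions the manifest is /etc/kubernetes/manifests/kube-apiserver.yaml rather than .json):
# edit the static pod manifest on the master; the kubelet watches this
# directory and recreates the kube-apiserver pod automatically after changes
sudo vi /etc/kubernetes/manifests/kube-apiserver.yaml
# inside the manifest, append the flag to the existing kube-apiserver command list:
#   - --enable-admission-plugins=DefaultStorageClass
# the existing flags (--etcd-servers, certificate paths, etc.) must stay in place,
# which is also why running the bare binary by hand inside the container fails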

Related

Kubelet config yaml is missing when restarting the worker node's docker service

When I restart the Docker service on the worker node, the kubelet logs on the master node report a "no such file" error.
# on the worker node
# systemctl restart docker
# on the master node
# journalctl -u kubelet
# failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
Arghya is right but I would like to add some info you should be aware of:
You can execute kubeadm init phase kubelet-start to only invoke a particular step that will write the kubelet configuration file and environment file and then start the kubelet.
After performing a restart there is a chance that swap has been re-enabled. Make sure to run swapoff -a to turn it off.
If you encounter any token validation problems then simply run kubeadm token create --print-join-command and then do the join process with the provided info. Remember that tokens expire after 24 hours by default.
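Putting those points together, a minimal recovery sketch (run on the node whose kubelet config is missing; the join command printed by the master is just a placeholder):
sudo swapoff -a
sudo kubeadm init phase kubelet-start    # rewrites the kubelet config and env file, then starts kubelet
# if the node also needs to (re)join and the old token has expired, on the master run:
kubeadm token create --print-join-command
# then execute the printed kubeadm join ... command on the worker node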
If you wish to know more about kubeadm init phase you can find it here and here.
Please let me know if that helped.
You might have done a kubeadm reset, which cleans up all files.
Just do kubeadm reset --force to reset the node, then kubeadm init on the master node and kubeadm join on the worker node thereafter.

Nginx ingress controller at kubernetes not allowing installation of some package

I am looking to execute
apt install tcpdump
but I am facing a permission denial. When I try to switch to root, it asks me for a password, and I don't know where to get that password from.
I installed the nginx Helm chart from the stable/nginx repository with no RBAC.
Please see the snapshot for details on the error; I tried installing tcpdump in the pod after ssh-ing into it.
In Using GDB with Nginx, you can find a troubleshooting section.
In short:
find the node where your pod is running (kubectl get pods -o wide)
ssh into the node
find the docker_ID for this image (docker ps | grep pod_name)
run docker exec -it --user=0 --privileged docker_ID bash
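A minimal sketch of those steps end to end (pod name, node name, and user are placeholders):
kubectl get pods -o wide                  # note the NODE column for your pod
ssh user@<node-name>                      # ssh into that node
docker ps | grep <pod_name>               # find the container ID (docker_ID)
docker exec -it --user=0 --privileged <docker_ID> bash
apt update && apt install -y tcpdump      # now runs as root inside the container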
Note: Runtime privilege and Linux capabilities
When the operator executes docker run --privileged, Docker will enable access to all devices on the host as well as set some configuration in AppArmor or SELinux to allow the container nearly all the same access to the host as processes running outside containers on the host. Additional information about running with --privileged is available on the Docker Blog.
Additional resources:
ROOT IN CONTAINER, ROOT ON HOST
Hope this helps.

kubernetes: pods cannot connect to internet

I cannot connect to the internet from pods. My Kubernetes cluster is behind a proxy.
I have already set /etc/environment and /etc/systemd/system/docker.service.d/http_proxy.conf, and confirmed that the environment variables (http_proxy, https_proxy, HTTP_PROXY, HTTPS_PROXY, no_proxy, NO_PROXY) are correct.
But in the pod, when I try echo $http_proxy, the output is empty. I also tried curl -I https://rubygems.org but it returned curl: (6) Could not resolve host: rubygems.org.
So I think the pod doesn't receive the environment variables correctly, or there is something I forgot to do. What should I do to solve this?
I tried export http_proxy=http://xx.xx.xxx.xxx:xxxx; export https_proxy=....
After that, I tried curl -I https://rubygems.org again and received a 200 response header.
What I see is that you have the wrong proxy.conf name.
As per the official documentation the name should be /etc/systemd/system/docker.service.d/http-proxy.conf and not /etc/systemd/system/docker.service.d/http_proxy.conf.
Next you add the proxies, reload the daemon, and restart Docker, as mentioned in the other answer provided in the comments.
/etc/systemd/system/docker.service.d/http-proxy.conf:
Content:
[Service]
Environment="HTTP_PROXY=http://x.x.x:xxxx"
Environment="HTTPS_PROXY=http://x.x.x.x:xxxx"
# systemctl daemon-reload
# systemctl restart docker
Or, as per the #mk_ska answer, you can add an http_proxy setting to your Docker machine in order to forward packets from the nested Pod container through the target proxy server.
For Ubuntu-based operating systems:
Add an export http_proxy='http://<proxy-host>:<port>' record to the file /etc/default/docker
For CentOS-based operating systems:
Add an export http_proxy='http://<proxy-host>:<port>' record to the file /etc/sysconfig/docker
Afterwards restart the Docker service.
The above will set the proxy for all containers run by the Docker engine.
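If you only need the proxy inside specific pods (which is effectively what the manual export in the question did), the variables can also be injected into the pod itself. A minimal sketch using kubectl set env (deployment name and proxy address are placeholders):
kubectl set env deployment/<my-app> \
  HTTP_PROXY=http://<proxy-host>:<port> \
  HTTPS_PROXY=http://<proxy-host>:<port> \
  NO_PROXY=localhost,127.0.0.1,.cluster.local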

The connection to the server 10.0.x.x:6443 was refused after restarting the VM where kubernetes master was installed using kubeadm

I installed a Kubernetes master using kubeadm successfully on a VM (VirtualBox). The problem is that if I stop the machine and restart it, the master node seems to be down:
kubectl get nodes
The connection to the server 10.0.x.x:6443 was refused - did you specify the right host or port?
How can I make sure it will always be up after restarting the VM?
UPDATE:
After restarting the VM, this is what I have to do to make the master node start:
sudo swapoff -a
sudo systemctl restart kubelet.service
Why? How can I fix it so that it starts without having to input that?
The problem is that if I stop the machine and restart it the master node seems to be down
Since it was a kubeadm installation that worked properly before the restart, it seems an environment variable is missing after the restart. Try running this before kubectl get nodes:
export KUBECONFIG=/etc/kubernetes/admin.conf
If it works normally after that, then you need to make sure the KUBECONFIG environment variable is properly configured upon restart, either by adding it to .bashrc or similar.
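The usual way to make that permanent is the sequence kubeadm itself prints after init (copying the admin config into your user's home), or appending the export to your shell profile:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# or, alternatively:
echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> ~/.bashrc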
Edited:
Why? How can I fix it so that it starts without having to input that?
Ah, the swap file is teasing you. By default, kubelet will not start if swap is enabled. You have two options:
Remove swap: That's easy, just disable it as you already listed, but make it permanent by commenting out the swap line in the /etc/fstab file. Add # before the line creating the swap mount point (see the sketch after these options) and next time you restart you won't have it.
Allow kubelet to run with swap enabled: I know, it's not recommended by the documentation, but if you like to live dangerously, you can add/edit the following line in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf:
Environment="KUBELET_EXTRA_ARGS=--fail-swap-on=false"
and on the next restart you will be able to run kubelet with swap enabled.
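For the first option, a minimal sketch of disabling swap permanently (assuming GNU sed and a whitespace-separated swap entry in /etc/fstab):
sudo swapoff -a
# comment out the swap line so it does not come back after the next reboot
sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab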
I got my problem fixed by clearing some space on the HDD; it seems that disk space was low. Then I restarted the server and the problem was gone.
I encountered a similar issue where kubectl commands work on my master node but the same commands executed on the slave node give me this error:
The connection to the server 10.0.x.x:6443 was refused - did you specify the right host or port?
The solution that worked for me is as below:
I copied the $KUBECONFIG file from the master and placed it in the slave node's .kube/ location and it worked (I have only 2 nodes, one master and one slave).
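A minimal sketch of that copy (node name and user are placeholders; on the master the file is typically /etc/kubernetes/admin.conf):
# run on the master node
ssh <user>@<slave-node> 'mkdir -p ~/.kube'
scp /etc/kubernetes/admin.conf <user>@<slave-node>:/home/<user>/.kube/config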
You just need to kill the kubelet service and restart it again. Pods and containers will be running just as they were before the reboot.
pkill kubelet
and
systemctl restart kubelet
good luck

connect to shell terminal of other container in a pod

When I define multiple containers in a pod/pod template, e.g. one container running an agent and another running php-fpm, how can they access each other? I need the agent container to connect to php-fpm by shell and execute a few steps interactively through the agent container.
Based on my understanding, we can package kubectl into the agent container and use kubectl exec -it <container id> sh to connect to the container. But I don't want the agent container to have more privilege than it needs to connect to the target container, which is php-fpm.
Is there a better way for agent container to connect to php-fpm by a shell and execute commands interactively?
Also, I wasn't successful in running kubectl from a container when using minikube, due to the following errors:
docker run -it -v ~/.kube:/root/.kube lachlanevenson/k8s-kubectl get nodes
Error in configuration:
* unable to read client-cert /Users/user/.minikube/apiserver.crt for minikube due to open /Users/user/.minikube/apiserver.crt: no such file or directory
* unable to read client-key /Users/user/.minikube/apiserver.key for minikube due to open /Users/user/.minikube/apiserver.key: no such file or directory
* unable to read certificate-authority /Users/user/.minikube/ca.crt for minikube due to open /Users/user/.minikube/ca.crt: no such file or directory
First off, every Pod within a k8s cluster has its own k8s credentials provided by /var/run/secrets/kubernetes.io/serviceaccount/token, and thus there is absolutely no need to attempt to volume mount your home directory into a docker container.
The reason you are getting the client-cert error is that ~/.kube/config merely contains paths pointing to the SSL key, SSL certificate, and SSL CA certificate stored elsewhere (here, under ~/.minikube); but I won't speak to fixing that problem further since there is no good reason to be using that approach.
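If you do want to run kubectl from inside a pod (for example the agent container), a minimal sketch using the in-cluster service account instead of a mounted kubeconfig (pod and container names are placeholders, and the service account needs RBAC permission for pods/exec):
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
kubectl --server=https://kubernetes.default.svc \
  --certificate-authority=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
  --token="$TOKEN" \
  exec -it <php-fpm-pod> -c <php-fpm-container> -- sh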