kubernetes: pods cannot connect to internet

I cannot connect to the internet from pods. My Kubernetes cluster is behind a proxy.
I have already set /etc/environment and /etc/systemd/system/docker.service.d/http_proxy.conf, and confirmed that the environment variables (http_proxy, https_proxy, HTTP_PROXY, HTTPS_PROXY, no_proxy, NO_PROXY) are correct.
But in the pod, when I tried echo $http_proxy, the output was empty. I also tried curl -I https://rubygems.org, but it returned curl: (6) Could not resolve host: rubygems.org.
So I think the pod doesn't receive the environment values correctly, or there is something I forgot to do. What should I do to solve this?
I tried export http_proxy=http://xx.xx.xxx.xxx:xxxx; export https_proxy=....
After that, I tried curl -I https://rubygems.org again and received the headers with a 200 response.

What I see is that you have the wrong proxy.conf file name.
Per the official documentation, the name should be /etc/systemd/system/docker.service.d/http-proxy.conf, not /etc/systemd/system/docker.service.d/http_proxy.conf.
Next, add the proxies, reload the systemd daemon, and restart Docker, as mentioned in another answer provided in the comments:
/etc/systemd/system/docker.service.d/http-proxy.conf:
Content:
[Service]
Environment="HTTP_PROXY=http://x.x.x:xxxx"
Environment="HTTPS_PROXY=http://x.x.x.x:xxxx"
# systemctl daemon-reload
# systemctl restart docker
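To confirm that the daemon actually picked up the variables after the restart, you can print the service environment:
# systemctl show --property=Environment docker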
Or, as per #mk_ska's answer, you can add an http_proxy setting to your Docker machine in order to forward packets from the nested Pod container through the target proxy server.
For Ubuntu-based operating systems:
Add an export http_proxy='http://<host>:<port>' record to the file
/etc/default/docker
For CentOS-based operating systems:
Add an export http_proxy='http://<host>:<port>' record to the file
/etc/sysconfig/docker
Afterwards, restart the Docker service.
The above will set the proxy for all containers run by the Docker engine.
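Note that daemon-level proxy settings only affect the Docker engine itself (e.g. image pulls); processes inside a Pod do not inherit them. If the workload itself needs the proxy, set the variables on the Pod. A minimal sketch with kubectl run (the pod name and proxy address are placeholders):
$ kubectl run proxy-test --image=curlimages/curl --restart=Never \
    --env="http_proxy=http://<host>:<port>" \
    --env="https_proxy=http://<host>:<port>" \
    --command -- curl -I https://rubygems.org
$ kubectl logs proxy-test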

Related

Location of Kubernetes config directory with Docker Desktop on Windows

I am running a local Kubernetes cluster through Docker Desktop on Windows. I'm attempting to modify my kube-apiserver config, and all of the information I've found has said to modify /etc/kubernetes/manifests/kube-apiserver.yaml on the master. I haven't been able to find this file, and am not sure what the proper way is to do this. Is there a different process because the cluster is through Docker Desktop?
Is there a different process because the cluster is through Docker Desktop?
You can get access to the kube-apiserver.yaml with a Kubernetes cluster that is running on Docker Desktop, but only in a "hacky" way. I've included the explanation below.
For setups that require such reconfigurations, I encourage you to use a different solution, for example minikube.
Minikube has a feature that allows you to pass additional options to the Kubernetes components. You can read more about the --extra-config ExtraOption in the following documentation:
Minikube.sigs.k8s.io: Docs: Commands: Start
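For example, an apiserver flag can be passed at start time like this (the plugin name is only an illustration):
$ minikube start --extra-config=apiserver.enable-admission-plugins=DefaultStorageClass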
As for the reconfiguration of kube-apiserver.yaml with Docker Desktop:
You need to run the following command:
docker run -it --privileged --pid=host debian nsenter -t 1 -m -u -n -i sh
The above command enters the namespaces of the Docker Desktop VM's init process and will allow you to run:
vi /etc/kubernetes/manifests/kube-apiserver.yaml
This lets you edit the API server configuration. The Pod running kube-apiserver will be restarted with the new parameters.
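You can watch the restart happen from outside with:
$ kubectl get pods -n kube-system -w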
You can check the StackOverflow answers below for more reference:
Stackoverflow.com: Answer: Where are the Docker Desktop for Windows kubelet logs located?
Stackoverflow.com: Answer: How to change the default nodeport range on Mac (docker-desktop)?
I've used this answer without the $ screen command, and I was able to reconfigure the kube-apiserver on Docker Desktop on Windows.

"kubectl get pods -A" command not working

I installed Kubernetes in VirtualBox. Previously it was working properly, but now it shows The connection to the server 192.168.42.141:6443 was refused - did you specify the right host or port? Please help.
The connection to the server 192.168.42.141:6443 was refused - did you specify the right host or port?
According to the issue, the kube-apiserver might not be in a running state. To check the apiserver status, run the following command:
$ docker ps
# If the above is not showing the apiserver container, it is stopped. To see stopped containers, run:
$ docker ps -a
P.S.: From the comments, there is also a version mismatch. To update kubectl, follow this.
kubectl on any machine reads the current context from the kubeconfig file. The file is located at the path $HOME/.kube/config.
Clusters are configured inside this file, along with the IP or domain name of each cluster. If the IP is invalid or not reachable, or the domain name cannot be resolved, or the config file is corrupted or empty, then this error occurs.
In brief, you need to check your config file. It will save you a lot of effort.
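A quick way to see what kubectl is actually pointing at:
$ kubectl config view --minify
$ kubectl cluster-info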

Nginx ingress controller at kubernetes not allowing installation of some package

I am looking to execute
apt install tcpdump
but I am facing a permission denial. When trying to switch to root, it asks me for a password, and I don't know where to get that password.
I installed the nginx Helm chart from the stable/nginx repository with no RBAC.
Please see the snapshot for details of the error I hit while trying to install tcpdump in the pod after SSH-ing into it.
In Using GDB with Nginx, you can find a troubleshooting section.
In short:
find the node where your pod is running (kubectl get pods -o wide)
ssh into the node
find the docker_ID for this image (docker ps | grep pod_name)
run docker exec -it --user=0 --privileged docker_ID bash
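Put together, the sequence looks roughly like this (node, pod name, and container ID are placeholders):
$ kubectl get pods -o wide | grep <pod_name>
$ ssh <node>
$ docker ps | grep <pod_name>
$ docker exec -it --user=0 --privileged <docker_ID> bash
# apt-get update && apt-get install -y tcpdump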
Note: Runtime privilege and Linux capabilities
When the operator executes docker run --privileged, Docker will enable access to all devices on the host as well as set some configuration in AppArmor or SELinux to allow the container nearly all the same access to the host as processes running outside containers on the host. Additional information about running with --privileged is available on the Docker Blog.
Additional resources:
ROOT IN CONTAINER, ROOT ON HOST
Hope this helps.

kubectl : Unable to connect to the server : dial tcp 192.168.214.136:6443: connect: no route to host

I recently installed Kubernetes on VMware and also configured a few pods; while configuring them, it automatically used the IP of the VM. I was able to access the application at the time, but then I rebooted the VM and the machine that hosts it. During this, the IP of the VM changed, I guess, and now I am getting the error below when using the command kubectl get pod -n <namespaceName>:
userX@ubuntu:~$ kubectl get pod -n NameSpaceX
Unable to connect to the server: dial tcp 192.168.214.136:6443: connect: no route to host
userX@ubuntu:~$ kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:11:31Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
Unable to connect to the server: dial tcp 192.168.214.136:6443: connect: no route to host
kubectl cluster-info as well as other related commands give the same output.
In the VMware Workstation settings, we are using a network adapter that shares the host's IP address. We are not sure if that has any impact.
We also tried to add the entries below to /etc/hosts; it is not working.
127.0.0.1 localhost
192.168.214.136 localhost
127.0.1.1 ubuntu
I expect to get the pods running again to access the application. Instead of reinstalling all pods, which is time consuming, we are looking for a quick workaround to get the pods back to a running state.
If you use minikube, sometimes all you need is just to restart minikube.
Run:
minikube start
I encountered the same issue - the problem was that the master node didn't expose port 6443 outside.
Below are the steps I took to fix it.
1 ) Check the IP of the api-server.
This can be verified via the .kube/config file (under the server field) or with: kubectl describe pod/kube-apiserver-<master-node-name> -n kube-system.
2 ) Run curl https://<kube-apiserver-IP>:6443 and see if port 6443 is open.
3 ) If port 6443 is open, you should get something related to the certificate, like:
curl: (60) SSL certificate problem: unable to get local issuer certificate
4 ) If port 6443 is not open:
4.A ) SSH into master node.
4.B ) Run sudo firewall-cmd --add-port=6443/tcp --permanent (I'm assuming firewalld is installed).
4.C ) Run sudo firewall-cmd --reload.
4.D ) Run sudo firewall-cmd --list-all and you should see port 6443 is updated:
public
target: default
icmp-block-inversion: no
interfaces:
sources:
services: dhcpv6-client ssh
ports: 6443/tcp <---- Here
protocols:
masquerade: no
forward-ports:
source-ports:
icmp-blocks:
rich rules:
The common practice is to copy the config file to the home directory:
sudo cp /etc/kubernetes/admin.conf ~/.kube/config && sudo chown $(id -u):$(id -g) $HOME/.kube/config
Also, make sure that the api-server address is valid:
server: https://<master-node-ip>:6443
If not, you can manually edit it using any text editor.
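A quick way to check the address, assuming the default kubeconfig path:
$ grep 'server:' $HOME/.kube/config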
You need to export the admin.conf file as your kubeconfig before running the kubectl commands. You can set it as an environment variable (note that the variable name is uppercase KUBECONFIG):
export KUBECONFIG=<path>/admin.conf
After this you should be able to run the kubectl commands. I am hoping that your setup of the K8s cluster is proper.
Last night I had the exact same error installing Kubernetes using this puppet module: https://forge.puppet.com/puppetlabs/kubernetes
Turns out that it was an incorrect iptables setting on the master that blocked all non-local requests toward the API.
The way I solved it (a brute-force solution) was to:
completely remove all installed k8s-related software (also all config files, etcd data, Docker images, mounted tmpfs filesystems, ...)
wipe the iptables completely: https://serverfault.com/questions/200635/best-way-to-clear-all-iptables-rules
reinstall
This is what solved the problem in my case.
There is probably a much nicer and cleaner way to do this (i.e. simply change the iptables rules to allow access).
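For reference, the linked recipe for a full iptables reset boils down to roughly the following (note: this leaves the host without any firewall rules until they are reconfigured):
$ sudo iptables -P INPUT ACCEPT
$ sudo iptables -P FORWARD ACCEPT
$ sudo iptables -P OUTPUT ACCEPT
$ sudo iptables -t nat -F
$ sudo iptables -t mangle -F
$ sudo iptables -F
$ sudo iptables -X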
If you are getting the error below, then also check the token validity.
Unable to connect to the server: dial tcp 192.168.93.10:6443: connect: no route to host
Check your token validity by using the command kubeadm token list. If your token is expired, then you have to reset the cluster using kubeadm reset and then initialize it again using the command kubeadm init --token-ttl 0.
Then check the status of the token again using kubeadm token list. Note that here the TTL value will be <forever> and the Expires value will be <never>.
Example:
[root@master1 ~]# kubeadm token list
TOKEN TTL EXPIRES USAGES DESCRIPTION EXTRA GROUPS
nh48tb.d79ysdsaj8bchms9 <forever> <never> authentication,signing The default bootstrap token generated by 'kubeadm init'. system:bootstrappers:kubeadm:default-node-token
(Screenshot in the original answer: Ubuntu 22.04 LTS, selecting the docker-desktop context.)
Select the docker-desktop context and run your command again, e.g. kubectl apply -f <myimage.yaml>
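The context can also be switched from the command line; docker-desktop is the context name Docker Desktop creates:
$ kubectl config use-context docker-desktop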
Run the minikube start command.
The reason behind this is that your minikube cluster with the docker driver stopped when you shut down the system.
To all those who are trying to learn and experiment with Kubernetes using Ubuntu on an Oracle VM:
The IP address is assigned to the guest OS/VM based on the network adapter selection. Depending on your adapter selection, you need to configure the settings in the Oracle VM network section or in your router settings.
See this link for the most common Oracle VM network adapters:
https://www.nakivo.com/blog/virtualbox-network-setting-guide/
I was using a bridged adapter, which puts the VM and the host OS in parallel. So my router was randomly assigning an IP to my VM after every restart, my cluster stopped working, and I was getting the exact same error message posted in the question.
> k get pods -A
> Unable to connect to the server: dial tcp 192.168.214.136:6443: connect: no route to host
> systemctl status kubelet
> ........
> ........ "Error getting node" err="node \"node\" not found"
The cluster started working again after reserving a static IP address for my VM in the router settings. (If you are using a NAT adapter, you should configure it in the VM network settings.)
When you reserve an IP address for your VM, make sure to assign the same old IP address that was used when configuring the kubelet.
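On a kubeadm-based install, you can check which IP the control plane was configured with (the path assumes kubeadm defaults):
$ sudo grep 'server:' /etc/kubernetes/kubelet.conf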

Can't run kube-apiserver --enable-admission-plugins=DefaultStorageClass

After installing a three-node cluster per
https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/
I entered the ApiServer container using
sudo docker exec --user root -it 1ea54fd4cd683 /bin/sh
and executed
kube-apiserver --enable-admission-plugins=DefaultStorageClass
but it writes
I0923 14:37:58.270848      90 server.go:703] external host was not specified, using 192.168.41.29
W0923 14:37:58.271386      90 authentication.go:378] AnonymousAuth is not allowed with the AlwaysAllow authorizer. Resetting AnonymousAuth to false. You should use a different authorizer
Error: --etcd-servers must be specified
Could somebody say why this happens and how to fix it?
First of all, I'm pretty sure that's not the recommended way to add flags to the apiserver. Those changes will not persist.
You probably want to edit /etc/kubernetes/manifests/kube-apiserver.yaml on the master, kill the kube-apiserver Pod, and wait for it to respawn.
I'm guessing here, but try adding --anonymous-auth=false ?
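For reference, a sketch of persisting such a flag via the static Pod manifest (kubeadm default path; the kubelet restarts the apiserver Pod automatically when the file changes):
$ sudo vi /etc/kubernetes/manifests/kube-apiserver.yaml
# inside the manifest, add the flag to the container's command list, e.g.:
#   - --enable-admission-plugins=DefaultStorageClass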