I installed Kubernetes in VirtualBox. Previously it was working properly, but now it is showing "The connection to the server 192.168.42.141:6443 was refused - did you specify the right host or port?". Please help.
The connection to the server 192.168.42.141:6443 was refused - did you specify the right host or port?
According to the issue, the kube-apiserver may not be running. To check the apiserver status, run the following command:
$ docker ps
# If the above is not showing the apiserver container, then it is stopped. To see stopped containers, run:
$ docker ps -a
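If docker ps -a shows the apiserver container in an exited state, its logs usually point at why it stopped; using the container ID printed by the previous command:
$ docker logs <apiserver-container-id>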
P.S.: From the comments, there is also a version mismatch. To update kubectl, follow this.
kubectl on any machine reads the current context from the kubeconfig file, located at $HOME/.kube/config.
Clusters are configured inside this file along with the IP or domain name of each cluster. This error occurs if the IP is invalid or unreachable, if the domain name cannot be resolved, or if the config file is corrupted or empty.
In brief, you need to check your config file. It will save you a lot of effort.
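A quick way to inspect the file without opening it in an editor is via kubectl itself; these commands print the configured clusters and the active context:
$ kubectl config view
$ kubectl config current-context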
I cannot connect to the internet from pods. My Kubernetes cluster is behind a proxy.
I have already set /etc/environment and /etc/systemd/system/docker.service.d/http_proxy.conf, and confirmed that the environment variables (http_proxy, https_proxy, HTTP_PROXY, HTTPS_PROXY, no_proxy, NO_PROXY) are correct.
But in the pod, when I tried echo $http_proxy, the answer was empty. I also tried curl -I https://rubygems.org, but it returned curl: (6) Could not resolve host: rubygems.org.
So I think the pod doesn't receive the environment variables correctly, or there is something I forgot to do. How can I solve this?
I tried export http_proxy=http://xx.xx.xxx.xxx:xxxx; export https_proxy=....
After that, I tried curl -I https://rubygems.org again and received a header with 200.
What I see is that you have the wrong proxy.conf file name.
As per the official documentation, the name should be /etc/systemd/system/docker.service.d/http-proxy.conf and not /etc/systemd/system/docker.service.d/http_proxy.conf.
Next you add the proxies, reload the daemon, and restart docker, as mentioned in the other answer provided in the comments:
/etc/systemd/system/docker.service.d/http-proxy.conf:
Content:
[Service]
Environment="HTTP_PROXY=http://x.x.x:xxxx"
Environment="HTTPS_PROXY=http://x.x.x.x:xxxx"
# systemctl daemon-reload
# systemctl restart docker
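To verify that the Docker daemon actually picked up the variables after the restart, its effective environment can be checked via systemd (this shows only what systemd passes to the daemon, nothing container-specific):
# systemctl show --property=Environment docker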
Or, as per @mk_ska's answer, you can
add http_proxy setting to your Docker machine in order to forward
packets from the nested Pod container through the target proxy server.
For Ubuntu based operating system:
Add an export http_proxy='http://<proxy-host>:<port>' record to the file
/etc/default/docker
For Centos based operating system:
Add an export http_proxy='http://<proxy-host>:<port>' record to the file
/etc/sysconfig/docker
Afterwards restart Docker service.
The above will set the proxy for all containers run by the Docker engine.
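Even with the daemon proxy in place, it is worth verifying from inside a Pod whether the variables are actually visible there, since that is what the echo $http_proxy test checks. A quick experiment (the x.x.x.x:xxxx values are placeholders for your proxy):
$ kubectl run proxy-test --rm -it --restart=Never --image=busybox --env="http_proxy=http://x.x.x.x:xxxx" -- sh -c 'echo $http_proxy'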
I recently installed Kubernetes on VMware and also configured a few pods. While configuring those pods, it automatically used the IP of the VM. I was able to access the application during that time, but then I recently rebooted the VM and the machine which hosts the VM. During this, the IP of the VM got changed, I guess, and now I am getting the below error when using the command kubectl get pod -n <namespaceName>:
userX@ubuntu:~$ kubectl get pod -n NameSpaceX
Unable to connect to the server: dial tcp 192.168.214.136:6443: connect: no route to host
userX@ubuntu:~$ kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:11:31Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
Unable to connect to the server: dial tcp 192.168.214.136:6443: connect: no route to host
kubectl cluster-info as well as other related commands give the same output.
In VMware Workstation settings, we are using the network adapter which shares the host's IP address setting. We are not sure if it has any impact.
We also tried to add the below entries in /etc/hosts, but it is not working.
127.0.0.1 localhost
192.168.214.136 localhost
127.0.1.1 ubuntu
I expect to get the pods running again so I can access the application. Instead of reinstalling all pods, which is time consuming, we are looking for a quick workaround so that the pods get back to a running state.
If you use minikube, sometimes all you need is just to restart minikube.
Run:
minikube start
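To confirm that a stopped local cluster is really the cause, minikube can report its own state first:
$ minikube status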
I encountered the same issue - the problem was that the master node didn't expose port 6443 outside.
Below are the steps I took to fix it.
1 ) Check the IP of the api-server.
This can be verified via the .kube/config file (under the server field) or with: kubectl describe pod/kube-apiserver-<master-node-name> -n kube-system.
2 ) Run curl https://<kube-apiserver-IP>:6443 and see if port 6443 is open.
3 ) If port 6443 is open, you should get something related to the certificate, like:
curl: (60) SSL certificate problem: unable to get local issuer certificate
4 ) If port 6443 is not open:
4.A ) SSH into master node.
4.B ) Run sudo firewall-cmd --add-port=6443/tcp --permanent (I'm assuming firewalld is installed).
4.C ) Run sudo firewall-cmd --reload.
4.D ) Run sudo firewall-cmd --list-all and you should see that port 6443 is now listed:
public
target: default
icmp-block-inversion: no
interfaces:
sources:
services: dhcpv6-client ssh
ports: 6443/tcp <---- Here
protocols:
masquerade: no
forward-ports:
source-ports:
icmp-blocks:
rich rules:
The common practice is to copy the config file to the home directory:
sudo cp /etc/kubernetes/admin.conf ~/.kube/config && sudo chown $(id -u):$(id -g) $HOME/.kube/config
Also, make sure that the api-server address is valid.
server: https://<master-node-ip>:6443
If not, you can manually edit it using any text editor.
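As a sketch of the same edit done from the CLI instead, assuming the default cluster name kubernetes that kubeadm generates:
$ kubectl config set-cluster kubernetes --server=https://<master-node-ip>:6443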
You need to export the admin.conf file as the kubeconfig before running the kubectl commands. You may put this in an env variable (note that the variable name must be the uppercase KUBECONFIG):
export KUBECONFIG=<path>/admin.conf
After this you should be able to run the kubectl commands. I am hoping that your setup of the K8s cluster is otherwise proper.
Last night I had the exact same error installing Kubernetes using this puppet module: https://forge.puppet.com/puppetlabs/kubernetes
It turns out that it was an incorrect iptables setting on the master that blocked all non-local requests towards the API.
The way I solved it (brute-force solution) was by:
completely removing all installed k8s-related software (also all config files, etcd data, docker images, mounted tmpfs filesystems, ...)
wiping the iptables completely https://serverfault.com/questions/200635/best-way-to-clear-all-iptables-rules
reinstalling
This is what solved the problem in my case.
There is probably a much nicer and cleaner way to do this (i.e. simply changing the iptables rules to allow access).
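For reference, that non-destructive variant would presumably be a single rule like the one below, assuming the INPUT chain of the default filter table is what drops the traffic (-I inserts it ahead of any REJECT rule):
$ sudo iptables -I INPUT -p tcp --dport 6443 -j ACCEPT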
If you are getting the below error, then you should also check the token validity.
Unable to connect to the server: dial tcp 192.168.93.10:6443: connect: no route to host
Check your token validity by using the command kubeadm token list. If your token is expired, then you have to reset the cluster using kubeadm reset and then initialize it again using the command kubeadm init --token-ttl 0.
Then check the status of the token again using kubeadm token list. Note that here the TTL value will be <forever> and the EXPIRES value will be <never>.
Example:
[root@master1 ~]# kubeadm token list
TOKEN TTL EXPIRES USAGES DESCRIPTION EXTRA GROUPS
nh48tb.d79ysdsaj8bchms9 <forever> <never> authentication,signing The default bootstrap token generated by 'kubeadm init'. system:bootstrappers:kubeadm:default-node-token
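As a lighter-weight alternative to a full kubeadm reset, a fresh token can usually be minted on the existing cluster instead; this variant also prints the matching kubeadm join command:
$ sudo kubeadm token create --print-join-command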
[Screenshot: Ubuntu 22.04 LTS]
Select docker-desktop and run your command again, e.g. kubectl apply -f <myimage.yaml>
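The same selection can also be made from the terminal:
$ kubectl config use-context docker-desktop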
Run the minikube start command.
The reason behind this is that your minikube cluster, with the docker driver, stopped when you shut down the system.
To all those who are trying to learn and experiment with Kubernetes using Ubuntu on Oracle VM:
The IP address is assigned to the guest OS/VM based on the network adapter selection. Depending on that selection, you need to configure the settings in the Oracle VM network section or in your router settings.
See this link for the most common Oracle VM network adapters:
https://www.nakivo.com/blog/virtualbox-network-setting-guide/
I was using the bridged adapter, which puts the VM and the host OS in parallel on the network. So my router was randomly assigning an IP to my VM after every restart, my cluster stopped working, and I was getting the same exact error message posted in the question.
> k get pods -A
> Unable to connect to the server: dial tcp 192.168.214.136:6443: connect: no route to host
> systemctl status kubelet
> ........
> ........ "Error getting node" err="node \"node\" not found"
The cluster started working again after reserving a static IP address for my VM in the router settings. (If you are using the NAT adapter, you should configure this in the VM network settings.)
When you are reserving the IP address for your VM, make sure to assign the same old IP address which was used when configuring the kubelet.
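The address the cluster was originally configured with can be read back from the kubeconfig, so you know which IP to reserve:
$ grep server ~/.kube/config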
I have been following this tutorial for a while, but I don't know why it isn't working:
https://github.com/anapsix/zabbix-haproxy/blob/master/README.md
To make a long story short:
I have a Zabbix server on Amazon EC2 and I want to monitor an HAProxy server which is inside my network. The HAProxy server has a Zabbix agent running on it.
The tutorial explains how to set up a script for the zabbix-agent to discover what's behind it (what the HAProxy is load-balancing) and send it back to the Zabbix server.
However, while everything seems to be working fine, nothing shows up on the Zabbix server: no hosts are discovered, despite the zabbix agent and server communicating.
1 - I did place userparameter_haproxy.conf into /etc/zabbix/zabbix_agentd.d/ and
referenced it in the zabbix_agentd.conf file.
2 - I did place haproxy_discovery.sh into /usr/local/bin/ and made it executable (+x).
3 - I did import haproxy_zbx_template.xml
4 - Configure the HAProxy control socket: I assume this is where my mistake is.
5 - The scripts are working, because I get results when I execute these commands:
zabbix_agentd -t haproxy.list.discovery[FRONTEND]
zabbix_agentd -t haproxy.list.discovery[BACKEND]
zabbix_agentd -t haproxy.list.discovery[SERVERS]
6 - I added the host with HAProxy on it to the right template.
7 - I can wait forever; nothing is showing up, no new hosts.
I think step 4 is where I am going wrong. In the tutorial they say:
Configure HAProxy to listen on /var/run/haproxy/info.sock or set
custom socket path in checks (set {$HAPROXY_SOCK} template macro to
your custom socket path) or update userparameter_haproxy.conf and
haproxy_discovery.sh with your socket path
I did make the haproxy.cfg file listen on the socket /var/lib/haproxy/stats
and set a custom socket path in the template macro.
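For reference, a stats socket is declared in the global section of haproxy.cfg; a minimal sketch using the socket path from above (the mode and level values are illustrative, not taken from the tutorial):
global
    stats socket /var/lib/haproxy/stats mode 666 level admin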
Additional info:
Version of Zabbix: 3.4
Zabbix Server: RHEL 7.4
Zabbix Agent: CentOS 7.2
No errors when I restart zabbix-agent
No errors in haproxy.log
UPDATE: I did add Zabbix to the root group.
Now, in the Zabbix server logs, I can see this message:
changed: Value "which: no nc in (/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin)
2" of type "string" is not suitable for value type "Numeric (unsigned)"
And I'm lost again.
UPDATE: I was missing netcat. I installed it on both the Zabbix server and the client.
UPDATE: It's working
According to your update, I guess netcat (nc) is not installed on your system.
Install it and try again.
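On RHEL/CentOS 7, which both machines here run, the install would typically look like this (nmap-ncat is the package providing nc on those releases):
$ sudo yum install -y nmap-ncat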
I have set up a basic infrastructure using Chef. This includes a local Chef server (Ubuntu based), a workstation, and an Ubuntu based server (to be used as the node). Please note that the entire infrastructure lies behind the firewall of my office network, and I have made the necessary proxy settings for the servers to access the internet.
So here is the problem - when I try to bootstrap the node using:
knife bootstrap <node's ip> --sudo -x <username> -P <password> -N "<name>"
I get the following error:
<node's ip> --2014-02-19 10:47:10-- https://www.opscode.com/chef/install.sh
<node's ip> Resolving www.opscode.com (www.opscode.com)... 184.106.28.91
<node's ip> Connecting to www.opscode.com (www.opscode.com)|184.106.28.91|:443... failed: Connection refused.
<node's ip> bash: line 83: chef-client: command not found
I was not able to find a solution to this. However, I came across the knife[:bootstrap_proxy] = "http://username:password@proxyIP:port/" setting that can be added to knife.rb. I did this (by entering my office proxy details), and then the connection during bootstrap was successful and the chef-client was downloaded on the node. However, this setting only defines the proxy that should be used by the node. So this led to http_proxy = "http://username:password@proxyIP:port/" being set in client.rb. But because I have already made all the proxy settings on my server, the chef-client failed to launch. So I manually removed the http_proxy and https_proxy settings from client.rb and ran the command chef-client, which was then successful.
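For context, that setting lives in knife.rb on the workstation; a sketch with hypothetical proxy details:
# knife.rb (workstation); proxy host and credentials below are placeholders
knife[:bootstrap_proxy] = "http://username:password@proxy.example.com:8080"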
I have two questions -
1) Why did knife[:bootstrap_proxy] = "http://username:password@proxyIP:port/" work, given that it only defines the proxy that should be used by the node?
2) Also, all the proxy settings for the node have already been done. I do not want any proxy settings in client.rb. How do I achieve this?
Please help!
When it comes to your client.rb, I'd suggest looking into https://github.com/opscode-cookbooks/chef-client
It's a wrapper cookbook that manages client.rb(s).
Not sure about your knife[:bootstrap_proxy], though. Ideally that cookbook should take care of it. If you are still stumped, you can run chef-client -VV and knife -VV to see exactly what they are doing.