kubefwd doesn't work for specific ports/hostnames - kubernetes

I am using this command to run kubefwd (https://github.com/txn2/kubefwd)
sudo kubefwd services -x <context> -n <namespace> -c <kube_file_path> -l "app in (idm, sbb-amq)"
This is the log:
INFO[14:13:58] Port-Forward: 127.1.27.1 sbb-amq:8161 to pod sbb-amq-0:8161
INFO[14:13:58] Port-Forward: 127.1.27.1 sbb-amq:61616 to pod sbb-amq-0:61616
INFO[14:13:58] Port-Forward: 127.1.27.1 sbb-amq:61616 to pod sbb-amq-0:61616
INFO[14:13:58] Port-Forward: 127.1.27.7 idm:9006 to pod idm-0:9006
INFO[14:13:58] Port-Forward: 127.1.27.7 idm:80 to pod idm-0:9006
ERRO[14:14:01] ForwardPorts error: unable to listen on any of the requested ports: [{80 9006}]
The last line of the log shows an error, and I realized that the hostnames tied to ports 80 and 9006 were not attached to the IP, which means:
http://idm:9006 doesn't work
http://127.1.27.7:9006 works
However:
http://sbb-amq:8161 works as well (it does not use port 9006)
Has anyone seen this before?
EDIT: I am using Ubuntu and ports 80 and 9006 are not in use.

unable to listen on any of the requested ports usually means that you have something on your local workstation listening on those ports, 0.0.0.0:80 and 0.0.0.0:9006 in this case. The IP 0.0.0.0 means it's listening to those ports on all interfaces.
kubefwd uses local loopback IP addresses to allow the same port for multiple services so that you can have 127.27.1.1:80 -> someservice:80 and 127.27.1.2:80 -> someotherservice:80.
Unfortunately, when you have local applications using 0.0.0.0:80, you will not be able to bind 80 to any interface, as it is already being used. I don't know why applications often bind to every interface since localhost almost always points to 127.0.0.1.
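To confirm what is holding those ports on your workstation, something like the following should work (a sketch, assuming a Linux host with iproute2; lsof -i :80 is an alternative):
sudo ss -tlnp | grep -E ':(80|9006)\b'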
You have a couple of options:
change the configuration of those other services to use a single IP such as 127.0.0.1
use the mapping flag to have kubefwd use a different port, e.g. -m 80:8080 -m 9006:9007 (see the example below)
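With those mappings added, the original command from the question might look like this (a sketch reusing the question's placeholders and the example ports above):
sudo kubefwd services -x <context> -n <namespace> -c <kube_file_path> -l "app in (idm, sbb-amq)" -m 80:8080 -m 9006:9007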

Related

MicroK8s host-access doesn't work on Windows. Why is the connection refused when I try to connect from the VM to the host?

I can't connect to the local MySQL server running on the host machine.
microk8s 2.0.1
multipass 1.6.1
windows 10
windows defender disabled :)
All commands work fine: apply, get pods, get nodes, get events, secrets, services...
inspect found no warnings.
Addons enabled: dns, host-access, storage - all running.
For my cluster the default ip given by the host-access is "10.0.0.1".
I can see that the adapter exists with:
multipass shell microk8s-vm
ifconfig
...
lo:microk8s: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 10.0.1.1 netmask 255.0.0.0
loop txqueuelen 1000 (Local Loopback)
...
I put the IP address 10.0.1.1 into my secrets,
and my Spring Boot application can't connect (connection refused) to MySQL using this IP.
MySQL is running locally; I can connect with mysql-client, all databases exist, and all grants are full privileges...
Another example:
microk8s kubectl apply -f https://k8s.io/examples/application/shell-demo.yaml
microk8s kubectl exec --stdin --tty shell-demo -- /bin/bash
curl 10.0.1.1:3306 (or 10.0.1.1:8080 )
curl says: "connection refused"
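To narrow this down, the same address can be tested from the VM itself rather than from a pod (a sketch, assuming curl is available inside the multipass VM); if this is also refused, the problem is between the VM and the host rather than inside the cluster:
multipass shell microk8s-vm
curl -v 10.0.1.1:3306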
Please help! And thanks!!
After a few days I still don't know how to resolve this problem.
Windows & Kubernetes (and maybe other products) are not a good couple...
My MicroK8s stops working after rebooting the system... "microk8s start" works, but "microk8s kubectl get nodes" does not.
I just moved to Linux and everything works fine! Sad

Unable to access application through minikube tunnel

I'm currently using minikube and I'm trying to access my application by utilizing the minikube tunnel since the service type is LoadBalancer.
I'm able to obtain an external IP when I execute minikube tunnel; however, when I check it in the browser it doesn't work. I've also tried Postman and curl; neither works.
To add to this, if I shell into the pod I can use curl and it does work. Furthermore, I executed kubectl port-forward and I was able to access my application through localhost.
Does anyone have any idea as to why I'm not being able to access my application even though everything seems to be running correctly?
Your service is probably bound to localhost. Minikube starts the cluster in a VM or docker (depending on the driver you are using) that is bound to an external IP, $(minikube ip).
When you run minikube tunnel you're tunneling from the minikube cluster's external IP to the internal IP of the load balancer; the LoadBalancer service's External IP goes from "Pending" to an actual internal IP, and something like this should work:
curl -H 'Host: localhost' -v $(minikube ip)
However, it doesn't work in the browser, since in the above command you are sending the request to minikube's IP, not localhost. What I do to make this work is an ssh tunnel like this one:
ssh -i $(minikube ssh-key) docker@$(minikube ip) -L 8008:localhost:80
This maps the LB listener on port 80, inside the minikube cluster, to port 8008 on localhost. The external IP of the service remains pending, but it works since the kube controller can still find it. If you want to map port 80 locally, you will need to add sudo.
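For example, a sketch of the port-80 variant of the same tunnel (sudo is needed locally because 80 is a privileged port; the docker user and the minikube commands are the same as above):
sudo ssh -i $(minikube ssh-key) docker@$(minikube ip) -L 80:localhost:80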
If the version of ssh on your system (the one in your path) is less than 8.0, 'minikube tunnel' will silently fail to instantiate the ssh tunnel for some port forwards (e.g. privileged ports).
Open a command prompt as administrator, and type 'where.exe ssh'. Navigate to that location in windows explorer, and right-click on 'ssh.exe'. Choose Properties->Details to see the version.
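A quicker check from the same prompt, assuming the OpenSSH client is the one on your path, is to print the version directly:
ssh -V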
If this is less than version 8.0 you must upgrade that to at least version 8.0 to prevent this silent failure of ssh by 'minikube tunnel'.
After upgrading ssh, ensure that the newer version is the one that will be executed by using the 'where.exe' command again. If there are two on your system, reorder the paths in your PATH environment variable. Restart your shell or (better) reboot the system so that all processes pick up the path changes.
Then try 'minikube tunnel' again. When it is working, you should see an ssh instance in the task manager for each tunnel that minikube creates.
In my case, minikube service <serviceName> solved this issue.
For further details, see the minikube docs.

kubectl : Unable to connect to the server : dial tcp 192.168.214.136:6443: connect: no route to host

I recently installed Kubernetes on VMware and configured a few pods; during configuration they automatically picked up the IP of the VM. I was able to access the application at that time, but then I rebooted both the VM and the machine hosting the VM. During this the IP of the VM changed, I guess, and now I get the error below when using kubectl get pod -n <namespaceName>:
userX@ubuntu:~$ kubectl get pod -n NameSpaceX
Unable to connect to the server: dial tcp 192.168.214.136:6443: connect: no route to host
userX@ubuntu:~$ kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:11:31Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
Unable to connect to the server: dial tcp 192.168.214.136:6443: connect: no route to host
kubectl cluster-info as well as other related commands gives same output.
In the VMware Workstation settings, we are using the network adapter that shares the host's IP address. We are not sure whether it has any impact.
We also tried adding the entries below to /etc/hosts; it is not working.
127.0.0.1 localhost
192.168.214.136 localhost
127.0.1.1 ubuntu
I expect to get the pods running again so I can access the application. Instead of reinstalling all pods, which is time consuming, we are looking for a quick workaround to get the pods back to the running state.
If you use minikube sometimes all you need is just to restart minikube.
Run:
minikube start
I encountered the same issue - the problem was that the master node didn't expose port 6443 outside.
Below are the steps I took to fix it.
1 ) Check IP of api-server.
This can be verified via the .kube/config file (under server field) or with: kubectl describe pod/kube-apiserver-<master-node-name> -n kube-system.
2 ) Run curl https://<kube-apiserver-IP>:6443 and see if port 6443 is open.
3 ) If port 6443 is open, you should get something related to the certificate, like:
curl: (60) SSL certificate problem: unable to get local issuer certificate
4 ) If port 6443 is not open:
4.A ) SSH into master node.
4.B ) Run sudo firewall-cmd --add-port=6443/tcp --permanent (I'm assuming firewalld is installed).
4.C ) Run sudo firewall-cmd --reload.
4.D ) Run sudo firewall-cmd --list-all and you should see port 6443 is updated:
public
target: default
icmp-block-inversion: no
interfaces:
sources:
services: dhcpv6-client ssh
ports: 6443/tcp <---- Here
protocols:
masquerade: no
forward-ports:
source-ports:
icmp-blocks:
rich rules:
The common practice is to copy the config file to your home directory:
mkdir -p $HOME/.kube && sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config && sudo chown $(id -u):$(id -g) $HOME/.kube/config
Also, make sure that the api-server address is valid:
server: https://<master-node-ip>:6443
If not, you can manually edit it using any text editor.
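One way to print the address currently in use without opening the file (a sketch, assuming kubectl can read your kubeconfig):
kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'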
You need to export the admin.conf file as KUBECONFIG before running the kubectl commands. You can set it as an environment variable:
export KUBECONFIG=<path>/admin.conf
After this you should be able to run kubectl commands. I am assuming that your K8s cluster setup is otherwise correct.
Last night I had the exact same error installing Kubernetes using this puppet module: https://forge.puppet.com/puppetlabs/kubernetes
It turns out that an incorrect iptables setting on the master blocks all non-local requests towards the API.
The way I solved it (brute-force solution) was by:
completely removing all installed k8s-related software (also all config files, etcd data, docker images, mounted tmpfs filesystems, ...)
wiping the iptables completely: https://serverfault.com/questions/200635/best-way-to-clear-all-iptables-rules
reinstalling
This is what solved the problem in my case.
There is probably a much nicer and cleaner way to do this (i.e. simply change the iptables rules to allow access).
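For reference, a minimal, untested sketch of that cleaner route on the master (assuming plain iptables and the default API server port 6443); note that kubeadm and kube-proxy manage their own chains, so a hand-added rule may not survive:
sudo iptables -I INPUT -p tcp --dport 6443 -j ACCEPT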
If you are getting the error below, also check the token validity.
Unable to connect to the server: dial tcp 192.168.93.10:6443: connect: no route to host
Check your token validity with the command kubeadm token list. If your token has expired, you have to reset the cluster using kubeadm reset and then initialize it again using kubeadm init --token-ttl 0.
Then check the status of the token again using kubeadm token list. Note that the TTL value will now be <forever> and the Expires value will be <never>.
Example:
[root@master1 ~]# kubeadm token list
TOKEN TTL EXPIRES USAGES DESCRIPTION EXTRA GROUPS
nh48tb.d79ysdsaj8bchms9 <forever> <never> authentication,signing The default bootstrap token generated by 'kubeadm init'. system:bootstrappers:kubeadm:default-node-token
(Screenshot: Ubuntu 22.04 LTS)
Select the docker-desktop context and run your command again, e.g. kubectl apply -f <myimage.yaml>
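If you prefer the command line, switching contexts can also be done like this (assuming a docker-desktop context exists in your kubeconfig):
kubectl config use-context docker-desktop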
Run the minikube start command.
The reason is that your minikube cluster (using the docker driver) stopped when you shut down the system.
To all those who are trying to learn and experiment with Kubernetes using Ubuntu on Oracle VM (VirtualBox):
The IP address assigned to the guest OS/VM depends on the network adapter selection. Based on your network adapter selection, you need to configure the settings in the Oracle VM network section or in your router settings.
See this link for the most common Oracle VM network adapters:
https://www.nakivo.com/blog/virtualbox-network-setting-guide/
I was using a bridged adapter, which puts the VM and the host OS side by side on the network. So my router was randomly assigning an IP to my VM after every restart, my cluster stopped working, and I got the same exact error message posted in the question.
> k get pods -A
> Unable to connect to the server: dial tcp 192.168.214.136:6443: connect: no route to host
> systemctl status kubelet
> ........
> ........ "Error getting node" err="node \"node\" not found"
The cluster started working again after reserving a static IP address for my VM in the router settings. (If you are using a NAT adapter, you should configure it in the VM network settings.)
When reserving an IP address for your VM, make sure to assign the same old IP address that was used when configuring the kubelet.
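One way to see which IP the node was registered with (assuming kubectl is reachable again) is to check the INTERNAL-IP column:
kubectl get nodes -o wide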

Can't connect to mongodb replicaset via kubectl port-forward

I'm trying to get access to a mongodb replica set via kubectl so that I don't have to expose it to the internet; I can't use OpenVPN since Calico blocks it.
So I'm using this script:
export MONGO_POD_NAME1=$(kubectl get pods --namespace develop -l "app=mongodb-replicaset" -o jsonpath="{.items[0].metadata.name}")
export MONGO_POD_NAME2=$(kubectl get pods --namespace develop -l "app=mongodb-replicaset" -o jsonpath="{.items[1].metadata.name}")
export MONGO_POD_NAME3=$(kubectl get pods --namespace develop -l "app=mongodb-replicaset" -o jsonpath="{.items[2].metadata.name}")
echo $MONGO_POD_NAME1, $MONGO_POD_NAME2, $MONGO_POD_NAME3
kubectl port-forward --namespace develop $MONGO_POD_NAME1 27020:27017 & p3=$!
kubectl port-forward --namespace develop $MONGO_POD_NAME2 27021:27017 & p4=$!
kubectl port-forward --namespace develop $MONGO_POD_NAME3 27022:27017 & p5=$!
wait -n
[ "$?" -gt 1 ] || kill "$p3" "$p4" "$p5"
wait
And my connection string looks like this:
mongodb://LOGIN:PW@localhost:27020,localhost:27021,localhost:27022/animedb?replicaSet=rs0
However, I still can't connect to my mongodb replicaset, it says:
connection error: { MongoNetworkError: failed to connect to server
[anime-data-develop-mongodb-replicaset-0.anime-data-develop-mongodb-replicaset.develop.svc.cluster.local:27017]
on first connect [MongoNetworkError: getaddrinfo ENOTFOUND
anime-data-develop-mongodb-replicaset-0.anime-data-develop-mongodb-replicaset.develop.svc.cluster.local
anime-data-develop-mongodb-replicaset-0.anime-data-develop-mongodb-replicaset.develop.svc.cluster.local:27017]
But if I use a direct connection, I can still connect to each node!
What might be a problem here? How can I connect to mongodb for development?
Port Forwarding will make a local port on your machine redirect (forward) traffic to some pod. In your case, you've asked Kubernetes to forward traffic on 127.0.0.1:27020 to your pod's 27017 port.
The issue happens because the replica set configuration points to the other nodes using your internal cluster IPs, so you will see something like [ReplicaSetMonitor-TaskExecutor] changing hosts to rs0/<ClusterIP-1>:27017,<ClusterIP-2>:27017,<ClusterIP-3>:27017 from rs/localhost:27020,localhost:27021,localhost:27022 in your mongo client session, and your machine can't reach your cluster's IPs, of course.
For development purposes, you'd have to connect to your primary Mongo node only (as in mongodb://localhost:27020/animedb), which will replicate your data into your secondaries. That's safe enough for development/debugging, but not suitable for production!
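For example, with the forwarded port from the script above and the mongo shell (a sketch; substitute your real credentials):
mongo "mongodb://LOGIN:PW@localhost:27020/animedb"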
If you need to set it up for permanent/production access, you should update your replica set settings so the members find each other using public IPs or hostnames; see https://docs.mongodb.com/manual/tutorial/change-hostnames-in-a-replica-set/.

L3 miss and Route not Found for flannel

So I have a Kubernetes cluster, and I am using Flannel for the overlay network. It has been working fine (for almost a year, actually); then I modified a service to have 2 ports, and all of a sudden I get this about a completely different service, one that was working previously and that I did not edit:
<Timestamp> <host> flanneld[873]: I0407 18:36:51.705743 00873 vxlan.go:345] L3 miss: <Service's IP>
<Timestamp> <host> flanneld[873]: I0407 18:36:51.705865 00873 vxlan.go:349] Route for <Service's IP> not found
Is there a common cause for this? I am using Kubernetes 1.0.x and Flannel 0.5.5, and I should mention that only one node is having this issue; the rest of the nodes are fine. The bad node's kube-proxy also says it can't find the service's endpoint.
Sometimes flannel will change its subnet configuration. You can tell this happened if the IP and MTU in cat /run/flannel/subnet.env don't match ps aux | grep docker (or cat /etc/default/docker); in that case you will need to reconfigure docker to use the new flannel config.
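A quick way to compare the two, assuming the paths mentioned above exist:
cat /run/flannel/subnet.env
grep -E 'bip|mtu' /etc/default/docker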
First you have to delete the docker network interface
sudo ip link set dev docker0 down
sudo brctl delbr docker0
Next you have to reconfigure docker to use the new flannel config.
Note: sometimes this step has to be done manually (i.e. read the contents of /run/flannel/subnet.env and then alter /etc/default/docker)
source /run/flannel/subnet.env
echo "DOCKER_OPTS=\"-H tcp://127.0.0.1:4243 -H unix:///var/run/docker.sock --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}\"" | sudo tee /etc/default/docker
Finally, restart docker
sudo service docker restart
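After the restart, docker0 should come back up with an address inside FLANNEL_SUBNET; one way to confirm this (assuming iproute2 is installed):
ip addr show docker0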