I'm trying to get access to my MongoDB replica set via kubectl so that I don't have to expose it to the internet. I can't use OpenVPN since Calico blocks it.
So I'm using this script:
# Look up the three replica set pod names
export MONGO_POD_NAME1=$(kubectl get pods --namespace develop -l "app=mongodb-replicaset" -o jsonpath="{.items[0].metadata.name}")
export MONGO_POD_NAME2=$(kubectl get pods --namespace develop -l "app=mongodb-replicaset" -o jsonpath="{.items[1].metadata.name}")
export MONGO_POD_NAME3=$(kubectl get pods --namespace develop -l "app=mongodb-replicaset" -o jsonpath="{.items[2].metadata.name}")
echo $MONGO_POD_NAME1, $MONGO_POD_NAME2, $MONGO_POD_NAME3
# Forward a distinct local port to each pod's 27017
kubectl port-forward --namespace develop $MONGO_POD_NAME1 27020:27017 & p3=$!
kubectl port-forward --namespace develop $MONGO_POD_NAME2 27021:27017 & p4=$!
kubectl port-forward --namespace develop $MONGO_POD_NAME3 27022:27017 & p5=$!
# If any of the port-forwards exits, stop the others as well
wait -n
[ "$?" -gt 1 ] || kill "$p3" "$p4" "$p5"
wait
And my connection string looks like this:
mongodb://LOGIN:PW@localhost:27020,localhost:27021,localhost:27022/animedb?replicaSet=rs0
However, I still can't connect to my MongoDB replica set; it says:
connection error: { MongoNetworkError: failed to connect to server
[anime-data-develop-mongodb-replicaset-0.anime-data-develop-mongodb-replicaset.develop.svc.cluster.local:27017]
on first connect [MongoNetworkError: getaddrinfo ENOTFOUND
anime-data-develop-mongodb-replicaset-0.anime-data-develop-mongodb-replicaset.develop.svc.cluster.local
anime-data-develop-mongodb-replicaset-0.anime-data-develop-mongodb-replicaset.develop.svc.cluster.local:27017]
But if I use a direct connection, I can still connect to each node!
What might be a problem here? How can I connect to mongodb for development?
Port Forwarding will make a local port on your machine redirect (forward) traffic to some pod. In your case, you've asked Kubernetes to forward traffic on 127.0.0.1:27020 to your pod's 27017 port.
The issue happens because the replica set configuration points to the other nodes using your internal cluster IPs, so in your mongo client session you will see something like [ReplicaSetMonitor-TaskExecutor] changing hosts to rs0/<ClusterIP-1>:27017,<ClusterIP-2>:27017,<ClusterIP-3>:27017 from rs/localhost:27020,localhost:27021,localhost:27022, and of course your machine can't reach your cluster's IPs.
For development purposes, you'd have to connect to your primary Mongo node only (as in mongodb://localhost:27020/animedb), which will replicate your data into your secondaries. That's safe enough for development/debugging, but not suitable for production!
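If you are not sure which forwarded port is currently the primary, here's a minimal sketch to find out (it assumes the legacy mongo shell is installed locally and the port-forwards from the script above are running; LOGIN/PW as in your connection string):
for port in 27020 27021 27022; do
  echo -n "localhost:${port} is primary: "
  mongo --quiet "mongodb://LOGIN:PW@localhost:${port}/animedb" --eval 'db.isMaster().ismaster'
done
# then connect directly to the port that prints "true", without the replicaSet option, e.g.:
# mongo "mongodb://LOGIN:PW@localhost:<primary-port>/animedb"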
If you need to set it up for permanent/production access, you should update your replicaSet settings so they find each other using public IPs or hostnames, see https://docs.mongodb.com/manual/tutorial/change-hostnames-in-a-replica-set/.
Related
Currently...
I have a GKE/kubernetes/k8s cluster in GCP. I have a bastion host (Compute Engine VM Instance) in GCP. I have allowlisted my bastion host's IP in the GKE cluster's Master authorized networks section. Hence, in order to run kubectl commands against my GKE cluster, I first need to SSH into my bastion host by running the gcloud beta compute ssh command; then I run the gcloud container clusters get-credentials command to authenticate with GKE, and from there I can run kubectl commands like usual.
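For reference, that workflow looks roughly like this (the host, cluster, zone and region names below are placeholders, not the real ones):
# 1) SSH into the bastion host (its IP is allowlisted in Master authorized networks)
gcloud beta compute ssh my-bastion-host --zone us-east1-b
# 2) On the bastion: authenticate kubectl against the cluster
gcloud container clusters get-credentials my-gke-cluster --region us-east1
# 3) Then kubectl works as usual
kubectl get pods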
Later...
I want to be able to run kubectl commands against my GKE cluster directly from my local development CLI. In order to do that, I can add my local development machine's IP as an allowlisted entry in my GKE cluster's Master authorized networks, and that should be it. Then I can run gcloud container clusters get-credentials first and then run kubectl commands like usual.
However...
I am looking for a way to avoid having to allowlist my local development machine's IP. Every time I take my laptop somewhere new, I have to add my new IP to the allowlist before I can run the gcloud container clusters get-credentials command and then kubectl commands.
I wonder...
Is there a way to assign a port number on the bastion host that can be used to invoke kubectl commands against the remote GKE cluster securely? Then I could just use the gcloud compute start-iap-tunnel command (which, BTW, takes care of all permission issues using Cloud IAM) from my local dev CLI to establish an SSH tunnel to that specific port on the bastion host. That way, from the GKE cluster's point of view, it is receiving kubectl commands from the bastion host (which is already allowlisted in its Master authorized networks), but behind the scenes I am authenticating with the bastion host from my local dev CLI (using my gcloud auth credentials) and invoking kubectl commands from there securely.
Is this possible? Any ideas from anyone?
This will help with accessing your secured GKE cluster from localhost:
https://github.com/GoogleCloudPlatform/gke-private-cluster-demo
Once the bastion host is set up with tinyproxy as in the doc above, you can use the shell functions below to quickly enable/disable access through the bastion host:
enable_secure_kubectl() {
# Aliasing kubectl/helm commands to use local proxy
alias kubectl="HTTPS_PROXY=localhost:8888 kubectl"
alias helm="HTTPS_PROXY=localhost:8888 helm"
# Open SSH tunnel for 1 hour
gcloud compute ssh my-bastion-host -- -o ExitOnForwardFailure=yes -M -S /tmp/sslsock -L8888:127.0.0.1:8888 -f sleep 3600
# Get kubernetes credentials with internal ip for kube-apiserver in kubeconfig
gcloud container clusters get-credentials my-gke-cluster --region us-east1 --project myproject --internal-ip
}
disable_secure_kubectl() {
unalias kubectl
unalias helm
ssh -S /tmp/sslsock -O exit my-bastion-host
}
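Example usage, assuming the functions above are sourced into your shell:
enable_secure_kubectl
kubectl get nodes        # goes through the SSH tunnel and tinyproxy on the bastion
disable_secure_kubectl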
I'm currently using minikube and I'm trying to access my application by utilizing the minikube tunnel since the service type is LoadBalancer.
I'm able to obtain an external IP when I execute minikube tunnel; however, when I try to check it in the browser it doesn't work. I've also tried Postman and curl; neither works.
To add to this, if I shell into the pod I can use curl and it does work. Furthermore, I executed kubectl port-forward and I was able to access my application through localhost.
Does anyone have any idea as to why I'm not being able to access my application even though everything seems to be running correctly?
Your service is probably bound to localhost. Minikube starts the cluster in a VM or docker (depending on the driver you are using) that is bound to an external IP, $(minikube ip).
When you run minikube tunnel, you're tunneling from the minikube cluster's external IP to the internal IP of the load balancer. The LoadBalancer service's external IP in Kubernetes goes from "Pending" to an actual internal IP, and something like this should work:
curl -H 'Host: localhost' -v $(minikube ip)
However, it doesn't work in the browser, since in the command above you are sending the request to minikube's IP, not to localhost. What I do to make this work is an SSH tunnel like this one:
ssh -i $(minikube ssh-key) docker@$(minikube ip) -L 8008:localhost:80
This maps the LB listener on port 80 in minikube's cluster to port 8008 on localhost. The external IP of the service remains pending, but it works since the kube controller can still find it. If you want to map to local port 80, you will need to add sudo.
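With that tunnel running, the service should be reachable on the forwarded local port, for example:
curl -v http://localhost:8008/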
If the version of ssh on your system (the one in your path) is less than 8.0, 'minikube tunnel' will silently fail to instantiate the ssh tunnel for some port forwards (e.g. privileged ports).
Open a command prompt as administrator, and type 'where.exe ssh'. Navigate to that location in windows explorer, and right-click on 'ssh.exe'. Choose Properties->Details to see the version.
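A quicker check from that same prompt, assuming the ssh found by where.exe is an OpenSSH client:
ssh -V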
If it is less than version 8.0, you must upgrade it to at least 8.0 to prevent this silent failure of ssh by 'minikube tunnel'.
After upgrading ssh, ensure that the newer version is the one that will be executed by using the 'where.exe' command again. If there are two on your system, reorder the paths in your PATH environment variable. Restart your shell, or (better) reboot the system, so that all processes pick up the path changes.
Then try 'minikube tunnel' again. When it is working, you should see an ssh instance in the task manager for each tunnel that minikube creates.
In my case minikube service <serviceName> solved this issue.
For further details, see the minikube docs.
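For example (my-service is a placeholder name):
minikube service my-service --url    # prints the reachable URL instead of opening a browser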
I'm running Windows 10 with WSL1 and Ubuntu as the distro.
My Windows version is Version 1903 (Build 18362.418).
I'm trying to connect to Kubernetes using kubectl proxy within Ubuntu WSL. I get a connection refused when trying to connect to the dashboard with my browser.
I have checked in Windows with netstat -a to see the active connections.
If I run kubectl within the Windows terminal I have no problem connecting to Kubernetes, so the problem only happens when I try to connect from Ubuntu WSL1.
I have also tried to run the following command
kubectl proxy --address='0.0.0.0' --port=8001 --accept-hosts='.*'
... but the connection is refused, although I can see that Windows is listening on the port. Changing to another port didn't fix the problem. Disabling the firewall didn't fix the problem either.
Any idea?
The first thing to do would be to check whether you are able to talk to your cluster at all: (kubectl get svc -n kube-system, kubectl cluster-info)
If not, check whether the $HOME/.kube folder was created. If it wasn't, run:
gcloud container clusters get-credentials default --region=<your_region>
I recently installed Kubernetes on VMware and also configured a few pods. While configuring those pods, it automatically used the IP of the VMware VM. I was able to access the application at that time, but then I recently rebooted the VM and the machine that hosts the VM. During this the IP of the VM got changed, I guess, and now I am getting the error below when using the command kubectl get pod -n <namespaceName>:
userX@ubuntu:~$ kubectl get pod -n NameSpaceX
Unable to connect to the server: dial tcp 192.168.214.136:6443: connect: no route to host
userX@ubuntu:~$ kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:11:31Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
Unable to connect to the server: dial tcp 192.168.214.136:6443: connect: no route to host
kubectl cluster-info as well as other related commands give the same output.
In the VMware Workstation settings, we are using the network adapter that shares the host's IP address. We are not sure if it has any impact.
We also tried to add the entries below to /etc/hosts; it is not working.
127.0.0.1 localhost
192.168.214.136 localhost
127.0.1.1 ubuntu
I expect to get the pods running again so I can access the application. Instead of reinstalling all pods, which is time consuming, we are looking for a quick workaround to get the pods back into the running state.
If you use minikube, sometimes all you need is to restart minikube.
Run:
minikube start
I encountered the same issue - the problem was that the master node didn't expose port 6443 outside.
Below are the steps I took to fix it.
1 ) Check IP of api-server.
This can be verified via the .kube/config file (under server field) or with: kubectl describe pod/kube-apiserver-<master-node-name> -n kube-system.
2 ) Run curl https://<kube-apiserver-IP>:6443 and see if port 6443 is open.
3 ) If port 6443 is open, you should get something related to the certificate, like:
curl: (60) SSL certificate problem: unable to get local issuer certificate
4 ) If port 6443 is not open:
4.A ) SSH into master node.
4.B ) Run sudo firewall-cmd --add-port=6443/tcp --permanent (I'm assuming firewalld is installed).
4.C ) Run sudo firewall-cmd --reload.
4.D ) Run sudo firewall-cmd --list-all and you should see port 6443 is updated:
public
target: default
icmp-block-inversion: no
interfaces:
sources:
services: dhcpv6-client ssh
ports: 6443/tcp <---- Here
protocols:
masquerade: no
forward-ports:
source-ports:
icmp-blocks:
rich rules:
The common practice is to copy the config file to your home directory (creating ~/.kube first if needed):
mkdir -p $HOME/.kube && sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config && sudo chown $(id -u):$(id -g) $HOME/.kube/config
Also, make sure that api-server address is valid.
server: https://<master-node-ip>:6443
If not, you can manually edit it using any text editor.
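If you prefer a one-liner over a text editor, a rough sketch (assumes GNU sed; both addresses are placeholders for your old and new api-server IPs):
sed -i 's|https://<old-master-ip>:6443|https://<new-master-ip>:6443|' $HOME/.kube/config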
You need to export the admin.conf file as KUBECONFIG before running the kubectl commands. You may put this in your env variables:
export KUBECONFIG=<path>/admin.conf
After this you should be able to run the kubectl commands. I am hoping that your K8s cluster setup is otherwise correct.
Last night I had the exact same error installing Kubernetes using this puppet module: https://forge.puppet.com/puppetlabs/kubernetes
Turns out that it is an incorrect iptables setting in the master that blocks all non-local requests towards the api.
The way I solved it (brute-force solution) was by:
completely removing all installed k8s-related software (also all config files, etcd data, docker images, mounted tmpfs filesystems, ...)
wiping iptables completely: https://serverfault.com/questions/200635/best-way-to-clear-all-iptables-rules
reinstalling
This is what solved the problem in my case.
There is probably a much nicer and cleaner way to do this (i.e. simply change the iptables rules to allow access).
If you are getting the error below, then you should also check the token validity.
Unable to connect to the server: dial tcp 192.168.93.10:6443: connect: no route to host
Check your token validity using the command kubeadm token list. If your token has expired, then you have to reset the cluster using kubeadm reset and then initialize it again using kubeadm init --token-ttl 0.
Then check the status of the token again using kubeadm token list. Note that the TTL value will be <forever> and the EXPIRES value will be <never>.
Example:
[root@master1 ~]# kubeadm token list
TOKEN TTL EXPIRES USAGES DESCRIPTION EXTRA GROUPS
nh48tb.d79ysdsaj8bchms9 <forever> <never> authentication,signing The default bootstrap token generated by 'kubeadm init'. system:bootstrappers:kubeadm:default-node-token
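Putting those steps together, a rough sketch (run as root on the master node; note that kubeadm reset tears the cluster down, so only do this if re-initializing is acceptable):
kubeadm token list               # check whether the bootstrap token has expired
kubeadm reset                    # tears down the existing control plane
kubeadm init --token-ttl 0       # re-initialize with a non-expiring token
kubeadm token list               # TTL should now show <forever>, EXPIRES <never>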
(Screenshot: Ubuntu 22.04 LTS)
Select docker-desktop and run your command again, e.g. kubectl apply -f <myimage.yaml>
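The selection presumably refers to the kubectl context; if so, the equivalent command would be something like:
kubectl config use-context docker-desktop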
Run the minikube start command.
The reason behind this is that your minikube cluster (with the docker driver) stopped when you shut down the system.
To all those who are trying to learn and experiment with Kubernetes using Ubuntu on an Oracle VM:
The IP address is assigned to the guest OS/VM based on the network adapter selection. Depending on that selection, you need to configure the settings in the Oracle VM network section or in your router settings.
See the link below for the most common Oracle VM network adapter types:
https://www.nakivo.com/blog/virtualbox-network-setting-guide/
I was using the bridged adapter, which puts the VM and the host OS in parallel on the network. So my router was randomly assigning an IP to my VM after every restart, my cluster stopped working, and I was getting the exact same error message posted in the question.
> k get pods -A
> Unable to connect to the server: dial tcp 192.168.214.136:6443: connect: no route to host
> systemctl status kubelet
> ........
> ........ "Error getting node" err="node \"node\" not found"
The cluster started working again after reserving a static IP address for my VM in the router settings. (If you are using the NAT adapter, you should configure this in the VM network settings.)
When reserving the IP address for your VM, make sure to assign the same old IP address that was used when configuring the kubelet.
How do I ssh to a node inside the cluster locally? I am using the Docker Edge version, which has Kubernetes built in. If I run
kubectl ssh node
I am getting
Error: unknown command "ssh" for "kubectl"
Did you mean this?
set
Run 'kubectl --help' for usage.
error: unknown command "ssh" for "kubectl"
Did you mean this?
set
There is no "ssh" command in kubectl yet, but there are plenty of options to access Kubernetes node shell.
If you are using a cloud provider, you can connect to nodes directly from the instance management interface.
For example, in GCP: select Menu -> Compute Engine -> VM instances, then press the SSH button on the left side of the desired node instance.
If you are using a local VM (VMware, VirtualBox), you can configure sshd before rolling out the Kubernetes cluster, or use the VM console, which is available from the management GUI.
Vagrant provides its own command to access VMs - vagrant ssh
If you are using minikube, there is the minikube ssh command to connect to the minikube VM. There are also other options.
I found no simple way to access the docker-for-desktop VM, but you can easily switch to minikube for experimenting with node settings.
How to ssh to the node inside the cluster in local
Kubernetes is aware of nodes at the level of secure communication with the kubelets on those nodes (getting the hostname and IP from the node), and as such it does not provide cluster-level ssh to nodes out of the box. Depending on your actual provider/setup there are different ways of connecting to nodes, and they all boil down to locating your ssh key, opening the appropriate ports on the firewall/security groups, and issuing ssh -i key user@node_instance_ip to access the node. If you are running locally with virtual machines you can set up your own ssh keypairs and do the trick.
You can effectively shell into a pod using exec (I know it's not exactly what the question asks, but it might be helpful).
An example usage would be kubectl exec -it name-of-your-pod -- /bin/bash, assuming you have bash installed.
Hope that helps.
You first have to extend kubectl with plugins by adding https://github.com/luksa/kubectl-plugins.
Basically, to "install" ssh, e.g.:
wget https://raw.githubusercontent.com/luksa/kubectl-plugins/master/kubectl-ssh
Then make sure the kubectl-ssh file is in your path.
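Putting that together, a minimal sketch (the install directory ~/.local/bin is an assumption; any executable location on your PATH will do):
wget https://raw.githubusercontent.com/luksa/kubectl-plugins/master/kubectl-ssh
chmod +x kubectl-ssh
mv kubectl-ssh ~/.local/bin/     # assumes ~/.local/bin is on your PATH
kubectl plugin list              # kubectl-ssh should now show up as a plugin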