I have a problem that I can't solve. I have a Kubernetes cluster on GCP. I can use kubectl from the Cloud Shell that opens directly to the cluster, but when I use kubectl from a node I get "The connection to the server localhost:8080 was refused - did you specify the right host or port?".
I also tried using ~/.kube/config; it works for about 5 minutes and then fails again.
Maybe someone who uses GCP can help me. Thank you.
irvifa - I use the Kubernetes cluster that GCP provides. I can connect directly from Cloud Shell, but when I connect from an instance, kubectl reports the client version, while for the server it returns the error "The connection to the server localhost:8080 was refused - did you specify the right host or port?"
If you aren't using COS (Container-Optimized OS), you need to run
gcloud container clusters get-credentials "CLUSTER NAME"
By default, COS will get the credentials when you access the node.
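As a minimal sketch, the full command usually also needs the location and project; the cluster name, zone, and project below are placeholders for your own values:
gcloud container clusters get-credentials my-cluster --zone us-east1-b --project my-project
kubectl get nodes   # should now reach the GKE control plane instead of localhost:8080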
I tried to install Kubernetes with Docker Desktop. However, as soon as I type in
kubectl get nodes
I get a "Remote kubernetes server unreachable" error.
I0217 23:42:56.224000 26220 versioner.go:56] Remote kubernetes server unreachable
Unable to connect to the server: dial tcp 172.28.112.98:6443: connectex: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.
Any ideas on how to fix this?
Have you got more than one context in your kubeconfig?
You can check this with kubectl config get-contexts.
If necessary change your context to Docker Desktop Kubernetes using kubectl config use-context docker-desktop.
Is it possible that you previously tried minikube and it has left a cluster/context in your .kube\config?
Configure Access to Multiple Clusters
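As a quick sketch of the checks described above (the context names here are examples; yours may differ):
kubectl config current-context            # which cluster kubectl is talking to right now
kubectl config get-contexts               # list all contexts in your kubeconfig
kubectl config use-context docker-desktop # switch to the Docker Desktop cluster
kubectl config delete-context minikube    # optionally remove a stale minikube context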
Go to your HOME/.kube directory and check the config file. There is a possibility that the server mentioned there is old or not reachable.
You can copy the new config file (from the remote server, or from the directory used by tools like k3s) and add it to, or replace, your HOME/.kube/config file.
The same error message may also appear when you switch from a local k8s cluster to a remote cluster that requires a VPN to connect, and you are not connected to the VPN.
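For example, with k3s the kubeconfig typically lives at /etc/rancher/k3s/k3s.yaml on the server. A rough sketch (the hostname is a placeholder):
scp user@k3s-server:/etc/rancher/k3s/k3s.yaml ~/.kube/config
# then edit the server: line in ~/.kube/config, replacing 127.0.0.1 with the k3s server's reachable IP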
I am new to Google Cloud Platform and have the following context:
I have a Compute Engine VM running as a MongoDB server and a Compute Engine VM running as a NodeJS server, already using Docker. The NodeJS application connects to Mongo via the default VPC internal IP. Now I'm trying to migrate the NodeJS application to Google Kubernetes Engine, but I can't connect to the MongoDB server when I deploy the NodeJS application's Docker image to the cluster.
All services like GCE and GKE are in the same region (us-east-1).
I did a direct test: I accessed a Kubernetes cluster node via SSH, deployed a simple MongoDB Docker image, and tried to connect to the remote MongoDB server from the command line, but the problem is the same: the connection times out.
I have also checked the firewall settings on GCP as well as the bindIp setting on the MongoDB server, and nothing is blocking there.
Does anyone know what may be happening? Thank you very much.
In my case, traffic from GKE to the GCE VM was blocked by the Google firewall even though both are in the same network (default).
I had to whitelist the cluster's pod network, listed in the cluster details:
Pod address range 10.8.0.0/14
https://console.cloud.google.com/kubernetes/list
https://console.cloud.google.com/networking/firewalls/list
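A rough sketch of the corresponding gcloud rule; the rule name, the pod range, and the MongoDB port 27017 are assumptions to adjust to your own cluster details:
gcloud compute firewall-rules create allow-gke-pods-to-mongo \
  --network=default \
  --source-ranges=10.8.0.0/14 \
  --allow=tcp:27017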
By default, containers in a GKE cluster can access GCE VMs in the same VPC through internal IPs. Just as you can access the internet (e.g., google.com) from GKE containers, GKE and the VPC know how to route the traffic. The problem must be with other configurations (the firewall or your application).
You can do a test, start a simple HTTP server in the GCE VM, say the internal IP is 10.138.0.5:
python -m SimpleHTTPServer 8080   # on Python 3: python3 -m http.server 8080
then create a GKE container and try to access the service:
kubectl run my-client -it --image=tutum/curl --generator=run-pod/v1 -- curl http://10.138.0.5:8080
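Since the actual target here is MongoDB rather than HTTP, a similar hedged check against the Mongo port could look like this (the image tag, the internal IP 10.138.0.5, and port 27017 are assumptions):
kubectl run mongo-client -it --rm --restart=Never --image=mongo:4.4 -- \
  mongo --host 10.138.0.5 --port 27017 --eval 'db.adminCommand({ ping: 1 })'
# an "{ ok: 1 }" reply means the pod can reach mongod; a timeout points back at the firewall or bindIp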
I have created a new GCP Kubernetes cluster. The cluster is private with NAT - it has no connection to the internet. I also deployed a bastion machine which allows me to connect to my private network (VPC) from the internet. This is the tutorial I based it on. SSH into the bastion is currently working.
The Kubernetes master is not exposed externally. The result:
$ kubectl get pods
The connection to the server 172.16.0.2 was refused - did you specify the right host or port?
So I installed kubectl on the bastion and ran:
$ kubectl proxy --port 1111
Starting to serve on 127.0.0.1:1111
Now I want to connect my local kubectl to the remote proxy server. I set up a secure tunnel to the bastion server and mapped the remote port to a local port. I also tried it with curl and it works.
Now I am looking for something like
$ kubectl --use-proxy=1111 get pods
(make my local kubectl pass through my remote proxy)
How to do it?
kubectl proxy acts as an apiserver itself, exactly like the target apiserver, but queries through it are already authenticated. From your description ("works with curl"), it sounds like you've set it up correctly; you just need to point the client kubectl at it:
kubectl --server=http://localhost:1111 get pods
(where port 1111 on your local machine is where kubectl proxy is reachable; in your case through the tunnel)
If you need exec or attach through kubectl proxy, you'll need to run it with either --disable-filter=true or --reject-paths='^$'. Read the fine print and the consequences of those options.
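A sketch of the full chain, assuming the tunnel forwards local port 1111 to the kubectl proxy running on the bastion (the hostname is a placeholder):
ssh -L 1111:127.0.0.1:1111 user@bastion   # forward local 1111 to kubectl proxy on the bastion
kubectl --server=http://127.0.0.1:1111 get pods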
Safer way
All in all, this is not how I access clusters through a bastion. The problem with the above approach is that if someone gains access to the bastion, they immediately have valid Kubernetes credentials (as kubectl proxy needs those to function). It is also not the safest solution if the bastion is shared between multiple operators. One of the main points of a bastion is that it never has credentials on it. What I fancy doing is accessing the bastion from my workstation with:
ssh -D 1080 bastion
That makes ssh act as a SOCKS proxy. You need GatewayPorts yes in your sshd_config for this to work. Thereafter, from the workstation I can use
HTTPS_PROXY=socks5://127.0.0.1:1080 kubectl get pod
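On reasonably recent kubectl versions, the same SOCKS proxy can also be pinned in the kubeconfig via the cluster-level proxy-url field, so the HTTPS_PROXY variable isn't needed each time. A sketch, where the cluster name is a placeholder and the server address is the private endpoint from the question:
clusters:
- cluster:
    proxy-url: socks5://127.0.0.1:1080
    server: https://172.16.0.2
  name: private-gke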
I have a Kubernetes api-server IP address and I'm trying to check whether it's possible to use kubectl to access the environment when you have only the api-server IP address and not the master/node address.
Thanks!
It isn't possible to connect straight to the api-server with kubectl without going through the master. Also, in order to log in you must have the user, password, and certificate in most cases.
The scenario will be: connect to the environment using the master; then all the commands executed with kubectl will automatically query the api-server.
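For illustration, this is roughly what kubectl needs to know before it can talk to an api-server; every value below (address, file paths, names) is a placeholder:
kubectl config set-cluster my-cluster --server=https://<api-server-ip>:6443 --certificate-authority=ca.crt
kubectl config set-credentials my-user --client-certificate=user.crt --client-key=user.key
kubectl config set-context my-context --cluster=my-cluster --user=my-user
kubectl config use-context my-context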
We deployed a containerized app by pulling a public Docker image from Docker Hub and got a pod running at 172.30.105.44. Hitting this IP from a REST client, or curling/pinging the IP, gives no response. Can someone please guide us on where we are going wrong?
Firstly, find out the IP of your node by executing the command
kubectl get nodes
Get the information about the running service by executing the command kubectl describe services <service-name>.
Make a note of the NodePort field there.
To access your service that is already running, hit the endpoint nodeIP:NodePort.
You can now access your service successfully!
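As a hedged end-to-end sketch (the deployment name my-app and port 80 are made up for illustration):
kubectl expose deployment my-app --type=NodePort --port=80
kubectl get service my-app        # note the mapped port, e.g. 80:3xxxx/TCP
kubectl get nodes -o wide         # note a node's EXTERNAL-IP (or INTERNAL-IP from inside the VPC)
curl http://<node-ip>:<node-port>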
I am not sure where you have deployed (AWS, GKE, bare metal), but you should make sure you have the following:
https://kubernetes.io/docs/user-guide/ingress/
https://kubernetes.io/docs/user-guide/services/
Ingress will work out of the box on GKE, but with an AWS installation, you may need to make sure you have nginx-ingress pods running.
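If you go the Ingress route, a minimal manifest might look roughly like this; the service name my-app and port 80 are assumptions, and on AWS you'd additionally need an ingress controller such as nginx-ingress running:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-ingress
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app
            port:
              number: 80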