SSH to Kubernetes pod using Bastion - kubernetes

I have deployed a Google Cloud Kubernetes cluster. The cluster has an internal IP only.
In order to access it, I created a virtual machine bastion-1 which has an external IP.
The structure:
My Machine -> bastion-1 -> Kubernetes cluster
Connecting to the bastion and opening a SOCKS proxy:
$ ssh bastion -D 1080
Now running kubectl through the proxy:
$ HTTPS_PROXY=socks5://127.0.0.1:1080 kubectl get pods
No resources found.
The Kubernetes master server is responding, which is a good sign.
Now, trying to open a shell in a pod:
$ HTTPS_PROXY=socks5://127.0.0.1:1080 kubectl exec -it "my-pod" -- /bin/bash
error: error sending request: Post https://xxx.xxx.xxx.xxx/api/v1/namespaces/xxx/pods/pod-xxx/exec?command=%2Fbin%2Fbash&container=xxx&container=xxx&stdin=true&stdout=true&tty=true: EOF
Question:
How can I open a shell in a pod via the bastion? What am I doing wrong?

You can't do this right now.
The reason is that the connections used for commands like exec and proxy use SPDY2.
There's a bug report here with more information.
You'll have to switch to using an HTTP proxy.
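One way to do that is to run an HTTP CONNECT proxy on the bastion and tunnel to it over SSH. A minimal sketch, assuming Squid on its default port 3128 (the package manager, hostnames, and pod name are illustrative):

# On the bastion: install and start an HTTP proxy, e.g. Squid
bastion$ sudo apt-get install squid && sudo systemctl start squid
# On your machine: forward the proxy port over SSH
$ ssh bastion -L 3128:127.0.0.1:3128
# Point kubectl at the HTTP proxy instead of the SOCKS one
$ HTTPS_PROXY=http://127.0.0.1:3128 kubectl exec -it my-pod -- /bin/bash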

Related

Access Kubernetes Dashboard using kubectl proxy remotely for multiple users

I have set up a Kubernetes cluster in EKS. API server access is in private mode. I have a bastion host from which I can run kubectl commands. I want to access the Kubernetes dashboard remotely.
One thing I can do is ssh -L localhost:8001:127.0.0.1:8001 # kubectl proxy. This gives me remote access.
If someone else then executes ssh -L localhost:8001:127.0.0.1:8001 # kubectl proxy, they will get the error "error: listen tcp 127.0.0.1:8001: bind: address already in use", because somebody else is already running kubectl proxy.
How can I solve this issue? I want to access the Kubernetes dashboard on multiple machines at the same time.
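One possible workaround is to give each user their own kubectl proxy port on the bastion, since the conflict is over the bind port there. A minimal sketch, where the ports and hostname are illustrative:

# User A on the bastion:
$ kubectl proxy --port=8001
# User B on the bastion, with a different free port:
$ kubectl proxy --port=8002
# Each user forwards their own port from their machine:
$ ssh -L 8002:127.0.0.1:8002 bastion-host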

How to locally access InfluxDB deployed on GCP Kubernetes

InfluxDB 1.8 is deployed on Kubernetes using Helm charts. InfluxDB is deployed as a StatefulSet that exposes a service with one running pod. I am able to get a shell in the running pod using the kubectl exec command, and it's running fine. I can also see the databases using the influx CLI after logging into the pod.
But I need to access this InfluxDB on my local system to execute queries directly using the curl command. The deployed InfluxDB has no external IP/DNS; it has an internal endpoint that usually starts with 10.*.*.*.
Can anybody guide me on how I can access InfluxDB on my local system using the curl command?
You can use the kubectl port-forward command to map either a Pod or a Service TCP port to a port on your local machine:
> kubectl port-forward service/your-influxdb-service 8086:8086
                                                       ^    ^
                                                       |    |
                                              local port    remote/service port
While that command is running, kubectl will forward all connections to your local port 8086 to the same port of your InfluxDB service. All traffic will be funneled through kubectl and your API server, so this is not exactly suited for high-throughput scenarios, but should be sufficient for occasional debugging and testing.
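With the port-forward active, InfluxDB's HTTP API is then reachable on localhost. A small example against the InfluxDB 1.x /query endpoint (the query itself is illustrative):

# List databases through the forwarded port
$ curl -G 'http://localhost:8086/query' --data-urlencode 'q=SHOW DATABASES'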

Kubernetes, Unable to connect to the server: EOF

Environment of kubectl: Windows 10.
Kubectl version: https://storage.googleapis.com/kubernetes-release/release/v1.15.0/bin/windows/amd64/kubectl.exe
Hello. I've just installed a Kubernetes cluster on Google Cloud Platform, then ran the following command:
gcloud container clusters get-credentials my-cluster --zone europe-west1-b --project my-project
It successfully added the credentials at %UserProfile%\.kube\config
But when I try kubectl get pods it returns Unable to connect to the server: EOF. My computer accesses the internet through a corporate proxy. How and where can I provide a cert file for kubectl so it uses the cert with all requests? Thanks.
You would get EOF if there is no response to kubectl's API calls within a certain time (the idle timeout is 300 seconds by default).
Try increasing the cluster idle timeout, or you may need a VPN to reach those pods.
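If the corporate proxy is what's blocking the connection, note that kubectl honors the standard proxy environment variables, as in the first question above. A minimal sketch for cmd.exe, where the proxy host and port are placeholders for your corporate proxy:

C:\> set HTTPS_PROXY=http://proxy.corp.example:3128
C:\> kubectl get pods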

How to access the services in a Kubernetes cluster when SSHing to another VM via proxy?

Consider two VMs built on a bare-metal server through a network: one is the master and the other is a worker. I SSH to the master and construct a cluster using kubeadm, which has three pods and a service with type: ClusterIP. When I want to access the cluster, I run kubectl proxy on the master. Now we can explore the API with curl and wget in the VM we SSHed into, like this:
$ curl http://localhost:8080/api/
So far, so good! But I want to access the services from my laptop. The localhost above refers to the bare-metal server! How can I access the services through the proxy from my laptop when the cluster is on another machine?
When I do $ curl http://localhost:8080/api/ on my laptop, it says:
127.0.0.1 refused to connect
which makes sense! But what is the solution to this?
If you forward port 8080 when SSHing to the master, you can use localhost on your laptop to access the APIs on the cluster.
You can try adding the -L flag to your ssh command:
$ ssh -L 8080:localhost:8080 your.master.host.com
Then the curl to localhost will work.
You can also pass extra arguments to the kubectl proxy command to let the reverse-proxy server listen on a non-default IP address (the default is 127.0.0.1) and expose it outside the machine:
kubectl proxy --port=8001 --address='<MASTER_IP_ADDRESS>' --accept-hosts="^.*$"
You can get your master IP address by issuing the following command: kubectl cluster-info
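With the proxy bound to the master's address, the laptop can then reach the API directly. A quick check, keeping the placeholder from the command above (note that --accept-hosts="^.*$" accepts requests from any host, so only expose this on a trusted network):

$ curl http://<MASTER_IP_ADDRESS>:8001/api/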

How does kubectl detect configuration in GCE?

I created a Kube cluster using the kube-up script. If I SSH into the instances, kubectl is configured for the local cluster. My question: how is kubectl detecting the kubeconfig when a cluster is created using the kube-up script?
I tried to do this using a cluster built from HEAD on GCE and didn't have the same experience. On the master instance, kubectl works. But on the nodes, it isn't configured to communicate with the master:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"2+", GitVersion:"v1.2.0-alpha.8.82+c9d33ec1b4044e", GitCommit:"c9d33ec1b4044e2a330a9b8b7a9204a99b6c6eec", GitTreeState:"clean"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?
The reason that it works out of the box on the master is that by default kubectl tries to connect to port 8080 on localhost, which is also the insecure port used on the master (until kubernetes#13598 is resolved).
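Until then, a node's kubectl can be pointed at the master explicitly. A minimal sketch using kubectl's global --server flag (the master address is a placeholder, and it assumes the master's insecure port 8080 is reachable from the node):

$ kubectl --server=http://<MASTER_IP>:8080 version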