Connecting to GCP Kubernetes in a private VPC with NAT - kubernetes

I have created a new GCP Kubernetes cluster. The cluster is private with NAT - it has no connection to the internet. I also deployed a bastion machine which allows me to connect to my private network (VPC) from the internet. This is the tutorial I based it on. SSH into the bastion is currently working.
The Kubernetes master is not exposed outside. The result:
$ kubectl get pods
The connection to the server 172.16.0.2 was refused - did you specify the right host or port?
So I installed kubectl on the bastion and ran:
$ kubectl proxy --port 1111
Starting to serve on 127.0.0.1:1111
Now I want to connect my local kubectl to the remote proxy server. I set up an SSH tunnel to the bastion server and mapped the remote port to a local port. I also tried it with curl and it works.
Now I'm looking for something like
$ kubectl --use-proxy=1111 get pods
(make my local kubectl pass through my remote proxy)
How to do it?

kubectl proxy acts exactly like the target apiserver, except that requests going through it are already authenticated. From your description ('works with curl') it sounds like you've set it up correctly; you just need to point your local kubectl at it:
kubectl --server=http://localhost:1111 get pods
(Where port 1111 on your local machine is where kubectl proxy is reachable; in your case through a tunnel.)
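For completeness, the tunnel itself can be a plain SSH local forward; this is a minimal sketch, assuming kubectl proxy listens on port 1111 on the bastion and user@bastion is a placeholder for your own SSH login:
# Forward local port 1111 to the kubectl proxy listening on the bastion's loopback interface
ssh -N -L 1111:127.0.0.1:1111 user@bastion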
If you need exec or attach through kubectl proxy, you'll need to run it with either --disable-filter=true or --reject-paths='^$'. Read the fine print and understand the consequences of those options.
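For example, on the bastion (a sketch, using the same port as above):
kubectl proxy --port 1111 --disable-filter=true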
Safer way
All in all, this is not how I access clusters through a bastion. The problem with the above approach is that if someone gains access to the bastion they immediately have valid Kubernetes credentials (as kubectl proxy needs those to function). It is also not the safest solution if the bastion is shared between multiple operators. One of the main points of a bastion is that it never has credentials on it. What I fancy doing is accessing the bastion from my workstation with:
ssh -D 1080 bastion
That makes ssh act as a SOCKS proxy. You need AllowTcpForwarding enabled in your sshd_config (it is on by default) for this to work. Thereafter, from the workstation, I can use
HTTPS_PROXY=socks5://127.0.0.1:1080 kubectl get pod
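Recent kubectl versions can also take the SOCKS proxy from the kubeconfig itself via the proxy-url field, so you don't have to export the environment variable each time. A minimal sketch, assuming a cluster entry named my-cluster (a hypothetical name) and the master address from the question:
# kubeconfig excerpt - route API traffic through the SSH SOCKS tunnel
clusters:
- cluster:
    server: https://172.16.0.2
    proxy-url: socks5://127.0.0.1:1080
  name: my-cluster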

Related

How to make two local minikube clusters communicate with each other?

I have two minikube clusters (two separate profiles) running locally; call them minikube cluster A and minikube cluster B. Each of these clusters also has an ingress and a DNS name associated with it locally. The DNS names are hello.dnsa and hello.dnsb. I am able to ping and nslookup both of them, just like this: https://minikube.sigs.k8s.io/docs/handbook/addons/ingress-dns/#testing
I want pod A in cluster A to be able to communicate with pod B in cluster B. How can I do that? I logged into pod A of cluster A and did telnet hello.dnsb 80, and it doesn't connect - I suspect because there is no route. Similarly, I logged into pod B of cluster B and did telnet hello.dnsa 80, and it doesn't connect either. However, if I do telnet hello.dnsa 80 or telnet hello.dnsb 80 from my host machine, telnet works!
Any simple way to solve this problem for now? I am ok with any solution like even adding routes manually using ip route add if needed
Skupper is a tool available for performing these actions. It is a service interconnect that facilitates secured communication between the clusters; for more information on Skupper, go through this documentation.
There are multiple examples in which minikube is integrated with Skupper; go through this configuration documentation for more details.
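As a rough sketch of what that looks like with the Skupper CLI (the kubectl context names and the deployment/port are assumptions for illustration):
# In cluster A (context name assumed): initialise Skupper and create a link token
kubectl config use-context minikube-a
skupper init
skupper token create token.yaml
# In cluster B: initialise Skupper and link it to cluster A using the token
kubectl config use-context minikube-b
skupper init
skupper link create token.yaml
# Expose a deployment from cluster B so that pods in cluster A can reach it by its service name
skupper expose deployment/hello --port 80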

Cannot communicate with a local server from a minikube pod

I have a local server I'm running that I'm trying to send requests to from a pod running in a single-node local minikube cluster, but I'm getting connection refused. I can curl the service locally and it works fine. What can I do to allow outbound connections to hit my local server? If I do minikube ssh, I can curl google.com or example.com fine.
I found from their posts on GitHub that
host.minikube.internal is a new hostname usable to access the host machine. Curling it from within minikube ssh proves access.
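To check the same thing from inside the cluster rather than from minikube ssh, a throwaway pod can be used; this is a sketch, and the port 8000 of the local server is an assumption:
# Run a one-off curl pod that talks to the host machine through minikube's special hostname
kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- curl http://host.minikube.internal:8000
Note that the local server generally needs to listen on 0.0.0.0 rather than only on 127.0.0.1, otherwise the minikube VM or container cannot reach it.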

kubectl doesn't allow viewing resources in the cluster from a node in GCP

I have a problem that I can't solve. I have a k8s cluster on GCP. I can use kubectl from the Cloud Shell that opens directly to the cluster. But when I use kubectl from a node I get: "The connection to the server localhost:8080 was refused - did you specify the right host or port?"
I also used ./kube/config and it works for about 5 minutes, and then fails again.
Maybe someone who uses GCP can help me. Thank you.
irvifa - I use the Kubernetes cluster that GCP provides. It works when I connect directly from Cloud Shell, but when I connect from an instance, kubectl shows the client version, yet for the server it gives the error: "The connection to the server localhost:8080 was refused - did you specify the right host or port?"
If you aren't using COS (Container-Optimized OS), you need to run:
gcloud container clusters get-credentials "CLUSTER NAME"
By default, COS will get the credentials when you access the node.
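For example (a sketch; the cluster name, zone, and project below are placeholders to replace with your own values):
# Fetch cluster credentials into ~/.kube/config, then verify that the API server is reachable
gcloud container clusters get-credentials my-cluster --zone us-central1-a --project my-project
kubectl get nodes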

How to access minikube machine from outside?

I have a server running Ubuntu where I need to expose my app using Kubernetes tools. I created a cluster using minikube with a VirtualBox machine, and with the command kubectl expose deployment I was able to expose my app... but only on my local network. This means that when I run minikube ip I receive a local IP. My question is: how can I access my minikube machine from outside?
I think the answer will be "port-forwarding". But how can I do that?
You can use SSH port forwarding to access your services from the host machine in the following way:
ssh -R 30000:127.0.0.1:8001 $USER@192.168.0.20
Here 8001 is the port on which your service is exposed and 192.168.0.20 is the minikube IP.
Now you'll be able to access your application from your laptop by pointing the browser to http://192.168.0.20:30000
If you mean accessing your machine from the internet, then the answer is again "port-forwarding", this time using your external IP address [https://www.whatismyip.com/]. The configuration goes into your router settings. Check your router manual.
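If opening ports on the router is not an option, a plain SSH local forward from your laptop through the Ubuntu host also works; this is a sketch, where 192.168.0.20 is the minikube IP used above, 30000 is an assumed NodePort, and user@ubuntu-server is a placeholder for your SSH login to the server:
# From the laptop: forward local port 8080 through the Ubuntu host to the service on the minikube VM
ssh -N -L 8080:192.168.0.20:30000 user@ubuntu-server
# then browse to http://localhost:8080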

GKE: secured access to services from outside the cluster

Is there any way to access the 'internal' services of the cluster (those not exposed outside) in a secure way from the outside?
The goal is simple: I need to debug clients of those services and need to access them, but don't want to expose them outside.
On a regular single host I would normally tunnel to the host with SSH and map the ports to localhost; I tried using an SSHD container but that didn't get me very far: the services are not directly on that container, so I'm not sure how to get to the next hop on the network, since the services' IPs are dynamically assigned.
Ideally a VPN would be much more convenient, but GKE doesn't seem to support VPN for the road-warrior situation.
Is there any solution for this use-case?
Thanks for your input.
EDIT:
I see here:
https://github.com/kubernetes/kubernetes/blob/master/docs/user-guide/accessing-the-cluster.md#ways-to-connect
that the only supported way to connect right now is HTTP/HTTPS, meaning I can proxy HTTP calls but not arbitrary ports.
You can do this with a combination of running kubectl proxy on your dev machine and using the proxying functionality built into the master (that's a lot of proxying, but bear with me).
First, run kubectl proxy. Note the port that is bound locally (it should be 8001 by default). This will cause kubectl to create a tunnel to your master instance that you can hit locally without needing to pass any authentication (technically, you can do all of the following steps without doing this first by hitting the master directly, but this is simpler for debugging).
Next, point a client (web browser, curl, etc) at http://localhost:8001/api/v1/proxy/namespaces/<ns>/services/<svc>/, replacing <ns> with the namespace in which your service is configured and <svc> with the name of your service. You can also append a particular request path to the end of the URL, so if your pods behind the service are hosting a file called data.json you would append that to the end of the request path.
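For instance, with kubectl proxy from the first step still running, a request following the pattern above might look like this (the namespace, service name, and file are placeholders):
# Fetch data.json from service my-svc in namespace default through the apiserver proxy
curl http://localhost:8001/api/v1/proxy/namespaces/default/services/my-svc/data.json
Note that on newer clusters the equivalent path is /api/v1/namespaces/<ns>/services/<svc>/proxy/<path>.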
This is how the update-demo tutorial works, so if you get stuck I'd recommend walking through that example and taking a close look at what the javascript does (it isn't too complicated).
After trying the many methods explained in the doc mentioned above, the thing that worked for me was:
1) Create an SSHD daemon container to SSH into the cluster
2) Create an SSH Service with type: NodePort (a minimal manifest sketch is shown after this list)
3) get the port number with
kubectl describe service sshd
4) use ssh port forwarding to get to the service with:
ssh -L <local-port>:<my-k8s-service-name>:<my-k8s-service-port> -p <sshd-port> user@sshd-container
for example
ssh -L 2181:zookeeper:2181 -p 12345 root@sshd-container
Then I have my zookeeper service on localhost:2181
For more port mappings, use alternate ports.
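For step 2, a minimal Service manifest might look like the following sketch; the selector label and the SSH port are assumptions that depend on how the SSHD pod is labelled and configured:
apiVersion: v1
kind: Service
metadata:
  name: sshd
spec:
  type: NodePort
  selector:
    app: sshd        # assumes the SSHD pod carries this label
  ports:
  - port: 22
    targetPort: 22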
You can also try using kubectl port-forward:
http://kubernetes.io/docs/user-guide/connecting-to-applications-port-forward/
http://kubernetes.io/docs/user-guide/kubectl/kubectl_port-forward/
Example:
kubectl port-forward POD_NAME LOCAL_PORT:REMOTE_PORT
kubectl port-forward redis-master 6379:6379