Failing to connect GKE to GCE on the same VPC? - kubernetes

I am new to Google Cloud Platform, and here is my context:
I have one Compute Engine VM running as a MongoDB server and another Compute Engine VM running a NodeJS server, already with Docker. The NodeJS application connects to MongoDB via the default VPC internal IP. Now I'm trying to migrate the NodeJS application to Google Kubernetes Engine, but I can't connect to the MongoDB server when I deploy the NodeJS Docker image to the cluster.
All services (GCE and GKE) are in the same region (us-east1).
As a harder test, I SSHed into a Kubernetes cluster node, ran a simple MongoDB Docker image there, and tried to connect to the remote MongoDB server from the command line, but the problem is the same: a timeout when trying to connect.
I have also checked the firewall settings on GCP as well as the bindIp setting on the MongoDB server, and neither is blocking the connection.
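For reference, the bindIp check on the MongoDB VM looks roughly like this (the mongod.conf path assumes a package install; 27017 is MongoDB's default port):
sudo grep -A2 '^net:' /etc/mongod.conf    # bindIp should allow the VPC (e.g. 0.0.0.0)
sudo ss -ltnp | grep 27017                # confirm mongod is listening on that interface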
Does anyone know what may be happening? Thank you very much.

In my case, traffic from GKE to the GCE VM was blocked by the Google firewall even though both are in the same network (default).
I had to whitelist the cluster's pod network, which is listed in the cluster details:
Pod address range 10.8.0.0/14
https://console.cloud.google.com/kubernetes/list
https://console.cloud.google.com/networking/firewalls/list
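A minimal sketch of such a firewall rule with gcloud, assuming the default network, the pod range above, and MongoDB's default port 27017 (adjust names and ports to your setup):
gcloud compute firewall-rules create allow-gke-pods \
    --network=default \
    --direction=INGRESS \
    --source-ranges=10.8.0.0/14 \
    --allow=tcp:27017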

By default, containers in a GKE cluster should be able to access GCE VMs in the same VPC through internal IPs. It is just like accessing the internet (e.g., google.com) from GKE containers; GKE and the VPC know how to route the traffic. The problem is most likely in other configuration (the firewall or your application).
You can run a test: start a simple HTTP server on the GCE VM (say its internal IP is 10.138.0.5):
python -m SimpleHTTPServer 8080    # on Python 3: python3 -m http.server 8080
then create a GKE container and try to access the service:
kubectl run my-client -it --image=tutum/curl --generator=run-pod/v1 -- curl http://10.138.0.5:8080
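Note that newer kubectl versions no longer need the --generator flag; an equivalent command there would be (curlimages/curl is just one curl-capable image):
kubectl run my-client -it --rm --restart=Never --image=curlimages/curl -- curl http://10.138.0.5:8080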

Related

Access Kubernetes applications via localhost from the host system

Is there any way, other than port-forwarding, to access the apps running inside my K8s cluster via http://localhost:port from my host operating system?
For example:
I am running a minikube setup to practise K8s, and I deployed three pods along with their services, choosing three different service types: ClusterIP, NodePort, and LoadBalancer.
For ClusterIP, I can use the port-forward option to access my app via localhost:port (see the sketch after this list), but the problem is that I have to leave that command running, and if it is interrupted for some reason the connection drops, so is there any alternative solution here?
For NodePort, I can only access this via the minikube node IP, not via localhost; therefore, if I have to access this remotely, I won't have a route to that node IP address.
For LoadBalancer, this is not a valid option as I am running minikube on my local system, not in the cloud.
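The port-forward referred to above looks roughly like this (the service name and ports are placeholders; adding --address 0.0.0.0 is what exposes it beyond localhost):
kubectl port-forward svc/my-app 8080:80
kubectl port-forward --address 0.0.0.0 svc/my-app 8080:80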
Please let me know if there is any other solution to this problem. The reason I am asking is that when I deploy the same application via docker-compose, I can access all of these services via localhost:port, and I can even call them via VM_IP:port from other systems.
Thanks,
-Rafi

How to connect to a k8s cluster with bitnami postgresql-ha deployed?

My setup (running locally in two minikubes) consists of two k8s clusters:
a frontend cluster running a golang api-server,
a backend cluster running an HA bitnami postgres cluster (I used the bitnami postgresql-ha chart for this).
If I set the pgpool service to use NodePort and get the IP + port of the node that the pgpool pod is running on, I can hardwire this (host + port) into my database connector in the api-server (in the other cluster), and that works.
However, what I haven't been able to figure out is how to connect to the other cluster (e.g. to pgpool) generically, without using the IP address.
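Concretely, the hardwiring described above looks something like this (the minikube profile and release names are hypothetical and may differ from your setup):
minikube ip -p backend
kubectl get svc my-release-postgresql-ha-pgpool -o jsonpath='{.spec.ports[0].nodePort}'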
I also tried using Skupper, which has an example of connecting to a backend cluster with postgres running on it, but their example doesn't use the bitnami postgresql-ha helm chart, just a simple postgres install, so it is not quite the same.
Any ideas?
For those times when you have to, or purposely want to, connect pods/deployments across multiple clusters, Nethopper (https://www.nethopper.io/) is a simple and secure solution. The postgresql-ha scenario above is covered under their free tier. There is a two-cluster minikube 'how to' tutorial at https://www.nethopper.io/connect2clusters which is very similar to your frontend/backend use case. Nethopper is based on skupper.io, but the configuration is much easier and more user-friendly, and it is centralized, so it scales to many clusters if you need it to.
To solve your specific use case, you would:
First install your api server in the frontend and your bitnami postgresql-ha chart in the backend, as you normally would.
Go to https://mynethopper.com/ and
Register
Clouds -> define both clusters (clouds), frontend and backend
Application Network -> create an application network
Application Network -> attach both clusters to the network
Application Network -> install nethopper-agent in each cluster with copy paste instructions.
Objects -> import and expose pgpool (call the service 'pgpool') in your backend.
Objects -> distribute the service 'pgpool' to frontend, using a distribution rule.
Now you should see the 'pgpool' service in the frontend cluster:
kubectl get service
When the API server pods in the frontend request service from pgpool, they will connect to pgpool in the backend, magically. It's like the 'pgpool' pod is now running in the frontend.
The nethopper part should only take 5-10 minutes, and you do NOT need IP addresses, TLS certs, K8s ingresses or loadbalancers, a VPN, or an istio service mesh or sidecars.
After moving to the one-cluster architecture, it became easier to see how to connect to the bitnami postgres-ha cluster; after trying a few different things, this finally worked:
-postgresql-ha-postgresql-headless:5432
(that's the host and port I'm using to call from my golang server)
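As a quick sanity check of that host and port from inside the cluster, something like this should work (substitute the headless service host above; the image and user are illustrative, and psql will prompt for the password):
kubectl run pg-check -it --rm --image=postgres:14 -- \
    psql -h <headless-service-host> -p 5432 -U postgres -c 'SELECT 1'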
Now I believe it should be fairly straightforward to also run the two-cluster case, using skupper to bind to the headless service.

Is it possible to SSH to a VM from within a pod?

I have a pod inside a Kubernetes cluster on GKE that remotely creates a Kubernetes cluster on Azure, and I want to SSH into the master VM of the Azure cluster from the pod so I can remotely run some commands on it. However, I encounter a timeout whenever I run ssh / scp inside the pod:
ssh: connect to host port 22: Connection timed out
I have already installed the OpenSSH client/server in my pod. I ensured that the VM has a public IP address and that the pod has access to the VM's private key. I tried to SSH into the Azure master VM from my laptop and it works just fine. Any ideas?
If you are running a private cluster in GKE, check their docs, which say:
Private nodes do not have outbound Internet access because they don't have external IP addresses. Private Google Access provides private nodes and their workloads with limited outbound access to Google Cloud Platform APIs and services over Google's private network. For example, Private Google Access makes it possible for private nodes to pull container images from Google Container Registry, and to send logs to Stackdriver.
Check this other question => Kubernetes: Connect to the outside world from pod
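A quick way to check whether your nodes are in fact private (i.e. have no external IPs); the cluster name and zone here are hypothetical:
gcloud container clusters describe my-cluster --zone us-central1-a \
    --format='value(privateClusterConfig.enablePrivateNodes)'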
Follow the steps below (a rough sketch with hypothetical names comes after the list):
deploy a test pod that has the ssh binary in the Azure cluster
update the SSH certificates on the cluster nodes (skip this if you already have certs)
copy the SSH certs into the test pod using the kubectl cp command
get into the test pod and SSH to any of the cluster nodes
you should then be able to run commands on the cluster node
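A minimal sketch of those steps, assuming an Alpine-based test pod, a key at ~/.ssh/id_rsa, and a node user named azureuser (all names are hypothetical):
kubectl run ssh-test --image=alpine -it -- sh      # then, inside the pod: apk add --no-cache openssh-client
kubectl cp ~/.ssh/id_rsa ssh-test:/root/id_rsa
kubectl exec ssh-test -- chmod 600 /root/id_rsa
kubectl exec -it ssh-test -- ssh -i /root/id_rsa azureuser@<node-ip>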

Connecting to an Ignite server that runs as a Docker container inside Kubernetes from an outside client process

I have minikube running Kubernetes inside a VirtualBox VM.
One of the Docker containers it runs is an Ignite server.
During development I try to access the Ignite server from an outside Java client, but discovery fails with every configuration I have tried.
Is it possible at all?
If yes, can someone give an example?
To enable Apache Ignite node auto-discovery in Kubernetes, you need to enable TcpDiscoveryKubernetesIpFinder in your IgniteConfiguration. Read more about this at https://apacheignite.readme.io/docs/kubernetes-deployment. Your Kubernetes service definition should expose the container port; minikube should then give you a service URL after a successful deployment.
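For example, once the port is exposed, something like this should hand back a reachable URL (the deployment/service name 'ignite' is hypothetical; 10800 is Ignite's default thin-client port):
kubectl expose deployment ignite --port=10800 --type=NodePort
minikube service ignite --url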

How can I expose kubernetes services from within a local VM?

I'm running kubernetes locally in a guest VM. How can I expose services to the host?
On GKE, for example, exposing a service provisions an IP from Google and wires it up so that requests to that IP end up at the right service. How can I do that for my VM?
Not the answer you want, but you have to do it manually for now. I usually start an nginx container with --net=host to proxy from the Kubernetes network to the VM network.
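A rough sketch of that manual proxy, assuming nginx.conf holds a proxy_pass to a NodePort of the service you want to reach (paths and ports are illustrative):
docker run -d --net=host \
    -v $(pwd)/nginx.conf:/etc/nginx/conf.d/default.conf:ro \
    nginx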
I hope to be wrong, but this was still true a few months ago.