I've set up a Kubernetes cluster on my network and a Postgres database on the same network. I can connect my Java app to the Postgres database when I run the app in a plain container or in a VM, but somehow, when I deploy the same app to Kubernetes, it cannot connect to the database.
All of my services are on my private network, and the cluster is a bare-metal setup in my home lab using the Calico network plugin.
I also tried pinging the database's IP from a busybox pod, but that fails as well.
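For anyone debugging the same symptom: one common cause on bare-metal Calico setups is pod traffic leaving the node without NAT, so replies from the database never route back to the pod network. A minimal sketch of enabling outgoing NAT on the Calico IP pool, assuming the default pool name and a 192.168.0.0/16 pod CIDR (adjust both to your cluster):

apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: default-ipv4-ippool   # assumed default pool name
spec:
  cidr: 192.168.0.0/16        # assumed pod CIDR; must match your cluster
  ipipMode: Always
  natOutgoing: true           # SNAT traffic leaving the pod network

Applying this (e.g. with calicoctl apply -f) lets pods reach hosts outside the pod CIDR via the node's IP.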
Related
How can I connect from an application running inside a pod in a Kubernetes cluster (minikube) to a local database (installed on the host, not running in Docker)?
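A common pattern for that case is a Service without a selector plus a manual Endpoints object pointing at the host's IP, giving pods a stable in-cluster DNS name for the local database. A sketch, where the name local-db and the IP 10.0.2.2 are hypothetical (use the host's address as seen from the minikube network):

apiVersion: v1
kind: Service
metadata:
  name: local-db            # hypothetical; pods would connect to local-db:5432
spec:
  ports:
    - port: 5432
---
apiVersion: v1
kind: Endpoints
metadata:
  name: local-db            # must match the Service name exactly
subsets:
  - addresses:
      - ip: 10.0.2.2        # hypothetical host IP reachable from minikube
    ports:
      - port: 5432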
I have several replicas of MongoDB pods in my cluster, and a bastion server through which I can connect to each MongoDB pod running in a private subnet. I can do that by putting the MongoDB pod's IP into the connection string.
mongodb://username:password@xx.xxx.xxx.112:27017/
But I want to connect to the database using a pod name / service name instead of a dynamic pod IP, which changes every time I recreate the pod. Using the pod's default DNS name with the service name doesn't work in this case (e.g. mongodb-0.mongo.default.svc.cluster.local).
Any idea how to connect to the MongoDB pods with the mongo client without using their IPs?
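For context, names like mongodb-0.mongo.default.svc.cluster.local only exist when the StatefulSet is governed by a headless Service, and even then they resolve only through the cluster's DNS, not from a bastion outside it. A minimal sketch of such a headless Service, assuming the StatefulSet declares serviceName: mongo and labels its pods app: mongodb (both assumptions):

apiVersion: v1
kind: Service
metadata:
  name: mongo               # must match the StatefulSet's serviceName
spec:
  clusterIP: None           # headless: gives each pod a stable DNS record
  selector:
    app: mongodb            # assumed pod label
  ports:
    - port: 27017

From the bastion you would still need to expose the pods (for example via NodePort) or forward the cluster's DNS zone to use those names.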
I need to connect a service running in a local container inside Docker on my machine to a database that's running on a Kubernetes cluster.
Everything I found on port forwarding lets me connect my machine to the cluster, but not the local container to the cluster (unless I install kubectl in the container, which I cannot do).
Is there a way to do this?
https://www.telepresence.io/ is what you're looking for. It hooks into the cluster network like a VPN and patches the services so that traffic gets routed through the tunnel.
How can I connect to VMs running in GCP Compute Engine from a Kubernetes pod? I have set up a proxy server in Compute Engine, and I need to use it from within pods.
This communication needs to use the internal IP. I have added firewall rules to allow all internal traffic.
Any suggestions on how to connect from pods to GCP VMs?
You can create an internal load balancer in GCP and connect to the VM through it, or use VPC peering if the VM is in a different network.
If your GKE cluster and the VM are in the same network, you can connect using the VM's internal IP: from inside a pod you can send curl requests to the VM over that IP.
If your GKE cluster and the GCP VM are in different networks, you can use VPC peering to connect the two networks and then use the VM's internal IP from the pod.
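If you'd rather not hard-code the VM's internal IP in your pods, one option is an ExternalName Service that aliases the VM's GCE internal DNS name, so pods address the proxy by a stable in-cluster name. A sketch, where the Service name and the instance/zone/project parts are hypothetical:

apiVersion: v1
kind: Service
metadata:
  name: proxy-vm            # hypothetical; pods would use proxy-vm as the host
spec:
  type: ExternalName
  # GCE internal DNS follows <instance>.<zone>.c.<project>.internal;
  # the values below are placeholders
  externalName: proxy-server.us-central1-a.c.my-project.internal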
I have a Kubernetes cluster with two containers running in a single workload.
One container runs a Flask server application and the other runs an Angular application. I need this pod set up so that both applications can communicate with each other over localhost. Specifically, the Angular container, which is exposed on port 4200, needs to talk to the unexposed Flask server on port 5000. I am stuck on getting these containers to communicate within the pod.
Rather than binding only to localhost (127.0.0.1), make sure your Flask server is reachable via any local IP, i.e. app.run(host='0.0.0.0').
The containers should be able to communicate with each other using localhost:<port-number>, as all containers in a Kubernetes pod share the same network namespace.
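For illustration, a minimal pod spec with both containers in one pod (all names and images are hypothetical); because they share the pod's network namespace, the Angular container can reach Flask at localhost:5000 without that port being exposed outside the pod:

apiVersion: v1
kind: Pod
metadata:
  name: web-pod                       # hypothetical
spec:
  containers:
    - name: angular
      image: my-angular-app:latest    # hypothetical image
      ports:
        - containerPort: 4200         # the UI port you expose
    - name: flask
      image: my-flask-app:latest      # hypothetical image
      # no containerPort declaration needed for in-pod localhost traffic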