How to reach hosted Postgres in GCP from a Kubernetes cluster, directly via private IP - postgresql

So, I created a PostgreSQL instance in Google Cloud, and I have a Kubernetes cluster with containers that I would like to connect to it. I know that the Cloud SQL Proxy sidecar is one method, but the documentation says that I should be able to connect to the private IP as well.
I notice that a VPC peering connection was automatically created for me. It's set for a destination network of 10.108.224.0/24, which is where the instance is, with a "Next hop region" of us-central1, where my K8s cluster is.
And yet when I try the private IP via TCP on port 5432, I time out. I see nothing in the documentation about having to modify firewall rules to make this work, but I tried that anyway (finding the firewall interface in GCP rather clumsy and confusing compared with writing my own rules using iptables), and my attempts failed.
Beyond falling back to the Cloud SQL Proxy sidecar, does anyone have an idea why this would not work?
Thanks.

Does your GKE cluster meet the environment requirements for private IP? It needs to be a VPC-native cluster on the same VPC network and in the same region as your Cloud SQL instance.
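In case it helps, here is a rough sketch of how to check both of those from the CLI (my-cluster, my-instance, my-vpc and my-subnet are placeholders):
# Does the cluster use alias IPs (i.e. is it VPC-native)? This should print "True".
gcloud container clusters describe my-cluster --region us-central1 \
  --format="value(ipAllocationPolicy.useIpAliases)"
# Which private address and network does the Cloud SQL instance have?
gcloud sql instances describe my-instance \
  --format="value(ipAddresses,settings.ipConfiguration.privateNetwork)"
# Creating a new VPC-native cluster on the same network would look roughly like this:
gcloud container clusters create my-cluster --region us-central1 \
  --network my-vpc --subnetwork my-subnet --enable-ip-alias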

In the end, the simplest thing to do was to just use the Google Cloud SQL Proxy. Rather than running it as a sidecar (I have multiple containers needing DB access), I put the proxy into my cluster as its own deployment with a service in front of it, and it seems to just work.
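For anyone copying this setup, here is a minimal sketch of the proxy-as-a-service approach. The names, the project:region:instance string, and the image tag are placeholders, and the proxy still needs credentials (Workload Identity or a mounted service account key), which I've left out:
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cloud-sql-proxy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cloud-sql-proxy
  template:
    metadata:
      labels:
        app: cloud-sql-proxy
    spec:
      containers:
      - name: cloud-sql-proxy
        image: gcr.io/cloudsql-docker/gce-proxy:1.33.2
        command:
        - /cloud_sql_proxy
        # Listen on 0.0.0.0 so other pods can reach the proxy through the Service.
        - -instances=my-project:us-central1:my-instance=tcp:0.0.0.0:5432
        ports:
        - containerPort: 5432
---
apiVersion: v1
kind: Service
metadata:
  name: cloud-sql-proxy
spec:
  selector:
    app: cloud-sql-proxy
  ports:
  - port: 5432
    targetPort: 5432
EOF
# Applications in the cluster then connect to cloud-sql-proxy:5432.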

You can connect over private IP only if your Cloud SQL instance and your compute resources are on the same VPC network; Cloud SQL private IP works through a VPC peering that Google creates to that network.
For a Compute Engine VM you can choose the VPC and subnet, and you can set up the same VPC for GKE, after which you can make the connection from a pod to Cloud SQL.
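For reference, if private IP hasn't been set up on the instance yet, the peering you see is created through private services access, roughly like this (network and range names are placeholders):
# Reserve an address range in your VPC for Google-managed services (Cloud SQL lives there).
gcloud compute addresses create google-managed-services-range \
  --global --purpose=VPC_PEERING --prefix-length=16 --network=my-vpc
# Create the private services connection; this is the peering that shows up in the console.
gcloud services vpc-peerings connect \
  --service=servicenetworking.googleapis.com \
  --ranges=google-managed-services-range --network=my-vpc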

Related

Unable to access Kubernetes service from one cluster to another (over VPC peering)

I'm wondering if anyone can help with my issue, here's the setup:
We have 2 separate kubernetes clusters in GKE, running on v1.17, and they each sit in a separate project
We have set up VPC peering between the two projects
On cluster 1, we have 'service1', which is exposed by an internal HTTPS load balancer; we don't want this to be public
On cluster 2, we intend to be able to access 'service1' via the internal load balancer, and it should do this over the VPC peering connection between the two projects
Here's the issue:
When I'm connected via SSH on a GKE node in cluster 2, I can successfully run a curl request to https://service1.domain.com running on cluster 1 and get the expected response, so traffic is definitely routing from cluster 2 to cluster 1. However, when I run the same curl command from a pod running on a GKE node, the request times out.
I have run as much troubleshooting as I can, including telnet, traceroute, etc., and I'm really stuck on why this might be. If anyone can shed light on the difference here, that would be great.
I did wonder whether pod networking is somehow forwarding traffic over the cluster's public IP rather than over the VPC peering connection.
So it seems you're not using a "VPC-native" cluster and what you need is "IP masquerading".
From this document:
"A GKE cluster uses IP masquerading so that destinations outside of the cluster only receive packets from node IP addresses instead of Pod IP addresses. This is useful in environments that expect to only receive packets from node IP addresses."
You can use ip-masq-agent or k8s-custom-iptables. After this it will work, since the traffic will look as if it comes from the node rather than from inside a pod.
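Assuming the ip-masq-agent DaemonSet is running in your cluster, a minimal sketch of its ConfigMap looks like this; the CIDRs are placeholders for your own pod and node ranges, and anything not listed under nonMasqueradeCIDRs gets SNATed to the node IP, which is what makes the peered network see node addresses:
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: ip-masq-agent
  namespace: kube-system
data:
  config: |
    # Only the cluster's own ranges are exempt from SNAT, so traffic to the
    # peered VPC leaves with the node IP.
    nonMasqueradeCIDRs:
      - 10.4.0.0/14   # pod range (placeholder)
      - 10.0.0.0/20   # node subnet (placeholder)
    resyncInterval: 60s
EOF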
As mentioned in one of the answers, IP aliases (VPC-native) should work out of the box. If you are using a route-based GKE cluster rather than a VPC-native one, you would need to use custom routes.
As per this document:
By default, VPC Network Peering with GKE is supported when used with IP aliases. If you don't use IP aliases, you can export custom routes so that GKE containers are reachable from peered networks.
This is also explained in this document:
If you have GKE clusters without VPC-native addressing, you might have multiple static routes to direct traffic to VM instances that are hosting your containers. You can export these static routes so that the containers are reachable from peered networks.
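For the route-based case, exporting and importing custom routes is a flag on the peering itself; a rough sketch with placeholder peering and network names:
# On the VPC hosting the route-based GKE cluster: export its routes.
gcloud compute networks peerings update my-peering \
  --network=my-vpc --export-custom-routes
# On the other side of the peering: import them.
gcloud compute networks peerings update my-peering \
  --network=peer-vpc --import-custom-routes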
The problem you're facing seems similar to the one mentioned in this SO question; perhaps your pods are using IPs outside of the VPC range and for that reason cannot reach the peered VPC?
UPDATE: In Google Cloud, I tried to access the service from another cluster which had VPC-native networking enabled, which I believe allows pods to use the VPC routing and possibly the internal IPs.
Problem solved :-)

Kubernetes: call a service exposed with Ambassador in cluster cluster-a from a different cluster cluster-b, same project but different VPCs

I have two Kubernetes clusters, cluster-a and cluster-b, in Google Cloud (GCP).
Can I call a service exposed with Ambassador in one cluster (cluster-a) from a different cluster (cluster-b) in the same GCP project but a different VPC?
Right now I can call the service by the Ambassador service name (when I do it from within the same cluster).
I have read about Internal TCP/UDP Load Balancing, but it only works when cluster-a and cluster-b are in the same VPC network, and my clusters are in different VPCs.
Is there a different approach to accomplish this?
VPCs on GCP aren't routed to each other by default, so your requests won't be reaching the remote CIDRs. For that, you want to use VPC Network Peering to make each VPC reachable to each other.
Note that firewall rules still apply for both VPCs, so you have to create them in order to establish full communication.
Finally, this will only allow network communication between your VPCs. If you rule this out as the issue and you're still experiencing a lack of connectivity, it might be related to your Ambassador configuration, in which case I'd recommend either posting information about that or creating another question for it specifically.
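A rough sketch of both steps with gcloud, using placeholder project, network, and range names (the port and source ranges must match your internal load balancer and the ranges used by the calling cluster):
# Peering has to be created from both sides.
gcloud compute networks peerings create a-to-b \
  --network=vpc-a --peer-project=my-project --peer-network=vpc-b
gcloud compute networks peerings create b-to-a \
  --network=vpc-b --peer-project=my-project --peer-network=vpc-a
# Firewall rules still apply: let cluster-b's ranges reach the service port in vpc-a.
# 10.8.0.0/14 below is a placeholder for cluster-b's pod/node ranges.
gcloud compute firewall-rules create allow-from-cluster-b \
  --network=vpc-a --direction=INGRESS --action=ALLOW \
  --rules=tcp:443 --source-ranges=10.8.0.0/14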

How can I access an external MongoDB server on an EC2 instance from an app running inside a Kubernetes cluster created with kops?

I am in a situation where my MongoDB is running on a separate EC2 instance and my app is running inside a Kubernetes cluster created by kops. Now I want to access the DB from the app running inside k8s.
For this, I tried VPC peering between the k8s VPC and the EC2 instance's VPC. I set the requester VPC as the k8s VPC and the accepter VPC as the instance's VPC. After that, I also added an ingress rule to the EC2 instance's security group allowing access from the k8s cluster's security group on port 27017.
But, when I ssh'd into the k8s node and tried with telnet, the connection failed.
Is there anything incorrect in the procedure? Is there any better way to handle this?
CIDR blocks:
K8S VPC - 172.20.0.0/16
MongoDB VPC - 172.16.0.0/16
What are the CIDR blocks of the two VPCs? They mustn't overlap. In addition, you need to make sure that communication is allowed to travel both ways when modifying the security groups. That is, in addition to modifying your MongoDB VPC to allow inbound traffic from the K8s VPC, you need to make sure the K8s VPC allows inbound traffic from the MongoDB VPC.
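A sketch of what the AWS side usually needs, using the CIDRs from the question and placeholder resource IDs:
# Route in the k8s VPC's route table towards the MongoDB VPC, via the peering connection.
aws ec2 create-route --route-table-id rtb-aaaa1111 \
  --destination-cidr-block 172.16.0.0/16 --vpc-peering-connection-id pcx-1234abcd
# Route in the MongoDB VPC's route table back towards the k8s VPC.
aws ec2 create-route --route-table-id rtb-bbbb2222 \
  --destination-cidr-block 172.20.0.0/16 --vpc-peering-connection-id pcx-1234abcd
# Security group on the MongoDB instance: allow 27017 from the k8s VPC CIDR.
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 27017 --cidr 172.20.0.0/16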
First, this does not seem to be a Kubernetes issue.
Make sure you have the proper route from the Kubernetes VPC to the MongoDB node and vice versa.
Make sure the required ports are open in the security groups of both VPCs:
Allow inbound traffic from the Kubernetes VPC to the MongoDB VPC.
Allow inbound traffic from the MongoDB VPC to the Kubernetes VPC.
Make sure any namespace-level network policies allow the inbound and outbound traffic.

Whitelist traffic to MySQL from a Kubernetes service

I have a Cloud MySQL instance which allows traffic only from whitelisted IPs. How do I determine which IP I need to add to the ruleset to allow traffic from my Kubernetes service?
The best solution is to use the Cloud SQL Proxy in a sidecar pattern. This adds an additional container into the pod with your application that allows for traffic to be passed to Cloud SQL.
You can find instructions for setting it up here. (It says it's for GKE, but the principles are the same)
If you prefer something a little more hands-on, this codelab will walk you through taking an app from running locally to running on a Kubernetes cluster.
I am using Google Cloud Platform, so my solution was to add the Google Compute Engine VM instance External IP to the whitelist.
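If you go that way, the nodes' external IPs can be listed like this (keep in mind they change unless you reserve static addresses or route egress through Cloud NAT):
gcloud compute instances list \
  --format="table(name,networkInterfaces[0].accessConfigs[0].natIP)"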

AWS Lambda To Atlas

I want to connect my Lambda function to Mongo Atlas. It was all working fine but I needed to move my function inside a VPC so I could use redis.
Now I cannot connect to my database.
I looked at the security group on the VPC and added the MongoDB port, but with no joy.
Also - the mongo IP Whitelist is as follows for now
0.0.0.0/0 (includes your current IP address)
Is there anything else I should try?
Thank you
I needed to move my function inside a VPC so I could use redis.
If you are:
using dedicated MongoDB Atlas instances (i.e. not the shared M0, M2 and M5 clusters),
and the MongoDB Atlas deployment is hosted on AWS,
then you can follow these instructions to set up a VPC peering connection.
Please note that MongoDB Atlas supports VPC peering only with AWS VPCs in the same region as the Atlas cluster; cross-region peering is not supported by Atlas. For multi-region clusters, you must create VPC peering connections per region.
See also the tutorial shown on: Introducing VPC peering to MongoDB Atlas
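The Atlas side of the peering is configured in the Atlas UI or API (and you would add your VPC CIDR or security group to the Atlas IP access list instead of 0.0.0.0/0); on the AWS side you typically still have to accept the peering and add a route. A rough sketch with placeholder IDs and an assumed Atlas-side CIDR:
# Accept the peering request that Atlas initiates.
aws ec2 accept-vpc-peering-connection --vpc-peering-connection-id pcx-0abc123def456
# Route from the Lambda VPC's route table to the Atlas VPC CIDR (placeholder value).
aws ec2 create-route --route-table-id rtb-0abc123 \
  --destination-cidr-block 192.168.248.0/21 --vpc-peering-connection-id pcx-0abc123def456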
I struggled for days using this tutorial:
https://www.mongodb.com/blog/post/introducing-vpc-peering-for-mongodb-atlas
but it worked in the end, once I found a step missing from the tutorial; I had been using the default VPC before.
When creating a custom VPC, route table, and subnet, the subnet was what got me: auto-assign IP needs to be enabled on the subnet.
PS: you need at least two subnets for Lambda, so create one more if you haven't.
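For the record, the two settings mentioned above map to these AWS CLI calls (subnet, security group, and function names are placeholders):
# "Auto-assign public IPv4 address" on a subnet:
aws ec2 modify-subnet-attribute --subnet-id subnet-0aaa111 --map-public-ip-on-launch
# Point the function at (at least) two subnets and a security group:
aws lambda update-function-configuration --function-name my-function \
  --vpc-config SubnetIds=subnet-0aaa111,subnet-0bbb222,SecurityGroupIds=sg-0ccc333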