Consistent IP Addresses for Auto Scaling / Load Balanced Instances - MongoDB

The Setup
ECS (Containerized) Application (Node.js, API Only)
Auto Scaling Group for ECS Container Instances
Load Balancer in front of auto scaling group
VPC covering all instances and ELB
Database hosted in another VPC, not managed explicitly (MongoDB Atlas), likely not the same region.
The Problem
I want my database to use good security policies, so I opt for whitelisting IPs as Atlas recommends, rather than opening up my database to the world with 0.0.0.0/0.
Each server has its own IP address, and in an auto scaling event the new address would need to be added to the Atlas security rules by automation (possible, but not ideal).
How can I (using NAT Gateways? Elastic IPs?) get one IP for all of my load balanced instances?
Failed Solutions?
I tried using a NAT Gateway (essentially scenario 2), where all of my instances were in a private subnet, the NAT was in a public subnet with internet access, and the instances went through it to reach the database. This worked: with an Elastic IP on the NAT, I was able to authorize it on Atlas. However, it had a strange issue where an instance would intermittently not respond for 65-75 seconds when pinged. I suspect this is because the instances aren't technically reachable from the internet and there's some routing happening that I don't fully understand. Once you got a 200, everything would work fine for a bit, then another ~70 second latency spike, and back to good again...
Really appreciate the input; I've been searching for a while with no luck!

Have you tried a VPC peering connection? As long as the VPC CIDR blocks do not overlap, this is a good option because you can use security groups and private IPs between the peered VPCs.
http://docs.aws.amazon.com/AmazonVPC/latest/PeeringGuide/Welcome.html
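If you go the peering route, the AWS side mostly comes down to accepting the peering request (Atlas initiates it when you configure peering there) and routing the peer CIDR through the connection. A minimal sketch, with placeholder IDs and an assumed Atlas-side CIDR:

    # Placeholder IDs/CIDR: accept the peering request initiated by the
    # database VPC, then route its CIDR from the application VPC's route table
    # through the peering connection.
    aws ec2 accept-vpc-peering-connection \
      --vpc-peering-connection-id pcx-0123456789abcdef0

    aws ec2 create-route \
      --route-table-id rtb-0123456789abcdef0 \
      --destination-cidr-block 192.168.248.0/21 \
      --vpc-peering-connection-id pcx-0123456789abcdef0

With that in place the instances reach the database over private IPs, so no public whitelist entry is needed at all.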

Related

Restrict IP-range in GKE cluster when using VPN?

We're integrating with a new partner that requires us to use a VPN when communicating with them (over HTTPS). We're running all of our services in a (non-private) Google Kubernetes Engine (GKE) cluster, and it's only a single pod that needs to communicate with the partner's API.
The problem we face is that our partner's VPN provider won't allow us to use the private IP range provided by GKE, 10.244.0.0/14, because the subnet is too large.
Preferably, we don't want to deploy something outside our GKE cluster, like a Compute Engine instance, that is somehow used to proxy our traffic (we will of course do it if this is the only/best way to proceed). We're hoping that, perhaps, it'll be possible to create a new node pool in the same cluster with a different (smaller) subnet, but so far we haven't found a way to do this. We've also looked briefly at Cloud VPN, but if we understand it correctly, it only works with private GKE clusters.
Question:
What's the recommended way to obtain a smaller subnet/IP-range for a pod in an existing (public) GKE cluster to allow it to communicate with a third-party API over VPN?
The problem I see is that you would have to maintain your VPN connection within your pod; it is possible, but it looks like an antipattern.
I would recommend using Cloud VPN in a separate GCP project (for cost separation and security) to establish the connection with a specific and limited VPC, and then route that traffic to the pod, which could be in a specific IP range as you mentioned.
Take a look at the docs on how to create the VPN:
https://cloud.google.com/network-connectivity/docs/vpn/concepts/overview
Redirect traffic between VPCs:
https://cloud.google.com/vpc/docs/vpc-peering
Create the node pool with an IP range: https://cloud.google.com/sdk/gcloud/reference/container/node-pools/create
Assign your deployment to that node pool (a rough sketch of these two steps follows after the links)
https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector
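As a rough sketch of those last two steps, with hypothetical names (cluster, pool, label, secondary range, and deployment are all placeholders you would replace with your own), and noting that --pod-ipv4-range only applies to VPC-native clusters with an available secondary range:

    # Hypothetical names: create a node pool whose pods draw from a smaller,
    # dedicated secondary range, and label its nodes for scheduling.
    gcloud container node-pools create vpn-pool \
      --cluster my-cluster \
      --zone europe-west1-b \
      --num-nodes 1 \
      --node-labels vpn-egress=true \
      --pod-ipv4-range vpn-pods-range

    # Pin the deployment that talks to the partner onto that pool.
    kubectl patch deployment partner-client \
      -p '{"spec":{"template":{"spec":{"nodeSelector":{"vpn-egress":"true"}}}}}'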

Access restrictions when using Gcloud vpn with Kubernetes

This is my first question on Stack Overflow:
We are using Gcloud Kubernetes.
A customer specifically requested a VPN Tunnel to scrape a single service in our Cluster (I know ingress would be more suited for this).
Since the VPN is IP-based and Kubernetes changes service IPs, I can only configure the VPN for the whole IP range of services.
I'm worried that the customer will get full access to all services if I do so.
I have been searching for days on how to treat incoming VPN traffic, but haven't found anything.
How can I restrict the access? Or is it restricted and I need netpols to unrestrict it?
Incoming VPN traffic can either be terminated at the service itself, or at the ingress - as far as I see it. Termination at the ingress would probably be better though.
I hope this is not too confusing, thank you so much in advance
As you mentioned, an external load balancer would be ideal here, but if you must use GCP Cloud VPN then you can restrict access into your GKE cluster (and your GCP VPC in general) by using GCP firewall rules along with GKE internal load balancers (HTTP or TCP).
As a general picture, something like this.
Second, we need to add two firewall rules to the dedicated networks (project-a-network and project-b-network) we created. Go to Networking-> Networks and click the project-[a|b]-network. Click “Add firewall rule”. The first rule we create allows SSH traffic from the public so that we can SSH into the instances we just created. The second rule allows icmp traffic (ping uses the icmp protocol) between the two networks.
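To tighten that for this use case, a more restrictive ingress rule could look roughly like the following; the network name, partner range, port, and node tag are all placeholder values, not anything from your actual setup:

    # Placeholder values: only allow the partner's on-premises range, arriving
    # over the VPN, to reach the scraped service's port on the GKE nodes.
    gcloud compute firewall-rules create allow-partner-vpn-scrape \
      --network my-gke-network \
      --direction INGRESS \
      --source-ranges 203.0.113.0/24 \
      --allow tcp:9090 \
      --target-tags gke-my-cluster-node

Because the rule matches on source range and port, the rest of the service IP range stays unreachable from the tunnel.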

How to assign a single static source IP address for all pods of a service or deployment in kubernetes?

Consider a microservice X which is containerized and deployed in a kubernetes cluster. X communicates with a Payment Gateway PG. However, the payment gateway requires a static IP for services contacting it as it maintains a whitelist of IP addresses which are authorized to access the payment gateway. One way for X to contact PG is through a third party proxy server like QuotaGuard which will provide a static IP address to service X which can be whitelisted by the Payment Gateway.
However, is there an inbuilt mechanism in kubernetes which can enable a service deployed in a kube-cluster to obtain a static IP address?
There's no mechanism in Kubernetes for this yet.
Other possible solutions:
If the nodes of the cluster are in a private network behind a NAT, then just add your network's default gateway address to the PG's whitelist.
If the whitelist can accept a CIDR rather than only single IPs (like 86.34.0.0/24, for example), then add your cluster's network CIDR to the whitelist.
If every node of the cluster has a public IP and you can't add a CIDR to the whitelist, then it gets more complicated:
A naive way would be to add every node's IP to the whitelist, but that doesn't scale beyond tiny clusters of just a few nodes.
If you have administrative access to your network, then even though the nodes have public IPs, you can set up a NAT for the network anyway that targets only packets with PG's IP as the destination.
If you don't have administrative access to the network, then another way is to allocate a machine with a static IP somewhere and make it act as a proxy using iptables NAT, similarly to the above. This introduces a single point of failure, though. In order to make it highly available, you could deploy it on a Kubernetes cluster with a few (2-3) replicas (this can be the same cluster where X is running; see below). Instead of using their node's IP to communicate with PG, the replicas would share a VIP managed by keepalived, and that VIP would be added to PG's whitelist (you can have a look at easy-keepalived and either try to use it directly or learn from it how it does things). This requires high privileges on the cluster: you need to be able to grant the pods of your proxy the NET_ADMIN and NET_RAW capabilities so that they can add iptables rules and set up the VIP. A minimal iptables illustration follows this list.
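As that minimal illustration of the NAT/VIP idea, with made-up addresses (198.51.100.20 stands for PG, 203.0.113.10 for the whitelisted VIP or gateway address):

    # Source-NAT only the packets destined for the payment gateway, so they
    # leave with the single whitelisted address; all other traffic is untouched.
    iptables -t nat -A POSTROUTING -d 198.51.100.20/32 -p tcp --dport 443 \
      -j SNAT --to-source 203.0.113.10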
Update:
While waiting for builds and deployments during the last few days, I've polished my old VIP-iptables scripts that I used to use as a replacement for external load balancers on bare-metal clusters, so they can now also be used to provide an egress VIP as described in the last point of my original answer. You can give them a try: https://github.com/morgwai/kevip
There are two answers to this question. For the pod IP itself, it depends on your CNI plugin: some allow it with special pod annotations. However, most CNI plugins also involve a NAT when talking to the internet, so the pod IP being static on the internal network is kind of moot; what you care about is the public IP the connection ends up coming from. So the second answer is "it depends on how your node networking and NAT are set up". This is usually up to the tool you used to deploy Kubernetes (or OpenShift in your case, I guess). With Kops it's pretty easy to tweak the VPC routing table.

Egress IP address selection

We are running a SaaS service that we are looking to migrate to Kubernetes, preferably at one of the hyperscalers. One specific issue I have not yet found a clean solution for is the need for egress IP address selection from within the application.
We deal with a large number of upstream providers that have access control and rate limiting based on source IP address. Also, a portion of our customers are using their own accounts with some of the upstream providers. To access the upstream providers in the context of their account, we need to control the source IP used for the connection from within the application.
We are currently running our services in a DMZ behind a load balancer, so direct network interface selection is already impossible. We use some iptables rules on our load balancers/gateways to do address selection based on mapped port numbers (e.g. egress connections to port 1081 are mapped to source address B and target port 80, port 1082 to source address C and port 80).
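For illustration, that port-based mapping might look roughly like this on the gateway; all addresses here are hypothetical (192.0.2.50 stands for the upstream provider, 198.51.100.11 and 198.51.100.12 for source addresses B and C):

    # Mark connections by the local port they arrived on, rewrite the
    # destination to the provider, then pick the source address by mark.
    iptables -t mangle -A PREROUTING -p tcp --dport 1081 -j MARK --set-mark 11
    iptables -t mangle -A PREROUTING -p tcp --dport 1082 -j MARK --set-mark 12
    iptables -t nat -A PREROUTING -p tcp --dport 1081 -j DNAT --to-destination 192.0.2.50:80
    iptables -t nat -A PREROUTING -p tcp --dport 1082 -j DNAT --to-destination 192.0.2.50:80
    iptables -t nat -A POSTROUTING -m mark --mark 11 -j SNAT --to-source 198.51.100.11
    iptables -t nat -A POSTROUTING -m mark --mark 12 -j SNAT --to-source 198.51.100.12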
This however is quite a fragile setup that also does not map nicely when trying to migrate to more standardized *aaS offerings.
Looking for suggestions for a better setup.
One of the things that could help you solve this is the Istio Egress Gateway, so I suggest you look into it.
Otherwise, it still depends on the particular platform and the way you deploy your cluster. For example, on AWS you can make sure your egress traffic always leaves from a predefined, known set of IPs by using instances with Elastic IPs assigned to forward your traffic (be it regular EC2 instances or AWS NAT Gateways). Even with the egress gateway above, you need some way to define a fixed IP for this, so an AWS Elastic IP (or equivalent) is a must.
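On AWS that typically amounts to something like the following (subnet and route table IDs are placeholders); the Elastic IP attached to the NAT gateway is the address you would register with the upstream providers:

    # Placeholder IDs: reserve an Elastic IP, create a NAT gateway with it in a
    # public subnet, and send the private subnets' default route through it.
    ALLOC_ID=$(aws ec2 allocate-address --domain vpc \
      --query AllocationId --output text)
    NAT_ID=$(aws ec2 create-nat-gateway --subnet-id subnet-0abc1234 \
      --allocation-id "$ALLOC_ID" \
      --query 'NatGateway.NatGatewayId' --output text)
    aws ec2 create-route --route-table-id rtb-0abc1234 \
      --destination-cidr-block 0.0.0.0/0 --nat-gateway-id "$NAT_ID"

Note that a single NAT gateway gives you one source address for everything behind it; selecting between several source addresses per connection still needs something like the egress gateway or per-route NAT instances.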

Kubernetes (on GKE) external connection through NAT for specific kube services?

I currently have two clusters on GKE - one in eu-west1-b and another in us-east1-b. The pods deployed to the nodes in these clusters need to make location-based requests (for latency testing purposes).
I also need to connect to my postgres instance on RDS, which uses IP-based whitelisting for external connections. The nodes in my clusters have ephemeral IPs so I can't use them.
I have done a lot of research and gone through lots of SO answers and docs and tutorials and come to the solution that routing traffic through a NAT is pretty much the best/only way to do this right now on GKE.
https://serverfault.com/questions/835425/kubernetes-external-connection-through-single-ip
Similar to that question above, I don't want to route all of my traffic through the NAT. My reason is that I need my requests to come from the internet gateway associated with the current node, so that they originate from a particular region.
The above question has some answers that almost get me there, but they don't include any kube-specific configuration. This is a great tutorial:
https://docs.tenable.com/pvs/deployment/Content/GoogleCloudInstructionsNatGateway.htm
But again, it is not based on kube.
My thinking is that I need to define a service for postgres in my kube cluster, and then tell it to route to the external service through the NAT. Not entirely sure where to start and would appreciate help.
A solution:
Tag your instances in different zones/regions with different tags
Create static IP addresses for each zone/region
Create NAT exit nodes (GCE instances or instance groups) using the external address from above
Create a route through each of the NAT exit nodes. Restrict each route with a destination IP range of your RDS ingress IP/32 and the network tags from Step 1 (so the instances use the correct gateway). A rough gcloud sketch of these steps follows below.
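Put together with gcloud, the per-region pieces might look roughly like this; the names, zone, tags, and the RDS address (203.0.113.25) are placeholders, and you would repeat the same for the US region:

    # One exit node per region: reserve a static IP, attach it to an
    # IP-forwarding instance, and route only RDS-bound traffic from tagged
    # nodes through that instance.
    gcloud compute addresses create nat-eu --region europe-west1
    gcloud compute instances create nat-exit-eu \
      --zone europe-west1-b \
      --can-ip-forward \
      --address nat-eu \
      --tags nat-exit-eu
    gcloud compute routes create rds-via-nat-eu \
      --destination-range 203.0.113.25/32 \
      --next-hop-instance nat-exit-eu \
      --next-hop-instance-zone europe-west1-b \
      --tags gke-eu-nodes \
      --priority 800
    # The exit node itself still needs a masquerade rule for forwarded traffic:
    # iptables -t nat -A POSTROUTING -d 203.0.113.25/32 -j MASQUERADE

Because the route only matches the RDS address and only applies to the tagged nodes, all other traffic keeps leaving through each node's own internet gateway, preserving the region-specific source for the latency tests.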