Access Redshift cluster deployed in a VPC

I have my Redshift cluster deployed in a VPC, inside private subnets. I need to allow an IP address outside the VPC to access the cluster. To whitelist that IP and access the cluster, I tried the following:
Created an inbound rule in the security group attached to the Redshift cluster, with the IP address/32 as the source, port 5439, protocol TCP, type Redshift.
Placed the Redshift cluster in a public subnet.
I also checked https://forums.aws.amazon.com/thread.jspa?threadID=134301, where someone faced the same issue.
The steps I tried didn't work. I'd appreciate any suggestion that would allow that IP address to access the cluster.
Thanks in advance.

Based on the second step you mentioned, I assume you've already moved the Redshift cluster to a public subnet in your VPC. If so, also make sure your network ACL allows ingress on port 5439 and egress on the ephemeral ports.
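As a minimal sketch of those ACL entries with boto3, using a hypothetical ACL ID and client CIDR (substitute your own):

```python
import boto3

# Hypothetical values for illustration; substitute your own.
NACL_ID = "acl-0123456789abcdef0"
CLIENT_CIDR = "203.0.113.10/32"

ec2 = boto3.client("ec2")

# Inbound: let the client reach Redshift on 5439 (protocol 6 = TCP).
ec2.create_network_acl_entry(
    NetworkAclId=NACL_ID,
    RuleNumber=110,
    Protocol="6",
    RuleAction="allow",
    Egress=False,
    CidrBlock=CLIENT_CIDR,
    PortRange={"From": 5439, "To": 5439},
)

# Outbound: NACLs are stateless, so return traffic to the client's
# ephemeral ports must be allowed explicitly.
ec2.create_network_acl_entry(
    NetworkAclId=NACL_ID,
    RuleNumber=110,
    Protocol="6",
    RuleAction="allow",
    Egress=True,
    CidrBlock=CLIENT_CIDR,
    PortRange={"From": 1024, "To": 65535},
)
```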

I think you need to make your Redshift cluster "publicly accessible".
After that, just modify the associated VPC security group to allow access from specific IP addresses, and you should be able to connect to the cluster from outside the VPC.
AWS forum
AWS documentation
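A rough sketch of both steps with boto3, using hypothetical cluster, security group, and CIDR values:

```python
import boto3

# Hypothetical identifiers; replace with your own.
CLUSTER_ID = "my-redshift-cluster"
SECURITY_GROUP_ID = "sg-0123456789abcdef0"
CLIENT_CIDR = "203.0.113.10/32"

# Flag the cluster as publicly accessible.
redshift = boto3.client("redshift")
redshift.modify_cluster(
    ClusterIdentifier=CLUSTER_ID,
    PubliclyAccessible=True,
)

# Open the Redshift port to the specific client IP only.
ec2 = boto3.client("ec2")
ec2.authorize_security_group_ingress(
    GroupId=SECURITY_GROUP_ID,
    IpProtocol="tcp",
    FromPort=5439,
    ToPort=5439,
    CidrIp=CLIENT_CIDR,
)
```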

If the IP address outside the Redshift VPC is in your AWS account, or in another account, VPC peering between the two VPCs can be an option.
If you peer the two VPCs, one containing Redshift and the other containing the source IP address, it is possible to enable network traffic between them.
You should also enable the traffic by adding route table entries for the new IP ranges.
Finally, the peered range should be added to the Redshift security group's inbound rules.
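If you go the peering route, here is a minimal same-account sketch with boto3 (all IDs and CIDRs are hypothetical; a cross-account peering would also need PeerOwnerId and acceptance from the other account):

```python
import boto3

# Hypothetical IDs and ranges for illustration.
REDSHIFT_VPC_ID = "vpc-0aaaaaaaaaaaaaaaa"
CLIENT_VPC_ID = "vpc-0bbbbbbbbbbbbbbbb"
REDSHIFT_ROUTE_TABLE_ID = "rtb-0ccccccccccccccc"
CLIENT_VPC_CIDR = "172.31.0.0/16"

ec2 = boto3.client("ec2")

# Request the peering connection between the two VPCs.
peering = ec2.create_vpc_peering_connection(
    VpcId=REDSHIFT_VPC_ID,
    PeerVpcId=CLIENT_VPC_ID,
)
pcx_id = peering["VpcPeeringConnection"]["VpcPeeringConnectionId"]

# The peer side must accept the request (same account here).
ec2.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)

# Route the peer's CIDR through the peering connection; a mirror
# route is needed in the other VPC's route table as well.
ec2.create_route(
    RouteTableId=REDSHIFT_ROUTE_TABLE_ID,
    DestinationCidrBlock=CLIENT_VPC_CIDR,
    VpcPeeringConnectionId=pcx_id,
)
```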

Related

Unable to access Kubernetes service from one cluster to another (over VPC peering)

I'm wondering if anyone can help with my issue. Here's the setup:
We have two separate Kubernetes clusters in GKE, running v1.17, and they each sit in a separate project.
We have set up VPC peering between the two projects.
On cluster 1, we have 'service1', which is exposed by an internal HTTPS load balancer; we don't want this to be public.
On cluster 2, we intend to access 'service1' via the internal load balancer, and it should do this over the VPC peering connection between the two projects.
Here's the issue:
When I'm connected via SSH on a GKE node in cluster 2, I can successfully run a curl request to https://service1.domain.com running on cluster 1 and get the expected response, so traffic is definitely routing from cluster 2 to cluster 1. However, when I run the same curl command from a pod running on a GKE node, the request times out.
I have run as much troubleshooting as I can, including telnet, traceroute, etc., and I'm really stuck on why this might be. If anyone can shed light on the difference here, that would be great.
I did wonder whether pod networking is somehow forwarding traffic over the cluster's public IP rather than over the VPC peering connection.
So it seems you're not using a "VPC-native" cluster, and what you need is "IP masquerading".
From this document:
"A GKE cluster uses IP masquerading so that destinations outside of the cluster only receive packets from node IP addresses instead of Pod IP addresses. This is useful in environments that expect to only receive packets from node IP addresses."
You can use ip-masq-agent or k8s-custom-iptables. After that it will work, since the call will appear to come from the node rather than from inside the pod.
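As a rough sketch of configuring ip-masq-agent with the Kubernetes Python client (the pod CIDR below is an assumption; list only the ranges that should not be masqueraded, so traffic to the peered VPC leaves with the node IP):

```python
from kubernetes import client, config

# Hypothetical pod range for this cluster; destinations outside the
# listed CIDRs get masqueraded to the node IP.
IP_MASQ_CONFIG = """\
nonMasqueradeCIDRs:
  - 10.4.0.0/14
masqLinkLocal: false
resyncInterval: 60s
"""

config.load_kube_config()
core = client.CoreV1Api()

# ip-masq-agent reads its settings from this ConfigMap in kube-system.
core.create_namespaced_config_map(
    namespace="kube-system",
    body=client.V1ConfigMap(
        metadata=client.V1ObjectMeta(name="ip-masq-agent"),
        data={"config": IP_MASQ_CONFIG},
    ),
)
```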
As mentioned in one of the answers, IP aliases (VPC-native) should work out of the box. If you are using a route-based GKE cluster rather than a VPC-native one, you would need to use custom routes.
As per this document:
"By default, VPC Network Peering with GKE is supported when used with IP aliases. If you don't use IP aliases, you can export custom routes so that GKE containers are reachable from peered networks."
This is also explained in this document:
"If you have GKE clusters without VPC-native addressing, you might have multiple static routes to direct traffic to VM instances that are hosting your containers. You can export these static routes so that the containers are reachable from peered networks."
The problem you're facing seems similar to the one mentioned in this SO question; perhaps your pods are using IPs outside of the VPC range, and for that reason cannot access the peered VPC?
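One way to check whether the cluster is VPC-native, sketched with the google-cloud-container client (the project, location, and cluster names are hypothetical):

```python
from google.cloud import container_v1

# Hypothetical project/location/cluster path.
NAME = "projects/my-project/locations/europe-west1/clusters/cluster-2"

client = container_v1.ClusterManagerClient()
cluster = client.get_cluster(name=NAME)

# VPC-native (alias IP) clusters give pods real VPC ranges, so peered
# networks can route back to them without custom route export.
print("VPC-native:", cluster.ip_allocation_policy.use_ip_aliases)
print("Pod range:", cluster.ip_allocation_policy.cluster_ipv4_cidr_block)
```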
UPDATE: In Google Cloud, I tried to access the service from another cluster which had VPC-native networking enabled, which I believe allows pods to use the VPC routing and possibly the internal IPs.
Problem solved :-)

MongoDB Atlas dedicated cluster: how to create a peering connection with AWS and then access the cluster without whitelisting IPs

We have a dedicated M10 cluster in MongoDB Atlas, on which I have created a peering connection with AWS to incorporate security using a VPC. I followed this MongoDB document for configuring a peering connection between AWS and the cluster:
https://docs.atlas.mongodb.com/security-vpc-peering/
The peering connection was created successfully and is active now. But the thing is, I am unable to connect to the cluster without whitelisting my IP. When I try to connect without whitelisting the IP, I get the error below:
Something went wrong MongooseServerSelectionError: Could not connect to any servers in your MongoDB Atlas cluster. One common reason is that you're trying to access the database from an IP that isn't whitelisted. Make sure your current IP address is on your Atlas cluster's IP whitelist: https://docs.atlas.mongodb.com/security-whitelist/
After whitelisting the IP, though, I am able to connect to the cluster successfully from my local environment.
What do I need in order to access the cluster within the VPC from my application? I cannot use IP whitelisting, as every user's IP cannot be whitelisted.
I have already whitelisted the CIDR block as mentioned in the above documentation.
IP whitelisting is separate from peering: peering determines the network; whitelisting determines who on the network is allowed access.
If you want to allow access from anything that has physical connectivity to the database, whitelist the entire world (0.0.0.0/0).
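For reference, a hedged sketch of adding an access-list entry through the Atlas Admin API v1.0 (the project ID and API keys are placeholders; whitelisting the peered VPC's CIDR is usually preferable to 0.0.0.0/0):

```python
import requests
from requests.auth import HTTPDigestAuth

# Hypothetical project (group) ID and programmatic API keys.
GROUP_ID = "5f0000000000000000000000"
PUBLIC_KEY, PRIVATE_KEY = "pub-key", "priv-key"

# Whitelist the peered VPC's CIDR rather than the whole world.
entry = [{"cidrBlock": "172.31.0.0/16", "comment": "peered AWS VPC"}]

resp = requests.post(
    f"https://cloud.mongodb.com/api/atlas/v1.0/groups/{GROUP_ID}/accessList",
    auth=HTTPDigestAuth(PUBLIC_KEY, PRIVATE_KEY),
    json=entry,
)
resp.raise_for_status()
print(resp.json())
```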

MongoDB Atlas Google Cloud peering fails with "an IP range in the local network overlaps with an IP range in an active peer"

I have a GCP Project "A" where I had previously added VPC peering with MongoDB Atlas:
This way my development GKE cluster (whose VPC range is 10.7.0.0/16) will support peering when accessing MongoDB. So far everything works as expected.
Now I've provisioned another GKE cluster for preproduction, with its own VPC range of 10.221.0.0/16. I've also created another database, "app-pre", in the same Mongo cluster, and wanted to add VPC peering for the new cluster.
I followed the same steps:
Mongo Atlas: add a peering connection for GCP project "A", with the VPC name and CIDR 192.168.0.0/16
GCP Create Peering Connection
The problem is I get the following error:
An ip range in the local network (10.221.0.0/16) overlaps with an ip range (10.221.0.0/16) in an active peer of the peer network
Posting this as an answer in order to help other people.
What #john-hanley mentions is correct: basically, you can't have two or more VPC peerings using overlapping IP ranges. This is because GCP would create routes with the same priority, and it would then be ambiguous where to send a packet destined for those ranges.
The message you are getting basically says that you are already using one range this way and intend to use the very same range, 10.221.0.0/16, again.
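The overlap itself is easy to reproduce with Python's ipaddress module, using the ranges from the question:

```python
import ipaddress

# Range already active in a peer, and the new local range.
existing_peer = ipaddress.ip_network("10.221.0.0/16")
new_local = ipaddress.ip_network("10.221.0.0/16")
atlas_cidr = ipaddress.ip_network("192.168.0.0/16")

# Identical (or nested) ranges overlap, so the second peering is rejected.
print(new_local.overlaps(existing_peer))  # True  -> conflict
print(new_local.overlaps(atlas_cidr))     # False -> no conflict
```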

How to reach hosted Postgres in GCP from a Kubernetes cluster, directly via private IP

So, I created a PostgreSQL instance in Google Cloud, and I have a Kubernetes cluster with containers that I would like to connect to it. I know that the Cloud SQL proxy sidecar is one method, but the documentation says that I should be able to connect to the private IP as well.
I notice that a VPC peering connection was automatically created for me. It's set for a destination network of 10.108.224.0/24, which is where the instance is, with a "Next hop region" of us-central1, where my K8s cluster is.
And yet when I try the private IP via TCP on port 5432, I time out. I see nothing in the documentation about having to modify firewall rules to make this work, but I tried that anyway; I found the firewall interface in GCP rather clumsy and confusing compared with writing my own rules using iptables, and my attempts failed.
Beyond going to the Cloud SQL sidecar, does anyone have an idea why this would not work?
Thanks.
Does your GKE cluster meet the environment requirements for private IP? It needs to be a VPC-native cluster in the same VPC and region as your Cloud SQL instance.
In the end, the simplest thing to do was to just use the Google Cloud SQL proxy. As opposed to a sidecar, since I have multiple containers needing DB access, I put the proxy into my cluster as its own container with a Service, and it seems to just work.
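A minimal sketch of connecting through such a proxy Service with psycopg2 (the Service name and credentials are hypothetical):

```python
import psycopg2

# "cloud-sql-proxy" is a hypothetical ClusterIP Service fronting the
# proxy Deployment in the same namespace; credentials are placeholders.
conn = psycopg2.connect(
    host="cloud-sql-proxy",
    port=5432,
    dbname="appdb",
    user="appuser",
    password="change-me",
)
with conn.cursor() as cur:
    cur.execute("SELECT 1")
    print(cur.fetchone())
conn.close()
```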
You can create VPC peering over private IP only if your Cloud SQL instance and your compute are both in the same VPC.
For the Cloud SQL instance and compute VM you can choose the VPC and subnet, set up the same for GKE, and then you can make the connection from a pod to Cloud SQL.

How can I access an external MongoDB server on an EC2 instance from an app running inside a Kubernetes cluster created with kops?

I have a situation where my MongoDB is running on a separate EC2 instance and my app is running inside a Kubernetes cluster created by kops. Now I want to access the DB from the app running inside k8s.
For this, I tried VPC peering between the k8s VPC and the EC2 instance's VPC, setting the requester VPC to the k8s VPC and the accepter VPC to the instance's VPC. After that, I also added an ingress rule to the EC2 instance's security group allowing access from the k8s cluster's security group on port 27017.
But when I SSH'd into a k8s node and tried with telnet, the connection failed.
Is there anything incorrect in the procedure? Is there any better way to handle this?
CIDR blocks:
K8S VPC - 172.20.0.0/16
MongoDB VPC - 172.16.0.0/16
What are the CIDR blocks of the two VPCs? They mustn't overlap. In addition, you need to make sure that communication is allowed both ways when modifying the security groups. That is, in addition to modifying your MongoDB VPC to allow inbound traffic from the K8s VPC, you need to make sure the K8s VPC allows inbound traffic from the MongoDB VPC.
First, this does not seem to be a Kubernetes issue.
Make sure you have a proper route from the Kubernetes VPC to the MongoDB node and vice versa.
Make sure the required ports are open in the security groups of both VPCs:
Allow inbound traffic from the Kubernetes VPC to the MongoDB VPC.
Allow inbound traffic from the MongoDB VPC to the Kubernetes VPC.
Make sure the namespace security allows the inbound and outbound traffic. (A quick connectivity check from inside the cluster is sketched below.)
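A quick pymongo check you can run from a pod or node, assuming a hypothetical private IP for the MongoDB instance:

```python
from pymongo import MongoClient
from pymongo.errors import ServerSelectionTimeoutError

# Hypothetical private IP of the MongoDB EC2 instance in the peered VPC.
MONGO_HOST = "172.16.0.25"

client = MongoClient(MONGO_HOST, 27017, serverSelectionTimeoutMS=3000)
try:
    client.admin.command("ping")  # round-trips to the server
    print("Reachable over the peering connection")
except ServerSelectionTimeoutError as exc:
    # Same failure mode as the telnet test: check routes and SG rules.
    print("Connection failed:", exc)
```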