Our Redshift cluster resides in Zone A.
When our Lambda function uses a Zone A subnet, it can connect to Redshift.
When our Lambda function uses a subnet other than Zone A, it times out.
The workaround of ALLOWing connections to Redshift on port 5439 from 0.0.0.0/0 is not desirable.
We have our Lambda functions and Redshift cluster in the same VPC.
Lambda functions have 4 dedicated subnets (one per zone)
Redshift has 4 dedicated subnets (one per zone) as well
Lambda functions have their own security group (SG)
The Redshift cluster has its own SG as well.
Redshift SG ALLOWs port 5439 from Lambda SG and Admin SG
Enhanced VPC Routing is enabled
Cluster Subnet Groups include all 4 Redshift subnets (one per zone)
No issues when allowing port 5439 from 0.0.0.0/0 on Redshift SG
When we disable the 0.0.0.0/0 rule, flow logs show no REJECTs for Zone A to Zone A traffic, but they do show REJECTs for traffic from other zones to Zone A (a sample flow-log query follows this list).
All Lambda subnets use a NAT that exists in Zone A
All Redshift subnets use an IGW that exists in
All Network ACLs currently allow all (default)
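If it helps to confirm which flows are being rejected, a CloudWatch Logs Insights query along these lines can be run against the flow logs (a sketch that assumes your flow logs are delivered to CloudWatch Logs in the default 14-field format; the field order will differ with a custom format):

# Logs Insights sketch: rejected traffic to Redshift's port, grouped by source/destination
parse @message "* * * * * * * * * * * * * *"
  as version, account, eni, srcaddr, dstaddr, srcport, dstport,
     protocol, packets, bytes, windowStart, windowEnd, action, logStatus
| filter dstport = "5439" and action = "REJECT"
| stats count(*) by srcaddr, dstaddr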
I was stuck in a similar situation. Adding the NAT gateway's elastic ip to the inbound rule of Redshift's security group for port 5439 fixed it for me.
Steps:
Find the Lambda function's private subnet that uses a NAT gateway (subnet-abc)
Go to VPC console > Subnets > subnet-abc > Route table
In the route table's routes, you can find the NAT gateway being used (nat-abcdefg)
Go to VPC console > NAT Gateways > nat-abcdefg and get the elastic IP used by this NAT gateway (xx.yy.zz.pqr)
Add an inbound rule for this elastic IP in Redshift's security group (port = 5439, CIDR = xx.yy.zz.pqr/32)
Voila! Lambda connects to Redshift.
Note that before doing this, Lambda should be configured in the same VPC as Redshift and use the appropriate private subnet (the one configured to use the NAT gateway), as the OP described.
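If you prefer the CLI, the same steps look roughly like this (a sketch; the subnet, NAT gateway, and security group IDs are placeholders, and xx.yy.zz.pqr stands for the elastic IP found above):

# Find the NAT gateway in the Lambda subnet's route table
aws ec2 describe-route-tables \
  --filters Name=association.subnet-id,Values=subnet-abc \
  --query 'RouteTables[].Routes[].NatGatewayId'

# Look up the elastic IP attached to that NAT gateway
aws ec2 describe-nat-gateways \
  --nat-gateway-ids nat-abcdefg \
  --query 'NatGateways[].NatGatewayAddresses[].PublicIp'

# Allow that elastic IP into Redshift's security group on 5439
aws ec2 authorize-security-group-ingress \
  --group-id <redshift-sg-id> \
  --protocol tcp --port 5439 \
  --cidr xx.yy.zz.pqr/32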
I have a GCP Project "A" where I had previously added VPC peering with MongoDB Atlas:
This way my development GKE cluster (whose VPC range is 10.7.0.0/16) will support peering when accessing MongoDB. So far everything works as expected.
Now I've provisioned another GKE cluster for preproduction, with its own VPC range on 10.221.0.0/16. I've also created another database in the same Mongo cluster "app-pre" and wanted to add VPC peering for the new cluster.
I followed the same steps:
Mongo Atlas: add peering connection for GCP project "A", VPC name and CIDR 192.168.0.0/16
GCP Create Peering Connection
The problem is I get the following error:
An ip range in the local network (10.221.0.0/16) overlaps with an ip range (10.221.0.0/16) in an active peer of the peer network
Posting this as an answer in order to help other people.
What @john-hanley mentions is correct: you can't have two or more VPC peerings that use overlapping IP ranges, because GCP would create routes with the same priority and it would then be ambiguous where to send packets destined for those ranges.
The message you are getting basically says that one peering is already using a range this way and that you intend to use the very same range, 10.221.0.0/16, again.
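If you want to double-check which ranges an active peering already covers before picking a non-overlapping CIDR for the new cluster, something like this can help (a sketch; the network, peering, and region names are placeholders):

# List the peerings on the VPC
gcloud compute networks peerings list --network=my-vpc

# List the routes imported from an active peer to see which CIDRs are already in use
gcloud compute networks peerings list-routes my-peering \
  --network=my-vpc --region=us-central1 --direction=INCOMING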
I have my Redshift cluster deployed in a VPC inside private subnets. I need to allow an IP address outside the VPC to access the cluster. To whitelist that IP and access the cluster, I tried the following:
Created an inbound rule in the security group attached to the Redshift cluster, with ip-address/32 as the source, port 5439, protocol TCP, type Redshift.
Placed the Redshift cluster in a public subnet.
I also checked https://forums.aws.amazon.com/thread.jspa?threadID=134301, where someone faced the same issue.
The steps I tried didn't work. I'd appreciate any suggestion that would let that IP address access the cluster.
Thanks in advance.
Based on your second step, I assume you've already put the Redshift cluster in a public subnet in your VPC; then make sure your network ACL allows ingress on port 5439 and egress on the ephemeral ports.
I think you need to make your Redshift cluster "publicly accessible".
After that, just modify the associated VPC security group to allow access from specific IP addresses, and you should be able to connect to the cluster from outside the VPC.
AWS forum
AWS documentation
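For reference, the same change via the AWS CLI might look roughly like this (a sketch; the cluster identifier, security group ID, and IP address are placeholders):

# Make the cluster publicly accessible
aws redshift modify-cluster \
  --cluster-identifier my-redshift-cluster \
  --publicly-accessible

# Allow the external IP into the cluster's security group on 5439
aws ec2 authorize-security-group-ingress \
  --group-id <redshift-sg-id> \
  --protocol tcp --port 5439 \
  --cidr 203.0.113.10/32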
If the IP address outside the Redshift VPC is in your AWS account, or in another account, then VPC peering between the two VPCs can be an option.
If you peer the two VPCs (the one with Redshift and the VPC of the other IP address), it is possible to enable network traffic between the two.
You also need to enable the traffic by adding route table entries for the new IP ranges.
And the appropriate security group entries should be added to Redshift's inbound rules.
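A rough CLI outline of that peering setup (a sketch; all IDs are placeholders, with 10.0.0.0/16 standing for the Redshift VPC's CIDR and 10.1.0.0/16 for the client VPC's CIDR):

# Create and accept the peering between the two VPCs
aws ec2 create-vpc-peering-connection \
  --vpc-id vpc-redshift --peer-vpc-id vpc-client
aws ec2 accept-vpc-peering-connection \
  --vpc-peering-connection-id pcx-0123456789abcdef0

# Route each VPC's traffic for the other VPC's CIDR through the peering
aws ec2 create-route --route-table-id rtb-redshift \
  --destination-cidr-block 10.1.0.0/16 \
  --vpc-peering-connection-id pcx-0123456789abcdef0
aws ec2 create-route --route-table-id rtb-client \
  --destination-cidr-block 10.0.0.0/16 \
  --vpc-peering-connection-id pcx-0123456789abcdef0

# Allow the client VPC's CIDR into Redshift's inbound rules on 5439
aws ec2 authorize-security-group-ingress \
  --group-id <redshift-sg-id> \
  --protocol tcp --port 5439 --cidr 10.1.0.0/16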
I have a situation where my MongoDB is running on a separate EC2 instance and my app is running inside a Kubernetes cluster created by kops. Now I want to access the DB from the app running inside k8s.
For this, I tried VPC peering between the k8s VPC and the EC2 instance's VPC, setting the requester VPC to the k8s VPC and the accepter VPC to the instance's VPC. After that, I also added an ingress rule to the EC2 instance's security group allowing access from the k8s cluster's security group on port 27017.
But when I ssh'd into a k8s node and tried with telnet, the connection failed.
Is there anything incorrect in the procedure? Is there any better way to handle this?
CIDR blocks:
K8S VPC - 172.20.0.0/16
MongoDB VPC - 172.16.0.0/16
What are the CIDR blocks of the two VPCs? They mustn't overlap. In addition, you need to make sure that communication is allowed to travel both ways when modifying the security groups. That is, in addition to modifying your MongoDB VPC to allow inbound traffic from the K8s VPC, you need to make sure the K8s VPC allows inbound traffic from the MongoDB VPC.
First, this does not seem to be a Kubernetes issue.
Make sure you have the proper routes from the Kubernetes nodes to the MongoDB node and vice versa (see the sketch after this list)
Make sure the required ports are open in the security groups of both VPCs
Allow inbound traffic from the Kubernetes VPC to the MongoDB VPC
Allow inbound traffic from the MongoDB VPC to the Kubernetes VPC
Make sure the namespace's security settings allow the inbound and outbound traffic
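A minimal sketch of the route and security group pieces, using the CIDRs from the question (the route table, peering connection, and security group IDs are placeholders):

# From the K8s route tables, send the MongoDB VPC's CIDR over the peering, and vice versa
aws ec2 create-route --route-table-id rtb-k8s \
  --destination-cidr-block 172.16.0.0/16 \
  --vpc-peering-connection-id pcx-0123456789abcdef0
aws ec2 create-route --route-table-id rtb-mongo \
  --destination-cidr-block 172.20.0.0/16 \
  --vpc-peering-connection-id pcx-0123456789abcdef0

# Allow the K8s VPC's CIDR into the MongoDB instance's security group on 27017
aws ec2 authorize-security-group-ingress \
  --group-id <mongodb-sg-id> \
  --protocol tcp --port 27017 --cidr 172.20.0.0/16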
I am trying to further understand what exactly is happening when I provision a private cluster in Google's Kubernetes Engine.
Google provides this example here of provisioning a private cluster where the control plane services (e.g. Kubernetes API) live on the 172.16.0.16/28 subnet.
https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters
gcloud beta container clusters create pr-clust-1 \
--private-cluster \
--master-ipv4-cidr 172.16.0.16/28 \
--enable-ip-alias \
--create-subnetwork ""
When I run this command, I see that:
I now have a few GKE subnets in my VPC that serve as the cluster subnets for nodes and services. These are in the 10.0.0.0/8 range.
I don't have any subnets in the 172.16.0.0/16 address space.
I do have some new peering rules and routes that seem to be related. For example, there is a new route peering-route-a08d11779e9a3276 with a destination address range of 172.16.0.16/28 and next hop gke-62d565a060f347e0fba7-3094-3230-peer. This peering then points to gke-62d565a060f347e0fba7-3094-bb01-net.
gcloud compute networks subnets list | grep us-west1
#=>
default us-west1 default 10.138.0.0/20
gke-insti3-subnet-62d565a0 us-west1 default 10.2.56.0/22
gcloud compute networks peerings list
#=>
NAME NETWORK PEER_PROJECT PEER_NETWORK AUTO_CREATE_ROUTES STATE STATE_DETAILS
gke-62d565a060f347e0fba7-3094-3230-peer default gke-prod-us-west1-a-4180 gke-62d565a060f347e0fba7-3094-bb01-net True ACTIVE [2018-08-23T16:42:31.351-07:00]: Connected.
Is gke-62d565a060f347e0fba7-3094-bb01-net a peered VPC in which the Kubernetes management endpoints live (the control plane stuff in the 172.16/16 range) that Google is managing for the GKE service?
Further - how are my requests making it to the Kubernetes API server?
The Private Cluster feature of GKE depends on the Alias IP Ranges feature of VPC networking, so there are multiple things happening when you create a private cluster:
The --enable-ip-alias flag tells GKE to use a subnetwork that has two secondary IP ranges: one for pods and one for services. This allows the VPC network to understand all the IP addresses in your cluster and route traffic appropriately.
The --create-subnetwork flag tells GKE to create a new subnetwork (gke-insti3-subnet-62d565a0 in your case) and choose its primary and secondary ranges automatically. Note that you could instead choose the secondary ranges yourself with --cluster-ipv4-cidr and --services-ipv4-cidr. Or you could even create the subnetwork yourself and tell GKE to use it with the flags --subnetwork, --cluster-secondary-range-name, and --services-secondary-range-name.
The --private-cluster flag tells GKE to create a new VPC network (gke-62d565a060f347e0fba7-3094-bb01-net in your case) in a Google-owned project and connect it to your VPC network using VPC Network Peering. The Kubernetes management endpoints live in the range you specify with --master-ipv4-cidr (172.16.0.16/28 in your case). An Internal Load Balancer is also created in the Google-owned project and this is what your worker nodes communicate with. This ILB allows traffic to be load-balanced across multiple VMs in the case of a Regional Cluster. You can find this internal IP address as the privateEndpoint field in the output of gcloud beta container clusters describe. The important thing to understand is that all communication between master VMs and worker node VMs happens over internal IP addresses, thanks to the VPC peering between the two networks.
Your private cluster also has an external IP address, which you can find as the endpoint field in the output of gcloud beta container clusters describe. This is not used by the worker nodes, but is typically used by customers to manage their cluster remotely, e.g., using kubectl.
You can use the Master Authorized Networks feature to restrict which IP ranges (both internal and external) have access to the management endpoints. This feature is strongly recommended for private clusters, and is enabled by default when you create the cluster using the gcloud CLI.
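For example, something along these lines (a sketch; the zone and authorized CIDR are placeholders, and the exact field paths in the describe output can vary by gcloud/API version):

# Public and private endpoints of the cluster
gcloud container clusters describe pr-clust-1 --zone us-west1-a \
  --format="value(endpoint,privateClusterConfig.privateEndpoint)"

# Restrict access to the management endpoints to a specific range
gcloud container clusters update pr-clust-1 --zone us-west1-a \
  --enable-master-authorized-networks \
  --master-authorized-networks 203.0.113.0/24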
Hope this helps!
In the EC2 instance's security group (inbound rules), I have opened the MongoDB port to the Lambda function's security group.
EC2 inbound rules screenshot
In the Lambda function, I have also selected the VPC, subnets, and security group.
Lambda vpc configuration screenshot
I have also configured a NAT gateway for the VPC.
Still, I am not able to access MongoDB.
You should add the CIDR ranges of the subnets in your VPC (the ones used by your Lambda function) to the EC2 instance's security group.
e.g.
but in this case, change port 3306 to the MongoDB port, e.g. 27017
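As a sketch with the AWS CLI (the security group ID and subnet CIDR are placeholders; repeat the rule for each Lambda subnet):

# Allow a Lambda subnet's CIDR into the EC2 instance's security group on 27017
aws ec2 authorize-security-group-ingress \
  --group-id <ec2-instance-sg-id> \
  --protocol tcp --port 27017 \
  --cidr 10.0.1.0/24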