I'm trying to set up VPC peering from my MongoDB Atlas cluster to my Kubernetes EKS cluster on AWS. The peering is established successfully, but I get no connection to the cluster from my pods.
The peering is set up.
The default entry for the whitelist is added as well. Once the connection works, I will replace it with a security group.
The peering on AWS is accepted and "DNS resolution from requester VPC to private IP" is enabled.
The route has been added to the public route table of the K8s cluster.
When I connect to a pod and try to establish a connection with the following command:
# mongo "mongodb://x.mongodb.net:27017,y.mongodb.net:27017,z.mongodb.net:27017/test?replicaSet=Cluster0-shard-0" --ssl --authenticationDatabase admin --username JackBauer
I get "CONNECT_ERROR" for every endpoint.
What am I missing?
NOTE:
I've just created a new paid cluster and the VPC peering is working perfectly. Might this feature be limited to paid clusters only?
Well... as the documentation states:
You cannot configure a Network Peering Connection on M0 Free Tier or M2/M5 shared clusters.
Peering does not work on shared clusters, which, now that I think about it, makes total sense.
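If in doubt about the tier, the instance size can be read back from the Atlas API (a sketch; GROUP_ID, the cluster name, and the API key pair are placeholders, and the v1.0 path assumes the classic Atlas Admin API):

# Fetch the cluster description; the response JSON includes
# providerSettings.instanceSizeName (anything below M10 cannot peer)
curl --user "PUBLIC_KEY:PRIVATE_KEY" --digest \
  "https://cloud.mongodb.com/api/atlas/v1.0/groups/GROUP_ID/clusters/Cluster0"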
So, I'm trying to set up MongoDB Atlas PrivateLink to my AWS EKS cluster, but it seems like my pod can't manage to connect.
I followed this guide https://aws.amazon.com/blogs/apn/connecting-applications-securely-to-a-mongodb-atlas-data-plane-with-aws-privatelink/
and set up the following:
Created a new VPC with 3 availability zones
Created an EKS cluster and attached it to the VPC (the EKS cluster has private + public networking enabled)
Used the private subnets and the name of my newly created VPC when configuring the PrivateLink endpoint in MongoDB Atlas
Afterwards I ran the command that MongoDB shows and waited for the endpoint to be created
It shows "Available" under "Endpoint Status" and "Ready for connection requests" under "Endpoint Service Status"
I use the correct username/password and the Mongo URI is correct (I whitelisted my computer, tested, and it worked)
I'm rather new to the AWS PrivateLink setup and can't seem to get it working. Should I perhaps use the public subnet IDs? What could be the issue?
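In case it helps to narrow this down: with PrivateLink, the hostnames in the Atlas connection string should resolve, from inside the VPC, to the private IPs of the interface endpoint in your subnets. A rough check (the hostname and endpoint ID are placeholders):

# From a pod inside EKS: the PrivateLink hostname from the Atlas connection
# string must resolve to private IPs of the interface endpoint
kubectl run net-debug --rm -i --restart=Never --image=busybox:1.36 -- \
  nslookup cluster0-pl-0.xxxx.mongodb.net

# Confirm which subnets the interface endpoint was actually placed in
aws ec2 describe-vpc-endpoints --vpc-endpoint-ids vpce-0123456789abcdef0 \
  --query "VpcEndpoints[0].SubnetIds"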
We have a dedicated M10 cluster in MongoDB Atlas, on which I have created a peering connection with AWS to add network-level security using a VPC. I followed this MongoDB document for configuring the peering connection between AWS and the cluster.
https://docs.atlas.mongodb.com/security-vpc-peering/
The peering connection is created successfully and is active now. But the thing is, I am unable to connect to the cluster without whitelisting my IP. When I try to connect without whitelisting the IP, it gives the error below:
Something went wrong MongooseServerSelectionError: Could not connect
to any servers in your MongoDB Atlas cluster. One common reason is
that you're trying to access the database from an IP that isn't
whitelisted. Make sure your current IP address is on your Atlas
cluster's IP whitelist:
https://docs.atlas.mongodb.com/security-whitelist/
After whitelisting the IP, however, I am able to connect to the cluster successfully from my local environment.
What do I need in order to access the cluster within the VPC from my application? I cannot use IP whitelisting, as every user's IP cannot be whitelisted.
I have already whitelisted the CIDR block as mentioned in the above documentation.
IP whitelisting is separate from peering. Peering determines the network; whitelisting determines who on the network is allowed access.
If you want to allow access from anything that has physical connectivity to the database, whitelist the entire world (0.0.0.0/0).
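In practice that usually means whitelisting your application VPC's CIDR rather than individual client IPs. As a sketch, this can also be done via the Atlas API instead of the UI (GROUP_ID, the API key pair, and the CIDR are placeholders; the accessList path assumes the v1.0 Admin API, which replaced the older whitelist endpoint):

# Add the peered application VPC's CIDR to the project IP access list
curl --user "PUBLIC_KEY:PRIVATE_KEY" --digest \
  --header "Content-Type: application/json" \
  --request POST \
  "https://cloud.mongodb.com/api/atlas/v1.0/groups/GROUP_ID/accessList" \
  --data '[{ "cidrBlock": "172.31.0.0/16", "comment": "app VPC via peering" }]'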
I have a GCP project "A" where I had previously added VPC peering with MongoDB Atlas.
This way my development GKE cluster (whose VPC range is 10.7.0.0/16) will support peering when accessing MongoDB. So far everything works as expected.
Now I've provisioned another GKE cluster for preproduction, with its own VPC range on 10.221.0.0/16. I've also created another database in the same Mongo cluster "app-pre" and wanted to add VPC peering for the new cluster.
I followed the same steps:
Mongo Atlas: add a peering connection for GCP project "A", with the VPC name and CIDR 192.168.0.0/16
GCP Create Peering Connection
The problem is I get the following error:
An ip range in the local network (10.221.0.0/16) overlaps with an ip range (10.221.0.0/16) in an active peer of the peer network
Posting this as an answer in order to help other people.
What @john-hanley mentions is correct: basically, you can't have 2 or more VPC peerings using overlapping IP ranges. This is because GCP routes would be created with the same priority, and it would then be ambiguous where to send packets destined for those ranges.
The message you are getting says that you are already using one range this way and intend to use this very same range, 10.221.0.0/16, again.
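A quick way to see which peerings and ranges are already active on the network, so a non-overlapping CIDR can be picked for the new cluster (the network and project names are placeholders):

# List existing peerings on the VPC network, including the Atlas-created one
gcloud compute networks peerings list --network=my-network --project=project-a

# List subnet ranges already in use, to find a free CIDR for the new GKE cluster
gcloud compute networks subnets list --network=my-network \
  --format="table(name, region, ipCidrRange)"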
I have a situation where my MongoDB is running on a separate EC2 instance and my app is running inside a Kubernetes cluster created by kops. Now I want to access the DB from the app running inside K8s.
For this, I tried VPC peering between the K8s VPC and the EC2 instance's VPC, setting the requester VPC to the K8s VPC and the accepter VPC to the instance's VPC. After that, I also added an ingress rule to the EC2 instance's security group allowing access from the K8s cluster's security group on port 27017.
But when I SSH'd into a K8s node and tried with telnet, the connection failed.
Is there anything incorrect in the procedure? Is there any better way to handle this?
CIDR blocks:
K8S VPC - 172.20.0.0/16
MongoDB VPC - 172.16.0.0/16
What are the CIDR blocks of the two VPCs? They mustn't overlap. In addition, you need to make sure that communication is allowed to travel both ways when modifying the security groups. That is, in addition to modifying your MongoDB VPC to allow inbound traffic from the K8s VPC, you need to make sure the K8s VPC allows inbound traffic from the MongoDB VPC.
First, this does not seem to be a Kubernetes issue.
Make sure you have the proper route from the Kubernetes VPC to the MongoDB node and vice versa
Make sure the required ports are open in the security groups of both VPCs
Allow inbound traffic from the Kubernetes VPC to the MongoDB VPC
Allow inbound traffic from the MongoDB VPC to the Kubernetes VPC
Make sure any namespace-level security (e.g. network policies) allows the inbound and outbound traffic
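For reference, a minimal sketch of the AWS side of those checks, using the CIDRs from the question (the route table, peering, and security group IDs are placeholders):

# Route from the K8s VPC's route table to the MongoDB VPC over the peering
aws ec2 create-route --route-table-id rtb-11111111 \
  --destination-cidr-block 172.16.0.0/16 \
  --vpc-peering-connection-id pcx-0123456789abcdef0

# ...and the return route from the MongoDB VPC back to the K8s VPC
aws ec2 create-route --route-table-id rtb-22222222 \
  --destination-cidr-block 172.20.0.0/16 \
  --vpc-peering-connection-id pcx-0123456789abcdef0

# Allow mongod's port from the whole K8s VPC CIDR; whitelisting the CIDR
# is the simplest variant and avoids cross-VPC security group references
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 27017 --cidr 172.20.0.0/16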
I want to connect my Lambda function to Mongo Atlas. It was all working fine but I needed to move my function inside a VPC so I could use redis.
Now I cannot connect to my database.
I looked at the security group on the VPC and added the MongoDB port, but with no joy.
Also, the Mongo IP whitelist is as follows for now:
0.0.0.0/0 (includes your current IP address)
Is there anything else I should try?
Thank you
I needed to move my function inside a VPC so I could use redis.
If you are:
Using dedicated MongoDB Atlas instances (i.e. not shared M0, M2, or M5 clusters).
And the MongoDB Atlas deployment is hosted on AWS.
Then you could follow these instructions to set up a VPC peering connection.
Please note that MongoDB Atlas supports VPC peering with other AWS VPCs in the same region. AWS does not support cross-region VPC peering. For multi-region clusters, you must create VPC peering connections per-region.
See also the tutorial shown on: Introducing VPC peering to MongoDB Atlas
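On the AWS side, the accept-and-route part of those instructions looks roughly like this (Atlas initiates the peering request; the IDs and the Atlas CIDR below are placeholders):

# Accept the peering request that Atlas created against your VPC
aws ec2 accept-vpc-peering-connection \
  --vpc-peering-connection-id pcx-0123456789abcdef0

# Send traffic for the Atlas CIDR through the peering connection, in the
# route table used by the Lambda function's subnets
aws ec2 create-route --route-table-id rtb-0123456789abcdef0 \
  --destination-cidr-block 192.168.248.0/21 \
  --vpc-peering-connection-id pcx-0123456789abcdef0

# Finally, whitelist the Lambda VPC's CIDR in Atlas instead of 0.0.0.0/0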
I struggled for days using this tutorial:
https://www.mongodb.com/blog/post/introducing-vpc-peering-for-mongodb-atlas
but it worked in the end, once I found a step the tutorial misses (everything had been fine when I used the default VPC).
When creating a custom VPC, route table, and subnet, the subnet was what got me: auto-assign public IP needs to be enabled on the subnet.
PS: you need at least two subnets for Lambda, so create one more if you haven't.
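For anyone hitting the same missing step, both fixes can be applied from the CLI as well (the subnet, VPC, CIDR, and availability zone values are placeholders):

# Enable auto-assign public IPv4 addresses on the existing subnet
aws ec2 modify-subnet-attribute --subnet-id subnet-0123456789abcdef0 \
  --map-public-ip-on-launch

# Create the second subnet Lambda needs, in a different availability zone
aws ec2 create-subnet --vpc-id vpc-0123456789abcdef0 \
  --cidr-block 10.0.2.0/24 --availability-zone us-east-1b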