AWS + Elastic Beanstalk + MongoDB

I am trying to set up my microservices architecture using AWS Elastic Beanstalk and Docker. That is very easy to do, but when I launch the environment, it launches into the default VPC, thus giving public IPs to the instances. Right now, that's not too much of a concern.
What I am having a problem with is how to set up the MongoDB architecture. I have read "recommended way to install mongodb on elastic beanstalk" but still remain unsure how to set this up.
So far I have tried:
Using the CloudFormation template from AWS here: http://docs.aws.amazon.com/quickstart/latest/mongodb/step2b.html to launch a primary with two secondary (replica) nodes into the default VPC, but this assigns public access to the Mongo nodes. I am also not sure how to connect my application, since this template does not add a NAT instance: do I simply connect directly to the primary node? If that node fails, will the secondary that takes over get the same IP as the old primary, so that all connections remain consistent? Or do I need to add my own NAT instance?
I have also tried launching MongoDB into its own VPC (https://docs.aws.amazon.com/quickstart/latest/mongodb/step2a.html) and giving access via the NAT, but this means having two different VPCs (one for my EB instances and one for MongoDB). In this case, would I connect to the NAT from my EB VPC in order to route requests to the databases?
I have also tried launching a new VPC for the MongoDB architecture first and then launching EB into this VPC. For some reason, the load balancer setup won't let me select the subnets, giving me the error: "Custom Availability Zones option not supported for VPC environments".
I am trying to launch all this in us-west-1. It's been two days now and I have no idea where to go or what the right way to tackle this issue is. I want the databases to be private (no public access) behind a NAT gateway, so my third method seems to be what I want, but I cannot seem to add the new EB instances/load balancer into the newly created MongoDB VPC. This is the setup I'm going for: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/images/default-vpc-diagram.png but I am trying to use the templates to do it.
What am I doing wrong here? Any help would be much, much appreciated. I have read up a lot about this but still am not sure where to go from here.
Thanks a lot in advance!

I'm having the same issue. There seems to be a complete lack of documentation on how to connect an Elastic Beanstalk Node.js/Express app to the cluster set up by the AWS MongoDB Quick Start documentation.
When I run the AWS Mongo Quick Start, though, it launches a NAT instance, which is public, and a private primary node... maybe this is part of your issue?
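On the failover part of the original question: the nodes keep their own IPs. MongoDB drivers are replica-set aware, so the application lists all members in the connection string and the driver tracks which one is primary. A minimal sketch with pymongo, where the private IPs, the replica set name "rs0", and the database/collection names are placeholders:

    # Minimal sketch: replica-set-aware connection with pymongo.
    # Member addresses and the replica set name "rs0" are placeholders.
    from pymongo import MongoClient

    client = MongoClient(
        "mongodb://10.0.1.10:27017,10.0.2.10:27017,10.0.3.10:27017/"
        "?replicaSet=rs0"
    )

    # The driver discovers the current primary and routes writes to it.
    # After a failover it re-discovers the newly elected primary, so the
    # application never has to track which node holds which IP.
    db = client["mydatabase"]  # placeholder database name
    db["mycollection"].insert_one({"status": "ok"})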

Related

Can multiple servers access the same MongoDB?

I am going to create a load balancer in Azure. I have a VM that is already running; I am going to take a backup of that existing VM and create another VM from the backup. So the two servers will have the same configuration and will use the same credentials.
The existing server has MongoDB configured, and the new VM will have the same configuration as the old one. What I want to know is: can the same MongoDB be accessed by two servers that have the same configuration?
Will it create any mess or give any errors?
Can I use it as mentioned above?
Do I need to configure another MongoDB for the second server?
Can anyone please clarify these questions? It would be great to have a clear explanation. Thank you.
MongoDB has built-in support for horizontal scalability and high availability, meaning that you don't need a third-party load balancer; the mongos service, part of a MongoDB sharded cluster, is the load balancer itself. Check the official documentation for MongoDB replication and sharding.
On your questions:
Will it create any mess or give any errors?
If you just copy the data to another VM it will be fine; as long as you don't write to one of the VMs, you can load-balance reads between these independent VMs. But this is a strange approach when you have MongoDB's built-in replication mechanism and can just add the second VM as a SECONDARY member of a replica set.
Can I use it as mentioned above?
Sure, you can also use this approach, but why would you need to?
Do I need to configure another MongoDB for the second server?
It depends on the use case, but in general you would prefer to create a three-member replica set; or, if your database is large and write performance is a strong requirement, you may need to distribute the database between multiple servers (shards), in which case you will need more than just three servers.
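As a rough sketch of the replica-set route suggested above (not a definitive procedure; hostnames and ports are placeholders, and the second VM must already be running mongod with the same replica set name), the new VM can be added as a SECONDARY by reconfiguring the set from Python:

    # Rough sketch: add a second VM as a SECONDARY member of an
    # existing replica set via pymongo admin commands.
    # Hostnames ("vm1"/"vm2") and the port are placeholders.
    from pymongo import MongoClient

    # Connect to the current primary.
    client = MongoClient("mongodb://vm1.example.internal:27017/")

    # Fetch the current replica set configuration.
    config = client.admin.command("replSetGetConfig")["config"]

    # Append the new VM as an additional member and bump the version.
    config["members"].append({
        "_id": max(m["_id"] for m in config["members"]) + 1,
        "host": "vm2.example.internal:27017",
    })
    config["version"] += 1

    # Apply the new configuration; the new member performs an initial
    # sync and then serves as a SECONDARY.
    client.admin.command("replSetReconfig", config)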

Connect AWS Elastic Beanstalk to MongoDB Atlas

What is the simplest way to let AWS connect to Atlas without allowing connections from any IP?
It looks like when you create a new environment in Elastic Beanstalk you can choose to have an Elastic IP. Assuming this would work, it looks like you can only set this when you create an environment. It doesn't seem I can edit the network settings of my running environments (and it doesn't even look like I can pause the environment, assuming that's the reason the network settings are not modifiable).
Thanks!
I realise this is pretty late, but I think the best way would be to set up VPC peering. I think you will have to use a custom VPC within AWS to be able to do that.
See Atlas network access
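For what the AWS side of that peering might look like, a hedged boto3 sketch (the peering request itself is initiated from the Atlas UI/API; all IDs, the region, and CIDR blocks below are placeholders):

    # Hedged sketch of the AWS-side steps for MongoDB Atlas VPC peering,
    # assuming the peering request was already initiated from Atlas.
    # All IDs and CIDR blocks are placeholders.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-west-1")

    # Accept the peering connection that Atlas created.
    ec2.accept_vpc_peering_connection(
        VpcPeeringConnectionId="pcx-0123456789abcdef0"
    )

    # Route traffic destined for the Atlas CIDR through the peering
    # connection (repeat for every route table your app subnets use).
    ec2.create_route(
        RouteTableId="rtb-0123456789abcdef0",
        DestinationCidrBlock="192.168.248.0/21",  # Atlas-side CIDR
        VpcPeeringConnectionId="pcx-0123456789abcdef0",
    )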

MongoDB Atlas or AWS: internal or external connection

I am currently working on my next project, which runs 100% on Mongo.
My past projects ran on SQL + Mongo, for which I used AWS RDS + AWS EC2 and could connect the two over AWS-internal IPs, which gave me a much faster connection.
Now for Mongo there are a lot of fancy cloud services like mLab and MongoDB Atlas, which are actually cheaper than AWS.
My concern is that moving back to an external DB connection will be slower and consume more network bandwidth than the internal connection to RDS.
Has anyone experienced this issue? Maybe the difference isn't as big as I make it out to be, but I need it to be optimized.
This depends on your setup. Many of the "fancy" services also host stuff on AWS, so latency is minimal. Some even offer "private environments" or such, so you can hide your databases from public view.
The only thing left to care about is the amount of network traffic, but that will be your problem regardless of your database host. You can test this relatively easily (e.g., get a trial from one of the providers and test for throughput, or spin up your own MongoDB Docker cluster to use as a test, etc.) just to get an idea of the performance range you'll be in.
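A minimal sketch of such a test, assuming pymongo and a placeholder connection URI: run it from the instance that will host the app, against both an internal and an external candidate, and compare the numbers.

    # Minimal latency probe against a MongoDB deployment (URI is a
    # placeholder). Run from the machine that will host the app.
    import time
    from pymongo import MongoClient

    client = MongoClient("mongodb://db.example.com:27017/")

    samples = []
    for _ in range(100):
        start = time.perf_counter()
        client.admin.command("ping")  # cheap server round trip
        samples.append((time.perf_counter() - start) * 1000)

    samples.sort()
    print(f"median: {samples[len(samples) // 2]:.2f} ms")
    print(f"p95:    {samples[int(len(samples) * 0.95)]:.2f} ms")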

How to execute AWS Lambda functions on a dedicated EC2 server?

I am currently developing the backend for my app based on Amazon Web Services. I intended to use DynamoDB to store the users' data, but finally opted for MongoDB, which I have already installed on my EC2 instance.
I have some code written in Python to update/query... the DB, so that when a Cognito event triggers my Lambda function, this code is executed directly on my instance so I can access my DB. Any ideas how I can accomplish this?
As mentioned by Gustavo Tavares, "the whole point of lambda is to run code without the need to deploy EC2 instances". And you do not have to put the EC2 instances hosting your database into "public" subnets for Lambda to access them. Actually, you should never do that.
When creating/editing the Lambda configuration, you can choose to run it in any of your VPCs (Configuration -> Advanced Settings -> VPC), then select the subnet(s) to run your Lambda in. This will create ENIs (Elastic Network Interfaces) for the virtual machines your Lambdas will run on.
Your subnets must have routing/ACLs configured to access the subnets where the database resides. At least one of the security groups associated with the Lambda must also allow outbound traffic to the database subnet on the appropriate port (27017).
Since you mentioned that your Lambdas are "back-end", you should probably put them in the same "private" subnets as your MongoDB and avoid any access/routing headaches.
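The same attachment can also be done programmatically; a hedged boto3 sketch, where the function name, region, subnet IDs, and security group ID are all placeholders:

    # Hedged sketch: attaching an existing Lambda function to the VPC
    # subnets that can reach MongoDB. All identifiers are placeholders.
    import boto3

    lambda_client = boto3.client("lambda", region_name="us-west-1")

    lambda_client.update_function_configuration(
        FunctionName="my-backend-function",
        VpcConfig={
            # Private subnets with routes/ACLs toward the DB subnet.
            "SubnetIds": ["subnet-0123456789abcdef0",
                          "subnet-0fedcba9876543210"],
            # Security group allowed outbound to the DB subnet on 27017.
            "SecurityGroupIds": ["sg-0123456789abcdef0"],
        },
    )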
One way to accomplish this is to give the Lambda a SAM Template, then use sam local invoke inside of the EC2 instance to execute locally.
OK BUT WHY OH WHY WOULD ANYONE DO THIS?
If your Lambda requires access to both a VPC and the Internet, and doesn't use a lot of memory and doesn't really require scalability, and you already wrote the code (*), it's actually 10x cheaper(**) and higher-performing to launch a t3.nano EC2 Spot Instance on a public subnet than to add a NAT Gateway to the Lambda function.
(*) if you have not written the code yet, don't even bother to make it a Lambda.
(**) 10x cheaper as in $3 vs $30, so this really only applies to hobbyist projects on a shoestring budget. Don't do this at work, because the cost of engineers' time to manage and maintain an EC2 instance will far exceed $30/month over the long term.
If you want Lambda to execute code on your EC2 instances, you'll need to use the SDK for the language you're writing your Lambda in. Then you can simply use the AWS API to run commands on your EC2 instance.
See: http://docs.aws.amazon.com/systems-manager/latest/userguide/run-command.html
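With the Python SDK, the Run Command approach linked above might look roughly like this (it assumes the SSM agent is running on the instance with a suitable IAM role; the region, instance ID, and command are placeholders):

    # Rough sketch: using EC2 Systems Manager Run Command from a
    # Python Lambda to execute a script on the instance hosting
    # MongoDB. Instance ID and command path are placeholders.
    import boto3

    ssm = boto3.client("ssm", region_name="us-west-1")

    response = ssm.send_command(
        InstanceIds=["i-0123456789abcdef0"],
        DocumentName="AWS-RunShellScript",
        Parameters={"commands": ["python /opt/app/update_db.py"]},
    )
    print(response["Command"]["CommandId"])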
I think you misunderstood the idea of AWS Lambda.
The whole point of Lambda is to run code without the need to deploy EC2 instances. You upload the code and the infrastructure is provisioned on the fly. If your application does not need the infrastructure anymore (after a brief period), it vanishes and you will not be charged for the idle time. If you need it again, new infrastructure is provisioned.
If you have a service, like your MongoDB, running on EC2 instances, your Lambda functions can access it like any other code. You just need to configure your Lambda code to connect to the EC2 instance, as you would if your database were installed on any other internet-facing server.
For example: you can put your MongoDB server in a public subnet of your VPC and assign an Elastic IP to your server. In your Python Lambda code, you configure your driver to connect to this Elastic IP and update the database.
It will work as if the services were deployed on different servers across the internet: Cognito connects to the Lambda function over the internet, and then the Python code deployed in Lambda connects to your MongoDB over the internet.
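A minimal sketch of what such a handler might look like, assuming pymongo is bundled in the deployment package; the Elastic IP, database/collection names, and event fields are placeholders:

    # Minimal sketch of a Python Lambda handler connecting to MongoDB
    # on an EC2 instance via its Elastic IP. The IP, names, and event
    # fields are placeholders; pymongo must be in the deployment package.
    from pymongo import MongoClient

    # Create the client outside the handler so warm invocations reuse it.
    client = MongoClient("mongodb://203.0.113.10:27017/")

    def handler(event, context):
        users = client["appdb"]["users"]
        users.update_one(
            {"_id": event["userId"]},
            {"$set": {"lastEvent": event.get("eventType")}},
            upsert=True,
        )
        return {"status": "ok"}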
If I can give you one piece of advice, try DynamoDB a little more. With DynamoDB it will be even simpler to make all this work, because you will not need to configure a public subnet or request an Elastic IP. And the DynamoDB API is not very different from the MongoDB API.

Connecting Grails app to MongoDB through NAT instance on AWS

I'm currently trying to set up a Mongo replica set in a VPC via this Quick Start guide: https://s3.amazonaws.com/quickstart-reference/mongodb/latest/doc/MongoDB_on_the_AWS_Cloud.pdf.
I have a Grails app that runs on an AWS instance which used to also host the Mongo instance. Now I want to separate the web app from Mongo. I have already followed the guide above and have my replica set set up with a NAT instance that I can access via SSH. Now I'm wondering how to configure Grails so that it connects to this Mongo instance rather than the one on its own instance.
I used to just set the DB host and everything else in the DataSource.groovy file and everything was fine, but I can't work out how to set it up to go through the NAT to the primary replica node.
Do you guys have any idea what I should do next?
Cheers,
Tobias
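One note that may help: in that Quick Start, the NAT instance provides outbound internet access for the private subnets (and doubles as an SSH bastion); application traffic to MongoDB should not go through it. If the Grails app runs in the same VPC (or a peered one), it can connect directly to the replica set members' private addresses. The idea is illustrated below with a Python sketch (private IPs and replica set name are placeholders); the equivalent mongodb:// connection string would go into the Grails MongoDB plugin configuration in place of the old single-host settings.

    # Illustrative sketch (placeholder private IPs and replica set name):
    # connect directly to the replica set members, not through the NAT.
    from pymongo import MongoClient

    client = MongoClient(
        "mongodb://10.0.2.10:27017,10.0.3.10:27017,10.0.4.10:27017/"
        "?replicaSet=rs0"
    )
    print(client.admin.command("ping"))  # quick connectivity check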