Recently I can't use MongoDB (AWS EC2) - mongodb

I have MongoDB running on AWS. The setup has been up and running for several months. The application server (also on AWS) has been running the same code for several months.
I received the alert notification below from Amazon on January 5, 2017.
Since then I can't use MongoDB: mongod started failing. I tried changing the inbound rule for port 27017 in the Security Group, but the result was the same.
The notification from Amazon is very generic, so I don't know exactly what I should do.
We are aware of malware that is targeting unauthenticated MongoDB databases which are accessible to the Internet and not configured according to best practices. Based on your security group settings, it appears that your EC2 instances, listed below, may be running a MongoDB database which is accessible to the Internet. If you are not using MongoDB, you may safely ignore this message.
We suggest you utilize EC2 Security Groups to limit connections to your MongoDB databases to only addresses or other security groups that should have access.
...
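Amazon's suggested remediation can be scripted. What follows is a minimal sketch with boto3, assuming hypothetical security group IDs and assuming the exposure came from a 0.0.0.0/0 rule on port 27017; it removes the world-open rule and allows only the application server's security group:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # adjust to your region

MONGO_SG = "sg-0123456789abcdef0"  # placeholder: the MongoDB instance's security group
APP_SG = "sg-0fedcba9876543210"    # placeholder: the application server's security group

# Remove the world-open rule that made the database reachable from the Internet.
ec2.revoke_security_group_ingress(
    GroupId=MONGO_SG,
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 27017, "ToPort": 27017,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)

# Allow 27017 only from the application server's security group.
ec2.authorize_security_group_ingress(
    GroupId=MONGO_SG,
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 27017, "ToPort": 27017,
        "UserIdGroupPairs": [{"GroupId": APP_SG}],
    }],
)

Note that changing the inbound port, as tried above, does not help by itself: the notice is about unauthenticated databases reachable from the Internet, so the fix is restricting who can reach the port (and enabling MongoDB authentication), not moving it.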

Related

Spinning up mongo db on Amazon Fargate

We are trying to set up a microservice architecture on Amazon ECS using Fargate. When it comes to the database, we are not able to spin up an instance of mongodb. The database automatically switches off after 3 minutes.
The log states:
{"t":{"$date":"2021-03-15T15:34:17.913+00:00"},"s":"I", "c":"REPL", "id":4784900, "ctx":"SignalHandler","msg":"Stepping down the ReplicationCoordinator for shutdown","attr":{"waitTimeMillis":10000}}
My questions are:
a) What could be the possible reason for the auto shutdown of the db after 3 minutes?
b) Is this the right approach to spin up a database on Amazon Fargate? Or is there a better way to achieve the same?
I'm not certain about the core MongoDB issue. My first guess would be a failing or misconfigured health check on ECS; if this were the case, the health check would appear in your ECS event history (in the (new) UI: Cluster > Select the service > Notifications at the bottom).
If there are no bad health check notifications, then that would rule this out, and more information may be necessary to fully diagnose, such as the full ECS/Fargate Service/Task configuration.
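If it helps, the same event history can be pulled with a script instead of the console. A minimal boto3 sketch, where the cluster and service names are placeholder assumptions:

import boto3

ecs = boto3.client("ecs", region_name="us-east-1")  # adjust to your region

# Placeholders: substitute your own cluster and service names.
resp = ecs.describe_services(cluster="my-cluster", services=["mongodb-service"])

for service in resp["services"]:
    # Failed health checks and task stop reasons surface in this event list.
    for event in service["events"][:10]:
        print(event["createdAt"], event["message"])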
Generally, and more to your second question, running databases on Fargate is not a recommended use case. Fargate is a better fit for stateless services, like web APIs, which are more tolerant of being stopped and started frequently and of receiving different IP addresses on each start. Within AWS, MongoDB would be a better fit on plain EC2, or via DocumentDB, AWS's MongoDB-compatible managed service. There are also several managed MongoDB hosting providers which can provide the low-management, serverless feel of Fargate, like MongoDB Atlas.

What effect does the disconnection process have on ejabberd

CentOS 7.3.1611
ejabberd version 17.04
Using ElastiCache (Redis) for session store
Using Amazon RDS (MySQL) for user database
Install from official installer
We would like advice about the following situation.
We are carrying out a performance test to check whether we can connect 3,000,000 users to an XMPP cluster that we built on AWS. There are 10 cluster nodes.
The result is all green: no problem with 3,000,000 users connected continuously.
But when more than half of the 3,000,000 users try to reconnect, the servers running ejabberd cannot start new sessions, even though there is enough CPU and memory capacity.
Does the disconnect-and-reconnect churn disturb the creation of new sessions?
We would also like advice on ejabberd settings and Linux settings to improve the situation we have right now.

mongo atlas or aws - Internal or External connection

I am currently working on my next project, which runs 100% on Mongo.
My past projects ran on SQL + Mongo, for which I used AWS RDS + AWS EC2 and could connect the two over AWS internal IPs, which gave me a much faster connection.
Now for Mongo there are a lot of fancy cloud services like mLab and MongoDB Atlas, which are actually cheaper than AWS.
My concern is that moving back to an external DB connection will be slower and consume more network than the internal connection to RDS did.
Has anyone experienced such an issue? Maybe the difference isn't as big as I make it out to be, but I need it to be optimized.
This depends on your setup. Many of the "fancy" services also host on AWS, so latency is minimal. Some even offer "private environments" or similar, so you can hide your databases from public view.
The only thing left to care about is the amount of network traffic. But this will be your problem regardless of your database host. You can test this relatively easily (e.g. get a trial from one of the providers and test for throughput, or spin up your own MongoDB Docker cluster to use as a test) just to get an idea of the performance range you'll be in.
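For the latency part specifically, a rough round-trip comparison takes only a few lines. A minimal pymongo sketch, where both connection URIs are placeholder assumptions:

import time
from pymongo import MongoClient

# Placeholders: substitute your internal EC2 instance and the managed
# provider's connection string.
candidates = {
    "internal-ec2": "mongodb://10.0.0.12:27017/",
    "managed-host": "mongodb+srv://user:pass@cluster0.example.mongodb.net/",
}

for name, uri in candidates.items():
    client = MongoClient(uri, serverSelectionTimeoutMS=5000)
    client.admin.command("ping")  # warm up the connection first
    start = time.perf_counter()
    for _ in range(20):  # average over several round trips
        client.admin.command("ping")
    elapsed_ms = (time.perf_counter() - start) / 20 * 1000
    print(f"{name}: {elapsed_ms:.1f} ms per round trip")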

Any disadvantages or security issues for having website and databases on separate servers?

We're about to dive into Odoo (OpenERP). We're planning on using Amazon EC2 for the actual installation and putting the PostgreSQL database server on Amazon RDS (like this guide: http://toolkt.com/site/install-openerp-server-and-postgresql-on-separate-servers/ ).
If the RDS is only allowed to talk to the EC2 server, does this mitigate any security issues compared to a regular Odoo installation (where database and front facing webserver are on the same machine)? Is this an advisable setup?
The input data in your post is too vague to give you an exact answer, but you may consider the following:
RDS can talk to EC2 or any other clients and application servers; connectivity only depends on your configuration. You can set up a VPC and configure/restrict access to your database and application servers there.
Depending on the size of your system (in terms of I/O, number of users, etc.), you may of course want separate database and application servers. At scale this separation is important.
In short: neither any disadvantages nor any security issues.
In detail, for Odoo with AWS EC2: we, the SnippetBucket.com team, have already implemented RDS and know Odoo security well.
RDS is quite expensive.
Make RDS private instead of public in AWS to keep it completely secured (a minimal sketch follows below).
AWS security groups also add extra protection with inbound and outbound port rules. Totally safe.
Note: AWS claims RDS Aurora-PostgreSQL is 4x faster than stock PostgreSQL. RDS supports only specific PostgreSQL versions chosen by AWS.

Elastic Beanstalk Deployment with MongoDB

Would really appreciate some suggestions for resources on how to properly deploy with Elastic Beanstalk with the following stack:
MongoDB
Rails (Puma)
Sidekiq/Redis
Elasticsearch
Do I need to get all these things set up in ebextension files? Or is it a matter of setting things up manually in AWS and then routing them together properly somewhere?
You definitely don't want to run all those on your Elastic Beanstalk servers. Elastic Beanstalk will automatically add or remove servers based on your traffic/server load. You don't want your database to be on one of those servers when it gets deleted.
Elastic Beanstalk is a Platform as a Service that is great for running web servers. There are other services on AWS such as ElastiCache (Redis/Memcached as a service) and Elasticsearch as a service. There are also third parties that provide services that run on AWS such as RedisLabs (Redis as a service) and MongoLab (MongoDB as a service).
You can decide to use any of these services to reduce the amount of system administration work you have to do yourself. Or you can manually set up EC2 Linux servers (outside of Elastic Beanstalk) and install things like Rails, MongoDB, and Elasticsearch on them and manage them yourself.
For your case I would recommend something like the following:
Rails: ElasticBeanstalk
MongoDB: MongoLab
Redis: RedisLabs
Elasticsearch: AWS Elasticsearch Service
You would want to setup each of those services and then simply add the connection information for each of them to your Elastic Beanstalk environment so Rails can use them.
Edit:
Here are the best instructions on setting up MongoDB on EC2 manually: https://docs.mongodb.org/ecosystem/platforms/amazon-ec2/
For ElastiCache and Elasticsearch, you just click around in the AWS console to provision a Redis server and get the URLs to connect to. Once you have set all these things up, you just need to put the connection parameters in your ElasticBeanstalk environments as custom environment variables, something like:
MONGO_DB_URL="Your MongoDB EC2 internal IP address"
REDIS_URL="the url ElastiCache provided you"
Then read those environment variables in your application when creating connections to those services.
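The question is about Rails, but as a language-neutral illustration, here is a minimal Python sketch of that last step, reading the variables named above and creating the clients (it assumes MONGO_DB_URL holds a host or URI pymongo can parse, and REDIS_URL a redis:// URL):

import os
import redis
from pymongo import MongoClient

# Read the connection parameters set as Elastic Beanstalk environment variables.
mongo_client = MongoClient(os.environ["MONGO_DB_URL"])
redis_client = redis.Redis.from_url(os.environ["REDIS_URL"])

# Quick sanity checks that both services are reachable.
mongo_client.admin.command("ping")
redis_client.ping()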
Also, you are going to have to learn about setting up your VPC and security groups to enable everything to connect. For example, you will want your Elastic Beanstalk servers in one security group and your MongoDB server(s) in another group. Then you will have to configure the MongoDB security group to allow access from the Beanstalk group on the MongoDB port. It's similar for ElastiCache. I think for Elasticsearch you will have to create an IAM role with access to the Elasticsearch API, and then assign that role to your Beanstalk servers.
Of course there are also the administrative tasks of setting up Linux servers for your MongoDB cluster, configuring clustering, fail-over, automated backups, log archives, periodic security updates, etc. I know you have all this AWS credit, but you should weigh moving everything over to AWS against the cost of all the administrative tasks you will be spending time on. Elastic Beanstalk, Elasticsearch and ElastiCache are a no-brainer if you are getting them for free, but my MongoLab bill would have to be fairly high to justify setting all that up and managing it myself.