AMI for EC2 instance with a MongoDB?

I am running an Amazon EC2 instance with a MongoDB running on it.
Since I will only need it some of the time, I was wondering if it is possible to keep just an image of the system as an Amazon Machine Image and run the instance only when I need it. Any ideas?

You can create an AMI from your server and then terminate the server when you don't need it.
When you need it again, you can launch a new server based on the AMI you created. The downside is that the AMI only captures your data as it existed when the image was made, so I recommend creating the AMI right before you terminate the server.
Another alternative is to use EBS-backed instances and simply stop the instance when you don't need it, then start it again when you do. There's little cost associated with keeping an EBS volume around, certainly much less than keeping your EC2 instance running all the time.
Hope this helps.
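In case it helps, here is roughly what that flow looks like with boto3 (just a sketch, not from the original setup; the region, instance ID and AMI name are placeholders):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region
instance_id = "i-0123456789abcdef0"  # hypothetical instance ID

# Create the AMI right before terminating, so it captures the latest data.
image = ec2.create_image(InstanceId=instance_id, Name="mongodb-backup")

# Wait until the image is available before terminating the instance.
ec2.get_waiter("image_available").wait(ImageIds=[image["ImageId"]])

ec2.terminate_instances(InstanceIds=[instance_id])
```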

A stopped machine is a machine Amazon doesn't charge you instance-hours for.
You get charged for:
Online time
Storage space (presumably the image is stored on S3/EBS)
Elastic IP addresses
Bandwidth
Note that Amazon does charge you for the storage of the AMIs you create.
So you can stop your machine and just start it when you need to use it.
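For example (a boto3 sketch, assuming a hypothetical EBS-backed instance ID):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region
instance_id = "i-0123456789abcdef0"  # hypothetical EBS-backed instance

# Stop the instance when you're done; you keep paying only for EBS storage.
ec2.stop_instances(InstanceIds=[instance_id])

# ...later, start it again when you need it.
ec2.start_instances(InstanceIds=[instance_id])
```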

Related

Instance of MongoDB server shut down when creating an AMI image

I was trying to clone an instance of a MongoDB server on EC2. When I selected the instance and did 'create image', it shut down our MongoDB server completely. The IP has not changed, but we are unable to connect to it. I tried rebooting the server, as well as stopping and starting it. The cloned AMI has not been touched. How can I get the server back up?
When we try to start the server, the terminal just says 'failed.'
We need some more information to give an exact solution; please tell us how you are creating the AMI. But here is what probably happened.
A snapshot or AMI taken from a running instance is not guaranteed to be consistent, so to make it consistent AWS reboots the instance while creating the AMI. If you create the AMI from the AWS console, there is a checkbox ("No reboot") to prevent this.
If you are using Lambda or CLI tools, just disable the reboot there as well (with the AWS CLI, pass --no-reboot to create-image).
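With boto3 the equivalent is the NoReboot parameter; a minimal sketch (instance ID and image name are placeholders):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region

# NoReboot=True skips the reboot (same as the console "No reboot" checkbox);
# the trade-off is a potentially less consistent filesystem snapshot.
ec2.create_image(
    InstanceId="i-0123456789abcdef0",  # hypothetical instance ID
    Name="mongodb-ami-no-reboot",
    NoReboot=True,
)
```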

Backing up EC2 instance from Ephemeral to Persistent Storage

I'm pretty new with EC2 and backing up data, but currently, the app that I've built has no backup strategy and I want to know how to build a proper one. Currently, I have my RoR app and my MongoDB database on one instance. I've just now read about EBS volumes and snapshots, but I just can't wrap my head around it.
Supposedly EBS can be used as a datastore. If that is so, how do I set up a MongoDB database in EBS and migrate the data I have in my EC2 instance to it? I'm not familiar with configuring EBS and I've read the documentation and have more questions than answers.
In short, my instance is ephemeral storage right now and I want to turn it into persistent storage.
Thank you,
Don
It is pretty simple.
EBS provides network-attached disk volumes; it is used to store data.
A snapshot is a compressed image backup; you can snapshot EC2 instances, RDS instances, and even EBS volumes themselves. Once created, the snapshot must be stored somewhere, and AWS stores these backups in S3.
Configuring EBS is not difficult; it is little different from adding a new hard drive. You just need to "attach" an EBS volume to your instance, then do the usual OS disk-initialisation work (create a filesystem, mount it) inside the EC2 instance.
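As a rough sketch of the attach step with boto3 (the volume and instance IDs are placeholders; this is the same thing you would otherwise do from the console):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region

# Attach an existing EBS volume to the instance.
ec2.attach_volume(
    VolumeId="vol-0123456789abcdef0",   # hypothetical volume ID
    InstanceId="i-0123456789abcdef0",   # hypothetical instance ID
    Device="/dev/sdf",
)
# Inside the instance you would then create a filesystem, mount the volume,
# and point MongoDB's dbpath at the new mount point.
```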
Because EBS is dynamic storage, you can extend the disk space whenever you need to, as long as your instance's OS supports it (although it is recommended to take a backup before doing so).
From an operational perspective, though, if the database runs 24x7x365 you may want to consider putting your data into RDS, so you don't have to deal with database installation, complicated replication upgrades, etc. (note that RDS only offers relational engines, so this would mean moving off MongoDB). If you only run the database occasionally, you might want to stick with MongoDB on the EC2 instance.

Google Compute Engine snapshot of instance with persistent disks attached failed

I have a working VM instance that I'm trying to copy to provide redundancy behind the Google load balancer.
A test run with a dummy instance worked fine, creating a new instance from a snapshot of a running one.
Now, the real "original" instance has a persistent disk attached, and this causes a problem in starting up the cloned instance because of the (obviously) missing persistent-disk mount.
The serial console output shows:
* Stopping cold plug devices [ OK ]
* Stopping log initial device creation [ OK ]
* Starting enable remaining boot-time encrypted block devices [ OK ]
The disk drive for /mnt/XXXX-log is not ready yet or not present.
keys: Continue to wait, or Press S to skip mounting or M for manual recovery
As I understand it, there is no way to send any of these keystrokes to the instance. Is there any other way to overcome this issue? I know I could unmount the disk before the snapshot, but the workflow I would like to institute is taking periodic snapshots of production servers, so unmounting disks every time would require instance downtime (plus all the unnecessary risk of an action that would otherwise seem pointless).
Is there a way to boot this type of cloned instance successfully and attach a new persistent disk afterwards?
Is this happening because the original persistent disk is in use, or would the same problem occur even if the original instance were offline (for example due to a failure, in which case I would try to create a new instance from a snapshot)?
One workaround I use to get around the same issue: I don't actually unmount the disk; I just comment out the mount line in /etc/fstab and then take the snapshot. This way my instance has no downtime or unmounted disks while snapshotting. (I am using Ubuntu 14.04 as the OS, if that matters.)
Later, when I use that snapshot on a new instance, I fix and uncomment the line.
A better solution, though, is to add the nofail option to that fstab line (e.g. defaults,nofail), so boot continues even when the disk is missing.
By the way, I am doing a similar task, building a load-balanced setup with multiple webserver nodes, each cloned from the said snapshot with extra persistent disks mounted for e.g. uploads, data and logs.
I'm a little unclear as to what you're trying to accomplish. It sounds like you're looking to periodically snapshot the data volumes of a production server so you can clone them later.
In all likelihood, you simply need to sync and fsfreeze the filesystem before you take your snapshot, rather than unmounting/remounting it. The GCP documentation has a basic example of this in the Snapshots documentation.
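A rough sketch of that sequence, assuming a hypothetical disk name, zone and mount point (the fsfreeze calls must run on the instance itself):

```python
import os
import subprocess

MOUNT_POINT = "/mnt/data"                   # hypothetical mount point
DISK, ZONE = "data-disk", "us-central1-a"   # hypothetical disk and zone

os.sync()  # flush dirty pages to disk first
subprocess.run(["fsfreeze", "--freeze", MOUNT_POINT], check=True)
try:
    # Take the snapshot while the filesystem is frozen and consistent.
    subprocess.run(
        ["gcloud", "compute", "disks", "snapshot", DISK,
         "--zone", ZONE, "--snapshot-names", "periodic-backup"],
        check=True,
    )
finally:
    subprocess.run(["fsfreeze", "--unfreeze", MOUNT_POINT], check=True)
```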

AWS app with Heroku Postgres database

Is it possible to have an app running on AWS EC2 and have its database running on Heroku Postgres?
If so, what are the downsides I should consider?
Since Heroku is hosted on AWS, is there a way to know the location of the machine running my database?
Would hosting my app in the same region as the database help performance?
I would like to hear some opinions about this; I've been searching the topic without much success.
You can determine the public-facing location of your Heroku DB at any given time with a traceroute ... but there's no guarantee that it'll stay at that location, or that there isn't any internal re-routing going on. You'd probably want to speak directly with Heroku support about ways to make sure your Heroku DB instances are local to your AWS application instances, as that certainly would benefit performance. See if you can find out which availability zone, or at least which major region, they run the DB in, and whether you can "pin" your database instance to a given region/zone.
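As a starting point for that traceroute, you can resolve the DB host's current public IP (a sketch; the DATABASE_URL below is a made-up placeholder):

```python
import socket
from urllib.parse import urlparse

# Hypothetical Heroku-style DATABASE_URL; use your app's real one.
database_url = "postgres://user:pass@ec2-1-2-3-4.compute-1.amazonaws.com:5432/db"

host = urlparse(database_url).hostname
print(host, "->", socket.gethostbyname(host))
```

The EC2 public hostname itself is often a hint: compute-1.amazonaws.com names, for example, belong to us-east-1.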
Amazon's RDS looks OK, but doesn't support PostgreSQL. Please keep nagging them to add it.
I'd probably just run the DB on AWS if performance wasn't particularly important. Use a RAID 10 of provisioned-IOPS EBS volumes on an EBS-optimized instance and you'll get kind-of-OK performance (but at a really big price); alternately, you can use non-crash-safe SSD-based instance-store servers and rely on replication and backups to keep your data safe.
I don't have any experience with Heroku PostgreSQL.
Generally, of course, you can run your own service on Amazon EC2 and use the managed database service of Heroku.
Downsides might be:
nobody guarantees that Heroku exclusively uses AWS, and you probably can't determine the physical location of the Heroku service within the cloud, so you will have to deal with network latencies
in addition to your external traffic fees, you'll have to pay for the database traffic, unless you talk to a server in the same availability zone of the same region
My suggestion (without knowing any details about the pros of Heroku):
Have a look at Amazon RDS if you don't want to run a database server on your own.
http://aws.amazon.com/de/rds/
I have been operating around 70 server instances on AWS, both RDS and EC2, for more than a year now, and I can't imagine any simpler way to keep your stuff running.

Transfer MongoDB to another server?

If I populate a MongoDB instance on my local machine, can I wholesale transfer that database to a server and have it work without too much effort?
The reason I ask is that my server is currently an Amazon EC2 Micro instance and I need to put LOTS of data into a MongoDB and don't think I can spare the transactions and bandwidth on the EC2 instance.
There is the copydb database command, which I guess should be a good fit for your need.
Alternatively, you can just stop MongoDB, copy the database files to another server, and run an instance of MongoDB there.
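If you go the copydb route, here is a minimal pymongo sketch (hostnames and database names are placeholders; note that copydb is legacy and was removed in MongoDB 4.2, where mongodump/mongorestore is the replacement):

```python
from pymongo import MongoClient

# Run copydb against the *target* server, pulling from the source host.
client = MongoClient("target-server.example.com", 27017)  # hypothetical host
client.admin.command(
    "copydb",
    fromhost="source-machine.example.com:27017",  # hypothetical source
    fromdb="mydb",
    todb="mydb",
)
```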