VolumeAttachment can't be updated - aws-cloudformation

I have a stack.
In this stack there is an EC2 instance with a root device and another volume created and attached using a VolumeAttachment.
I want to be able to change the EC2 AMI without losing my attached EBS volume. When I try to change the AMI, even after manually detaching the volume, the update fails with:
UPDATE_FAILED Update to resource type AWS::EC2::VolumeAttachment is not supported.
How can I change my EC2 AMI without losing my secondary EBS volume? Do I need to update my stack first by removing the VolumeAttachment and then recreate it after the AMI has changed?
Any help would be appreciated.

Related

Configure MongoDB in EC2 with EBS (SSD Volume)

I am a bit confused.
MongoDB is required for one of my applications. Should I go with MongoDB Atlas,
or MongoDB installed on EC2 (by the way, I have chosen the latter)?
If I go with MongoDB on EC2, then my next question is: for any EC2 instance type (e.g. "MAD5 LARGE"), how can I store all my DB data on a separate EBS volume (one which is not deleted on EC2 termination), rather than on the instance's built-in storage?
That way, if I ever want to terminate my instance, I can do it at any time without any worries and attach the volume to a new instance?
First of all, you can choose not to delete your root EBS volume when you terminate the EC2 instance.
Second, you can attach additional EBS volumes to your EC2 instance, which won't get deleted when you terminate the instance.
Once you have attached the EBS volume, you need to mount the newly created volume in the Linux OS.
Then you need to configure the MongoDB dbPath in the /etc/mongod.conf file.
You can check the step-by-step process here in my answer.
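As a rough illustration of the mount and dbPath steps (the device name /dev/xvdf, the mount point /data/mongodb, and the mongodb user/group are assumptions; they vary by instance type and distribution):
# Format and mount the extra EBS volume (assuming it shows up as /dev/xvdf)
sudo mkfs -t ext4 /dev/xvdf
sudo mkdir -p /data/mongodb
sudo mount /dev/xvdf /data/mongodb
# Make the mount survive reboots
echo '/dev/xvdf /data/mongodb ext4 defaults,nofail 0 2' | sudo tee -a /etc/fstab
# Give MongoDB ownership, then point dbPath at the volume in /etc/mongod.conf:
#   storage:
#     dbPath: /data/mongodb
sudo chown -R mongodb:mongodb /data/mongodb
sudo systemctl restart mongod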

Google Compute Engine snapshot of instance with persistent disks attached failed

I have a working VM instance that I'm trying to copy in order to provide redundancy behind the Google load balancer.
A test run with a dummy instance worked fine, creating a new instance from a snapshot of a running one.
Now, the real "original" instance has a persistent disk attached, and this causes a problem when starting up the cloned instance because of the (obviously) missing persistent disk mount.
The serial console output logs look like this:
* Stopping cold plug devices [ OK ]
* Stopping log initial device creation [ OK ]
* Starting enable remaining boot-time encrypted block devices [ OK ]
The disk drive for /mnt/XXXX-log is not ready yet or not present.
keys:Continue to wait, or Press S to skip mounting or M for manual recovery
As I understand it, there is no way to send any of these keystrokes to the instance. Is there any other way to overcome this issue? I know that I could unmount the disk before the snapshot, but the workflow I would like to put in place is taking periodic snapshots of production servers, so unmounting disks every time beforehand would require instance downtime (plus all the unnecessary risks of an action that would seem pointless).
Is there a way to boot this type of cloned instance successfully, and attach a new persistent disk afterwards?
Is this happening because the original persistent disk is in use, or would the same problem occur even if the original instance were offline (for example due to a failure, in which case I would try to create a new instance from a snapshot)?
One workaround that I am using to get around the same issue is that I don't actually unmount the disk; rather, I just comment out the mount line in /etc/fstab and take the snapshot. This way my instance has no downtime or unmounted disks while snapshotting. (I am using Ubuntu 14.04 as the OS, if that matters.)
Later I fix and uncomment it when I use that snapshot on a new instance.
However, you can also look into adding the nofail option to that mount line to get a better solution, as shown below.
By the way, I am doing a similar task, building a load-balanced setup with multiple webserver nodes, each cloned from said snapshot with extra persistent disks mounted for e.g. uploads, data and logs.
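For illustration, the kind of /etc/fstab entry this ends up as (the device and mount point below are placeholders, not from the actual setup):
# Commented out before snapshotting, uncommented again on the new instance:
# /dev/sdb1  /mnt/data  ext4  defaults  0 2
# Better alternative: leave the line active and let boot continue if the disk is missing
/dev/sdb1  /mnt/data  ext4  defaults,nofail  0 2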
I'm a little unclear as to what you're trying to accomplish. It sounds like you're looking to periodically snapshot the data volumes of a production server so you can clone them later.
In all likelihood, you simply need to sync and fsfreeze the filesystem before you make your snapshot, rather than just unmounting/remounting it. The GCP documentation has a basic example of this in the Snapshots documentation.
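A hedged sketch of that approach (the mount point, disk name, and zone below are placeholders):
# Flush writes and freeze the filesystem so the snapshot is crash-consistent
sudo sync
sudo fsfreeze --freeze /mnt/data
# Take the snapshot (run from another shell or machine; names and zone are placeholders)
gcloud compute disks snapshot my-data-disk --zone us-central1-a --snapshot-names my-data-snapshot
# Unfreeze as soon as the snapshot has been initiated
sudo fsfreeze --unfreeze /mnt/data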

Accessing mongodb data on aws instance

Due to some hardware issue my AWS instance stopped functioning. The team suggested that I stop and start the instance.
Now AWS has provided a new IP, where all the data is present. I had installed MongoDB and had a couple of databases there.
Now when I checked the new server, MongoDB was not working. I started mongod and later it asked me to create the /data/db directory. Now MongoDB is functioning, but when I do
"show databases", none of my previous databases appear. Any help on getting this data back?
An AWS EC2 instance has two types of storage: ephemeral storage and EBS volume storage.
Ephemeral storage should be used for temporary data only. If you reboot your EC2 instance the data in it will not be lost, but if you stop and start it you lose it all. When trying to stop an EC2 instance, AWS gives you this message:
Note that when your instances are stopped: Any data on the ephemeral storage of your instances will be lost.
This kind of storage is provisioned physically close to the instance, and because of that it is faster.
EBS is persistent storage, independent of your EC2 instance. It can be attached to and detached from your EC2 instance. This is the kind of storage you want to use when running a database inside your instance.
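A quick way to check where the data directory actually lives (assuming the default dbPath of /data/db; yours may differ):
# List block devices and their mount points
lsblk
# Show which filesystem /data/db is on; if it is the instance-store (ephemeral) device
# rather than an attached EBS volume, the data was lost on stop/start
df -h /data/db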

EC2 UserData on Child AMI [duplicate]

This question already has answers here:
Amazon EC2 custom AMI not running bootstrap (user-data)
My goal is to create a base AMI and then have child AMIs use the base AMI.
I bootstrap the base AMI by setting a PowerShell script in the --user-data flag and it works just fine.
However, when I create a child AMI from the base AMI, the child does not automatically run the script in the --user-data flag.
I understand that the RunOnceService registry setting can be used to execute the latest user data via the metadata call; however, this seems hacky.
Is there a way to treat the child AMIs as new machines? Or get EC2 to run the script in the --user-data flag? Any other workarounds?
The default behavior of the EC2 Config Service is to NOT persist user-data settings following system startup. When your EC2 instance with the base AMI started up, this setting was toggled off during system startup and did not allow your subsequent child EC2 instances to handle user data.
The easy fix is to add <persist>true</persist> to your user data. An example from the documentation:
<powershell>
insert script here
</powershell>
<persist>true</persist>
Related:
AWS Documentation - Configuring a Windows Instance Using the EC2Config Service

AMI for EC2 instance with a MongoDB?

I am running an Amazon EC2 instance with a MongoDB running on it.
Since I will only need to use it some of the time, I was wondering if it is possible to keep just an image of the system (an Amazon Machine Image) outside the usage time. Any idea?
You can actually create an AMI from your server and then terminate the server when you don't need it.
When you need it again, you can launch a new server based on the AMI you created. The downside to this is that your latest data may not be up to date, so I recommend creating the AMI right before you terminate the server.
Another alternative is to just use EBS-backed storage/instances and shut down the instance when you don't need it; you can start the instance again when you need it. There's little cost associated with keeping an EBS volume around, certainly much less than keeping your EC2 instance running all the time.
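A rough sketch of both options using the AWS CLI (the instance ID and image name are placeholders):
# Option 1: bake an AMI right before terminating the server
aws ec2 create-image --instance-id i-0123456789abcdef0 --name "mongodb-$(date +%Y-%m-%d)"
aws ec2 terminate-instances --instance-ids i-0123456789abcdef0
# Option 2: EBS-backed instance; stop it when idle, start it when needed
aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 start-instances --instance-ids i-0123456789abcdef0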
Hope this helps.
A stopped machine is a machine that Amazon doesn't charge you for.
You get charged for:
Online time
Storage space (presumably you store the image on S3 [EBS])
Elastic IP addresses
Bandwidth
But Amazon does charge you for the AMIs you create.
So you can stop your machine and just start it when you need to use it.