My goal is to create a base AMI and then build child AMIs from that base AMI.
I bootstrap the base AMI by passing a PowerShell script in the --user-data flag, and it works just fine.
However, when I create a child AMI from the base AMI, instances launched from the child do not automatically run the script passed in the --user-data flag.
I understand that the RunOnceService registry setting can be used to execute the latest user data via the metadata call, but this seems hacky.
Is there a way to treat the child AMIs as new machines, or to get EC2 to run the script in the --user-data flag? Any other workarounds?
The default behavior of the EC2Config service is NOT to persist user-data handling after the first system startup. When your EC2 instance built from the base AMI started up, this setting was toggled off, so instances launched from your subsequent child AMIs do not handle user data.
The easy fix is to add <persist>true</persist> to your user data. An example from the documentation:
<powershell>
insert script here
</powershell>
<persist>true</persist>
Related:
AWS Documentation - Configuring a Windows Instance Using the EC2Config Service
I want to run multiple webapps in separate standalone instances inside the same WildFly 24 server.
I already created multiple copies of the standalone directory and changed the ports accordingly.
But I do have a few questions.
In my first standalone instance I defined a datasource. Unfortunately I can't find it in the other standalone instances. All my apps need this datasource.
How can I use jboss-cli to create the datasource in the right standalone instance?
Or is it possible to define a datasource in a way that is available to all standalone instances?
I created custom scripts for each standalone instance to start it with the right config. But how can I use jboss-cli.sh to connect to a specific standalone instance so that I can restart it (shutdown --restart=true)?
Thank you for your help
You need to connect to the right standalone instance, which means you have to specify which one you want to connect to:
./jboss-cli.sh -c --controller=remote+http://${host}:${instance-management-port}
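Each standalone instance has its own configuration (its own copy of standalone.xml), so a datasource defined in one instance is not visible to the others; you have to add it to each instance (sharing configuration across servers is what domain mode is for). As a rough sketch, assuming the second instance's management port is 10090 and that the PostgreSQL driver is already installed there (the datasource name and connection details are placeholders):
# Add a datasource to the second standalone instance (management port 10090 assumed)
./jboss-cli.sh -c --controller=remote+http://localhost:10090 \
  --command="data-source add --name=AppDS --jndi-name=java:jboss/datasources/AppDS --driver-name=postgresql --connection-url=jdbc:postgresql://dbhost:5432/appdb --user-name=app --password=secret"
# Restart that same instance
./jboss-cli.sh -c --controller=remote+http://localhost:10090 --command="shutdown --restart=true"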
I'm currently creating an RDS instance per account for several different AWS accounts. I use CloudFormation templates for this.
When creating these databases I would like them to have a similar structure. I created an SQL script which I can successfully run manually after the stack has been created. However, I would like to execute it automatically as part of running the template.
My solution so far is to create an EC2 instance with a dependency on the RDS instance, have it run once, and then manually delete it later, but this is not a suitable solution. I couldn't find any other way, though.
Is it possible to run a query as part of a CloudFormation script?
FYI: I'm creating a PostgreSQL 11.5 instance.
The proper way is to use custom resources.
That requires some new development, though. If you already have an EC2 instance that populates the RDS database from its UserData, you can automate its termination as follows:
Set InstanceInitiatedShutdownBehavior to terminate.
At the end of the UserData, execute shutdown -h now to shut down the instance.
Since your shutdown behavior is terminate, the instance will be automatically terminated.
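A minimal sketch of such a UserData script, assuming the SQL file is staged in an S3 bucket that the instance role can read (the bucket name, RDS endpoint and credentials are placeholders):
#!/bin/bash
# Hypothetical bootstrap: load the schema, then shut down (which terminates the instance)
yum install -y postgresql awscli
aws s3 cp s3://my-bootstrap-bucket/schema.sql /tmp/schema.sql
PGPASSWORD='placeholder-password' psql --host=mydb.example.eu-west-1.rds.amazonaws.com --port=5432 --username=master --dbname=mydb --file=/tmp/schema.sql
# InstanceInitiatedShutdownBehavior is set to terminate, so this terminates the instance
shutdown -h now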
I was trying to clone an instance of a MongoDB server on EC2. When I selected the instance and did 'create image', it shut down our MongoDB server completely. The IP has not changed, and we are unable to connect to it. I tried rebooting the server, as well as stopping and starting it. The cloned AMI has not been touched. How can I get the server back up?
When we try to start the server, the terminal just says 'failed.'
We need some more information to give an exact solution; please describe how you are taking the AMI. But we can guess what might have happened.
Snapshots and AMIs taken from a running instance are not guaranteed to be consistent, so AWS provides an option to reboot the instance while taking the AMI in order to make it consistent. If you take the AMI from the AWS console, there is a 'No reboot' checkbox to prevent that reboot.
If you are using Lambda or CLI tools, just disable the reboot option.
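For example, with the AWS CLI (the instance ID and image name are placeholders):
# Create an AMI without rebooting the running instance
aws ec2 create-image --instance-id i-0123456789abcdef0 --name "mongodb-backup" --no-reboot
Note that with --no-reboot the file system is not guaranteed to be consistent, so for MongoDB it is safer to flush and lock writes first (e.g. db.fsyncLock()) or take the image from a stopped secondary.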
I have an auto-scaling group set up. When there are no running instances in that group and my application deploys, the auto-scaling group will spin up an instance and deploy. Fantastic. ... well, sorta...
If there are more than one instances in that auto-scaling-group, then my scripts might point to one instance or another.
How do I deploy to a specific instance without having to set up the whole CodeDeploy application, deployment group, send a new revision, yada, yada, yada...
Or, do you have to take all of those steps each time? How then do you track your deployments? Surely there's a better way to this?
Ideally, I would like to create an Instance based on an AMI, associate that instance with my auto-scaling-group, then deploy specifically to that instance. But I can't create-deployment to an instance, only to a deployment-group.
This is maddening.
The problem you describe can easily be solved with HashiCorp Packer.
With a Packer template you can describe how your application is supposed to be installed on an instance. That instance is then snapshotted and turned into an AMI.
After that, you can update the launch configuration (or launch template) of your auto-scaling group with the new AMI.
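A rough sketch of that flow with the Packer and AWS CLIs (the template file, launch template ID, AMI ID and group name are placeholders, and it assumes the auto-scaling group uses a launch template):
# Build the new AMI from your Packer template
packer build app.pkr.hcl
# Create a new launch template version pointing at the new AMI
aws ec2 create-launch-template-version --launch-template-id lt-0123456789abcdef0 --source-version 1 --launch-template-data '{"ImageId":"ami-0123456789abcdef0"}'
# Tell the auto-scaling group to use the latest version
aws autoscaling update-auto-scaling-group --auto-scaling-group-name my-app-asg --launch-template 'LaunchTemplateId=lt-0123456789abcdef0,Version=$Latest'
Existing instances keep running the old AMI until they are replaced, for example via an instance refresh (aws autoscaling start-instance-refresh).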
Documentation for Packer can be found here:
I'm using Amazon Web Services to create an autoscaling group of application instances behind an Elastic Load Balancer. I'm using a CloudFormation template to create the autoscaling group + load balancer and have been using Ansible to configure other instances.
I'm having trouble wrapping my head around how to design things such that when new autoscaling instances come up, they can automatically be provisioned by Ansible (that is, without me needing to find out the new instance's hostname and run Ansible for it). I've looked into Ansible's ansible-pull feature but I'm not quite sure I understand how to use it. It requires a central git repository which it pulls from, but how do you deal with sensitive information which you wouldn't want to commit?
Also, the current way I'm using Ansible with AWS is to create the stack using a CloudFormation template, then get the hostnames as output from the stack, and then generate a hosts file for Ansible to use. This doesn't feel quite right; is there a "best practice" for this?
Yes, another way is simply to run your playbooks locally once the instance starts. For example, you can create an EC2 AMI for your deployment that, in its rc.local file (Linux), calls ansible-playbook -i <inventory-only-with-localhost-file> <your-playbook>.yml. rc.local is almost the last script run at startup.
You could just store that sensitive information in your EC2 AMI, but this is a very broad topic and really depends on what kind of sensitive information it is. (You can also use private git repositories to store sensitive data.)
If, for example, your playbooks get updated regularly, you can create a cron entry in your AMI that runs every so often and actually runs your playbook, so that your instance configuration is always up to date. This avoids having to "push" from a remote workstation.
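A minimal sketch of the rc.local approach (the paths, inventory file and playbook name are placeholders; it assumes Ansible is already baked into the AMI):
#!/bin/sh
# /etc/rc.local -- apply the playbook against localhost on every boot
ansible-playbook -i /opt/ansible/localhost.ini /opt/ansible/site.yml >> /var/log/ansible-boot.log 2>&1
exit 0
Here localhost.ini would just contain "localhost ansible_connection=local". For the periodic re-run, a cron entry can call the same command, or ansible-pull against your git repository if the playbook should be fetched first.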
This is just one approach; there could be many others, and it depends on what kind of service you are running, what kind of data you are using, etc.
I don't think you should use Ansible to configure new auto-scaled instances. Instead use Ansible to configure a new image, of which you will create an AMI (Amazon Machine Image), and order AWS autoscaling to launch from that instead.
On top of this, you should also use Ansible to easily update your existing running instances whenever you change your playbook.
Alternatives
There are a few ways to do this. First, I wanted to cover some alternative ways.
One option is to use Ansible Tower. This creates a dependency though: your Ansible Tower server needs to be up and running at the time autoscaling or similar happens.
The other option is to use something like packer.io and build fully-functioning server AMIs. You can install all your code into these using Ansible. This doesn't have any non-AWS dependencies, and has the advantage that it means servers start up fast. Generally speaking building AMIs is the recommended approach for autoscaling.
Ansible Config in S3 Buckets
The alternative route is a bit more complex, but has worked well for us when running a large site (millions of users). It's "serverless" and only depends on AWS services. It also supports multiple Availability Zones well, and doesn't depend on running any central server.
I've put together a GitHub repo that contains a fully-working example with Cloudformation. I also put together a presentation for the London Ansible meetup.
Overall, it works as follows:
Create S3 buckets for storing the pieces that you're going to need to bootstrap your servers.
Save your Ansible playbook and roles etc in one of those S3 buckets.
Have your Autoscaling process run a small shell script. This script fetches things from your S3 buckets and uses them to "bootstrap" Ansible.
Ansible then does everything else.
All secret values, such as database passwords, are stored in CloudFormation parameter values. The 'bootstrap' shell script copies these into an Ansible fact file.
So that you're not dependent on external services being up, you also need to save any build dependencies (e.g. any .deb files, package install files or similar) in an S3 bucket. You want this because you don't want to require ansible.com or similar to be up and running for your Autoscale bootstrap script to be able to run. Generally speaking, I've tried to only depend on Amazon services like S3.
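A minimal sketch of what that bootstrap script might look like in the Launch Configuration's UserData (bucket names, paths and the DbPassword parameter are placeholder assumptions; it also assumes the instance profile can read the buckets and that Ansible is available on the image):
#!/bin/bash
# Hypothetical bootstrap script -- bucket names, paths and parameter names are placeholders
# Fetch the playbook, roles and pre-staged build dependencies from S3
aws s3 sync s3://my-ansible-config-bucket/playbooks /opt/ansible
aws s3 sync s3://my-ansible-deps-bucket/packages /opt/packages
# Write secrets passed in as CloudFormation parameters (e.g. via Fn::Sub) into a local fact file
mkdir -p /etc/ansible/facts.d
cat > /etc/ansible/facts.d/app.fact <<EOF
{"db_password": "${DbPassword}"}
EOF
# Run Ansible against localhost; it handles everything else
ansible-playbook -i 'localhost,' -c local /opt/ansible/site.yml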
In our case, we then also use AWS CodeDeploy to actually install the Rails application itself.
The key bits of the config relating to the above are:
S3 Bucket Creation
Script that copies things to S3
Script to bootstrap Ansible. This is the core of the process. It also writes the Ansible fact files based on the CloudFormation parameters.
Use the Facts in the template.