Creating a CloudFormation Stack from an Existing AMI Snapshot

I am very new to CloudFormation, so excuse my ignorance. I have been trying to create a CloudFormation template from an existing Linux AMI snapshot as a basis for automating my current resources, with the goal of eventually having my entire infrastructure running on a CloudFormation template. Is there a way I can use the AMI snapshot as a basis for the stack, or do I have to create everything from scratch and then update it?
Any help would be greatly appreciated for this noob.

You will need to create a LaunchConfiguration; one of the launch configuration's properties is ImageId, which takes the ID of your AMI. Note that, based on my past experience, you will need a copy of the AMI in each region you want to support, since AMI IDs are region-specific. See http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-as-launchconfig.html
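For example, a minimal sketch of the relevant template pieces might look like the following, using a Mappings section to pick the right AMI per region (the AMI IDs and instance type below are placeholders, not real values):

Mappings:
  RegionMap:
    us-east-1:
      AMI: ami-11111111    # placeholder: your AMI ID in us-east-1
    eu-west-1:
      AMI: ami-22222222    # placeholder: your AMI ID in eu-west-1

Resources:
  MyLaunchConfig:
    Type: AWS::AutoScaling::LaunchConfiguration
    Properties:
      # Look up the AMI for whichever region the stack is launched in
      ImageId: !FindInMap [RegionMap, !Ref "AWS::Region", AMI]
      InstanceType: t2.micro    # placeholder instance type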

Related

What is the suggested workflow when working on a Kubernetes cluster using Dask?

I have set up a Kubernetes cluster using Kubernetes Engine on GCP to work on some data preprocessing and modelling using Dask. I installed Dask using Helm following these instructions.
Right now, I see that there are two folders, work and examples.
I was able to execute the contents of the notebooks in the examples folder, confirming that everything is working as expected.
My questions now are as follows:
What is the suggested workflow to follow when working on a cluster? Should I just create a new notebook under work and begin prototyping my data preprocessing scripts?
How can I ensure that my work doesn't get erased whenever I upgrade my Helm deployment? Would you just manually move it to a bucket every time you upgrade (which seems tedious)? Or would you create a simple VM instance, prototype there, then move everything to the cluster when running on the full dataset?
I'm new to working with data in a distributed environment in the cloud so any suggestions are welcome.
What is the suggested workflow to follow when working on a cluster?
There are many workflows that work well for different groups. There is no single blessed workflow.
Should I just create a new notebook under work and begin prototyping my data preprocessing scripts?
Sure, that would be fine.
How can I ensure that my work doesn't get erased whenever I upgrade my Helm deployment?
You might save your data to some more permanent store, like cloud storage, or a git repository hosted elsewhere.
Would you just manually move it to a bucket every time you upgrade (which seems tedious)?
Yes, that would work (and yes, it is tedious).
Or would you create a simple VM instance, prototype there, then move everything to the cluster when running on the full dataset?
Yes, that would also work.
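If you do want notebook state to survive pod restarts and chart upgrades, one generic Kubernetes approach is to back the work directory with a PersistentVolumeClaim. This is only a sketch: the claim name and size are placeholders, and wiring the volume into the chart's Jupyter pod depends on what the Dask Helm chart exposes, which is not shown here.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dask-notebook-work    # hypothetical claim name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi    # placeholder size for the notebook work directory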
In Summary
The Helm chart includes a Jupyter notebook server for convenience and easy testing, but it is no substitute for a full-fledged, long-term persistent productivity suite. For that you might consider a project like JupyterHub (which handles the problems you list above) or one of the many enterprise-targeted variants on the market today. It would be easy to use Dask alongside any of those.

AWS Elasticsearch domain deployed through CloudFormation. How to update ES version without replacement?

We have an AWS Elasticsearch domain we created through CloudFormation running version 6.3 of ES. When we update the ElasticsearchVersion property in the template, it replaces the Elasticsearch domain with a new one running the new version instead of updating the existing one.
How does anyone upgrade their Elasticsearch domains that were deployed with CF if it doesn't do an in-place upgrade? I am almost thinking at this point I need to create and manage my ES domains through boto3.
Any insight or ideas would be greatly appreciated.
This is now possible (as of 25/11/2019) by setting an UpdatePolicy with EnableVersionUpgrade: true.
For example:
ElasticSearchDomain:
  Type: AWS::Elasticsearch::Domain
  Properties: ...
  UpdatePolicy:
    EnableVersionUpgrade: true
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-updatepolicy.html#cfn-attributes-updatepolicy-upgradeelasticsearchdomain
Received correspondence back from AWS Support regarding an ES in-place upgrade through CloudFormation.
tl;dr It is currently not supported but a feature request is already active for this functionality.
You are correct in saying that ES in-place upgrade is not supported by CFN at this moment. Thus, upgrading ES from 6.3 to 6.4 via the CLI or AWS Console will keep the existing domain, but with CloudFormation it will launch a new domain and discard the existing one.
I see that there is already an active feature request for this. I will go ahead and pass your sentiment along to our internal team about this matter as well.
Unfortunately, AWS Support does not have visibility into the service enhancement implementation roadmap, so I would not be able to provide you with an exact time frame.

Orion installation through Chef Recipes and Blueprints in FIWARE Lab

I was trying to install Orion through Chef recipes in FIWARE Lab (creating a new template), but I couldn't find the package in the list.
Also, while trying to run it by cloning a blueprint template that already exists, it returns an error (image can't be found). I've also realised that in the blueprint template, the Orion Context Broker version is outdated (0.13.0).
Could somebody perform these actions without errors? Is it under maintenance?
Yes, it was under maintenance. Now you should be able to create templates with Orion and clone the predefined templates.
Sorry for any inconvenience.

Amazon EC2 Auto Scaling in production

I have realized that I have to make an image from the EBS volume every time I change my code,
and then update the Auto Scaling configuration to use it every time (this is really bad).
I have heard that some people load their newest code from GitHub or use some similar approach,
so that the server gets the newest code automatically without making a new image every single time.
I already have a private GitHub repository.
Is this the only way to solve Auto Scaling code management?
If so, how can I configure this to work?
Use user-data scripts, which work on a lot of public images, including Amazon's. You could have the script download Puppet manifests/templates/files and apply them directly. Search for "masterless Puppet".
Yes, you can configure your AMI so that the instance loads the latest software and configuration on first boot, before it is put into service in the Auto Scaling group.
How to set up a startup script may depend on the specific OS and version you are running.
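For instance, here is a minimal sketch of the user-data approach in a CloudFormation launch configuration (the AMI ID, repository URL, and start script are placeholders, and this assumes git is already installed on the AMI):

Resources:
  AppLaunchConfig:
    Type: AWS::AutoScaling::LaunchConfiguration
    Properties:
      ImageId: ami-33333333     # placeholder: your base AMI
      InstanceType: t2.micro    # placeholder instance type
      UserData:
        Fn::Base64: |
          #!/bin/bash
          # Runs on first boot, before the instance enters service:
          # pull the latest code instead of baking it into the image.
          git clone https://github.com/example/app.git /opt/app    # placeholder repo
          /opt/app/start.sh                                        # placeholder start script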

How to create a persistent ebs backed AMI?

I would like to create a 64-bit Ubuntu AMI that is backed by EBS and is persistent. By persistent, I mean that I want to be able to seamlessly make changes to the AMI without worrying about snapshotting it myself. What is the best way to do this? Are there any services that provide this?
There are so many blog posts that talk about getting started on EC2, but so few that have any interesting detail in them.
As time goes by, more of these services are popping up. Check out Turnkey Linux, which is relatively mature. Another option is DotCloud.
I just set up an instance of CentOS 5.4 where I made an AMI of my server configuration, and I make backups of the entire database and files every day with snapshots. I wrote a detailed tutorial about it here: http://www.sundh.com/blog/2011/02/scheduled-backups-on-ebs-instance/
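If you would rather not schedule the snapshots yourself, a newer option is Amazon Data Lifecycle Manager, which can take daily EBS snapshots of tagged volumes automatically. A minimal CloudFormation sketch (the role ARN, tag, and schedule values are all placeholders) might look like:

DailySnapshotPolicy:
  Type: AWS::DLM::LifecyclePolicy
  Properties:
    Description: Daily snapshots of tagged EBS volumes
    State: ENABLED
    ExecutionRoleArn: arn:aws:iam::123456789012:role/AWSDataLifecycleManagerDefaultRole    # placeholder account/role
    PolicyDetails:
      ResourceTypes:
        - VOLUME
      TargetTags:
        - Key: Backup    # placeholder tag that selects volumes to snapshot
          Value: daily
      Schedules:
        - Name: DailySnapshots
          CreateRule:
            Interval: 24
            IntervalUnit: HOURS
            Times:
              - "03:00"    # snapshot time in UTC (placeholder)
          RetainRule:
            Count: 7    # keep the most recent 7 snapshots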