Automate AMI Update in EC2 - aws-cloudformation

I am using a CloudFormation template to create 4 EC2 instances behind an ELB. These instances will be associated with a launch config and Auto Scaling group.
We update our AMIs every 2 months. If I have to update the AMIs without any downtime, what would be the best strategy? I am using Jenkins for orchestration.
The plan that I have in mind is this:
Template #1 - creates the ASG and launch config
Template #2 - creates/updates the ELB with the new instances created
First execution
1. Create a CloudFormation stack comprising the launch config and Auto Scaling group.
This will launch 4 EC2 instances and bootstrap the application.
2. Create a 2nd template that will create the ELB and bind the 4 created instances to it. It will also bind the ELB to the ASG.
When the AMI has to be updated:
1. Execute the first template, which will create a new ASG. The idea is to create a new ASG rather than update the existing one, since the ELB has to continue sending traffic to the old ASG until all the new instances are up and running.
2. Once the new servers are up, execute the 2nd template, which will update the ELB with the new instances and associate the new Auto Scaling group with the ELB.
3. Delete the old stack.
Is there a better way to achieve this?

CloudFormation has support for this natively. Take a look at the UpdatePolicy attribute documentation.
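For illustration, here is a minimal sketch of that approach (all names, sizes, and values below are placeholders, and the ELB/target group attachment from your setup is omitted): the ASG gets an AutoScalingRollingUpdate policy, the AMI ID is a template parameter, and updating the stack with a new AMI replaces instances in batches while keeping the current capacity in service.

# Sketch only: write a template fragment with a rolling-update policy, then
# update the stack with the new AMI ID to trigger the rolling replacement.
cat <<'EOF' > asg.yaml
Parameters:
  AmiId:
    Type: AWS::EC2::Image::Id
Resources:
  LaunchConfig:
    Type: AWS::AutoScaling::LaunchConfiguration
    Properties:
      ImageId: !Ref AmiId
      InstanceType: t3.medium
  AppASG:
    Type: AWS::AutoScaling::AutoScalingGroup
    UpdatePolicy:
      AutoScalingRollingUpdate:
        MinInstancesInService: 4   # keep current capacity serving traffic
        MaxBatchSize: 1            # replace one instance at a time
        PauseTime: PT5M
    Properties:
      LaunchConfigurationName: !Ref LaunchConfig
      MinSize: "4"
      MaxSize: "5"
      DesiredCapacity: "4"
      AvailabilityZones: !GetAZs ''
EOF

aws cloudformation deploy \
  --stack-name app-asg \
  --template-file asg.yaml \
  --parameter-overrides AmiId=ami-0123456789abcdef0

Adding WaitOnResourceSignals plus cfn-signal in the instance bootstrap would additionally make CloudFormation wait for each new instance to report healthy before moving on to the next batch.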

Related

Auto-joining newly created VMs as Kubernetes nodes to the master

Hi, I am working on Google Cloud Platform, where I am not using GKE. Rather, I am creating a k8s cluster manually. Following is my setup:
Total: 11 servers
Out of these, 5 servers would be static and don't need any scaling.
The remaining 5 servers would need scaling up if CPU or RAM consumption goes beyond a certain limit, i.e. I will spin up only 3 servers initially and, if the CPU/RAM threshold is crossed, I will spin up 2 more using a Google Cloud load balancer.
1 k8s master server
To implement this load balancing I have already created a custom image on which I have installed Docker and Kubernetes. Using this I have created an instance template and then an instance group.
Now the problem statement is:
Although I have created the image with everything installed, when I create an instance group in which 3 VMs are created, these VMs do not automatically connect to my k8s master. Is there any way to automatically connect a newly created VM as a node to the k8s master, so that I do not have to run the join command manually on each server?
Thanks for the help in advance.
so that I do not have to run the join command manually on each server
I am assuming that you can successfully run the join command to join the newly created VMs to the Kubernetes master manually.
If that is the case, you can use the startup-script feature in the Google Compute Engine.
Here is the documentation:
https://cloud.google.com/compute/docs/instances/startup-scripts
https://cloud.google.com/compute/docs/instances/startup-scripts/linux#passing-directly
In short, startup-script is a Google Compute Engine feature that automatically runs a customized script during start-up.
And, the script could look something like this:
#! /bin/bash
kubeadm join .......
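To wire this into the instance group (a sketch with placeholder names, assuming the join script above is saved locally as join-node.sh), attach it to the instance template so every VM created by the group runs it on first boot:

# Placeholder names; adjust the image, template name, and script path to your setup.
gcloud compute instance-templates create k8s-node-template \
  --image my-k8s-custom-image \
  --metadata-from-file startup-script=join-node.sh

One practical note: kubeadm join tokens expire after 24 hours by default, so the script should either use a long-lived token or fetch a fresh join command rather than hard-coding one.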

How can I update an ECS service, adding an additional load balancer that the service talks to, without downtime?

We use terraform to manage our AWS resources and have the code to change the service from one to two load balancers.
However, Terraform wants to destroy the service prior to recreating it. The AWS CLI docs indicate the reason: the API can only modify LBs during service creation and not on update.
create-service
update-service
It seems we need a blue/green deploy with the one-LB and two-LB services existing at the same time on the same cluster. I expect we'll need to create multiple task sets and the rest of the blue/green approach prior to this change (we'd planned for this anyway, just not at this moment).
Does anyone have a good example for this scenario or know of any other approaches beyond full blue/green deployment?
Alas, it is not possible to change the number of LBs during an update. The service must be destroyed and recreated.
Ideally, one would be doing blue/green deploys with multiple ECS clusters and a set of LBs. Then cluster A could have the old service and cluster B the new service, allowing traffic to move from A to B as we go from blue to green.
We aren't quite there yet but plan to be soon. So, for now, we will do the classic parking lot switch approach:
In this example the service that needs to go from 1 LB to 2 LBs is called target_service.
1. Clone target_service to become target_service2.
2. Deploy microservices that can talk to either target_service or target_service2.
3. Verify that target_service and target_service2 are both handling incoming data and requests.
4. Modify the target_service infra-as-code to go from 1 to 2 LBs.
5. Deploy the modified target_service (the Terraform deployment tool will destroy target_service, leaving target_service2 to cover the gap, and then deploy target_service with 2 LBs).
6. Verify that target_service with 2 LBs works and handles requests.
7. Destroy and remove target_service2, as it is no longer needed.
So, this is a blue/green-like deploy, albeit less elegant.
To update an Amazon ECS service with multiple load balancers, you need to ensure that you are using version 2 of the AWS CLI. Then, you can run the following command:
aws ecs update-service \
--cluster <cluster-name> \
--service <service-name> \
--load-balancers "[{\"containerName\": \"<container-name>\", \"containerPort\": <container-port>, \"targetGroupArn\": \"<target-group-arn1>\"}, {\"containerName\": \"<container-name>\", \"containerPort\": <container-port>, \"targetGroupArn\": \"<target-group-arn2>\"}]"
In the above command, replace <cluster-name> with the name of your ECS cluster, <service-name> with the name of your ECS service, <container-name> with the name of the container in your task definition, <container-port> with the port number the container listens on, and <target-group-arn1> and <target-group-arn2> with the ARNs of your target groups.
Update 2022
ECS now supports updating the load balancers of a service. For now, this can be achieved using the AWS CLI, AWS SDK, or AWS CloudFormation (not from the AWS Console). From the documentation:
When you add, update, or remove a load balancer configuration, Amazon ECS starts new tasks with the updated Elastic Load Balancing configuration, and then stops the old tasks when the new tasks are running.
So, you don't need to create a new service. See the AWS update regarding this here. Please refer to the UpdateService API doc for how to make the request.

Using CloudFormation to build an environment on a new account

I'm trying to write some CloudFormation templates to set up a new account with all the resources needed for running our site. In this case we'll be setting up a UAT/test environment.
I have setup:
VPC
Security groups
ElastiCache
ALB
RDS
Auto scaling group
What I'm struggling with is, when I bring up my auto scaling group with our silver AMI, it fails health checks and the auto scaling group gets rolled back.
I have our code in a git repo, which is to be deployed via CodeDeploy, but it seems I can't add a CodeDeploy deployment without an auto scaling group, and I can't set up the auto scaling group without CodeDeploy.
Should I modify our silver AMI to fake the health checks so the auto scaling group can be created? Or should I create the auto scaling group without health checks until a later step?
How can I programmatically setup CodeDeploy with Cloudformation so it pulls the latest code from our git repo?
Create the deployment app, group, etc. when you create the rest of the infrastructure, via CloudFormation.
One of the parameters to your template would be the app package already found in an S3 CodeDeploy bucket, or the GitHub commit ID of a working release of your app.
In addition to the other methods available to you in CodeDeploy, you can use AWS CloudFormation templates to perform the following tasks: Create applications, Create deployment groups and specify a target revision, Create deployment configurations, Create Amazon EC2 instances.
See https://docs.aws.amazon.com/codedeploy/latest/userguide/reference-cloudformation-templates.html
With this approach you can launch a working version of your app as you create your infrastructure. Use normal health checks so you can be sure your app is properly configured.
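If it helps, here is a minimal sketch of those CodeDeploy pieces in a template (all names are placeholders; the service role, Auto Scaling group, and S3 revision bundle are assumed to exist or be created elsewhere in the stack):

# Sketch only: CodeDeploy application + deployment group tied to the ASG, with
# the app revision passed in as parameters.
cat <<'EOF' > codedeploy.yaml
Parameters:
  RevisionBucket:
    Type: String
  RevisionKey:
    Type: String
  CodeDeployServiceRoleArn:
    Type: String
  AppAutoScalingGroup:
    Type: String
Resources:
  App:
    Type: AWS::CodeDeploy::Application
    Properties:
      ComputePlatform: Server
  DeploymentGroup:
    Type: AWS::CodeDeploy::DeploymentGroup
    Properties:
      ApplicationName: !Ref App
      ServiceRoleArn: !Ref CodeDeployServiceRoleArn
      AutoScalingGroups:
        - !Ref AppAutoScalingGroup
      Deployment:
        Revision:
          RevisionType: S3
          S3Location:
            Bucket: !Ref RevisionBucket
            Key: !Ref RevisionKey
            BundleType: zip
EOF

Because the Deployment property is set, CloudFormation kicks off a deployment of that revision as soon as the deployment group is created.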

ECS auto scaling cluster with EC2 count

To deploy my docker-compose setup, I am using AWS ECS.
Everything works fine except auto scaling.
When creating the ECS cluster, I can decide the number of instances, so I set it to 1.
Next, when creating the service on my cluster, I can also decide the number of tasks. I know that tasks run on the instances, so I set it to 1.
I then specified an auto scaling policy like this.
As you can see, if the CPU percentage goes above 50 for 5 minutes, it automatically adds a task.
After finishing the configuration, I ran a benchmark to test it.
In the service description, the desired task count increased to 2.
But an instance was not added automatically.
In the event log,
Maybe because I defined the number of instances as 1 in my cluster, it can't start a new task.
Why does auto scaling not automatically add a new instance to my cluster?
Is there any problem with my configuration?
Thanks.
Your ECS cluster is not autoscaling the number of instances. It autoscales the number of tasks that are running inside your existing cluster. An EC2 instance can have multiple tasks running. To autoscale the instance count, you will need to use CloudWatch alarms:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/cloudwatch_alarm_autoscaling.html
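As an illustration of that approach (cluster name, ASG name, threshold, and policy names below are placeholders): create a simple scaling policy on the Auto Scaling group that backs the cluster, then a CloudWatch alarm on the cluster's CPUReservation metric that triggers it.

# Placeholders throughout; adjust names, threshold, and region to your setup.
# 1. Scaling policy that adds one EC2 instance to the cluster's Auto Scaling group.
POLICY_ARN=$(aws autoscaling put-scaling-policy \
  --auto-scaling-group-name my-ecs-cluster-asg \
  --policy-name scale-out-one-instance \
  --adjustment-type ChangeInCapacity \
  --scaling-adjustment 1 \
  --query PolicyARN --output text)

# 2. Alarm on the cluster's CPU reservation that fires the policy above.
aws cloudwatch put-metric-alarm \
  --alarm-name my-ecs-cluster-cpu-reservation-high \
  --namespace AWS/ECS \
  --metric-name CPUReservation \
  --dimensions Name=ClusterName,Value=my-ecs-cluster \
  --statistic Average \
  --period 300 \
  --evaluation-periods 1 \
  --threshold 75 \
  --comparison-operator GreaterThanOrEqualToThreshold \
  --alarm-actions "$POLICY_ARN"

The metric, threshold, and adjustment shown here will need tuning for your workload.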
You are receiving this issue because of a port conflict when ECS attempts to use the "closest matching container instance", which in this case is the one ending in 9e5e.
When attempting to spin up a task on that instance, ECS notices that the instance "is already using a port required by your task".
In order to resolve this issue, you need to use dynamic port mapping for your ECS cluster.
There is a tutorial on how to do this that Amazon provides here:
https://aws.amazon.com/premiumsupport/knowledge-center/dynamic-port-mapping-ecs/
Essentially, you will need to modify the port mapping in the task definition that contains the Docker container you are trying to run and scale.
The port mapping should use 0 for the host port and, for the container port, the port your application listens on.
The zero value makes each container that ECS runs in the cluster use a different host port, eliminating the port conflict you are experiencing.
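For reference, a sketch of the relevant part of such a task definition (family, image, memory, and port values are placeholders); note hostPort set to 0:

# Placeholder task definition registered via the CLI; hostPort 0 enables dynamic port mapping.
cat <<'EOF' > taskdef.json
{
  "family": "my-web-app",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "my-registry/my-web-app:latest",
      "memory": 512,
      "essential": true,
      "portMappings": [
        { "containerPort": 8080, "hostPort": 0, "protocol": "tcp" }
      ]
    }
  ]
}
EOF
aws ecs register-task-definition --cli-input-json file://taskdef.json

The service then needs to sit behind an Application Load Balancer target group, which keeps track of the dynamically assigned host ports; the linked tutorial covers that part.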

Can I create a GCP cluster with different machine types?

I'd like to create a cluster with two different machine types.
How would I go about doing this? What documentation is available?
I assume you are talking about a Google Container Engine cluster.
You can have machines of different types by having more than one node pool.
If you are creating the cluster in the Console, start by creating it with one node pool, and after it is created, edit the cluster to add a second node pool with a different instance configuration. This is necessary because the UI only allows one node pool at creation time.
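If you are using the gcloud CLI rather than the Console (cluster name, pool name, zone, and machine types below are placeholders), the equivalent is to create the cluster and then add a second node pool with a different machine type:

# Placeholders; pick your own names, zone, machine types, and node counts.
gcloud container clusters create my-cluster \
  --zone us-central1-a \
  --machine-type n1-standard-2 \
  --num-nodes 3

gcloud container node-pools create high-mem-pool \
  --cluster my-cluster \
  --zone us-central1-a \
  --machine-type n1-highmem-4 \
  --num-nodes 2

Kubernetes can then target a specific pool at scheduling time via the node labels GKE applies to each pool.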