I'm trying to write some CloudFormation templates to set up a new account with all the resources needed for running our site. In this case we'll be setting up a UAT/test environment.
I have set up:
VPC
Security groups
ElastiCache
ALB
RDS
Auto scaling group
What I'm struggling with is that when I bring up my auto scaling group with our silver AMI, it fails health checks and the auto scaling group gets rolled back.
I have our code in a git repo, which is to be deployed via CodeDeploy, but it seems I can't add a CodeDeploy deployment without an auto scaling group, and I can't set up the auto scaling group without CodeDeploy.
Should I modify our silver AMI to fake the health checks so the auto scaling group can be created? Or should I create the auto scaling group without health checks until a later step?
How can I programmatically set up CodeDeploy with CloudFormation so it pulls the latest code from our git repo?
Create the deployment app, group, etc. when you create the rest of the infrastructure, via CloudFormation.
One of the parameters to your template would be the app package already uploaded to an S3 CodeDeploy bucket, or the GitHub commit ID of a working release of your app.
In addition to the other methods available to you in CodeDeploy, you can use AWS CloudFormation templates to perform the following tasks:
Create applications
Create deployment groups and specify a target revision
Create deployment configurations
Create Amazon EC2 instances
See https://docs.aws.amazon.com/codedeploy/latest/userguide/reference-cloudformation-templates.html
With this approach you can launch a working version of your app as you create your infrastructure. Use normal health checks so you can be sure your app is properly configured.
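Here's a minimal sketch of those CodeDeploy pieces in a CloudFormation template, assuming the service role and auto scaling group are defined elsewhere in the same stack (all names here are hypothetical):

Parameters:
  DeployBucket:
    Type: String   # S3 bucket that holds the app bundle
  DeployKey:
    Type: String   # key of the bundle, i.e. a zipped release of your app

Resources:
  AppApplication:
    Type: AWS::CodeDeploy::Application

  AppDeploymentGroup:
    Type: AWS::CodeDeploy::DeploymentGroup
    Properties:
      ApplicationName: !Ref AppApplication
      ServiceRoleArn: !GetAtt CodeDeployServiceRole.Arn   # role defined elsewhere in the stack
      AutoScalingGroups:
        - !Ref AppAutoScalingGroup                        # the ASG defined elsewhere in the stack
      Deployment:
        Revision:
          RevisionType: S3
          S3Location:
            Bucket: !Ref DeployBucket
            Key: !Ref DeployKey
            BundleType: Zip

Because the deployment group carries a target revision, creating the stack also deploys the app, so the instances can pass their health checks without any faking.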
I want to use an Azure DevOps YAML pipeline to deploy to an AWS stack with EC2 instances and a Load Balancer. I've read here that I can use the AWS userdata script to join new EC2 instances to the Azure DevOps Environment.
My question is, how can I get my Azure DevOps Environment or YAML build to deploy to new servers that join that group? For example, if I use auto-scaling and a new server spins up.
I know that Deployment Groups, which are used in the Classic Pipelines, had a feature that allowed you to enable a Post Deployment Trigger that could redeploy the last successful build when a new server joined, like this.
Is this possible to do with YAML Environments? If so, how?
If it matters, I hope to be able to share the AWS stack and have several separate applications that will get deployed to the same stack with their own YAML builds.
We use terraform to manage our AWS resources and have the code to change the service from one to two load balancers.
However, terraform wants to destroy the service prior to recreating it. The AWS CLI docs indicate the reason: the API can only set load balancers during service creation, not on update.
create-service
update-service
It seems we need a blue/green deploy with the one-LB and two-LB services existing at the same time on the same cluster. I expect we'll need to create multiple task sets and the rest of the blue/green approach prior to this change (we'd planned for this anyway, just not at this moment).
Does anyone have a good example for this scenario or know of any other approaches beyond full blue/green deployment?
Alas, it is not possible to change the number of LBs during an update. The service must be destroyed and recreated.
Ideally, one would be doing blue green deploys with multiple ECS clusters and a set of LBs. Then cluster A could have the old service and cluster B have the new service allowing traffic to move from A to B as we go from blue to green.
We aren't quite there yet but plan to be soon. So, for now, we will do the classic parking lot switch approach:
In this example, the service that needs to go from 1 LB to 2 LBs is called target_service.
clone target_service to become target_service2
deploy microservices that can talk to either target_service or target_service2
verify that target_service and target_service2 are both handling incoming data and requests
modify target_service infra-as-code to go from 1 to 2 LBs
deploy modified target_service (the terraform deployment tool will destroy target_service, leaving target_service2 to cover the gap, and then deploy target_service with 2 LBs)
verify that target_service with 2 LBs works and handles requests
destroy and remove target_service2 as it is no longer needed
So this is a blue-green-like deploy, albeit less elegant.
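For illustration, the end state of the "modify target_service infra-as-code" step is just a second load_balancer block on the ECS service. A minimal Terraform sketch, with hypothetical resource and container names:

resource "aws_ecs_service" "target_service" {
  name            = "target_service"
  cluster         = aws_ecs_cluster.main.id
  task_definition = aws_ecs_task_definition.target_service.arn
  desired_count   = 2

  # the original load balancer
  load_balancer {
    target_group_arn = aws_lb_target_group.one.arn
    container_name   = "app"
    container_port   = 8080
  }

  # the new, second load balancer; on the older ECS API applying this forces
  # destroy-and-recreate, hence the parking lot switch above
  load_balancer {
    target_group_arn = aws_lb_target_group.two.arn
    container_name   = "app"
    container_port   = 8080
  }
}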
To update an Amazon ECS service with multiple load balancers, you need to ensure that you are using version 2 of the AWS CLI. Then, you can run the following command:
aws ecs update-service \
  --cluster <cluster-name> \
  --service <service-name> \
  --load-balancers '[{"containerName": "<container-name>", "containerPort": <container-port>, "targetGroupArn": "<target-group-arn1>"}, {"containerName": "<container-name>", "containerPort": <container-port>, "targetGroupArn": "<target-group-arn2>"}]'
In the above command, replace <cluster-name> with the name of your ECS cluster, <service-name> with the name of your ECS service, <container-name> with the name of the container in your task definition, <container-port> with the port number the container listens on, and <target-group-arn1> and <target-group-arn2> with the ARNs of your target groups.
Update 2022
ECS now supports updating the load balancers of a service. For now, this can be achieved using the AWS CLI, AWS SDK, or AWS CloudFormation (not from the AWS Console). From the documentation:
When you add, update, or remove a load balancer configuration, Amazon ECS starts new tasks with the updated Elastic Load Balancing configuration, and then stops the old tasks when the new tasks are running.
So, you don't need to create a new service. See the AWS update regarding this here, and refer to the UpdateService API doc on how to make the request.
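If you manage the service with CloudFormation, the change amounts to declaring the second target group on the existing AWS::ECS::Service and updating the stack. A minimal sketch (container name, port, and referenced resources are hypothetical):

  Service:
    Type: AWS::ECS::Service
    Properties:
      Cluster: !Ref Cluster
      TaskDefinition: !Ref TaskDefinition
      DesiredCount: 2
      LoadBalancers:
        # ECS starts new tasks registered with both target groups,
        # then stops the old tasks once the new ones are running
        - ContainerName: app
          ContainerPort: 8080
          TargetGroupArn: !Ref TargetGroupOne
        - ContainerName: app
          ContainerPort: 8080
          TargetGroupArn: !Ref TargetGroupTwo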
We generally use blue/green and rolling deployment strategies to deploy and update Docker containers on ECS container instances.
Ansible's ECS modules allow implementing such deployment strategies with the below modules:
https://docs.ansible.com/ansible/latest/modules/ecs_taskdefinition_module.html
https://docs.ansible.com/ansible/latest/modules/ecs_task_module.html
https://docs.ansible.com/ansible/latest/modules/ecs_service_module.html
Does AWS CDK provide such constructs for implementing deployment strategies?
CDK supports higher-level constructs for ECS called "ECS patterns". One of them is ApplicationLoadBalancedFargateService, which allows you to define an ECS Fargate service behind an Application Load Balancer. Rolling update is supported out of the box in this case. You simply run cdk deploy with a newer Docker image and ECS will take care of the deployment. It will:
Start a new task with the new Docker image.
Wait for several successful health checks of the new deployment.
Start sending new traffic to the new task, while letting the existing connections gracefully finish on the old task.
Once all old connections are done, ECS will automatically stop the old task.
If your new task does not start or is not healthy, ECS will keep running the original task.
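A minimal sketch of such a service in CDK (TypeScript, aws-cdk-lib v2); the image name, port, and desired count are hypothetical:

import * as cdk from 'aws-cdk-lib';
import * as ecs from 'aws-cdk-lib/aws-ecs';
import * as ecsPatterns from 'aws-cdk-lib/aws-ecs-patterns';

const app = new cdk.App();
const stack = new cdk.Stack(app, 'FargateStack');

// ALB-fronted Fargate service; running `cdk deploy` with a newer image
// tag triggers the rolling update described above.
new ecsPatterns.ApplicationLoadBalancedFargateService(stack, 'Service', {
  taskImageOptions: {
    image: ecs.ContainerImage.fromRegistry('my-org/my-app:1.2.3'), // hypothetical image
    containerPort: 8080,
  },
  desiredCount: 2,
  publicLoadBalancer: true,
});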
Regarding blue-green deployment, I think it's yet to be supported in CloudFormation. Once that's done, it can be implemented in CDK. If you can live without blue-green as IaC, you can define your CodeDeploy deployment manually.
Check this npm package, which helps with blue-green deployment using CDK:
https://www.npmjs.com/package/@cloudcomponents/cdk-blue-green-container-deployment
Blue-green deployments are supported in CloudFormation now.
https://aws.amazon.com/about-aws/whats-new/2020/05/aws-cloudformation-now-supports-blue-green-deployments-for-amazon-ecs/
I don't think the CDK implementation is done yet.
I have created a Bluemix application and configured the Delivery Pipeline so that it does zero-downtime deployment with the help of Active Deploy, but in order to add more routes to the instance I need to use either the CF CLI or the Bluemix console, which will make the application restart or restage.
I have also tried adding the routes to the manifest.yml file, which did not help, as the Active Deploy-based deployment in the Delivery Pipeline did not consider the routes added to the manifest file at all.
I would like to add a route to the application based on requirements, without any downtime to the application. Any suggestions or ways to handle this?
I've created a SF cluster from the Azure portal and by default it uses incrementing ports starting at 3389 for RDP access to the VMs. How can I change this to another range?
Additionally, or even alternatively, is there a way to specify the range when I create a cluster?
I realize this may not be so much of a SF question as a Load Balancer or Scale Set question, but I ask in the context of SF because that is how I created this setup. IOW I did not create the load balancer or scale set myself.
You can change this with an ARM template (Azure Resource Manager).
Since you will run into situations from time to time where you want to change parts of your infrastructure, I'd recommend creating the whole cluster from an ARM template instead of through the portal. By doing so you could also create the cluster in an existing VNET, use internal load balancers, etc.
To create the cluster from an ARM template, you can either start with the Azure Quickstart template or by clicking on "Export template" in the Azure Portal right before you would actually create the cluster.
To change the inbound NAT rules for RDP in the template, change the inboundNatPools section of the template.
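For example, in the quickstart template the RDP NAT pool looks roughly like this (names and the frontend IP reference vary by template; the port range is where you'd set your own values):

"inboundNatPools": [
  {
    "name": "LoadBalancerBEAddressNatPool",
    "properties": {
      "backendPort": 3389,
      "frontendIPConfiguration": {
        "id": "[variables('lbIPConfig0')]"
      },
      "frontendPortRangeStart": 4000,
      "frontendPortRangeEnd": 4500,
      "protocol": "tcp"
    }
  }
]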
If you want to change your existing cluster, you can either export your existing resource group as a template, or you can try to create a template which contains just the loadBalancer resource and re-deploy just this part.
Working with ARM templates takes some getting used to, but it has many advantages: it allows you to easily change settings that cannot be configured through the portal, to re-create the cluster for different environments, etc.