Config lives with the application - azure-bicep

I don't think this is possible, but I'll ask anyway.
I have a network deployment that creates a vnet, subnets, and NSGs. I then have a separate deployment that creates an application; the app needs to update an NSG so that traffic is allowed.
But if I re-run the vnet deployment, the application-specific changes are removed because they don't exist within the vnet's main.bicep.
I know I can write some code that takes the NSG values and re-applies them after the vnet deployment, but this will create downtime.
I'm pretty sure there isn't a way to persist the changes, but thought I'd ask how others do this?
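For reference, a rough sketch of the "capture and re-apply" workaround mentioned above, using the Azure CLI (resource group, NSG, and rule names are placeholders, not my actual setup). It still leaves a window where the rules are missing, so the downtime concern stands:

# 1. Save the current rules on the NSG before touching the network deployment.
az network nsg rule list \
  --resource-group my-network-rg \
  --nsg-name app-subnet-nsg \
  --output json > nsg-rules-backup.json

# 2. Re-run the vnet deployment (this is the step that wipes rules not declared in the template).
az deployment group create \
  --resource-group my-network-rg \
  --template-file ./vnet/main.bicep

# 3. Re-create an application rule afterwards, e.g. allow inbound HTTPS.
az network nsg rule create \
  --resource-group my-network-rg \
  --nsg-name app-subnet-nsg \
  --name allow-app-https \
  --priority 200 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --destination-port-ranges 443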

How to deploy a container to ECS+Fargate+CodeDeploy without a load balancer?

I have an app that comes in two pieces:
The (Ruby/Rails) main app which is also the web frontend
A Sidekiq event handler
The first one is not a problem; done that. CodeDeploy will deploy the new version by creating the new service(s) on the "blue" load balancer target group and start moving the traffic to them in a blue/green setup.
However, Sidekiq doesn't listen on a port, so there's no health check. This is the problem I'm having now: the NLB doesn't validate the health of the container, so it keeps restarting, and the deploy eventually times out because it can't get a clean service.
I'm not sure if this is an ECS limitation or a CodeDeploy limitation (or a "me limitation" :) ), but it seems I have to have a load balancer for that to work.
But something about that just rings wrong! I'm sure AWS must have thought about "worker containers", right? How do you set up an ECS cluster with two services, each with one task (the two parts above)? And how do you then use CodeDeploy to deploy to that service/task? Or do you deploy Sidekiq (in this case) in some other way?
Having two services, I can scale them individually.
At the moment, I have a CodePipeline triggering a CodeBuild to build the image and push it into ECR, then triggering CodeDeploy to do the deployment. It worked initially when I had one service with the two tasks, but then both would scale simultaneously, which isn't what I want in the end.
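For what it's worth, here is a rough sketch of what I imagine the separate Sidekiq service could look like: created without any load balancer and using the default rolling ("ECS") deployment controller instead of CODE_DEPLOY (all names and IDs below are placeholders, not my real setup):

# A second ECS service for the Sidekiq worker, with no target group attached.
# CodeDeploy blue/green needs a target group and health check, so this service
# would use plain rolling deployments instead.
aws ecs create-service \
  --cluster my-cluster \
  --service-name sidekiq-worker \
  --task-definition sidekiq-worker:1 \
  --desired-count 1 \
  --launch-type FARGATE \
  --deployment-controller type=ECS \
  --network-configuration "awsvpcConfiguration={subnets=[subnet-0123456789abcdef0],securityGroups=[sg-0123456789abcdef0],assignPublicIp=DISABLED}"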

How can I update an ECS service, adding an additional load balancer that the service talks to, without downtime?

We use terraform to manage our AWS resources and have the code to change the service from one to two load balancers.
However, terraform wants to destroy the service prior to recreating it. The AWS CLI docs indicate the reason: the API can only modify LBs during service creation and not on update.
create-service
update-service
It seems we need a blue/green deploy with the one-LB and two-LB services existing at the same time on the same cluster. I expect we'll need to create multiple task sets and implement the rest of the blue/green approach prior to this change (we'd planned for this anyway, just not at this moment).
Does anyone have a good example for this scenario or know of any other approaches beyond full blue/green deployment?
Alas, it is not possible to change the number of LBs during an update. The service must be destroyed and recreated.
Ideally, one would be doing blue/green deploys with multiple ECS clusters and a set of LBs. Then cluster A could have the old service and cluster B the new service, allowing traffic to move from A to B as we go from blue to green.
We aren't quite there yet but plan to be soon. So, for now, we will do the classic parking lot switch approach:
In this example the service that needs to go from 1 LB to 2 LBs is called target_service
clone target_service to become target_service2
deploy microservices that can talk to either target_service or target_service2
verify that target_service and target_service2 are both handling incoming data and requests
modify target_service infra-as-code to go from 1 to 2 LBs
deploy the modified target_service (the terraform deployment tool will destroy target_service, leaving target_service2 to cover the gap, and then deploy target_service with 2 LBs)
verify that target_service with 2 LBs works and handles requests (see the check sketched below)
destroy and remove target_service2 as it is no longer needed
So, this is a blue/green-like deploy, albeit less elegant.
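For the verification steps, a quick way to confirm that the service actually ended up with the expected load balancer attachments (cluster and service names are placeholders):

# List the load balancer / target group attachments on the service.
aws ecs describe-services \
  --cluster my-cluster \
  --services target_service \
  --query 'services[0].loadBalancers'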
To update an Amazon ECS service with multiple load balancers, you need to ensure that you are using version 2 of the AWS CLI. Then, you can run the following command:
aws ecs update-service \
--cluster <cluster-name> \
--service <service-name> \
--load-balancers "[{\"containerName\": \"<container-name>\", \"containerPort\": <container-port>, \"targetGroupArn\": \"<target-group-arn1>\"}, {\"containerName\": \"<container-name>\", \"containerPort\": <container-port>, \"targetGroupArn\": \"<target-group-arn2>\"}]"
In the above command, replace <cluster-name> with the name of your ECS cluster, <service-name> with the name of your ECS service, <container-name> with the name of the container in your task definition, <container-port> with the port number the container listens on, and <target-group-arn1> and <target-group-arn2> with the ARNs of your target groups.
Update 2022
ECS now supports updating the load balancers of a service. For now, this can be achieved using the AWS CLI, AWS SDKs, or AWS CloudFormation (not from the AWS Console). From the documentation:
When you add, update, or remove a load balancer configuration, Amazon ECS starts new tasks with the updated Elastic Load Balancing configuration, and then stops the old tasks when the new tasks are running.
So, you don't need to create a new service. AWS announced this update here. Please refer to the UpdateService API doc for how to make the request.

Migration K8S cluster

We have several clusters. Right now, we want to upgrade a K8s cluster by replacing it with a new one.
We handle deployments with CI/CD, so when the new cluster is ready, we will start moving apps to the new cluster by running the pipelines.
We're facing a problem with DNS.
All the apps in the Kubernetes cluster are resolved by a wildcard DNS record.
Also, we need to do the migration in multiple steps, so we can't simply point the wildcard at the new cluster, because the old cluster is going to host some apps for a while and those apps need to keep interacting with each other.
Any good solution or alternative to get the migration done smoothly?
And what would be a best practice about DNS to avoid this situation in the future?
Thank you in advance.
You can put in specific DNS records for each hostname as they need to migrate.
Say your wildcard is for *.mycompany.com...
app1.mycompany.com is getting migrated
app2.mycompany.com is staying put until the next batch
Add a record for app2.mycompany.com pointing to the old cluster, and switch the wildcard record to point to the new cluster.
Now app1.mycompany.com will resolve to the new cluster, but the more specific record for app2.mycompany.com will trump the wildcard and keep pointing to the old cluster.
When it's time for app2's DNS cutover, delete the record and the wildcard will take over.
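As a concrete sketch of the cutover (assuming Route 53 purely as an example; the same idea works with any DNS provider, and the zone ID and IPs below are placeholders):

# 1. Pin app2 to the old cluster with a specific record; this overrides the wildcard.
aws route53 change-resource-record-sets \
  --hosted-zone-id Z0123456789EXAMPLE \
  --change-batch '{"Changes":[{"Action":"UPSERT","ResourceRecordSet":{"Name":"app2.mycompany.com","Type":"A","TTL":300,"ResourceRecords":[{"Value":"203.0.113.10"}]}}]}'

# 2. Point the wildcard at the new cluster's ingress.
aws route53 change-resource-record-sets \
  --hosted-zone-id Z0123456789EXAMPLE \
  --change-batch '{"Changes":[{"Action":"UPSERT","ResourceRecordSet":{"Name":"*.mycompany.com","Type":"A","TTL":300,"ResourceRecords":[{"Value":"198.51.100.20"}]}}]}'

# 3. When app2 is ready to migrate, delete its specific record and the wildcard takes over.
aws route53 change-resource-record-sets \
  --hosted-zone-id Z0123456789EXAMPLE \
  --change-batch '{"Changes":[{"Action":"DELETE","ResourceRecordSet":{"Name":"app2.mycompany.com","Type":"A","TTL":300,"ResourceRecords":[{"Value":"203.0.113.10"}]}}]}'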

Switching VPC on an EKS instance

Is it possible to change the VPC of an already created EKS cluster? Or do I have to create a new one and there to select the new VPC?
You should be able to change the VPC configuration for the EKS cluster. However, per the documentation I found, if the VPC config is updated, the update type is replacement, i.e., a new cluster will be created with the updated config.
Please see https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-eks-cluster.html#cfn-eks-cluster-resourcesvpcconfig for more information
Hope this helps.
The correct answer for most situations is "You can't change the VPC for an EKS cluster." The other answer from krisnik refers to a CloudFormation-managed stack, where changing the VPC deletes your EKS cluster and creates a new one which is a lot like the old one, but in a new VPC. But this new cluster isn't going to have any of your Kubernetes applications or anything else in it, unless you were also managing that stuff in CloudFormation. So it's really not particularly helpful unless you fall into that strange combination of using CloudFormation to manage your Kubernetes.
To confirm this, see the API reference for UpdateClusterConfig, which confusingly lets you pass in the resourcesVpcConfig parameter but also has a red-triangle note saying "You can't update the subnets or security group IDs for an existing cluster." So that pretty much settles it.
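To illustrate what that resourcesVpcConfig parameter is actually for on an existing cluster, here's a rough sketch (cluster name is a placeholder): it lets you change things like endpoint access settings, not the networking the cluster was created with.

# Allowed: toggling public/private access to the cluster API endpoint.
aws eks update-cluster-config \
  --name my-cluster \
  --resources-vpc-config endpointPublicAccess=false,endpointPrivateAccess=true

# Per the note quoted above, passing different subnetIds or securityGroupIds here
# isn't supported, so moving the cluster to a new VPC means creating a new cluster.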

Changing a SF clusters default RDP ports

I've created a SF cluster from the Azure portal and by default it uses incrementing ports starting at 3389 for RDP access to the VMs. How can I change this to another range?
Additionally, or even alternatively, is there a way to specify the range when I create a cluster?
I realize this may not be so much of an SF question as a Load Balancer or Scale Set question, but I ask in the context of SF because that is how I created this setup. IOW, I did not create the load balancer or scale set myself.
You can change this with an ARM template (Azure Resource Manager).
Since you will run into situations from time to time where you want to change parts of your infrastructure, I'd recommend creating the whole cluster from an ARM template instead of through the portal. By doing so you could also create the cluster in an existing VNET, use internal load balancers, etc.
To create the cluster from an ARM template, you can either start with the Azure Quickstart template or by clicking on "Export template" in the Azure Portal right before you would actually create the cluster.
To change the inbound NAT rules for RDP in the template, change the section inboundNatPools in the template.
If you want to change your existing cluster, you can either export your existing resource group as a template or you can try to create a template which contains just the loadBalancer-resource and re-deploy just this part.
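If you go the export-and-redeploy route, a rough sketch with the Azure CLI (resource group and file names are placeholders):

# Export the existing resource group as an ARM template.
az group export --name my-sf-cluster-rg > cluster-template.json

# Edit the inboundNatPools section of the load balancer in cluster-template.json
# (e.g. frontendPortRangeStart/frontendPortRangeEnd), then redeploy.
az deployment group create \
  --resource-group my-sf-cluster-rg \
  --template-file cluster-template.json \
  --mode Incremental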
Working with ARM templates takes some getting used to, but it has many advantages. It allows you to easily change settings that cannot be configured through the portal, it allows you to easily re-create the cluster for different environments, etc.