I am trying to create a CI build step that will stop and re-run my tasks when my Docker containers change.
The task definition itself points at the latest tag in ECR, so all I need to do is stop-task and then run-task.
Two of the parameters in the run-task API, as well as in the UI, are PlacementConstraints and PlacementStrategy.
Is there any way to get these from the API after the task has been started, i.e. get them for a running task? describe-tasks doesn't seem to return this information.
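For reference, the build step I have in mind looks roughly like this (a buildspec-style sketch; the cluster and family names are placeholders):

```yaml
version: 0.2
phases:
  build:
    commands:
      # find the currently running task for this family
      - TASK_ARN=$(aws ecs list-tasks --cluster my-cluster --family my-task --query 'taskArns[0]' --output text)
      # stop it, then start a fresh task that pulls :latest from ECR;
      # --placement-constraints / --placement-strategy would have to be
      # re-supplied here from CI config, since I can't read them back
      - aws ecs stop-task --cluster my-cluster --task "$TASK_ARN"
      - aws ecs run-task --cluster my-cluster --task-definition my-task
```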
I'm working on some Terraform logic and using GitHub workflows to deploy multiple components sequentially, e.g. job2 (ALB) depending on the completion of job1 (creation of the VPC). This works fine during the apply phase. However, if I delete the infra using terraform destroy, the sequence of jobs fails, since job1 (destroying the VPC) can't succeed while job2's ALB still exists.
Is there a way to run the workflow jobs in the reverse, bottom-up order based on an input?
I know that we can leverage terraform to deploy these components and handle the dependencies at terraform level. This is an example of a use case I'm working on.
You can control the flow of jobs by using the keyword “needs”. Read the docs here: https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idneeds
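A minimal sketch of that wiring for the scenario in the question (the directory layout and action versions are illustrative):

```yaml
jobs:
  job1:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: terraform -chdir=vpc apply -auto-approve   # creates the VPC
  job2:
    needs: job1        # job2 (ALB) starts only after job1 (VPC) succeeds
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: terraform -chdir=alb apply -auto-approve
```

For the destroy path you would need the dependency reversed (the ALB job first, then the VPC job), e.g. in a separate workflow where the VPC job declares needs: on the ALB job.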
Let's say I am defining a task definition in AWS Fargate; this task definition would be used to start tasks for a multi-container application consisting of 2 web servers. How many task definitions would I need, how many tasks would I pay for, and how many services are created?
I have read a lot of documentation, but it does not click for me. Is there anyone who can explain the correlation between task definitions, tasks, Docker containers, services, and ECS Fargate clusters?
A task definition is a specification. You use it to define one or more containers (with image URIs) that you want to run together, along with other details such as environment variables, CPU/memory requirements, etc. The task definition doesn't actually run anything; it's a description of how things will be set up when something does run.
A task is an actual thing that is running. ECS uses the task definition to run the task: it downloads the container images and configures the runtime environment based on the other details in the task definition. You can run one or many tasks for any given task definition. Each running task is a set of one or more running containers - the containers in a task all run on the same instance.
A service in ECS is a way to run N tasks all using the same task definition, and to keep those N tasks running if they happen to shut down unexpectedly. Those N tasks can run on different instances in EC2 (although some may run on the same instance depending on the placement strategy used for the service); on Fargate, there are no instances and the tasks "just run", so you don't have to think about placement strategies. You can also use services to connect those tasks to a load balancer, so that requests from a client inside or outside of AWS can be routed evenly across all N tasks. You can update the task definition used by a service, which will then trigger a rolling update (starting up and shutting down running tasks) so that all running tasks use the new version of the task definition after the deployment completes. This is used, for example, when you create a new container image and want your service to be updated to the latest version.
A service is scoped to a cluster. A cluster is really just a name. Different clusters can have different IAM policies and roles, so that you can restrict who can create services in different clusters using IAM.
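A minimal CloudFormation sketch may make the correlation concrete (the names, image URIs, and subnet are placeholders; execution roles and security groups are omitted):

```yaml
Resources:
  Cluster:
    Type: AWS::ECS::Cluster          # the cluster is essentially a named scope

  WebTaskDef:
    Type: AWS::ECS::TaskDefinition   # the specification; runs nothing by itself
    Properties:
      RequiresCompatibilities: [FARGATE]
      NetworkMode: awsvpc
      Cpu: '512'
      Memory: '1024'
      ContainerDefinitions:          # both containers run together in each task
        - Name: web-a
          Image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/web-a:latest
        - Name: web-b
          Image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/web-b:latest

  WebService:
    Type: AWS::ECS::Service          # keeps DesiredCount tasks running
    Properties:
      Cluster: !Ref Cluster
      LaunchType: FARGATE
      TaskDefinition: !Ref WebTaskDef
      DesiredCount: 2                # two running tasks, each with both containers
      NetworkConfiguration:
        AwsvpcConfiguration:
          Subnets: [subnet-0abc1234]
```

For the question above, that would be one task definition, one service, and you pay for the two running tasks (Fargate bills per task CPU/memory, not per container).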
The question seems simple enough. I have a bunch of task definitions and a cluster in my CloudFormation template. When setting things up manually, I would create a task based on any of the definitions and give it a cron schedule. It would then start to run.
I can't seem to find this option in CF. I found services, but these only work for tasks that run indefinitely, which mine do not (they run once per day for approx. 10-20 minutes).
After some research I found out about AWS::Events::Rule, which people seem to use only in conjunction with Lambda, which I do not. I was unable to find any example that referenced FARGATE tasks, so I'm not sure it's even possible.
If anyone has any examples of running tasks on a cron schedule using CF, that would be great.
I think that ECS scheduled tasks (cron) would suit you:
Amazon ECS supports the ability to schedule tasks on either a cron-like schedule or in response to CloudWatch Events. This is supported for Amazon ECS tasks using both the Fargate and EC2 launch types.
This is based on CloudWatch Events, which can be used to schedule many things, not only Lambda functions.
To set this up using CloudFormation, you can use AWS::Events::Rule with a target that specifies EcsParameters, for example:
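Here is a sketch of what that can look like for a daily Fargate task (the task definition, cluster, role, and subnet references are placeholders for your own resources):

```yaml
ScheduledTaskRule:
  Type: AWS::Events::Rule
  Properties:
    ScheduleExpression: cron(0 6 * * ? *)   # every day at 06:00 UTC
    State: ENABLED
    Targets:
      - Id: daily-fargate-task
        Arn: !GetAtt Cluster.Arn            # the target is the ECS cluster
        RoleArn: !GetAtt EventsRole.Arn     # must allow ecs:RunTask
        EcsParameters:
          TaskDefinitionArn: !Ref TaskDef
          TaskCount: 1
          LaunchType: FARGATE
          NetworkConfiguration:
            AwsVpcConfiguration:
              Subnets: [subnet-0abc1234]
              AssignPublicIp: ENABLED       # lets the task pull images without a NAT
```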
I want to run a job (or use any other method) on a set of nodes in Rundeck to test the connection with the nodes, and then tag them as node_name_failled or node_name_succeed. Is this possible, with or without plugins, so that it can be achieved in one click?
Currently I'm able to do so by externally parsing each node's execution status and modifying the resource model, but this requires navigating away from the UI.
In Rundeck Enterprise you can use the health check feature and later dispatch your jobs with a filter based on node status. A good way to do that in Community Edition is to call the jobs via the API or RD CLI, wrapped in a bash script that detects the node status first.
Here you have an example of calling a job using the API, and here one using RD CLI.
EDIT: Also, you can build your own health check system based on this; take a look at this and this.
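If it helps, a rough sketch of the connectivity-test job in Rundeck's YAML job format (the node filter is a placeholder): nodes where the dispatch fails are reported as failed in the execution, which the wrapper script can then read and turn into tags via the resource model.

```yaml
- name: node-connectivity-check
  group: health
  nodefilters:
    filter: 'tags: web'        # placeholder node filter
  nodesSelectedByDefault: true
  sequence:
    keepgoing: true            # don't stop at the first unreachable node
    strategy: node-first
    commands:
      - exec: /bin/true        # succeeds only if the node is reachable
```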
I am deploying an ECS Fargate task with two containers: 1 nginx reverse proxy and 1 Python server. For each I have an ECR repository, and I have a CI/CD CodePipeline set up with
CodeCommit -> CodeBuild -> CodeDeploy
This flow works fine for simple code changes. But what if I want to add another container? I can certainly update my buildspec.yml to add the building of the container, but I also need to 1) update my task definition, and 2) assign this task definition to my service.
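For concreteness, the manual update I have in mind would be something like this in my buildspec (names are placeholders):

```yaml
version: 0.2
phases:
  post_build:
    commands:
      # register a new task definition revision from a JSON template
      - aws ecs register-task-definition --cli-input-json file://taskdef.json
      # point the service at the latest revision of the family
      - aws ecs update-service --cluster my-cluster --service my-service --task-definition my-task-family
```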
Questions:
1) If I use the CLI in my CodeBuild stage to create a new task definition and associate it with my service, won't this trigger a deploy? And then my CodeDeploy will try to deploy again, so I'll end up deploying twice?
2) This approach ends up creating a new task definition and updating the service on every single deploy. Is this bad? Should I have some logic to pull down the LATEST task revision and diff the JSON from CodeCommit version and only update if there is a difference?
Thanks!
CodePipeline's ECS job worker copies the Task Definition and updates the Image and ImageTag for the container specified in the 'imagedefinitions.json' file, then updates the ECS Service with this new TaskDef. The job worker cannot add a new container to the task definition.
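For reference, 'imagedefinitions.json' is just a list of container-name/image pairs, something like this for the two containers in the question (the account ID and repository names are placeholders):

```json
[
  { "name": "nginx-proxy", "imageUri": "123456789012.dkr.ecr.us-east-1.amazonaws.com/nginx-proxy:latest" },
  { "name": "python-server", "imageUri": "123456789012.dkr.ecr.us-east-1.amazonaws.com/python-server:latest" }
]
```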
If I use the CLI in my CodeBuild stage to create a new task definition and associate it with my service, won't this trigger a deploy? And then my CodeDeploy will try to deploy again, so I'll end up deploying twice?
I don't think so. Is there a CloudWatch Events rule that triggers CodeDeploy in such a fashion?
This approach ends up creating a new task definition and updating the service on every single deploy. Is this bad? Should I have some logic to pull down the LATEST task revision and diff the JSON from CodeCommit version and only update if there is a difference?
The ECS deploy job worker creates a new task definition revision every time a deployment occurs, so since this is the official behaviour, I wouldn't consider it bad as such.
I would question why you need to add new containers to your task definition at deploy time. In general, your task definition should be modified less often; only the image:tag in it should change regularly - something the ECS Deploy action helps you achieve.