I'm trying to determine the best way to generate a new AWS ECS task definition JSON file, or update an existing one, using Codeship's codeship/aws-deployment image.
I don't want to rely on the implicit :latest tag within my task definition and wish to use the custom image tag generated in a previous step which pushes to AWS ECR.
Is a custom bash or Python script to pull the current task definition, increment the version, and replace the image string the only way to accomplish this, or is there an existing CLI tool I'm glossing over?
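There is at least one community CLI wrapper for this flow (silinternational's ecs-deploy shell script is one I'm aware of), but the custom-script route is also only a few lines. Below is a sketch of the image-replacement step in Python; the registration call is left as a comment, and the repository URIs and names are made up:

```python
import copy

def retag_task_definition(task_def, new_image):
    """Return a copy of an ECS task definition dict with every container's
    image replaced by new_image. Registering the result creates a new
    revision automatically, so there is no version number to increment."""
    new_def = copy.deepcopy(task_def)
    for container in new_def["containerDefinitions"]:
        container["image"] = new_image
    return new_def

# Hypothetical input, shaped like `aws ecs describe-task-definition` output
# (most fields trimmed):
current = {
    "family": "my-app",
    "containerDefinitions": [
        {"name": "web", "image": "1234.dkr.ecr.us-east-1.amazonaws.com/my-app:latest"}
    ],
}

updated = retag_task_definition(
    current, "1234.dkr.ecr.us-east-1.amazonaws.com/my-app:v1.0.1-abc123"
)
# In the real script you would now register the new revision, e.g. with boto3:
#   ecs.register_task_definition(family=updated["family"],
#                                containerDefinitions=updated["containerDefinitions"])
```

Note that `describe-task-definition` returns extra read-only fields (revision, status, ARNs) that must be stripped before re-registering.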
I'm looking for a way to pick up all of the templates from the repository without hardcoding the YAML template file names, so that if new templates are added in the future, the pipeline automatically picks them all up, deploys them, and creates a single stack in the AWS environment, without any modification to the gitlab-ci.yml pipeline file.
I tried the deploy CLI command: it deploys all the templates, but each subsequent run updates the same stack and deletes the previous resources one by one, so only the last template's resources remain after the pipeline execution is complete.
Is there an option to do this?
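If a single stack is a hard requirement, one workaround (a sketch under assumptions, not a tested pipeline step) is a small pre-deploy script that merges the Resources sections of every discovered template into one document, which is then passed to a single `aws cloudformation deploy` call. Globbing the repository and loading the YAML (CloudFormation's `!Ref`-style tags need a custom loader) are left out here:

```python
def merge_templates(templates):
    """Combine the Resources sections of several parsed CloudFormation
    templates into one template, so a single deploy call creates a single
    stack containing everything. Logical IDs must be unique across files."""
    merged = {"AWSTemplateFormatVersion": "2010-09-09", "Resources": {}}
    for template in templates:
        for logical_id, resource in template.get("Resources", {}).items():
            if logical_id in merged["Resources"]:
                raise ValueError(f"duplicate logical ID: {logical_id}")
            merged["Resources"][logical_id] = resource
    return merged

# Two hypothetical parsed templates:
bucket_tpl = {"Resources": {"ArtifactBucket": {"Type": "AWS::S3::Bucket"}}}
queue_tpl = {"Resources": {"JobQueue": {"Type": "AWS::SQS::Queue"}}}
merged = merge_templates([bucket_tpl, queue_tpl])
```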
Synopsis
My overarching objective is to automate the build and publish of a container image to the GitHub Packages registry (and/or Docker Hub). I have already set up a project that accomplishes this; however, there's a "gotcha."
I want to use output from the container image build process to update the tag that's assigned to the container image.
For example:
ghcr.io/pcgeek86/aws-powershell:<versionNumber>
The Caveat
Unfortunately, the GitHub Action for Docker performs the build and the push in a single step. Hence, I am unable to capture the output from the container image build, parse out the version number, and update the environment variable containing the new tag value before the push happens.
Question
Is there a way to separate the container 1) build and 2) push into separate steps, so that I can capture the build output and use that to modify the container image tag, before it's pushed up to the GitHub Packages registry?
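One way to separate the two is to skip the combined action and use plain `docker` commands in separate `run` steps. A hedged sketch follows; the grep pattern for extracting the version from the build log is an assumption about your build output, and the registry login step is omitted:

```yaml
steps:
  - uses: actions/checkout@v2
  - name: Build image and capture the build output
    run: docker build -t ghcr.io/pcgeek86/aws-powershell:build . | tee build.log
  - name: Retag using the version found in the build output
    run: |
      VERSION=$(grep -oE '[0-9]+\.[0-9]+\.[0-9]+' build.log | head -n 1)  # assumed log format
      docker tag ghcr.io/pcgeek86/aws-powershell:build "ghcr.io/pcgeek86/aws-powershell:$VERSION"
      echo "VERSION=$VERSION" >> "$GITHUB_ENV"
  - name: Push only the retagged image
    run: docker push "ghcr.io/pcgeek86/aws-powershell:$VERSION"
```

Variables written to `$GITHUB_ENV` are available as environment variables in subsequent steps, which is how the derived tag reaches the push step.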
I'm attempting to write a CloudFormation template to fully define all resources required for an ECS service, including:
CodeCommit repository for the nodejs code
CodePipeline to manage builds
ECR Repository
ECS Task Definition
ECS Service
ALB Target Group
ALB Listener Rule
I've managed to get all of this working. The stack builds fine. However I'm not sure how to correctly handle updates.
The container in the Task Definition in the template requires an image to be defined. However, the actual application image won't exist until after the code is first built by the pipeline.
I had an idea that I might be able to work around this issue, by defining some kind of placeholder image "amazon/amazon-ecs-sample" for example, just to allow the stack to build. This image would be replaced by CodeBuild when the pipeline first runs.
This part also works fine.
The issues occur when I attempt to update the task definition, for example adding environment variables, in the CloudFormation template. When I re-run the stack, it replaces my application image in the container definition, with the original placeholder image from the template.
This is logical enough, as CloudFormation obviously assumes the image in the template is the correct one to use.
I'm just trying to figure out the best way to handle this.
Essentially I'd like to find some way to tell CloudFormation to just use whatever image is defined in the most recent revision of the task definition when creating new revisions, rather than replacing it with the original template property.
Is what I'm trying to do actually possible with pure CloudFormation, or will I need to use a custom resource or something similar?
Ideally I'd like to keep extra stack dependencies to a minimum.
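Another pattern I've seen suggested (untested by me) is to lift the image into a template parameter: the pipeline passes the freshly built tag on each deploy, while manual stack updates keep whatever is currently deployed by passing `UsePreviousValue` to `aws cloudformation update-stack`. A trimmed sketch, where the names are placeholders:

```yaml
Parameters:
  AppImage:
    Type: String
    Default: amazon/amazon-ecs-sample   # placeholder so the first stack creation succeeds
Resources:
  TaskDefinition:
    Type: AWS::ECS::TaskDefinition
    Properties:
      Family: my-app
      ContainerDefinitions:
        - Name: app
          Image: !Ref AppImage
          Memory: 512
```

A template-only update can then run `aws cloudformation update-stack ... --parameters ParameterKey=AppImage,UsePreviousValue=true`, so the placeholder never overwrites the pipeline-deployed image.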
One possibility I had thought of, would be to use a fixed tag for the container definition image, which won't actually exist when the cloudformation stack first builds, but which will exist after the first code-pipeline build.
For example
image: [my_ecr_base_uri]/[my_app_name]:latest
I can then have my pipeline push a new revision with this tag. However, I prefer to define task definition revisions with specific version tags, like so:
image: [my_ecr_base_uri]/[my_app_name]:v1.0.1-[git-sha]
... as this makes it very easy to see exactly what version of the application is currently running, and to revert revisions easily if needed.
Your problem is that you're putting too many things into this CloudFormation template. Your template could include the CodeCommit repository and the CodePipeline, but the other things should be outputs from your pipeline. Remember: your pipeline will have a build and a deploy stage. The build stage can "build" another CloudFormation template that is executed in the deploy stage. During this deploy stage, your pipeline will construct the ECS service, tasks, ALB, etc.
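To sketch that build stage (the file names, the `ECR_URI` variable, and the `AppImage` parameter are assumptions), the buildspec can push the image and emit a template-configuration artifact that the deploy stage's CloudFormation action consumes:

```yaml
# buildspec.yml for the build stage
phases:
  build:
    commands:
      - IMAGE="$ECR_URI:$CODEBUILD_RESOLVED_SOURCE_VERSION"   # git SHA as the tag
      - docker build -t "$IMAGE" .
      - docker push "$IMAGE"
      # Parameter values for the deploy stage's CloudFormation action:
      - printf '{"Parameters":{"AppImage":"%s"}}' "$IMAGE" > template-config.json
artifacts:
  files:
    - template-config.json
    - service-template.yml   # the template the deploy stage executes
```

`CODEBUILD_RESOLVED_SOURCE_VERSION` is a built-in CodeBuild variable holding the commit SHA, which gives you the traceable, version-specific tags described in the question.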
I'm trying to create a custom task to be published on the Azure Pipelines Marketplace so that people can use my security tool within Azure Pipelines. The task requires a lot of additional software, so Docker has been used for packaging.
I've similarly created the action for GitHub Actions, https://github.com/tonybaloney/pycharm-security/blob/master/action.yml
The action will:
Use a custom Docker image (hosted on Docker Hub)
Mount the code after checkout
Run a custom entry point, passing the arguments provided to the action
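For reference, the three bullets above map onto a Docker-based `action.yml` roughly like the following (paraphrased, not copied from the linked repository; the image name and input are assumptions):

```yaml
name: 'pycharm-security'
description: 'Run the security scanner against the checked-out code'
inputs:
  path:
    description: 'Path to scan'
    default: '.'
runs:
  using: 'docker'
  image: 'docker://tonybaloney/pycharm-security:latest'
  args:
    - ${{ inputs.path }}
```

GitHub mounts the workspace into the container automatically, and `args` are passed to the image's entry point, which is what makes this so compact compared with a compiled task.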
I cannot see how to achieve the same thing in Azure Pipelines. All of the example code for custom tasks is written in TS/Node or PowerShell.
The only TS/Node.js example doesn't show you how to do anything like download a docker image and run it.
The only other documentation I can find is about how to build a Docker image from within a Pipeline. But I want to download and run an image as a Task.
The Task SDK documentation clearly shows how to construct the package, but not how to do anything beyond getting it to pass arguments.
One possibility is to clone the DockerV2 Task and to customize it to run the Docker commands that I need, but this seems quite convoluted compared with how simple it is in GitHub Actions
How can you convert a GitHub action that uses Docker images into an Azure Pipelines custom task?
I am afraid you have to clone the DockerV2 Task and customize it to run the Docker commands that you need.
The reason for the complexity is that the two implementations differ.
When we customize a GitHub Action and publish it to the Marketplace, the custom action does not compile and package the source code; it only references it. In other words, a custom action is more like a link that tells the runner where to download the source code, what parameters to pass, and how to run it, so we don't need to download and modify the action's source code ourselves.
An Azure Pipelines custom task is different: it has to be compiled into a .vsix package, which requires the source code and a rebuild after any changes.
Besides, Azure DevOps provides Task Groups, which let you encapsulate a sequence of tasks, already defined in a build or release pipeline, into a single reusable task that can be added to a build or release pipeline just like any other task. You can choose to extract the parameters from the encapsulated tasks as configuration variables, and abstract away the rest of the task information.
Hope this helps.
If I create a new task definition for ECS, is there a way to delete all existing task definitions for a service and add the new definition, or do I have to create a brand new service?
The problem I am having: I am stuck in a long development loop where I update a container, create a new task definition revision, and then find myself having to create a brand-new service, load balancer, target group, etc., for this new task definition. Is there a way, perhaps, to tell the existing service to use the latest revision of the task definition instead of having to do all of the above?
Yes, by using the 'latest' tag on your uploaded Docker image.
Then make sure your task definition uses the 'latest' tag for its container image.
After updating the docker image going forward, you'll need to simply 'cycle' the service. In the AWS Console, you can update the service and simply check the 'Force New Deployment' checkbox in the wizard and change nothing else. AWS will then stop existing tasks and start new ones which will use the current version of the docker image. For non-web tasks this is usually pretty quick. Web tasks take a little longer to 'drain'. In any case, you won't need to create new task definitions this way or update the service except to redeploy.
Force new deployment is also available through the CLI and API tools, e.g. `aws ecs update-service --cluster my-cluster --service my-service --force-new-deployment`.
When a new version of the task definition is created, open the service, click Edit, select the new version from the drop-down box, check Force New Deployment, and click the Update button. Note that the new task may sit in the pending state if there aren't enough resources; if that is the case, select the currently running task and stop it.