I need to set this for my ECS service:
ECSService:
  Type: AWS::ECS::Service
  DependsOn: ListenerSSL
The problem is that ListenerSSL is a load balancer listener resource defined in the main template, while ECSService is a resource in a nested stack attached to the main template via AWS::CloudFormation::Stack, so this doesn't work.
I tried adding a ListenerSSL: !Ref ListenerSSL line to the parameters section of the AWS::CloudFormation::Stack resource, and then adding:
ListenerSSL:
  Type: String
to the parameters section of that stack's template, but CloudFormation says that DependsOn needs a resource, not a parameter.
So what is the solution for this?
DependsOn is used for resources in the same stack, as CFN will always try to create the resources in the same template in parallel.
If you have two different stacks, you simply create the first one and then the second one. You can't set up DependsOn on resources from another stack.
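One pattern consistent with this rule: since the AWS::CloudFormation::Stack resource itself lives in the same (main) template as ListenerSSL, you can put DependsOn on the nested stack resource, which delays creation of everything inside it. A minimal sketch, where the resource names and TemplateURL are illustrative:

```yaml
# Parent (main) template: both ListenerSSL and the nested stack live here,
# so DependsOn between them is legal.
ListenerSSL:
  Type: AWS::ElasticLoadBalancingV2::Listener
  Properties: {} # Your usual listener properties here

AppStack: # the nested stack that contains ECSService
  Type: AWS::CloudFormation::Stack
  DependsOn: ListenerSSL # delays *all* resources in the nested stack
  Properties:
    TemplateURL: https://s3.amazonaws.com/my-bucket/app-stack.yaml # assumed URL
```

The trade-off is granularity: DependsOn on the nested stack postpones every resource in it, not just ECSService.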
I have set up CI/CD using CodePipeline and CodeBuild that deploys my application to an ECS Fargate container. This is the setup for the frontend of my app.
For the backend I have another, identical CI/CD pipeline. But this time, I want to deploy my backend to the same ECS Fargate container using CloudFormation. I know I have to update the task definition.
How can I update the already existing task definition that we used for the frontend so that it also creates the backend container of my app, without affecting the frontend container?
Is there any workaround for this?
You can't do that. Task definitions are immutable. You can only create a new revision of a task definition and deploy the new revision. You can't change an existing revision. From the docs:
A task definition revision is a copy of the current task definition with the new parameter values replacing the existing ones. All parameters that you do not modify are in the new revision.
To update a task definition, create a task definition revision. If the task definition is used in a service, you must update that service to use the updated task definition.
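As a hedged sketch of what registering a new revision with both containers might look like in CloudFormation (family name, container names, and images are illustrative; IAM roles and logging are omitted for brevity):

```yaml
# Deploying this creates a *new revision* of the family; the old revision
# is untouched, and the service must be updated to point at the new one.
TaskDefinition:
  Type: AWS::ECS::TaskDefinition
  Properties:
    Family: my-app # assumed family name
    RequiresCompatibilities: [FARGATE]
    NetworkMode: awsvpc
    Cpu: '512'
    Memory: '1024'
    ContainerDefinitions:
      - Name: frontend # existing container, carried over unchanged
        Image: <frontend-image-uri>
        PortMappings:
          - ContainerPort: 80
      - Name: backend # new container added alongside it
        Image: <backend-image-uri>
        PortMappings:
          - ContainerPort: 8080
```

Note that putting both containers in one task definition means they are scheduled and scaled together; if the frontend and backend should scale independently, separate task definitions and services are usually the better design.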
I'm looking at an example of the YAML pipeline with a services section. Here is a sample:
The YAML schema doesn't have services defined.
Where can I get information about the services section of the pipeline?
Update: Per Bowman's answer, the services section is part of the job. In this scenario, there is only one job, so the job keyword is omitted.
In the simplest case, a pipeline has a single job. In that case, you do not have to explicitly use the job keyword unless you are using a template. You can directly specify the steps in your YAML file.
It is in the official documentation; here is the reference:
https://learn.microsoft.com/en-us/azure/devops/pipelines/yaml-schema/jobs-job?view=azure-pipelines
services: # Container resources to run as a service container.
You were probably checking for it at the top level, right? In fact, in this situation there is a hidden default job, and the definition of that job is also hidden. The services section sits under the job definition of that hidden job, not at the top level.
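Making the hidden job explicit shows where services belongs. A minimal sketch (job name, images, and the container resource name are illustrative):

```yaml
# `services` maps an alias to a container resource defined under
# `resources.containers`; it is valid only inside a job definition.
resources:
  containers:
    - container: my_redis # container resource name (assumed)
      image: redis

jobs:
  - job: build # explicit job; in the single-job shorthand this is hidden
    pool:
      vmImage: ubuntu-latest
    container: node:18 # job container (assumed image)
    services:
      redis: my_redis # service container, reachable as host "redis"
    steps:
      - script: echo "redis is reachable from this job"
```

With the single-job shorthand (steps at the top level), there is no place to attach services, which is why the schema only documents it under jobs.job.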
I'm automating a PR process that needs to create stacks through CloudFormation. The problem is that, by definition (https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ecs-taskdefinition.html), the old TaskDefinition revision is deleted to make way for the new/updated TaskDefinition. Is there any way to avoid the replacement and do only an update?
From the docs:
To update a task definition, create a task definition revision. If the task definition is used in a service, you must update that service to use the updated task definition.
This implies that task definition revisions are immutable and there is no way around creating a new revision.
If you want to retain the old versions of your task definitions, you could try the UpdateReplacePolicy attribute with a value of Retain. Maybe it's able to keep the old revisions around. For more details, check out the CloudFormation docs - UpdateReplacePolicy.
That would look something like this:
AWSTemplateFormatVersion: 2010-09-09
Resources:
  taskdefinition:
    Type: 'AWS::ECS::TaskDefinition'
    UpdateReplacePolicy: Retain
    Properties: {} # Your usual properties here
Whenever an S3 artifact is used, the following declaration is needed:
s3:
  endpoint: s3.amazonaws.com
  bucket: "{{workflow.parameters.input-s3-bucket}}"
  key: "{{workflow.parameters.input-s3-path}}/scripts/{{inputs.parameters.type}}.xml"
  accessKeySecret:
    name: s3-access-user-creds
    key: accessKeySecret
  secretKeySecret:
    name: s3-access-user-creds
    key: secretKeySecret
It would be helpful if this could be abstracted to something like:
custom-s3:
  bucket: "{{workflow.parameters.input-s3-bucket}}"
  key: "{{workflow.parameters.input-s3-path}}/scripts/{{inputs.parameters.type}}.xml"
Is there a way to make this kind of custom definition in Argo to reduce boilerplate?
For a given Argo installation, you can set a default artifact repository in the workflow controller's configmap. This allows you to specify only the key, assuming everything else is set in the default config; if not everything is defined for the default, you'll need to specify the missing fields in each artifact declaration.
Unfortunately, that will only work if you're only using one S3 config. If you need multiple configurations, cutting down on boilerplate will be more difficult.
In response to your specific question: not exactly. You can't create a custom some-keyname (like custom-s3) as a member of the artifacts array. The exact format of the YAML is defined in Argo's Workflow Custom Resource Definition. If your Workflow YAML doesn't match that specification, it will be rejected.
However, you can use external templating tools to populate boilerplate before the YAML is installed in your cluster. I've used Helm before to do exactly that with a collection of S3 configs. At the simplest, you could use something like sed.
tl;dr - for one S3 config, use default artifact config; for multiple S3 configs, use a templating tool.
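For the single-config case, a sketch of the default artifact repository in the controller's configmap (namespace, bucket, and secret names are illustrative; the keys follow Argo's artifactRepository format):

```yaml
# Workflow artifacts that omit s3 details fall back to this default.
apiVersion: v1
kind: ConfigMap
metadata:
  name: workflow-controller-configmap
  namespace: argo # assumed install namespace
data:
  artifactRepository: |
    s3:
      endpoint: s3.amazonaws.com
      bucket: my-default-bucket # assumed bucket
      accessKeySecret:
        name: s3-access-user-creds
        key: accessKeySecret
      secretKeySecret:
        name: s3-access-user-creds
        key: secretKeySecret
```

With this in place, an artifact declaration in a Workflow can shrink to just the s3 key, since endpoint, bucket, and credentials come from the default.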
I am currently storing all my parameters in Systems Manager Parameter Store and referencing them in CloudFormation stack.
I am now stuck in a scenario where the parameters vary for the same Cloudformation template.
For instance, server A has the parameters instance type m5.large, subnet 1, and host name 1, and likewise server B can have m5.xlarge, subnet 2, host name 2, and so on. These two parameter sets are for the same CFN template.
How can I handle this situation in a CI/CD manner?
My current setup involves SSM Parameter store -> CloudWatch Events -> CodePipeline -> Cloudformation.
I assume you use AWS CodePipeline. Each CodePipeline stage consists of multiple stage actions. An action can be configured to include the CloudFormation template, but a template configuration can also be provided. If you define the server name as a parameter in the CloudFormation stack, then you can provide a different template configuration for each deployment of the stack.
Assuming you define only one server in the CloudFormation stack and use the template twice in your CodePipeline, you can provide a different configuration to each of the two stage actions. Based on this configuration you can decide which parameter in the parameter store you want to retrieve. Of course, this implies that your parameter store should be parameterized as well, e.g. instead of a parameter instancetype you might have parameters servera/instancetype and serverb/instancetype.
However, I think it is best to just define the parameters in the template configuration file provided to the action declaration. For example, define the parameter instancetype in your CloudFormation template and use two different configuration files (one for each stack), where the first might say instancetype: m5.large and the second instancetype: m5.xlarge. This makes your CloudFormation stack history more explicit and easier to read, and makes the use of the parameter store for non-secrets unnecessary.
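A template configuration file is a JSON document with a Parameters section (it may also carry Tags and a stack policy). A sketch of the file for server A (parameter names and values are illustrative; server B's file would differ only in the values):

```json
{
  "Parameters": {
    "InstanceType": "m5.large",
    "SubnetId": "subnet-1111aaaa",
    "HostName": "server-a"
  }
}
```

Each of the two CloudFormation deploy actions in the pipeline then points its TemplateConfiguration property at its own file, while both share the same template.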