How to propagate tags to an ECS Task launched from an EventBridge Target?

I have an EventBridge (previously CloudWatch Events) Rule and Target that are used to launch ECS Tasks on a schedule (cron). I would like to apply some tags to the Task.
I tried including tags in RegisterTaskDefinition, but this did not result in any tags being set on the Tasks, as RunTask does not propagate tags if propagateTags is unspecified.
PutTargets is the action that creates the event target which will eventually call RunTask. I searched ecsParameters (EcsParameters) and input (TaskOverride) for fields corresponding to either tags or propagateTags from RunTask, but could not find any.
Is there any way to apply tags to an ECS Task that is run from an EventBridge rule target?
2021-06-24 update (thanks @baxang): EventBridge has added ecsParameters.PropagateTags: "TASK_DEFINITION" to the API documentation and to some SDKs yesterday (containers-roadmap#89)!
python botocore 1.20.99
js aws-sdk v2.933.0
js @aws-sdk/client-eventbridge 3.20.0, @aws-sdk/client-cloudwatch-events 3.20.0 (2021-07-01 commit)
aws-sdk-go v1.38.66
aws-sdk-go-v2/service/eventbridge 1.7.0, aws-sdk-go-v2/service/cloudwatchevents 1.7.0 (2021-06-25 commit)
java com.amazonaws aws-java-sdk-eventbridge 1.12.11 (commit)
java software.amazon.awssdk eventbridge 2.16.98 (commit)
.Net AWSSDK.EventBridge 3.7.68.0, AWSSDK.CloudWatchEvents 3.7.68.0 (commit)
terraform provider aws aws_cloudwatch_event_target (source) (#19975, part of the 2021-07-15 v3.50.0 release)
CloudFormation AWS::Events::Rule EcsParameters (2021-09-22)
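For example, with CloudFormation support in place, a minimal sketch might look like this (the schedule, ARNs, and names are placeholders):

MyScheduledTaskRule:
  Type: AWS::Events::Rule
  Properties:
    ScheduleExpression: 'cron(0 12 * * ? *)'
    Targets:
      - Id: run-my-task
        Arn: MY_CLUSTER_ARN            # ECS cluster ARN
        RoleArn: MY_EVENTS_ROLE_ARN    # role allowed to call ecs:RunTask
        EcsParameters:
          TaskDefinitionArn: MY_TASK_DEFINITION_ARN
          TaskCount: 1
          PropagateTags: TASK_DEFINITION   # copy the task definition's tags onto each Task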

It seems the API has PropagateTags: https://docs.aws.amazon.com/eventbridge/latest/APIReference/API_EcsParameters.html#eventbridge-Type-EcsParameters-PropagateTags. So if you are configuring the target via the API, there is a way.
CloudFormation, however, did not support the property at the time of this answer: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-events-rule-ecsparameters.html (per the question's update, CloudFormation support arrived on 2021-09-22). This issue (link) on the aws-cloudformation/cloudformation-coverage-roadmap repo seems to be related.

Related

Azure Function on Linux Breaks Node function requiring Node v12 when Deployed from Azure DevOps

I have a Node.js Azure Function using a timer trigger. It uses some modern JavaScript syntax (await, flatMap, etc.) that is supported in Node v12.
I've deployed my infrastructure with Terraform and specified the linuxFxVersion as "node|12". So far so good. But when I deploy my code from Azure DevOps using the built-in AzureFunctionApp@1 task, the deployment switches the function to a new image running Node v8, which breaks my function.
Here is the release definition:
steps:
- task: AzureFunctionApp@1
  displayName: 'Azure Function App Deploy: XXXXXXXXX'
  inputs:
    azureSubscription: 'XXXXXXXXX'
    appType: functionAppLinux
    appName: 'XXXXXXXXX'
    package: '$(System.DefaultWorkingDirectory)/_XXXXXXXXX/drop/out.zip'
    runtimeStack: 'DOCKER|microsoft/azure-functions-node8:2.0'
    configurationStrings: '-linuxFxVersion: node|12'
You can see I explicitly try to force the linuxFxVersion to remain 'node|12' in the release.
In the release logs, you can watch the release try to set the configuration for linuxFxVersion twice: once to the wrong image, and a second time to "node|12".
After I release the code, the function will still run, but when I print the Node version it shows version 8, and the function fails at runtime when it hits the unsupported syntax.
If I re-run my Terraform script, it shows me that the linuxFxVersion for my function app is now set to 'DOCKER|microsoft/azure-functions-node8:2.0' and sets it back to "node|12". After that runs, my function works again. If I update my code and deploy again, it breaks again in the same way.
What is even more baffling to me is that this is a v3 function app, which in theory does not support Node v8 at all.
Am I missing something obvious here or is the Function App release task just broken for Linux Functions?
After writing up this whole big question and proofreading it... I noticed this little snippet in the release task YAML (which I hadn't seen before today, as it's a release and uses the AzDO GUI for editing):
runtimeStack: 'DOCKER|microsoft/azure-functions-node8:2.0'
It turns out that if you specify the stack as 'JavaScript' (the options are .NET and JavaScript), the task sets "runtimeStack" to that string, which is what gets written to the linuxFxVersion setting on the Function App, even if you override that setting in the configuration strings.
The fix is to leave the Runtime stack field blank; the task then respects your settings. Awesome.
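For reference, a sketch of the corrected step, with runtimeStack simply omitted (placeholders as before):

steps:
- task: AzureFunctionApp@1
  displayName: 'Azure Function App Deploy: XXXXXXXXX'
  inputs:
    azureSubscription: 'XXXXXXXXX'
    appType: functionAppLinux
    appName: 'XXXXXXXXX'
    package: '$(System.DefaultWorkingDirectory)/_XXXXXXXXX/drop/out.zip'
    # runtimeStack omitted, so the linuxFxVersion set via Terraform ('node|12') is respected
    configurationStrings: '-linuxFxVersion: node|12'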

Tag resources when registering to the environment

I have a pipeline with multiple stages that deploys groups of virtual machines and registers one of them to an Azure Pipelines environment.
I then want to target that registered VM in a deployment job.
The problem is that I cannot target that resource by name, as the resource does not exist in the environment at queue time, so I cannot even disable the stage before running.
So my next option is targeting by tags.
But I saw no option in the registration script to define tags at registration time.
So my pipeline flow has a manual step between stages: go to the environment and tag the resource.
Then I can trigger the deployment stage of the pipeline and it continues OK.
So my question is:
Is there any way of disabling the resource evaluation at queue time, or any way to tag resources in the environment programmatically?
Thanks
"But I saw no option in the registration script to define tags at registration time."
When running the registration script, there is a prompt: Enter Environment Virtual Machine resource tags? (Y/N) (press enter for N). Enter Y here, and then at the next prompt, Enter Comma separated list of tags (e.g. web, db), define the tags for the resource.
Update:
You can add --addvirtualmachineresourcetags --virtualmachineresourcetags "<tag>" to the registration script.
You can refer to this case for details.
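Once the resource is tagged, a deployment job can target it by tag. A minimal YAML sketch (the environment name and tag are hypothetical):

jobs:
- deployment: DeployToTaggedVMs
  environment:
    name: my-environment          # hypothetical environment name
    resourceType: VirtualMachine
    tags: web                     # only VMs registered with this tag are targeted
  strategy:
    runOnce:
      deploy:
        steps:
        - script: echo "Deploying to the tagged VM"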

Cloud SQL API [sql-component.googleapis.com] not enabled on project

I am running a Cloud Build trigger on a cloudbuild.yaml file in which I build a Docker container and then deploy it to Cloud Run. The error stack trace is as follows:
API [sql-component.googleapis.com] not enabled on project
The problem is that I have enabled both the SQL and SQL Admin APIs in both projects (one for Cloud Build and one for the database), which I confirmed in the console and in gcloud.
Here is the YAML code for the step I am referring to:
- name: 'gcr.io/cloud-builders/gcloud'
  args: [
    'beta',
    'run',
    'deploy',
    'MY_NAME',
    '--image', 'gcr.io/MY_PROJECT/MY_IMAGE',
    '--region', 'MY_REGION',
    '--platform', 'managed',
    '--set-cloudsql-instances', 'MY_CONNECTION_NAME',
    '--set-env-vars', 'NODE_ENV=production,INSTANCE_CONNECTION_NAME=MY_CONNECTION_NAME,SQL_USER=MY_USER,SQL_PASSWORD=MY_PASSWORD,SQL_NAME=MY_SCHEMA,TOPIC_NAME=MY_TOPIC'
  ]
Any suggestions?
Thanks.
P.S.: As per Eespinola's suggestion, I checked and confirmed I am running Google Cloud SDK 254.0.0.
P.S. 2: I have also tried to create a project from scratch but ended up with the same results.
OK, so as per the same thread eespinola posted (see above), the Cloud Build gcloud step will be updated to the Cloud SDK 254.0.0 release in the near future (the actual date may or may not be posted in that thread). Until then, the alternative is to use the YAML file without the --set-cloudsql-instances flag and add the connection manually in the UI (I have not tried this yet, but it should work according to Google's development team).
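That is, a sketch of the same step with the Cloud SQL flag dropped (the instance connection is then attached manually in the console):

- name: 'gcr.io/cloud-builders/gcloud'
  args: [
    'beta', 'run', 'deploy', 'MY_NAME',
    '--image', 'gcr.io/MY_PROJECT/MY_IMAGE',
    '--region', 'MY_REGION',
    '--platform', 'managed',
    '--set-env-vars', 'NODE_ENV=production,INSTANCE_CONNECTION_NAME=MY_CONNECTION_NAME,SQL_USER=MY_USER,SQL_PASSWORD=MY_PASSWORD,SQL_NAME=MY_SCHEMA,TOPIC_NAME=MY_TOPIC'
  ]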

Azure Pipelines: how to filter artifacts per stage for "Manual only" triggered Releases

Let's say I have these 3 Stages: Dev, QC, Prod.
My requirements are:
Only artifacts from specific branches (release/*) can be deployed to QC/Prod
Artifacts from all branches can be deployed to Dev
I can achieve what I want using Artifact filters for "After stage" triggered Releases, but I need this for "Manual only".
Is there a workaround that will let me control/filter which artifacts are available for deployment for specific stages/environments?
Basically, I need the Azure DevOps equivalent of Octopus Channels.
Update
I think I'm close to a solution.
In the "Pre-deployment conditions", I can add a new Deployment Gate which makes a Rest API call.
e.g URL suffix=/Release/releases/76
Now, I just need to correctly parse the ApiResponse because the below Success criteria doesn't work
eq(root['artifacts[0].definitionReference.branch.id'], 'refs/heads/master')
Evaluation of expression 'eq(root['artifacts[0].definitionReference.branch.id'], 'refs/heads/master')' failed.
As you said, you can do this using Deployment gates on your stages.
1. Create a new Generic service connection from Project Settings -> Pipelines -> Service Connections. For the service URL, use something like https://vsrm.dev.azure.com/{OrgName}/{ProjectName}/_apis
2. On your stage, open the Pre-Deployment Conditions.
3. Enable the Gates option.
4. Add a new Invoke REST API gate and set the Delay before evaluation to 0 minutes.
4.1 Set the connection type to Generic.
4.2 Select the service connection you created in step 1.
4.3 Set the method to GET.
4.4 Set the URL suffix to /Release/releases/$(Release.ReleaseId)
4.5 In the Advanced area, set the Completion Event to ApiResponse.
4.6 In the Advanced area, set the Success criteria to the following (a startsWith check works as well):
eq(root['artifacts'][0]['definitionReference']['branch']['id'],'refs/heads/master')
Now, if you try to deploy an artifact that is not from the master branch, the deployment will fail.
There is a workaround:
In the QC/Prod stages, add a custom condition so that the job is executed only when the artifact's source branch is release/*:
startsWith(variables['Release.Artifacts.{Artifacts-Alias}.SourceBranch'], 'refs/heads/release')
Now, when you manually run the QC/Prod stages and the artifacts did not come from a release branch, the job will not be executed.
This works:
and(contains(variables['build.sourceBranch'], 'refs/heads/release'), succeeded())
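For YAML-based pipelines, a stage-level condition along the same lines might look like this sketch (the stage name is hypothetical; for classic releases use the Release.Artifacts.{Artifacts-Alias}.SourceBranch variable as above):

stages:
- stage: QC
  # Only run this stage when the source branch is release/*
  condition: and(succeeded(), startsWith(variables['Build.SourceBranch'], 'refs/heads/release'))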

Concourse CI: Use Metadata (Build number, URL etc) in on_success/on_failure

How is it possible to use Metadata in on_success/on_failure? For example, to send emails via https://github.com/pivotal-cf/email-resource?
I haven't found a way, as I can't change the content of the files the email resource reads from (subject/body), because the metadata is not available to tasks.
And yep, that might be a duplicate of Concourse CI and Build number.
But IMHO my question is still a valid use case for notifications.
The metadata you are referring to is, I assume, the environment variables provided to resources, not tasks.
These can be used with the Slack resource to provide information about which build failed.
For example:
on_failure:
  put: slack-alert
  params:
    text: |
      The `science` pipeline has failed. Please resolve any issues and ensure the pipeline lock was released. Check it out at:
      $ATC_EXTERNAL_URL/teams/$BUILD_TEAM_NAME/pipelines/$BUILD_PIPELINE_NAME/jobs/$BUILD_JOB_NAME/builds/$BUILD_NAME
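For completeness, the slack-alert resource used above might be declared like this, assuming the community slack-notification resource type (the webhook URL is a placeholder credential):

resource_types:
- name: slack-notification
  type: docker-image
  source:
    repository: cfcommunity/slack-notification-resource

resources:
- name: slack-alert
  type: slack-notification
  source:
    url: ((slack-webhook-url))   # placeholder webhook URL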
The email resource you're referencing has an open PR to support these environment variables; I'd suggest discussing your need for that feature there.