AWS ECS Blue/Green CodePipeline: Exception while trying to read the image artifact

I wanted to create a CodePipeline which builds a container image from CodeCommit source and afterwards deploys the new image in Blue/Green fashion to my ECS service (EC2 launch type).
The source stage is CodeCommit, which already includes appspec.json as well as taskdef.json.
The build stage builds the new container and pushes it to ECR successfully; the file imagedefinitions.json is the BuildArtifact created at this step, containing the container name and the recently created image with its tag corresponding to the CodeCommit commit ID.
The deploy stage consists of the action "Amazon ECS (Blue/Green)", using the SourceArtifact and BuildArtifact as input artifacts: it takes the appspec and taskdef from the SourceArtifact and the image description from the BuildArtifact, to finally deploy the new container in Blue/Green fashion.
The problem is with the image definition from the BuildArtifact. The pipeline fails in the Deploy stage with this error:
Invalid action configuration
Exception while trying to read the image artifact file from the artifact: BuildArtifact.
How do I properly configure the "Amazon ECS (Blue/Green)" deploy stage so that it can use the recently created image and deploy it by replacing the placeholder IMAGE_NAME inside taskdef.json?
Any hint highly appreciated :D

Answering my own question here; hopefully it helps others facing the same situation.
The file imagedefinitions.json is inappropriate for the deploy action "Amazon ECS (Blue/Green)". For that action you have to create the file imageDetail.json within the build step and provide it as an artifact to the deploy step. How? This is how the bottom of my buildspec.yaml looks:
      - printf '{"ImageURI":"%s"}' $REPOSITORY_URI:$IMAGE_TAG > imageDetail.json
artifacts:
  files:
    - 'image*.json'
    - 'appspec.yaml'
    - 'taskdef.json'
  secondary-artifacts:
    DefinitionArtifact:
      files:
        - appspec.yaml
        - taskdef.json
    ImageArtifact:
      files:
        - imageDetail.json
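For context, here is a minimal sketch of a complete buildspec.yaml around that fragment. It assumes REPOSITORY_URI is provided as a CodeBuild environment variable and the AWS CLI v2 is available; adjust to your setup:

version: 0.2
phases:
  pre_build:
    commands:
      # Derive the image tag from the commit that triggered the build
      - IMAGE_TAG=$CODEBUILD_RESOLVED_SOURCE_VERSION
      # Log in to ECR (assumes AWS CLI v2; docker login uses the host portion of the URI)
      - aws ecr get-login-password | docker login --username AWS --password-stdin $REPOSITORY_URI
  build:
    commands:
      - docker build -t $REPOSITORY_URI:$IMAGE_TAG .
      - docker push $REPOSITORY_URI:$IMAGE_TAG
  post_build:
    commands:
      # Write the image descriptor the Blue/Green deploy action expects
      - printf '{"ImageURI":"%s"}' $REPOSITORY_URI:$IMAGE_TAG > imageDetail.json
artifacts:
  files:
    - 'image*.json'
    - 'appspec.yaml'
    - 'taskdef.json'
  secondary-artifacts:
    DefinitionArtifact:
      files:
        - appspec.yaml
        - taskdef.json
    ImageArtifact:
      files:
        - imageDetail.json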
In the Deploy stage of CodePipeline, use DefinitionArtifact and ImageArtifact as input artifacts and configure them in the corresponding sections "Amazon ECS task definition" and "AWS CodeDeploy AppSpec file".
Ensure that your appspec.yaml contains a placeholder for the task definition. Here is my appspec.yaml:
version: 0.0
Resources:
  - TargetService:
      Type: AWS::ECS::Service
      Properties:
        TaskDefinition: <TASK_DEFINITION>
        LoadBalancerInfo:
          ContainerName: "my-test-container"
          ContainerPort: 8000
Also ensure that your taskdef.json contains a placeholder for the final image, like
...
"image": "<IMAGE1_NAME>",
...
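For reference, a minimal complete taskdef.json for an EC2 launch-type service might look like the following (family, container name and port are illustrative, matching the appspec.yaml above):

{
  "family": "my-test-task",
  "networkMode": "bridge",
  "requiresCompatibilities": ["EC2"],
  "containerDefinitions": [
    {
      "name": "my-test-container",
      "image": "<IMAGE1_NAME>",
      "memory": 512,
      "essential": true,
      "portMappings": [
        { "containerPort": 8000 }
      ]
    }
  ]
}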
Use that placeholder in the CodePipeline configuration of your Blue/Green deploy stage, in the section "Dynamically update task definition image - optional", by choosing "ImageArtifact" as the input artifact and <IMAGE1_NAME> as the placeholder.

The Amazon ECS Blue/Green (or CodeDeployToECS) CodePipeline action requires the TaskDefinitionTemplateArtifact parameter (see [1]).
In addition to the above, note that an imageDetail.json file is required for ECS Blue/Green deployments (not 'imagedefinitions.json'). The file structure and details are available at [2]. Add this file to the root of your deployment artifact/version control. If you do not want to add this file manually, you can add an ECR source action to the CodePipeline and configure it with the image you are using in the ECS service/taskdef.json. This is all discussed at [2] for clarity.
To see how this is all brought together, you can also follow the step-by-step instructions for ECS Blue/Green deployments at [3].
References:
[1] https://docs.aws.amazon.com/codepipeline/latest/userguide/reference-pipeline-structure.html#action-requirements : CodePipeline Pipeline Structure Reference - Action Structure Requirements in CodePipeline
[2] https://docs.aws.amazon.com/codepipeline/latest/userguide/file-reference.html#file-reference-ecs-bluegreen : Image Definitions File Reference - imageDetail.json File for Amazon ECS Blue/Green Deployment Actions
[3] https://docs.aws.amazon.com/codepipeline/latest/userguide/tutorials-ecs-ecr-codedeploy.html : Tutorial: Create a Pipeline with an Amazon ECR Source and ECS-to-CodeDeploy Deployment

I ran into the same problem.
tl;dr:
I was not passing the correct input artefact with the imageDetail.json to the pipeline CodeDeployToECS action.
Summary:
Instead of checking in a version of the task definition with the '<IMAGE1_NAME>' placeholder, I'm dynamically generating the task definition input to CodeDeploy inside the pipeline.
Early in the project the task definition is quite volatile, with new variables etc. being passed to the container. It's generated and registered within the pipeline (CloudFormation), then read out via a CodeBuild project that substitutes the image with the '<IMAGE1_NAME>' placeholder and passes the result to the next stage in the pipeline via a pipeline artefact.
Fixing it:
I have a CodeBuild project within the pipeline that produces the imageDetail.json:
{"ImageURI":"########.dkr.ecr.eu-west-1.amazonaws.com/##/#####:2739511dd87d4e4e1f65ed69c9e779b63fb72e36-master-fbe73fdc-6213-4bd6-a784-dcc3d2ae7845"}
Its pipeline output is named 'BuildDockerOutput'.
I have another CodeBuild project that produces:
taskdef.json
{
  "containerDefinitions": [
    {
      "name": "ronantest1",
      "image": "<IMAGE1_NAME>"
    }
  ]
}
appspec.json
{
  "version": 0.0,
  "Resources": [
    {
      "TargetService": {
        "Type": "AWS::ECS::Service",
        "Properties": {
          "TaskDefinition": "<TASK_DEFINITION>",
          "LoadBalancerInfo": {
            "ContainerName": "ronantest1",
            "ContainerPort": "8080"
          }
        }
      }
    }
  ],
  "Hooks": [
    {
      "AfterAllowTestTraffic": "arn:aws:lambda:eu-west-1:######:function:code-deploy-after-allow-test-traffic"
    }
  ]
}
Its pipeline output is named 'PrepareCodeDeployOutputTesting'.
My final CodeDeploy action looks like the following:
- Name: BlueGreenDeploy
  InputArtifacts:
    - Name: BuildDockerOutput
    - Name: PrepareCodeDeployOutputTesting
  Region: !Ref DeployRegion1
  ActionTypeId:
    Category: Deploy
    Owner: AWS
    Version: '1'
    Provider: CodeDeployToECS
  RoleArn: !Sub arn:aws:iam::${TestingAccountId}:role/######/CrossAccountsDeploymentRole
  Configuration:
    AppSpecTemplateArtifact: PrepareCodeDeployOutputTesting
    AppSpecTemplatePath: appspec.json
    ApplicationName: !Ref ApplicationName
    DeploymentGroupName: !Ref ApplicationName
    TaskDefinitionTemplateArtifact: PrepareCodeDeployOutputTesting
    TaskDefinitionTemplatePath: taskdef.json
    Image1ArtifactName: BuildDockerOutput
    Image1ContainerName: "IMAGE1_NAME"
  RunOrder: 4
Note that the CodeDeployToECS action takes its pieces from different InputArtifacts; in particular, 'Image1ArtifactName' points at the artefact containing imageDetail.json.
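As a sketch of what such a "prepare" CodeBuild project can do, the following buildspec reads the currently registered task definition and swaps the image for the placeholder. It assumes jq is available in the build image, appspec.json is checked in alongside, and the family name is illustrative:

version: 0.2
phases:
  build:
    commands:
      # Read the currently registered task definition (family name is an assumption)
      - aws ecs describe-task-definition --task-definition ronantest1 --query taskDefinition > taskdef-full.json
      # Substitute the image with the placeholder CodeDeploy fills from Image1ArtifactName,
      # and strip read-only fields so the template can be re-registered
      - jq '.containerDefinitions[0].image = "<IMAGE1_NAME>" | del(.taskDefinitionArn, .revision, .status, .requiresAttributes, .compatibilities)' taskdef-full.json > taskdef.json
artifacts:
  files:
    - taskdef.json
    - appspec.json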

Thanks to all; this gave me some light on solving the issue.
I would like to add that when you use the AWS CLI, CloudFormation, or Terraform to configure CodePipeline, some parameters and options are not available in the console, and setting some variables in these tools to the empty string "" will cause an exception error.
Always check the CodePipeline settings in the console when you deploy using these tools.
So the error occurs when you define the Image Artifact but do not define the placeholder.
imageDetail.json can be passed into CodeDeploy using the following methods:
Git source (CodeCommit or GitHub): the file exists in your app codebase.
ECR source: the file is autogenerated by ECR, but will use the SHA256 digest instead of the image tag.
CodeBuild source: you write the file in your CodeBuild buildspec.yml and pass it down to the CodeDeploy stage.
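For reference, the imageDetail.json generated by an ECR source action looks roughly like this (fields abridged, repository name and digest illustrative; note the digest rather than a tag in ImageURI):

{
  "RepositoryName": "my-repo",
  "ImageURI": "############.dkr.ecr.eu-west-1.amazonaws.com/my-repo@sha256:<digest>"
}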

Related

How to automate deployment to ECS Fargate when new image is pushed to ECR repository

Firstly, this is specific to CDK - I know there are plenty of questions/answers around this topic out there, but none of them are CDK-specific.
Given that best practices dictate that a Fargate deployment shouldn't look for the 'latest' tag in an ECR repository, how could one set up a CDK pipeline when using ECR as a source?
In a multi-repository application where each service is in its own repository (and those repositories would have their own CDK CodeBuild deployments to set up building and pushing to ECR), how would the infrastructure CDK pipeline be aware of new images being pushed to an ECR repository and be able to deploy that new image to the ECS Fargate service?
Since a task definition has to specify an image tag (else it'll look for 'latest', which may not exist), this seems to be impossible.
As a concrete example, say I have the following 2 repositories:
CdkInfra:
- One of these repositories would be created for each customer, to create the full environment for their application.
SomeService:
- Actual application code.
- Only one instance of this repository should exist, re-used by multiple CdkInfra projects.
- A cdk directory defining the CodeBuild project, so that when a push to master is detected, the service is built and the image pushed to ECR.
The expected workflow would be as such:
1. The SomeService repository is updated, so a new image is pushed to ECR.
2. The CdkInfra pipeline detects that a tracked ECR repository has a new image.
3. The CdkInfra pipeline updates the Fargate task definition to reference the new image's tag.
4. The Fargate service pulls the new image and deploys it.
I know there is currently a limitation with CodeDeploy not supporting ECS deployments (due to CloudFormation not supporting them), but it seems that CodePipelineActions has the ability to set up an EcrSourceAction which may be able to achieve this; however, I've been unable to get this to work so far.
Is this possible at all, or am I stuck waiting until CloudFormation supports ECS CodeDeploy functionality?
You could store the name of the latest tag in an AWS Systems Manager (SSM) parameter, and dynamically update it when you deploy new images to ECR.
Then, you could use the AWS SDK to fetch the value of the parameter during your CDK deploy, and then pass that value to your Fargate deployment.
The following CDK stack written in Python uses the value of the YourSSMParameterName parameter (in my AWS account) as the name of an S3 bucket:
from aws_cdk import (
    core as cdk,
    aws_s3 as s3,
)
import boto3

class MyStack(cdk.Stack):
    def __init__(self, scope, construct_id, **kwargs):
        super().__init__(scope, construct_id, **kwargs)

        # Fetch the parameter value at synth time using the AWS SDK
        ssm = boto3.client('ssm')
        res = ssm.get_parameter(Name='YourSSMParameterName')
        name = res['Parameter']['Value']

        s3.Bucket(
            self, '...',
            bucket_name=name,
        )
I tested that and it worked beautifully.
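The parameter update itself can live in whatever job pushes the image. A minimal sketch with the AWS CLI, assuming the same parameter name and an $IMAGE_TAG shell variable computed by the build:

# Overwrite the parameter with the tag of the image that was just pushed
aws ssm put-parameter \
    --name YourSSMParameterName \
    --value "$IMAGE_TAG" \
    --type String \
    --overwrite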
Alright, so after some hackery I've managed to do this.
Firstly, the service itself (in this case a Spring Boot project) gets a cdk directory in its root. This basically just sets up the CI part of the CI/CD pipeline:
const appName: string = this.node.tryGetContext('app-name');

const ecrRepo = new ecr.Repository(this, `${appName}Repository`, {
  repositoryName: appName,
  imageScanOnPush: true,
  removalPolicy: cdk.RemovalPolicy.DESTROY,
});

const bbSource = codebuild.Source.bitBucket({
  // BitBucket account
  owner: 'mycompany',
  // Name of the repository this project belongs to
  repo: 'reponame',
  // Enable webhook
  webhook: true,
  // Configure so the webhook only fires when the master branch has an update to any code other than this CDK project (e.g. Spring source only)
  webhookFilters: [codebuild.FilterGroup.inEventOf(codebuild.EventAction.PUSH).andBranchIs('master').andFilePathIsNot('./cdk/*')],
});

const buildSpec = {
  version: '0.2',
  phases: {
    pre_build: {
      // Get the git commit hash that triggered this build
      commands: ['env', 'export TAG=${CODEBUILD_RESOLVED_SOURCE_VERSION}'],
    },
    build: {
      commands: [
        // Build the Java project
        './mvnw clean install -DskipTests',
        // Log in to the ECR repository that contains the Corretto image
        'aws ecr get-login-password --region us-west-2 | docker login --username AWS --password-stdin 489478819445.dkr.ecr.us-west-2.amazonaws.com',
        // Build docker images and tag them with the commit hash as well as 'latest'
        'docker build -t $ECR_REPO_URI:$TAG -t $ECR_REPO_URI:latest .',
        // Log in to our own ECR repository to push
        '$(aws ecr get-login --no-include-email)',
        // Push docker images to the ECR repository defined above
        'docker push $ECR_REPO_URI:$TAG',
        'docker push $ECR_REPO_URI:latest',
      ],
    },
    post_build: {
      commands: [
        // Prepare the image definitions artifact file
        'printf \'[{"name":"servicename","imageUri":"%s"}]\' $ECR_REPO_URI:$TAG > imagedefinitions.json',
        'pwd; ls -al; cat imagedefinitions.json',
      ],
    },
  },
  // Define the image definitions artifact - required for deployments by other CDK projects
  artifacts: {
    files: ['imagedefinitions.json'],
  },
};

const buildProject = new codebuild.Project(this, `${appName}BuildProject`, {
  projectName: appName,
  source: bbSource,
  environment: {
    buildImage: codebuild.LinuxBuildImage.AMAZON_LINUX_2_3,
    privileged: true,
    environmentVariables: {
      // Required for tagging/pushing the image
      ECR_REPO_URI: { value: ecrRepo.repositoryUri },
    },
  },
  buildSpec: codebuild.BuildSpec.fromObject(buildSpec),
});

!!buildProject.role &&
  buildProject.role.addToPrincipalPolicy(
    new iam.PolicyStatement({
      effect: iam.Effect.ALLOW,
      actions: ['ecr:*'],
      resources: ['*'],
    }),
  );
Once this is set up, the CodeBuild project has to be built manually once so the ECR repo has a valid 'latest' image (otherwise the ECS service won't get created correctly).
Now in the separate infrastructure codebase you can create the ECS cluster and service as normal, getting the ECR repository from a lookup:
// 'reponame' here has to match what you defined in bbSource previously
const repo = ecr.Repository.fromRepositoryName(this, 'SomeRepository', 'reponame');

const cluster = new ecs.Cluster(this, `Cluster`, { vpc });

const service = new ecs_patterns.ApplicationLoadBalancedFargateService(this, 'Service', {
  cluster,
  serviceName: 'servicename',
  taskImageOptions: {
    image: ecs.ContainerImage.fromEcrRepository(repo, 'latest'),
    containerName: repo.repositoryName,
    containerPort: 8080,
  },
});
Finally, create a deployment construct which listens to ECR events, manually converts the generated imageDetail.json file into a valid imagedefinitions.json file, and then deploys to the existing service.
const sourceOutput = new cp.Artifact();

const ecrAction = new cpa.EcrSourceAction({
  actionName: 'ECR-action',
  output: sourceOutput,
  repository: repo, // this is the same repo from where the service was originally defined
});

const buildProject = new codebuild.Project(this, 'BuildProject', {
  environment: {
    buildImage: codebuild.LinuxBuildImage.AMAZON_LINUX_2_3,
    privileged: true,
  },
  buildSpec: codebuild.BuildSpec.fromObject({
    version: '0.2',
    phases: {
      build: {
        commands: [
          // Convert ECR's imageDetail.json into the imagedefinitions.json format expected by the ECS deploy action
          'cat imageDetail.json | jq "[. | {name: .RepositoryName, imageUri: .ImageURI}]" > imagedefinitions.json',
          'cat imagedefinitions.json',
        ],
      },
    },
    artifacts: {
      files: ['imagedefinitions.json'],
    },
  }),
});

const convertOutput = new cp.Artifact();

const convertAction = new cpa.CodeBuildAction({
  actionName: 'Convert-Action',
  input: sourceOutput,
  outputs: [convertOutput],
  project: buildProject,
});

const deployAction = new cpa.EcsDeployAction({
  actionName: 'Deploy-Action',
  service: service.service,
  input: convertOutput,
});

new cp.Pipeline(this, 'Pipeline', {
  stages: [
    { stageName: 'Source', actions: [ecrAction] },
    { stageName: 'Convert', actions: [convertAction] },
    { stageName: 'Deploy', actions: [deployAction] },
  ],
});
Obviously this isn't as clean as it could be once CloudFormation supports this fully, but it works pretty well.
My view on this situation is that deploying the latest image from ECR is very difficult if you are using CDK (which is actually CloudFormation underneath).
What I ended up doing is putting the whole Docker image build and CDK deploy into one build script.
In my case it is a Java application: I build the war file and prepare the Dockerfile in a /docker directory:
FROM tomcat:8.0
COPY deploy.war /usr/local/tomcat/webapps/
Then I have the CDK script pick this up and build the image at deploy time:
const taskDefinition = new ecs.FargateTaskDefinition(this, 'taskDefinition', {
  cpu: 256,
  memoryLimitMiB: 1024,
});

const container = taskDefinition.addContainer('web', {
  image: ecs.ContainerImage.fromDockerImageAsset(
    // DockerImageAsset comes from the aws-ecr-assets module
    new DockerImageAsset(this, "image", {
      directory: "docker",
    }),
  ),
});
This will put the image into a CDK-specific ECR repository and deploy it.
Therefore, I don't rely on ECR for keeping different versions of my build. Each time I need to deploy or roll back, I just do it directly from the build script.
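A minimal sketch of such a combined build script, assuming a Maven project and the /docker directory described above (file names are illustrative):

#!/usr/bin/env bash
set -e
# Build the war file
./mvnw clean package
# Stage it where the Dockerfile expects it
cp target/app.war docker/deploy.war
# cdk deploy builds the image asset from /docker and deploys it;
# the asset hash changes whenever the war changes
cdk deploy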

AWS CDK not deploying API Gateway change (EDGE to REGIONAL)

I'm experimenting with AWS CDK by converting a console-developed application (just API Gateway and Lambdas for now). All is well; I can hit the API's resources and methods and the appropriate Lambdas get executed.
I'm trying to understand what triggers a deployment (and what doesn't). For example, if I try to change the API's endpoint type from the default (EDGE) to REGIONAL:
const api = new apigateway.RestApi(this, "cy-max-api", {
  restApiName: "CY Max Service",
  description: "CDK version of Max AWS demo app.",
  endpointConfiguration: [EndpointType.REGIONAL] // <-- add only this line and deploy
});
and deploy (cdk deploy), nothing is deployed (I checked the logs; the console says there are no stack changes). I even tried forcing the deploy (cdk deploy -f); no joy.
I suspect this is the expected behavior, but I would like to understand why this change doesn't trigger a deploy (and what would be necessary to force one).
Update based on the response by @balu-vyamajala (thanks for taking the time to test it).
I am using version 1.82.0 of the CDK. Here's the result of cdk diff when the only change is adding the endpointConfiguration line:
Stack CyMaxStack
Resources
[-] AWS::ApiGateway::Deployment CyMaxcymaxapiDeploymentD64E3EA0186ed2bffe1dbc3004a8457d0ce5eb28 destroy
[+] AWS::ApiGateway::Deployment CyMax/cy-max-api/Deployment CyMaxcymaxapiDeploymentD64E3EA0cd62c1e6cd1229987f977199cc5906ea
[~] AWS::ApiGateway::RestApi CyMax/cy-max-api CyMaxcymaxapi48ECF39D
└─ [+] EndpointConfiguration
└─ {}
[~] AWS::ApiGateway::Stage CyMax/cy-max-api/DeploymentStage.prod CyMaxcymaxapiDeploymentStageprod5291AAF0
└─ [~] DeploymentId
└─ [~] .Ref:
├─ [-] CyMaxcymaxapiDeploymentD64E3EA0186ed2bffe1dbc3004a8457d0ce5eb28
└─ [+] CyMaxcymaxapiDeploymentD64E3EA0cd62c1e6cd1229987f977199cc5906ea
and here's what cdk deploy says:
CyMaxStack: deploying...
[0%] start: Publishing 6280a7c7fbc87dd62aeb85e098d6de2f0b644eea442dcbfc67063a56c08ce151:current
[100%] success: Published 6280a7c7fbc87dd62aeb85e098d6de2f0b644eea442dcbfc67063a56c08ce151:current
CyMaxStack: creating CloudFormation changeset...
[█████████████████████████████·····························] (5/10)
✅ CyMaxStack
Outputs:
CyMaxStack.CyMaxcymaxapiEndpoint52D905B0 = https://...my URL...
Stack ARN:
arn:aws:cloudformation:us-west-1:...my ARN...
When I check the console the API has not been updated to REGIONAL. Also, endpointConfiguration is either missing, or {} in cdk.out/tree.json. I never see {REGIONAL} in that file.
I am guessing you are asking about updates to AWS::ApiGateway::Deployment, which don't automatically happen; the CDK generates a hash of the methods and resources and appends it to the resource name to force a new deployment.
But in your case, EndpointConfiguration is a property of AWS::ApiGateway::RestApi, which is directly referred to in AWS::ApiGateway::Deployment. Irrespective of any other changes, it must trigger a new Deployment.
Which version of the CDK are you using?
I just tested it with 1.80.0; it did trigger a change in three resources: AWS::ApiGateway::Deployment, AWS::ApiGateway::Stage and AWS::ApiGateway::RestApi.
Please try cdk synth and observe the generated CloudFormation for resource AWS::ApiGateway::RestApi before and after compiling your change.
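One more thing worth checking, given the empty EndpointConfiguration {} in the diff above (this is an assumption, not something confirmed in the question): in the TypeScript CDK, endpointConfiguration is an object containing a types array rather than a bare array, so the REGIONAL value may be silently dropped as written. The expected shape is:

const api = new apigateway.RestApi(this, "cy-max-api", {
  restApiName: "CY Max Service",
  description: "CDK version of Max AWS demo app.",
  // endpointConfiguration takes { types: [...] }, not a bare array
  endpointConfiguration: { types: [apigateway.EndpointType.REGIONAL] },
});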

Serverless conditional function deployment by region

Following config is extracted from my serverless.yml
service: test-svc
provider:
  name: aws
  ...
functions:
  apiHandler:
    handler: index.handler
    events:
      - http: ANY /
      - http: 'ANY {proxy+}'
  taskHandler:
    handler: task.handler
    events:
      - sqs:
          ...
  alexaHandler:
    handler: alexa.handler
    events:
      - alexaSmartHome: ...
I want to deploy the apiHandler and taskHandler functions in only region-a,
and deploy alexaHandler in region-b, region-c and region-d.
If I execute the command sls deploy --region us-east-1, all three functions will be deployed, but I don't want that; I need only 2 of the functions deployed.
Using sls deploy function is not an option, because it only swaps the zip file.
Putting alexaHandler in a sub-directory with a new serverless.yml didn't work, because the deployment only packs the sub-directory and won't include code from the parent directory (much code is shared between the 3 functions).
Any suggestions to deal with this requirement?
After going through the whole serverless plugin list, I found that the above requirement can be achieved with serverless-plugin-select.
Using this plugin we can select to deploy only some functions from serverless.yml, depending on the stage or region value - in my case, the region value.
The following is the modified serverless.yml: a plugins section is added, and a regions key is added to each function.
service: test-svc
plugins:
  - serverless-plugin-select
provider:
  name: aws
  ...
functions:
  apiHandler:
    ...
    regions:
      - us-west-2
  taskHandler:
    ...
    regions:
      - us-west-2
  alexaHandler:
    ...
    regions:
      - eu-west-1
      - us-east-1
      - us-west-2
With the above config, I use the following bash script to deploy for all region.
#!/usr/bin/env bash
serverless deploy --region eu-west-1
serverless deploy --region us-east-1
serverless deploy --region us-west-2
You can conditionally select values in serverless.yml by storing the conditional functions in a custom variable, like so:
### serverless.yml
provider:
  name: << aws or your provider >>
  runtime: << your runtime, e.g. nodejs8.10 >>
  region: << your aws region >>
  stage: ${opt:stage, 'dev'}

custom:
  extraCode:
    dev:
    testing: ${file(testing_only/testing_endpoints.yml)}
    prod:
      ...

## and then at the functions section of serverless.yml
functions:
  - ${file(functions/someFunctionsInAFile.yml)}
  - ${file(functions/someMoreFunctions.yml)}
  - ${self:custom.extraCode.${self:provider.stage}}
When you deploy serverless, you should pass in the command-line option --stage=myStageName. When you pass in --stage=dev or --stage=prod, the last line in the functions section resolves to nothing and no extra code is deployed.
If you pass in --stage=testing, the last line in the functions section is filled with the file set in your custom variables section, and your test code is deployed.
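Usage is then just a matter of the stage flag (stage names as defined in the custom block above):

# dev/prod: the extra line resolves to nothing, so only the common functions deploy
serverless deploy --stage=prod
# testing: additionally deploys the endpoints from testing_only/testing_endpoints.yml
serverless deploy --stage=testing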

Use CAPABILITY_AUTO_EXPAND for nested stacks on CloudFormation

I am trying to use a nested stack, and when my ChangeSet is being executed, I get this error:
Requires capabilities : [CAPABILITY_AUTO_EXPAND]
I went and created a pipeline with CloudFormation.
This can be used to create a pipeline:
Configuration:
  ActionMode: CHANGE_SET_REPLACE
  ChangeSetName: changeset
  RoleArn: ??
  Capabilities: CAPABILITY_IAM
  StackName: appsync-graphql
  TemplatePath: BuildArtifact::output.yaml
This can’t:
Configuration:
  ActionMode: CHANGE_SET_REPLACE
  ChangeSetName: changeset
  RoleArn: ??
  Capabilities:
    - CAPABILITY_IAM
    - CAPABILITY_AUTO_EXPAND
  StackName: appsync-graphql
  TemplatePath: BuildArtifact::output.yaml
The error was: "Value of property Configuration must be an object with String (or simple type) properties".
This is the closest doc that I could find: https://docs.aws.amazon.com/AWSCloudFormation/latest/APIReference/API_UpdateStack.html
It says "Type: Array of strings" for Capabilities, and the AWS CLI docs say similarly, but don't give an example.
So I ran out of ideas about what else to try to get the CAPABILITY_AUTO_EXPAND capability added.
I tried another variant and it worked!
Configuration:
  ...
  Capabilities: CAPABILITY_IAM,CAPABILITY_AUTO_EXPAND
  ...
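Putting it together with the values from the question, the full action configuration would be as below. The comma-separated string is the key detail: as the error message above says, CodePipeline action configuration values must be strings (or simple types), not lists:

Configuration:
  ActionMode: CHANGE_SET_REPLACE
  ChangeSetName: changeset
  RoleArn: ??
  Capabilities: CAPABILITY_IAM,CAPABILITY_AUTO_EXPAND
  StackName: appsync-graphql
  TemplatePath: BuildArtifact::output.yaml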
I got the answer from Keeton Hodgson, this cli command works:
sam deploy --template-file output.yaml --stack-name <AppName> --capabilities CAPABILITY_IAM CAPABILITY_AUTO_EXPAND
Notice that there is no comma.
I still don't know how to change the pipeline template for it to work.
I tried the solutions above, and what worked for me today (June 2020) using the higher-level sam CLI was adding a space between the listed capabilities. It's complete insanity that there's no resilience in this text file interpretation. SAM's CLI is open source, so I guess I could put my code where my mouth is and submit a PR. Anyway.
samconfig.toml:
...
capabilities = "CAPABILITY_IAM CAPABILITY_AUTO_EXPAND"
...
Then:
sam deploy
Output:
...
Capabilities : ["CAPABILITY_IAM", "CAPABILITY_AUTO_EXPAND"]
...
Put the capabilities property at the very end, like this:
aws cloudformation deploy COMMAND --capabilities CAPABILITY_NAMED_IAM
Change the order:
Configuration:
  ActionMode: CHANGE_SET_REPLACE
  ChangeSetName: changeset
  RoleArn: ??
  StackName: appsync-graphql
  TemplatePath: BuildArtifact::output.yaml
  Capabilities:
    - CAPABILITY_IAM
    - CAPABILITY_AUTO_EXPAND
After some research I found that you can actually add those capabilities in the console.
See the "Capabilities - optional" section in the CloudFormation deploy phase definition in the console.

Passing secureObject array as VSTS variable

I have an ARM template that deploys a Key Vault and populates it with secrets. It creates as many secrets as there are entries in the secretsObject parameter. For example, if I have:
"secretsObject": {
"type": "secureObject",
"defaultValue": {
"secrets": [
{
"secretName": "exampleSecret1",
"secretValue": "secretVaule1"
},
{
"secretName": "exampleSecret2",
"secretValue": "secretValue2"
}
]
}
}
The template will create 2 secrets. So this is the snippet that I put into .parameters.json to deploy the template from Visual Studio:
"secrets": [
{
"secretName": "exampleSecret1",
"secretValue": "secretVaule1"
},
{
"secretName": "exampleSecret2",
"secretValue": "secretValue2"
}
]
The problem is I can't figure out how to paste such a snippet into VSTS as a variable (to override the parameter). This is the ARM template I'm using. The error I get is:
There were errors in your deployment. Error code: InvalidDeploymentParameterKey.
One of the deployment parameters has an empty key. Please see https://aka.ms/arm-deploy/#parameter-file for details.
Processed: ##vso[task.issue type=error;]One of the deployment parameters has an empty key. Please see https://aka.ms/arm-deploy/#parameter-file for details.
task result: Failed
Task failed while creating or updating the template deployment.
There is an issue in the Azure Resource Group Deployment task, and I submitted feedback here: VSTS build/release task: Override template parameters of Azure Resource Group Deployment.
The workaround is to update the parameter file during the build/release (e.g. parameters.json) and specify that parameter file in the Azure Resource Group Deployment task.
There are many ways to update the file, such as the Replace Tokens task.
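As a sketch of that workaround, the parameters file checked into the repo would hold a token where the secrets array goes (the #{...}# syntax is the Replace Tokens default pattern, and the variable name is illustrative):

"secretsObject": {
  "value": {
    "secrets": #{SecretsArray}#
  }
}

A pipeline variable named SecretsArray would then contain the JSON array from the question as a single-line string, and the Replace Tokens step substitutes it before the deployment task runs.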
Update:
Feedback on GitHub: https://github.com/Microsoft/vsts-tasks/issues/6108