I am using the command below to get the stack information I want via the AWS CLI:
aws cloudformation --region ap-southeast-2 describe-stacks --stack-name mystack
It's returning the result OK:
{
    "Stacks": [
        {
            "StackId": "arn:aws:mystackid",
            "LastUpdatedTime": "2017-01-13T04:59:17.472Z",
            "Tags": [],
            "Outputs": [
                {
                    "OutputKey": "Ec2Sg",
                    "OutputValue": "sg-97e13dff"
                },
                {
                    "OutputKey": "DbUrl",
                    "OutputValue": "myUrl"
                }
            ],
            "CreationTime": "2017-01-13T03:27:18.893Z",
            "StackName": "mystack",
            "NotificationARNs": [],
            "StackStatus": "UPDATE_ROLLBACK_COMPLETE",
            "DisableRollback": false
        }
    ]
}
But I do not know how to return only the value of OutputValue, which is myUrl, as I do not need the rest, just myUrl.
Is that possible via aws cloudformation describe-stacks?
Edit
I just realized I can use --query:
--query "Stacks[0].Outputs[1].OutputValue"
will get exactly what I want, but I would rather select by the key DbUrl; otherwise, if the number of Outputs changes, my result will be unexpected.
I got the answer; use the below:
--query 'Stacks[0].Outputs[?OutputKey==`DbUrl`].OutputValue' --output text
Or
--query 'Stacks[?StackName==`mystack`][].Outputs[?OutputKey==`DbUrl`].OutputValue' --output text
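If you need the value in a script, a minimal sketch (same stack, region, and query as above) is to capture the text output into a shell variable:
# capture DbUrl into a variable for later use
DB_URL=$(aws cloudformation describe-stacks \
  --region ap-southeast-2 \
  --stack-name mystack \
  --query 'Stacks[0].Outputs[?OutputKey==`DbUrl`].OutputValue' \
  --output text)
echo "$DB_URL"   # prints myUrl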
While querying works, it may prove problematic if you have multiple stacks. Realistically, you should probably be leveraging exports for things that are distinct and authoritative.
By way of example - if you modified your CloudFormation snippet to look like this:
"Outputs" : {
"DbUrl" : {
"Description" : "My Database Url",
"Value" : "myUrl",
"Export" : {
"Name" : "DbUrl"
}
}
}
Then you could use:
aws cloudformation list-exports --query "Exports[?Name==\`DbUrl\`].Value" --no-paginate --output text
to retrieve it. Exports are required to be unique - only one stack can export any given name. This way, you're assured that you get the right value, every time. If you attempt to create a new stack that exports a name that already exists elsewhere, that stack creation will fail.
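Other stacks can also consume the exported value directly with Fn::ImportValue, with no CLI call at all. A minimal sketch in the same JSON template style as above (ConsumedDbUrl is a hypothetical output name):
"Outputs" : {
    "ConsumedDbUrl" : {
        "Description" : "Example of importing the exported DbUrl in another stack",
        "Value" : { "Fn::ImportValue" : "DbUrl" }
    }
}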
Avoid using hardcoded indexes such as [0]. They will lead to unpredictable query results when you have multiple stacks. Try a more dynamic query such as the following:
aws cloudformation describe-stacks --region my_region --query "Stacks[?StackName=='my_stack_name'][].Outputs[?OutputKey=='my_output_key'].OutputValue" --output text
For your example this would be:
aws cloudformation describe-stacks --region ap-southeast-2 --query "Stacks[?StackName=='mystack'][].Outputs[?OutputKey=='Ec2Sg'].OutputValue" --output text
Take note of []. Without it, your query will not return anything.
To clarify the correct way of using list-exports:
There is a small issue in the post above: we cannot use \' or \` around the output name. The correct syntax is a plain single quote ' with no escape character.
Example:
Using an escaped backtick:
C:\>aws cloudformation list-exports --query "Exports[?Name==\`my-output-name4\`].Value" --no-paginate --output text
Bad value for --query Exports[?Name==\`my-output-name4\`].Value: Bad jmespath expression: Unknown token \:
Exports[?Name==\`my-output-name4\`].Value
Using an escaped single quote:
C:\>aws cloudformation list-exports --query "Exports[?Name==\'my-output-name4\'].Value" --no-paginate --output text
Bad value for --query Exports[?Name==\'my-output-name4\'].Value: Bad jmespath expression: Unknown token \:
Exports[?Name==\'my-output-name4\'].Value
And finally, the correct syntax:
C:\>aws cloudformation list-exports --query "Exports[?Name=='myexportname'].Value" --no-paginate --output text
If you get errors or unexpectedly empty output, be aware that the query is case sensitive. For example, if you used
--query "Exports[?Name=='myexportname'].value"
with a lowercase value instead of Value, the output would be empty.
Using the AWS CLI on Windows, I had to ensure the --query parameter was double quoted.
aws cloudformation describe-stacks --stack-name <stack_name> --query "Stacks[0].Outputs[?OutputKey==`<key_we_want>`].OutputValue" --output text
Failing to use double quotes resulted in the query returning:
Stacks[0].Outputs[?OutputKey==].OutputValue
Not so helpful.
Related
In my Bicep file, I have a parameter:
@secure()
param securityToken string
This parameter is an XML value provided as a string from an environment variable, not read from a file. I have trouble correctly setting this parameter from my YML pipeline. I have this piece of code:
az deployment sub create --template-file main.bicep --parameters securityToken='<xml>'
Unfortunately, this returns an error:
< was unexpected at this time.
Ok, so I somehow have to escape this parameter? I also tried:
az deployment sub create --template-file main.bicep --parameters securityToken='<xml>'
But this gives the following error:
The system cannot find the file specified.
So my question is:
How can I provide an XML string parameter to my Bicep / ARM deployment?
Thanks!
Deployments to subscriptions need the --location <location> parameter, which is missing in the examples above.
When I run the following on Windows (Git Bash), the deployment succeeds.
az deployment sub create --location eastus --template-file main.bicep --parameters securityToken='<xml>'
Result
main.bicep
targetScope = 'subscription'
@secure()
param securityToken string
output exposedSecret string = securityToken
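To confirm the value arrived intact, one option is to read the deployment outputs back afterwards. A minimal sketch, assuming the deployment kept the default name main derived from the template file name (pass --name explicitly otherwise):
az deployment sub show \
  --name main \
  --query "properties.outputs.exposedSecret.value" \
  --output tsv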
So I found the problem. Apparently, the < and > need to be escaped, and I finally found out how: wrapping each one in double quotes. So the value "<"xml">" did the trick. Now my working code is this:
$token = '<xml></xml>'
$token = $token.Replace('<', '"<"')   # = "<"xml>"<"/xml>
$token = $token.Replace('>', '">"')   # = "<"xml">""<"/xml">"
$token = $token.Replace('""', '" "')  # the double "" only escapes the " itself; therefore, add an extra space: "<"xml">" "<"/xml">"
I need to change an environment variable under the container definition in an ECS task definition.
TASK_DEFINITION=$(aws ecs describe-task-definition --task-definition $TASKDEFINITION_ARN --output json)
echo $TASK_DEFINITION | jq '.taskDefinition.containerDefinitions[0] | ( .environment[] |= if
.name == "ES_PORT_9200_TCP_ADDR" then .value = "vpc-kkslke-shared-3-abcdkalssdfy.us-east-1.es.amazonaws.com"
else . end)' | jq -s . >container-definition.json
CONTAINER_DEF=$(<$container-definition.json)
aws ecs register-task-definition --family $FAMILY_NAME --container-definitions $CONTAINER_DEF
Error Message:
Error parsing parameter '--container-definitions': Invalid JSON:
Expecting property name enclosed in double quotes: line 1 column 2
(char 1) JSON received: {
One observation, not sure if it is related to a bug in VS Code or not: when I try to view the variable value in debug mode, I only get partial text. But when I echo the same variable, I do see the full JSON. I am not sure whether the whole value is being passed to the container definition.
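For reference, a minimal sketch of the final register step with the file contents passed as a single, quoted argument (names are taken from the snippet above; whether this alone resolves the JSON error is an assumption):
# read the generated definition and pass it as one quoted argument
CONTAINER_DEF=$(<container-definition.json)
aws ecs register-task-definition \
  --family "$FAMILY_NAME" \
  --container-definitions "$CONTAINER_DEF"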
I have a build config file that looks like:
steps:
  ...
  <I use the ${_IMAGE} variable around 4 times here>
  ...
images: ['${_IMAGE}']
options:
  dynamic_substitutions: true
substitutions:
  _IMAGE: http://example.com/image-${_ENVIRON}
And I trigger the build like:
gcloud builds submit . --config=config.yaml --substitutions=_ENVIRON=prod
What I expected was for gcloud to substitute the _ENVIRON variable in my config and then expand the _IMAGE variable to 'http://example.com/image-prod', but instead I'm getting the following error:
ERROR: (gcloud.builds.submit) INVALID_ARGUMENT: generic::invalid_argument: key "_ENVIRON" in the substitution data is not matched in the template
What can I do to make that work? I really want to be able to change the environment easily with a substitution and without the need to change anything in the code.
As you've seen, this isn't possible.
If the only use of _ENVIRON is by _IMAGE, why not drop the substitutions from config.yaml and use _IMAGE as the substitution:
ENVIRON="prod"
IMAGE="http://example.com/image-${ENVIRON}"
gcloud builds submit . \
--config=config.yaml \
--substitutions=_IMAGE=${IMAGE}
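A sketch of what config.yaml might then look like, with the substitutions block removed (the step contents remain placeholders from the question):
steps:
  ...
  <I use the ${_IMAGE} variable around 4 times here>
  ...
images: ['${_IMAGE}']
# no substitutions block needed; _IMAGE is supplied entirely by --substitutions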
I am trying to use az group deployment create to perform an ARM template deployment and I want to pass in parameters where the values are defined in variables. I can do a single parameter with no issues using the syntax below:
--parameters parameter1=$var1
But when I try to add additional parameters using the syntax below, it fails:
--parameters parameter1=$var1, parameter2=$var2
The syntax below fails as well since it will not use the values of the variables:
--parameters '{
    "parameter1": { "value": "$var1" },
    "parameter2": { "value": "$var2" }
}'
Does anyone know if what I am trying to do is possible and what the correct syntax would be?
I was fighting a combination of a corrupt shell and slightly incorrect syntax. The correct syntax for what I was trying to do is listed below:
--parameters parameter1=$var1 parameter2=$var2
Or, for a cleaner view when several parameters are involved:
--parameters parameter1=$var1 `
parameter2=$var2 `
parameter3=$var3
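If you prefer the inline JSON form from the question, the variables will only expand if the JSON itself is built with double quotes. A minimal sketch in bash (the resource group and template names are hypothetical):
# my-rg and azuredeploy.json are hypothetical names
az group deployment create \
  --resource-group my-rg \
  --template-file azuredeploy.json \
  --parameters "{ \"parameter1\": { \"value\": \"$var1\" }, \"parameter2\": { \"value\": \"$var2\" } }"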
I understand that dataproc workflow-templates is still in beta, but how do you pass parameters via add-job into the executable SQL? Here is a basic example:
#!/bin/bash
DATE_PARTITION=$1
echo DatePartition: $DATE_PARTITION
# sample job
gcloud beta dataproc workflow-templates add-job hive \
--step-id=0_first-job \
--workflow-template=my-template \
--file='gs://mybucket/first-job.sql' \
--params="DATE_PARTITION=$DATE_PARTITION"
gcloud beta dataproc workflow-templates run $WORK_FLOW
gcloud beta dataproc workflow-templates remove-job $WORK_FLOW --step-id=0_first-job
echo `date`
Here is my first-job.sql file called from the shell:
SET hive.input.format=org.apache.hadoop.hive.ql.io.CombineHiveInputFormat;
SET mapred.output.compress=true;
SET hive.exec.compress.output=true;
SET mapred.output.compression.codec=org.apache.hadoop.io.compress.GzipCodec;
SET io.compression.codecs=org.apache.hadoop.io.compress.GzipCodec;
USE mydb;
CREATE EXTERNAL TABLE if not exists data_raw (
field1 string,
field2 string
)
PARTITIONED BY (dt String)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
LOCATION 'gs://data/first-job/';
ALTER TABLE data_raw ADD IF NOT EXISTS PARTITION(dt="${hivevar:DATE_PARTITION}");
In the ALTER TABLE statement, what is the correct syntax? I’ve tried what feels like over 15 variations but nothing works. If I hard code it like this (ALTER TABLE data_raw ADD IF NOT EXISTS PARTITION(dt="2017-10-31");) the partition gets created, but unfortunately it needs to be parameterized.
BTW – The error I receive is consistently like this:
Error: Error while compiling statement: FAILED: ParseException line 1:48 cannot recognize input near '${DATE_PARTITION}' ')' '' in constant
I am probably close but not sure what I am missing.
TIA,
Melissa
Update: Dataproc now has workflow template parameterization, a beta feature:
https://cloud.google.com/dataproc/docs/concepts/workflows/workflow-parameters
For your specific case, you can do the following:
Create an empty template
gcloud beta dataproc workflow-templates create my-template
Add a job with a placeholder for the value you want to parameterize
gcloud beta dataproc workflow-templates add-job hive \
--step-id=0_first-job \
--workflow-template=my-template \
--file='gs://mybucket/first-job.sql' \
--params="DATE_PARTITION=PLACEHOLDER"
Export the template configuration to a file
gcloud beta dataproc workflow-templates export my-template \
--destination=hive-template.yaml
Edit the file to add a parameter
jobs:
- hiveJob:
    queryFileUri: gs://mybucket/first-job.sql
    scriptVariables:
      DATE_PARTITION: PLACEHOLDER
  stepId: 0_first-job
parameters:
- name: DATE_PARTITION
  fields:
  - jobs['0_first-job'].hiveJob.scriptVariables['DATE_PARTITION']
Import the changes
gcloud beta dataproc workflow-templates import my-template \
--source=hive-template.yaml
Add a managed cluster or cluster selector
gcloud beta dataproc workflow-templates set-managed-cluster my-template \
--cluster-name=my-cluster \
--zone=us-central1-a
Run your template with parameters
gcloud beta dataproc workflow-templates instantiate my-template \
--parameters="DATE_PARTITION=${DATE_PARTITION}"
Thanks for trying out Workflows! First-class support for parameterization is part of our roadmap. However, for now your remove-job/add-job trick is the best way to go.
Regarding your specific question:
Values passed via --params are accessed as ${hivevar:PARAM} (see [1]). Alternatively, you can set --properties, which are accessed as ${PARAM}.
The brackets around params are not needed. If it's intended to handle spaces in parameter values, use quotation marks like: --params="FOO=a b c,BAR=X"
Finally, I noticed an errant space here, DATE_PARTITION =$1, which probably results in an empty DATE_PARTITION value.
Hope this helps!
[1] How to use params/properties flag values when executing hive job on google dataproc