How to use !ImportValue to get the resource arn using AWS SAM template - aws-cloudformation

I have a base template, output section is like this:
Outputs:
  layerName:
    Value: !Ref Psycopg2LayerLambdaLayer
How do I get the ARN of Psycopg2LayerLambdaLayer in my new template, using the output from the base template? Is this correct?
Layers: !ImportValue layerName.arn

If you want to use an Output value as an Import in a different template, you must export it first. In your example, it might look like the following:
Outputs:
  layerArn:
    Value: !Ref Psycopg2LayerLambdaLayer
    Export:
      Name: psycopg2LayerArn
Note that !Ref on a Lambda layer version already returns the layer version ARN, so no GetAtt is needed here. After deploying this, you can import the value in another stack with !ImportValue psycopg2LayerArn.
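For instance, the consuming SAM template could reference the export like this (a minimal sketch; the function name, handler, and runtime are placeholder assumptions):
Resources:
  MyFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler
      Runtime: python3.9
      Layers:
        - !ImportValue psycopg2LayerArn
Note that Layers expects a list of layer version ARNs, so the import goes inside a list item.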
Note that an export has to have a unique name per account and region, therefore it is a good idea to prefix it with the stack/resource name. Also note that you can’t export objects, only scalar values such as strings.
Read more at https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-importvalue.html

Related

Why YAML Schema (from JSON Schema) is returning $ref x in schema can not be resolved?

I have this schema file in ./types/index.yaml:
$schema: "https://json-schema.org/draft/2020-12/schema"
name: Types in YAML
definitions:
  flow:
    type: object
    properties:
      slug:
        type: string
  flow_list:
    type: 'array'
    items:
      $ref: "#/definitions/flow"
And I have this "instance" YAML file, in contents/en.yaml:
# yaml-language-server: $schema=../types/index.yaml#definitions/flow_list
-
  slug: '123'
I would expect it to allow me to create a list of objects with the slug property on it (to get started), but instead I am getting something like this (with the ...... filled in with the full absolute OS path):
$ref 'definitions/flow_list' in 'file:///...../types/index.yaml' can not be resolved.
I am using the RedHat VSCode YAML extension. Any ideas on how to get it so I can write the array of "flows" here?
Restarting VSCode a few times seemed to fix it.
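If restarting doesn't help, one more thing worth checking (an assumption on my part, not part of the original answer): the fragment in the modeline is resolved as a JSON Pointer, and JSON Pointers normally need a leading slash:
# yaml-language-server: $schema=../types/index.yaml#/definitions/flow_list
-
  slug: '123'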

Getting values from a command in yaml parameter

When storing an array in an Azure pipeline YAML parameter, I normally declare it with type object, something like this:
parameters:
  - name: versions
    type: object
    default:
      - 3.0.1
      - 3.0.2
      - 3.0.3
But now the issue is that I don't know this set of versions ahead of time and have to get it from a command.
So I was thinking something like this:
parameters:
  - name: versions
    type: object
    default: $npm view #get-versions
But it does not seem to work. Does anyone know how to get values from a command for a parameter in a YAML pipeline? I appreciate it so much!
You can't. Parameters need to be pre-defined and cannot be dynamically generated.
The ugly workaround would be to write an application that parses and updates the pipeline YAML when a new version of your package is published. An alternative is to just make the parameter value a free-form text field, with a reasonable default.
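If the versions are only needed at runtime rather than for template expansion, a common workaround is to fetch them in a script step and publish them as a runtime variable via a logging command. A minimal sketch, assuming a hypothetical package name my-package:
steps:
  - script: |
      # Fetch the version list from npm as a compact JSON array string
      versions=$(npm view my-package versions --json | tr -d '\n ')
      # Publish it as a runtime variable for later steps
      echo "##vso[task.setvariable variable=versions]$versions"
    displayName: Fetch versions from npm
  - script: echo "$(versions)"
    displayName: Use the fetched versions
The limitation stands, though: runtime variables are plain strings and cannot drive compile-time constructs such as ${{ each }} loops over a parameter, which is why the parameter itself cannot be generated this way.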

Parameter name containing special characters on Helm chart

In my Helm chart, I need to set the following Java Spring parameter name:
company.sms.security.password#id(name):
  secret:
    name: mypasswd
    key: mysecretkey
But when applying the template, I encounter a syntax issue.
oc apply -f template.yml
The Deployment "template" is invalid: spec.template.spec.containers[0].env[79].name: Invalid value: "company.sms.security.password#id(name)": a valid environment variable name must consist of alphabetic characters, digits, '_', '-', or '.', and must not start with a digit (e.g. 'my.env-name', or 'MY_ENV.NAME', or 'MyEnvName1', regex used for validation is '[-._a-zA-Z][-._a-zA-Z0-9]*')
What I would usually do is defining this variable at runtime like this:
JAVA_TOOL_OPTIONS:
  -Dcompany.sms.security.password#id(name)=mypass
But since it's storing sensitive data, obviously I cannot log the password in the clear.
So far I could only think of defining an init container as a workaround; changing the parameter name is not an option.
Edit: so the goal is to not expose the password in either the manifest or the application logs.
Assign the value from your secret to one environment variable, and use it in the JAVA_TOOL_OPTIONS environment variable value. The way to expand the value of a previously defined variable VAR_NAME is $(VAR_NAME).
For example:
- name: MY_PASSWORD
  valueFrom:
    secretKeyRef:
      name: mypasswd
      key: mysecretkey
- name: JAVA_TOOL_OPTIONS
  value: "-Dcompany.sms.security.password#id(name)=$(MY_PASSWORD)"
Constraints
There are some conditions for Kubernetes to expand $(VAR_NAME) correctly; otherwise $(VAR_NAME) is left as a regular string:
The variable VAR_NAME must be defined before the one that uses it.
The value of VAR_NAME must not consist of other variables and must itself be defined; otherwise $(VAR_NAME) is parsed as a plain string.
In the example above, if the secret mypasswd in the pod's namespace doesn't have a value for the key mysecretkey, $(MY_PASSWORD) will appear literally as a string and will not be parsed.
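For completeness, a Secret matching the example could look like this (a sketch; the literal value is a placeholder):
apiVersion: v1
kind: Secret
metadata:
  name: mypasswd
type: Opaque
stringData:
  mysecretkey: s3cr3t-value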
References:
Dependent environment variables
Use secret data in environment variables

How to set a variable from another yaml file in azure-pipeline.yml

I have an environment.yml shown as follows. I would like to read out the content of the name variable (core-force) and set it as the value of a global variable in my azure-pipeline.yml file. How can I do it?
name: core-force
channels:
  - conda-forge
dependencies:
  - click
  - Sphinx
  - sphinx_rtd_theme
  - numpy
  - pylint
  - azure-cosmos
  - python=3
  - flask
  - pytest
  - shapely
In my azure-pipeline.yml file I would like to have something like:
variables:
  tag: <the value of name from environment.yml, i.e. 'core-force'>
Please check this example:
File: vars.yml
variables:
  favoriteVeggie: 'brussels sprouts'
File: azure-pipelines.yml
variables:
  - template: vars.yml  # Template reference

steps:
  - script: echo My favorite vegetable is ${{ variables.favoriteVeggie }}.
Please note that variables are simple string:string mappings, so if you want to use a list you may need some workaround in PowerShell at the place where you use the value from that list.
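For example, one such workaround is to flatten the list into a delimited string and split it where needed (a sketch; the variable name versionList is hypothetical):
variables:
  versionList: '3.0.1,3.0.2,3.0.3'

steps:
  - powershell: |
      # Split the flattened string back into an array
      $versions = "$(versionList)" -split ','
      foreach ($v in $versions) { Write-Host "Version: $v" }
    displayName: Use the list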
If you don't want to use the template functionality shown above, you need to do the following:
create a separate job/stage
define a step there to read the environment.yml file and set the variable using the REST API or Azure CLI (see the sketch after this list)
create another job/stage and move your current build definition there
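A minimal sketch of that approach (assuming a bash step and simple text extraction; the job and variable names are placeholders):
jobs:
  - job: ReadEnvFile
    steps:
      - bash: |
          # Pull the value of `name:` out of environment.yml
          tag=$(grep '^name:' environment.yml | awk '{print $2}')
          # Expose it as an output variable for downstream jobs
          echo "##vso[task.setvariable variable=tag;isOutput=true]$tag"
        name: setTag
  - job: Build
    dependsOn: ReadEnvFile
    variables:
      tag: $[ dependencies.ReadEnvFile.outputs['setTag.tag'] ]
    steps:
      - script: echo Using tag $(tag)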
I found this topic on developer community where you can read:
Yaml variables have always been string: string mappings. The doc appears to be currently correct, though we may have had a bug when last you visited.
We are preparing to release a feature in the near future to allow you to pass more complex structures. Stay tuned!
But I don't have more info about this.
Global variables should be stored in a separate template file. This file ideally would be in a separate repo where other repos can refer to this.
Here is another answer for this

automate uploading of glue script

We are currently using CloudFormation to create a glue job (via codebuild and codepipeline). The one thing we are stuck on is how to automate deploying the code that goes into the glue job.
Our current relevant piece of the cloudformation template looks like this:
MyJob:
  Type: AWS::Glue::Job
  Properties:
    Command:
      Name: glueetl
      ScriptLocation: "s3://aws-glue-scripts//your-script-file.py"
    DefaultArguments:
      "--job-bookmark-option": "job-bookmark-enable"
    ExecutionProperty:
      MaxConcurrentRuns: 2
    MaxRetries: 0
    Name: cf-job1
    Role: !Ref MyJobRole
The problem is the ScriptLocation. It looks like it is required to be an S3 location. How can we automate the upload of this? The code is in a .py file in our Git repository and I assume it is uploaded to the artifact repository as part of the codebuild process, but how do we access it?
Would like to hear how others are doing this. Thanks!
EDIT: I was able to find a similar Stack Overflow post: AWS Glue automatic job creation, but the answers there don't really give a solution or understand the question posed.
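For reference, one common approach (a sketch under assumptions, not taken from the answer below) is to have CodeBuild copy the script to S3 before the CloudFormation deploy, e.g. in buildspec.yml; the bucket, key, and file names here are placeholders:
version: 0.2
phases:
  build:
    commands:
      # Upload the Glue script so ScriptLocation can point at it
      - aws s3 cp glue/your-script-file.py s3://aws-glue-scripts/your-script-file.py
      # Then deploy the stack that references that S3 location
      - aws cloudformation deploy --template-file template.yml --stack-name glue-job-stack --capabilities CAPABILITY_IAM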
I've written a tool to handle the upload of stack dependencies, including CloudFormation nested templates and non-inline Lambda functions.
AWS Glue is not currently handled since I haven't tried it in any project yet, but it should be easy to expand the tool to support Glue.
The dependencies are defined in a separate config file, and a piece of code within the tool is responsible for handling the config. Here's a sample config:
Nested CloudFormation templates:
# DEPENDS=( <ParameterName>=<NestedTemplate> )
#
# Required: Yes if has nested template, otherwise No
# Default: None
# Syntax:
#   <ParameterName>: The name of the template parameter that is referred to
#     by the value of the nested template property `TemplateURL`.
#   <NestedTemplate>: A local path or an S3 URL starting with `s3://` or
#     `https://` pointing to the nested template.
#     Nested templates at local paths are uploaded to the
#     S3 bucket automatically during deployment.
# Description:
#   Double quote the pairs which contain whitespaces or special characters.
#   Use `#` to comment out.
# ---
# Example:
#   DEPENDS=(
#     NestedTemplateFooURL=/path/to/nested/foo/stack.json
#     NestedTemplateBarURL=/path/to/nested/bar/stack.json
#   )
Lambda functions:
# LAMBDA=( <S3BucketParameterName>:<S3KeyParameterName>=<LambdaFunction> )
#
# Required: Yes if has non-inline Lambda Function, otherwise No
# Default: None
# Syntax:
#   <S3BucketParameterName>: The name of the template parameter that is
#     referred to by the value of the Lambda property `Code.S3Bucket`.
#   <S3KeyParameterName>: The name of the template parameter that is
#     referred to by the value of the Lambda property `Code.S3Key`.
#   <LambdaFunction>: A local path or an S3 URL starting with `s3://`
#     pointing to the Lambda Function.
#     Lambda Functions at local paths are zipped and uploaded
#     to the S3 bucket automatically during deployment.
# Description:
#   Double quote the pairs which contain whitespaces or special characters.
#   Use `#` to comment out.
# ---
# Example:
#   LAMBDA=(
#     S3BucketForLambdaFoo:S3KeyForLambdaFoo=/path/to/LambdaFoo.py
#     S3BucketForLambdaBar:S3KeyForLambdaBar=s3://mybucket/LambdaBar.py
#   )
The tool is written in bash and comes in two parts:
xsh: works as a bash library framework.
xsh-lib/aws: a library of xsh.
The code you may need to expand is located in xsh-lib/aws/functions/cfn/deploy.sh.
The example deploy command looks like:
$ xsh aws/cfn/deploy -C /path/to/your/template-and-config-dir -t stack.json -c sample.conf
I'm considering abstracting the dependencies, such as CloudFormation templates, Lambda functions, and Glue scripts, into a single interface for both configs and handlers.
This will make it easier to add new dependency handlers to the deployer.