Referring to local variables in Concourse credentials file - concourse

I have the following credentials.yml file:
test: 123
test2: ((test))
When I upload a pipeline and feed it this credentials file, the test2 variable is never interpolated and I get an "undefined vars: test" error from Concourse.
Is it even possible to refer to another variable in the very same YAML file, or do you always have to refer to variables in a configured credentials manager (e.g. Vault)?

Solved it using YAML anchors and aliases. Sadly, keys containing dots or hyphens do not work at all.
e.g.:
test: &test 123
test2: *test
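For context, a minimal pipeline consuming the variable might look roughly like the sketch below (the job, task, and image names are made up); with the anchor in place, ((test2)) resolves to 123 when the pipeline is set with fly set-pipeline ... -l credentials.yml:
jobs:
- name: demo-job
  plan:
  - task: show-value
    config:
      platform: linux
      image_resource:
        type: registry-image
        source: {repository: busybox}
      run:
        path: echo
        args: ["((test2))"]   # interpolated from credentials.yml at set-pipeline time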

Unexpected error while passing variable group variables (Azure DevOps) to YAML pipeline

I'm a newbie to both Azure DevOps and Terraform, but I'm trying to deploy a pipeline using a YAML file.
I have tried to run a terraform plan from a YAML pipeline, passing in variables from Azure DevOps, but I got the following error:
2021-11-24T18:39:46.4604561Z Error: "name" may only contain alphanumeric characters, dash, underscores, parentheses and periods
2021-11-24T18:39:46.4604832Z
2021-11-24T18:39:46.4605940Z on modules/aks/main.tf line 2, in resource "azurerm_resource_group" "aks-resource-group":
2021-11-24T18:39:46.4606436Z 2: name = var.resource_group_name
2021-11-24T18:39:46.4606609Z
2021-11-24T18:39:46.4606722Z
2021-11-24T18:39:46.4606818Z
2021-11-24T18:39:46.4607525Z Error: Error: Subnet: (Name "#{vnet_subnet_name}#" / Virtual Network Name "#{vnet_name}#" / Resource Group "RG-XX-XXXX-XXXXX-001") was not found
2021-11-24T18:39:46.4608006Z
2021-11-24T18:39:46.4608580Z on modules/aks/main.tf line 16, in data "azurerm_subnet" "subnet-project":
2021-11-24T18:39:46.4609335Z 16: data "azurerm_subnet" "subnet-project" {
The 'name' has the following format in the Variable group in the Azure DevOps UI:
RG-XX-XXXX-XXXXX-001
This is the snippet of the YAML file where I included the replace tokens task:
displayName: 'Replace Secrets'
inputs:
  targetFiles: |
    variables.tfvars
  encoding: 'utf-8'
  actionOnMissing: fail
  tokenPattern: #{MyVar}#
And this is a sample of the variables I have in a variable group:
(screenshot of the variable group)
Also, I replace the terraform.tfvars file with something like this:
resource_group_name = "#{resource_group_name}#"
I have checked the name entered in the UI several times, but I feel the error is pointing to something else I cannot see.
Has anyone experienced something related to this error?
Thank you in advance!
tokenPattern: #{MyVar}#
It is looking for the pattern #{MyVar}# to replace. Not "something contained between #{ and }#", but the literal value #{MyVar}#. I'm guessing it's expecting a regular expression, but I'm not familiar with that task.
So the end result is that your #{token values}# aren't getting replaced.
Assuming you're using https://marketplace.visualstudio.com/items?itemName=qetza.replacetokens, you probably want to specify tokenPrefix: #{ and tokenSuffix: }# instead of using tokenPattern.
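Assuming the qetza task (version 3 syntax shown here; adjust for the version you actually have installed), the corrected step would look roughly like this:
- task: replacetokens@3
  displayName: 'Replace Secrets'
  inputs:
    targetFiles: |
      variables.tfvars
    encoding: 'utf-8'
    actionOnMissing: fail
    tokenPrefix: '#{'    # quoted so YAML doesn't treat the # as a comment
    tokenSuffix: '}#'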
Now, having said that...
There is no reason for you to be using token replacement on a tfvars file. You should create different tfvars files for each environment, then pass in a tfvars file via the -var-file argument to Terraform. Secrets can be passed in on the command line via -var 'foo=bar'
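For example, a plain script step that calls Terraform directly could pass an environment-specific var file plus a secret on the command line (the file path and variable names below are made up):
- script: |
    terraform plan \
      -var-file="environments/dev.tfvars" \
      -var "client_secret=$(clientSecret)"
  displayName: 'Terraform Plan'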
Storing variables that represent application or deployment configuration in Azure DevOps (or GitHub, or any other CI system) is a big, big anti-pattern, because it tightly couples your deployment process to a particular platform. If you're sourcing all of your variables from Azure DevOps, you can't easily test locally or migrate to a different CI/CD provider like GitHub Actions in the future.
For values that shouldn't be in source control, such as secrets, you should use a secret provider like Azure Key Vault and integrate it with your application (or, in this case, use a data resource in Terraform to pull the necessary secrets automatically at deployment time).

automate uploading of glue script

We are currently using CloudFormation to create a Glue job (via CodeBuild and CodePipeline). The one thing we are stuck on is how to automate uploading the code that goes into the Glue job.
Our current relevant piece of the cloudformation template looks like this:
MyJob:
  Type: AWS::Glue::Job
  Properties:
    Command:
      Name: glueetl
      ScriptLocation: "s3://aws-glue-scripts//your-script-file.py"
    DefaultArguments:
      "--job-bookmark-option": "job-bookmark-enable"
    ExecutionProperty:
      MaxConcurrentRuns: 2
    MaxRetries: 0
    Name: cf-job1
    Role: !Ref MyJobRole
The problem is the "ScriptLocation". It looks like it is required to be an S3 location. How can we automate the upload of this? The code is in a .py file in our Git repository and I assume it is uploaded to the artifact repository as part of the CodeBuild process, but how do we access it?
Would like to hear how others are doing this. Thanks!
EDIT: I was able to find a similar Stack Overflow post: AWS Glue automatic job creation, but the answers don't really give a solution or address the question posed.
I've written a tool to handle the upload of stack dependencies, including CloudFormation nested templates and non-inline Lambda functions.
Currently AWS Glue is not handled, since I haven't tried it in any project yet. But it should be easy to expand the tool to support Glue.
The dependencies are defined in a separate config file, and a piece of code within the tool is responsible for handling the config. Here's the sample config:
Nested CloudFormation templates:
# DEPENDS=( <ParameterName>=<NestedTemplate> )
#
# Required: Yes if has nested template, otherwise No
# Default: None
# Syntax:
#   <ParameterName>: The name of the template parameter that is referred to by
#                    the value of the nested template property `TemplateURL`.
#   <NestedTemplate>: A local path or an S3 URL starting with `s3://` or
#                     `https://` pointing to the nested template.
#                     Nested templates at a local path are uploaded to the
#                     S3 bucket automatically during the deployment.
# Description:
#   Double quote the pairs which contain whitespace or special characters.
#   Use `#` to comment out.
# ---
# Example:
#   DEPENDS=(
#     NestedTemplateFooURL=/path/to/nested/foo/stack.json
#     NestedTemplateBarURL=/path/to/nested/bar/stack.json
#   )
Lambda functions:
# LAMBDA=( <S3BucketParameterName>:<S3KeyParameterName>=<LambdaFunction> )
#
# Required: Yes if has non-inline Lambda Function, otherwise No
# Default: None
# Syntax:
#   <S3BucketParameterName>: The name of the template parameter that is referred
#                            to by the value of the Lambda property `Code.S3Bucket`.
#   <S3KeyParameterName>: The name of the template parameter that is referred
#                         to by the value of the Lambda property `Code.S3Key`.
#   <LambdaFunction>: A local path or an S3 URL starting with `s3://` pointing
#                     to the Lambda Function.
#                     Lambda Functions at a local path are zipped and uploaded
#                     to the S3 bucket automatically during the deployment.
# Description:
#   Double quote the pairs which contain whitespace or special characters.
#   Use `#` to comment out.
# ---
# Example:
#   LAMBDA=(
#     S3BucketForLambdaFoo:S3KeyForLambdaFoo=/path/to/LambdaFoo.py
#     S3BucketForLambdaBar:S3KeyForLambdaBar=s3://mybucket/LambdaBar.py
#   )
The tool is written in bash and comes in two parts:
xsh: It works as a bash library framework.
xsh-lib/aws: It's a library of xsh.
The code you may need to expand is located in xsh-lib/aws/functions/cfn/deploy.sh.
The example deploy command looks like:
$ xsh aws/cfn/deploy -C /path/to/your/template-and-config-dir -t stack.json -c sample.conf
I'm considering abstracting the dependencies, such as CloudFormation templates, Lambda functions and Glue scripts, into a single interface for both configs and handlers.
This will make it easier to add new dependency handlers to the deployer.
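Separately, whichever tool ends up doing the deployment, the upload itself can be as simple as an aws s3 cp step in the CodeBuild buildspec that runs before the CloudFormation deploy; a rough sketch, with the bucket name, paths, and stack name as placeholders:
version: 0.2
phases:
  build:
    commands:
      # upload the Glue script from the repository to the location referenced by ScriptLocation
      - aws s3 cp glue/your-script-file.py s3://aws-glue-scripts/your-script-file.py
      # then deploy the template that points at that S3 path
      - aws cloudformation deploy --template-file template.yml --stack-name my-glue-stack --capabilities CAPABILITY_IAM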

How to fix PipelineParam from discarding all information except for name in Kubeflow Pipeline

I'm trying to write an application using Kubeflow Pipelines. I'm running into trouble when passing parameters to the pipeline (the main Python function decorated with @kfp.dsl.pipeline). The parameters should be automatically converted into a PipelineParam with name, value, and other info. However, it seems that everything except for the name is being discarded. I'm on an Ubuntu server.
I've tried uninstalling/reinstalling and updating Kubeflow, tried installing several of the most recent versions of kfp (0.1.23, 0.1.22, 0.1.20, 0.1.18), as well as installing on my local machine.
import kfp

def print_pipeline_param():
    return kfp.dsl.PipelineParam("Name", value="Value")

@kfp.dsl.pipeline(
    name='Test Pipeline',
    description='Test pipeline'
)
def test_pipeline(output_file='/output.txt'):
    print(print_pipeline_param())
    print(output_file)

if __name__ == '__main__':
    import kfp.compiler as compiler
    compiler.Compiler().compile(test_pipeline, __file__ + '.tar.gz')
The result of running this is:
{{pipelineparam:op=;name=Name;value=Value;type=;}}
{{pipelineparam:op=;name=output-file;value=;type=;}}
I should be getting '/output.txt' in the "value" field, but the only field populated is the name. This only happens when passing in parameters to the main pipeline function. This also happens when directly passing in a PipelineParam like so:
@kfp.dsl.pipeline(
    name='Test Pipeline',
    description='Test pipeline'
)
def test_pipeline(output_file=kfp.dsl.PipelineParam("Output File", value="/output.txt")):
    print(output_file)
Prints out: {{pipelineparam:op=;name=output-file;value=;type=;}}

How to replace variables of JSON file in Team Services?

I'm stuck with release variable substitution for an Angular project. I have a settings.json file in which I would like to replace some variables:
{
  "test": "variable to replace"
}
I tried to find a custom task in the Marketplace, but all of the tasks seem to work only with XML files such as web.config.
I use the "Replace tokens" from the Marketplace https://marketplace.visualstudio.com/items?itemName=qetza.replacetokens
You define the desired values as variables in the Release Definition and then you add the Replace Tokens task and configure a wildcard path for all target text files in your repository where you want to replace values (for example: **/*.json). The token that gets replaced has configurable prefix and postfix (default are '#{' and '}#'). So if you have a variable named constr you can put in your config.json
{
"connectionstring": "#{constr}#"
}
and it will deploy the file like
{
"connectionstring": "server=localhost,user id=admin,password=secret"
}
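If the release is defined in YAML rather than the classic editor, the task configuration would look roughly like this (the task version here is an assumption; adjust the file pattern to your project):
- task: replacetokens@3
  inputs:
    targetFiles: '**/*.json'
    tokenPrefix: '#{'
    tokenSuffix: '}#'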
The IIS Web App Deploy task in VSTS Releases has JSON variable substitution under File Transforms & Variable Substitution Options.
Provide a list of JSON files, and JSONPath expressions for the variables that need replacing.
For example, to replace the value of ‘ConnectionString’ in the sample below, you need to define a variable as ‘Data.DefaultConnection.ConnectionString’ in the build/release definition (or release definition’s environment).
{
  "Data": {
    "DefaultConnection": {
      "ConnectionString": "Server=(localdb)\SQLEXPRESS;Database=MyDB;Trusted_Connection=True"
    }
  }
}
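In YAML form the deploy step looks roughly like the sketch below; the task name/version and paths are assumptions on my part, and the relevant piece is the JSONFiles input:
- task: IISWebAppDeploymentOnMachineGroup@0
  inputs:
    WebSiteName: 'Default Web Site'                     # placeholder site name
    Package: '$(System.DefaultWorkingDirectory)\**\*.zip'
    JSONFiles: '**/appsettings.json'                    # files that get JSON variable substitution applied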
You can add a variable in the release Variables tab, and then use a PowerShell task to update the content of your settings.json.
Assume the original content is
{
  "test": "old"
}
And you want to change it to
{
  "test": "new"
}
So you can replace the variable in the JSON file with the steps below:
1. Add variable
Define a variable in the release Variables tab with the value you want to replace with (variable test with value new):
2. Add PowerShell task
Settings for the PowerShell task:
Type: Inline Script.
Inline Script:
# System.DefaultWorkingDirectory is a path like C:\_work\r1\a, so you need to specify where your appsettings.json is.
$path = "$(System.DefaultWorkingDirectory)\buildName\drop\WebApplication1\src\WebApplication1\appsettings.json"
(Get-Content $path) -replace "old", "$(test)" | Out-File $path

Why do I use custom environment variables on a hosted website?

I am new to the hosting world (cloudControl), and I have some problems with application credentials, like database administration (MongoHQ) or Google authentication.
So, should I put those variables with some kind of syntax (something like $variable) in the code, and then pass them on the command line as variable=value pairs?
If you are using Tornado, this is even simpler: use tornado.options and pass environment variables when running the code.
Use the following in your Tornado code:
from tornado.options import define, options

define("mysql_host", default="127.0.0.1:3306", help="Main user DB")
define("google_oauth_key", help="Client key for Google Oauth")
Then you can access these values in the rest of your code as:
options.mysql_host
options.google_oauth_key
When you are running your Tornado script, pass the environment variables:
python main.py --mysql_host=$MYSQL_HOST --google_oauth_key=$OAUTH_KEY
assuming both $MYSQL_HOST and $OAUTH_KEY are environment variables. Let me know if you need a full working example or any further help.
example:
First, set an environment variable:
$ export mongo_uri_env=mongodb://alien:12345@kahana.mongohq.com:10067/essog
and make changes in your Tornado code:
define("mongo_uri", default="127.0.0.1:28017", help="MongoDB URI")
...
...
uri = options.mongo_uri
and you would run your code as
python main.py --mongo_uri=$mongo_uri_env
If you don't want to pass it while running, then you have to read that environment variable directly in your script. For that:
import os
...
...
uri = os.environ['mongo_uri_env']