Pass an environment/input argument to a command argument in a Step Function - Kubernetes

I'm trying to set up a Step Function with an EKS run job. The job kicks off a pod in an EKS cluster and executes commands. As a start, I want the command to be echo $S3_BUCKET $S3_KEY, where both $S3_BUCKET and $S3_KEY are environment variables passed in from the Step Function input. Here is the container spec:
"containers": [
{
"name": "my-container-spec",
"image": "****.dkr.ecr.****.amazonaws.com/****:latest",
"command": [
"echo"
],
"args": [
"$S3_BUCKET", "$S3_KEY"
],
"env": [
{
"name": "S3_BUCKET",
"value.$": "$.s3_bucket"
},
{
"name": "S3_KEY",
"value.$": "$.s3_key"
}
]
}
],
"restartPolicy": "Never"
Unfortunately, after the job is executed, the command only echoes the raw text $S3_BUCKET $S3_KEY instead of the passed-in values.
So the question here is how I should pass an environment variable as an arg. The variable doesn't have to be passed in from the input; it could be any other inherited variable.

This will do the trick: args: ["$(S3_BUCKET)", "$(S3_KEY)"]
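Kubernetes expands $(VAR_NAME) references in command and args from the container's environment at startup, whereas a bare $VAR is passed through literally because no shell is involved. As a sketch, the container spec from the question with only the args line changed:

"containers": [
  {
    "name": "my-container-spec",
    "image": "****.dkr.ecr.****.amazonaws.com/****:latest",
    "command": ["echo"],
    "args": ["$(S3_BUCKET)", "$(S3_KEY)"],
    "env": [
      { "name": "S3_BUCKET", "value.$": "$.s3_bucket" },
      { "name": "S3_KEY", "value.$": "$.s3_key" }
    ]
  }
],
"restartPolicy": "Never"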

Related

Execute SQL script with Azure ARM template

I'm deploying a PostgreSQL server with a database and trying to seed this database with a SQL script. I've learned that the best way to execute a SQL script from an ARM template is to use a deployment script resource. Here is part of the template:
{
  "type": "Microsoft.DBforPostgreSQL/flexibleServers/databases",
  "apiVersion": "2021-06-01",
  "name": "[concat(parameters('psqlServerName'), '/', parameters('psqlDatabaseName'))]",
  "dependsOn": [
    "[resourceId('Microsoft.DBforPostgreSQL/flexibleServers', parameters('psqlServerName'))]"
  ],
  "properties": {
    "charset": "[parameters('psqlDatabaseCharset')]",
    "collation": "[parameters('psqlDatabaseCollation')]"
  },
  "resources": [
    {
      "type": "Microsoft.Resources/deploymentScripts",
      "apiVersion": "2020-10-01",
      "name": "deploySQL",
      "location": "[parameters('location')]",
      "kind": "AzureCLI",
      "dependsOn": [
        "[resourceId('Microsoft.DBforPostgreSQL/flexibleServers/databases', parameters('psqlServerName'), parameters('psqlDatabaseName'))]"
      ],
      "properties": {
        "azCliVersion": "2.34.1",
        "storageAccountSettings": {
          "storageAccountKey": "[listKeys(resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName')), '2019-06-01').keys[0].value]",
          "storageAccountName": "[parameters('storageAccountName')]"
        },
        "cleanupPreference": "Always",
        "environmentVariables": [
          {
            "name": "psqlFqdn",
            "value": "[reference(resourceId('Microsoft.DBforPostgreSQL/flexibleServers', parameters('psqlServerName')), '2021-06-01').fullyQualifiedDomainName]"
          },
          {
            "name": "psqlDatabaseName",
            "value": "[parameters('psqlDatabaseName')]"
          },
          {
            "name": "psqlAdminLogin",
            "value": "[parameters('psqlAdminLogin')]"
          },
          {
            "name": "psqlServerName",
            "value": "[parameters('psqlServerName')]"
          },
          {
            "name": "psqlAdminPassword",
            "secureValue": "[parameters('psqlAdminPassword')]"
          }
        ],
        "retentionInterval": "P1D",
        "scriptContent": "az config set extension.use_dynamic_install=yes_without_prompt\r\naz postgres flexible-server execute --name $env:psqlServerName --admin-user $env:psqlAdminLogin --admin-password $env:psqlAdminPassword --database-name $env:psqlDatabaseName --file-path test.sql --debug"
      }
    }
  ]
}
Azure does not show any errors regarding the syntax and starts the deployment. However, the deploySQL deployment gets stuck and then fails after 1 hour due to the agent execution timeout. The PostgreSQL server itself, the database and a firewall rule (not shown in the code above) are deployed without any issues, but the SQL script is not executed. I've tried adding the --debug option to the Azure CLI commands but got nothing new from the pipeline output. I've also tried executing these commands in an Azure CLI pipeline task and they worked perfectly. What am I missing here?
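For reference, a deployment script with kind AzureCLI runs its scriptContent under bash, so the environmentVariables defined above are exposed as ordinary shell variables ($psqlServerName) rather than PowerShell-style $env:psqlServerName. A sketch of the scriptContent property written with bash-style references, assuming the rest of the resource stays exactly as above (this only illustrates the variable syntax, not necessarily the cause of the timeout):

"scriptContent": "az config set extension.use_dynamic_install=yes_without_prompt\naz postgres flexible-server execute --name $psqlServerName --admin-user $psqlAdminLogin --admin-password $psqlAdminPassword --database-name $psqlDatabaseName --file-path test.sql --debug"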

Use command inside a VSCode configuration

As per the documentation given here, I wish to add a text prompt box when I start my debug configuration. My launch.json file is as follows:
{ "version": "2.0.0",
"configurations": [
{
"name": "Docker Attach my container",
"type": "coreclr",
"request": "attach",
"processId": "${command:pickRemoteProcess}",
"pipeTransport": {
"pipeProgram": "docker",
"pipeArgs": [ "exec", "-i", "${input:containerName}" ],
"debuggerPath": "/vsdbg/vsdbg",
"pipeCwd": "${workspaceRoot}",
"quoteArgs": false
}
}
],
"inputs": [
{
"id": "containerName",
"type": "promptString",
"description": "Please enter container name",
"default": "my-container"
}
]
}
However, with this VS Code does not give me the prompt to enter the container name. Any ideas why this would be the case?
A further question: ideally I'd like to execute a shell script that runs docker ps plus some grep to filter out the correct container name automatically. If that could be done and the result passed to this configuration as an argument, that would be even better.
For the second part you can use the Command Variable extension to use the content of a file as a variable, or via a key-value pair.
Write a shell script that does your docker ps and grep and writes the result to a file, and run it in a preLaunchTask.
Then use the command extension.commandvariable.file.content in an ${input:xxxx} variable so the extension reads the content of that file and passes it to the launch command, as sketched below.
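A rough sketch, assuming the Command Variable extension is installed; the task label, grep pattern and output file name are made up for illustration, and the fileName argument name is my reading of that extension's documented usage. Add "preLaunchTask": "find-container" to the configuration and change the input to:

"inputs": [
  {
    "id": "containerName",
    "type": "command",
    "command": "extension.commandvariable.file.content",
    "args": {
      "fileName": "${workspaceFolder}/.vscode/containername.txt"
    }
  }
]

with a matching task in tasks.json that writes the container name to that file:

{
  "label": "find-container",
  "type": "shell",
  "command": "docker ps --format '{{.Names}}' | grep my-app > ${workspaceFolder}/.vscode/containername.txt"
}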

Packer - pass variables to PowerShell

Currently we are successfully deploying images with Packer (in a build pipeline located in Azure DevOps) within our AWS domain. Now we want to take this a step further and we're trying to configure a couple of users for future Ansible maintenance. We've written a script and also tried it as an inline PowerShell script, but neither option seems to pick up the variable that is set in the variable group in Azure DevOps; all the other variables are used with success. My code is as follows:
{
  "variables": {
    "build_version": "{{isotime \"2006.01.02.150405\"}}",
    "aws_access_key": "$(aws_access_key)",
    "aws_secret_key": "$(aws_secret_key)",
    "region": "$(region)",
    "vpc_id": "$(vpc_id)",
    "subnet_id": "$(subnet_id)",
    "security_group_id": "$(security_group_id)",
    "VagrantUserpassword": "$(VagrantUserPassword)"
  },
  "builders": [
    {
      "type": "amazon-ebs",
      "access_key": "{{user `aws_access_key`}}",
      "secret_key": "{{user `aws_secret_key`}}",
      "region": "{{user `region`}}",
      "vpc_id": "{{user `vpc_id`}}",
      "subnet_id": "{{user `subnet_id`}}",
      "security_group_id": "{{user `security_group_id`}}",
      "source_ami_filter": {
        "filters": {
          "name": "Windows_Server-2016-English-Full-Base-*",
          "root-device-type": "ebs",
          "virtualization-type": "hvm"
        },
        "most_recent": true,
        "owners": [
          "801119661308"
        ]
      },
      "ami_name": "WIN2016-CUSTOM-{{user `build_version`}}",
      "instance_type": "t3.xlarge",
      "user_data_file": "userdata.ps1",
      "associate_public_ip_address": true,
      "communicator": "winrm",
      "winrm_username": "Administrator",
      "winrm_timeout": "15m",
      "winrm_use_ssl": true,
      "winrm_insecure": true,
      "ssh_interface": "private_ip"
    }
  ],
  "provisioners": [
    {
      "type": "powershell",
      "environment_vars": ["VagrantUserPassword={{user `VagrantUserPassword`}}"],
      "inline": [
        "Install-WindowsFeature web-server,web-webserver,web-http-logging,web-stat-compression,web-dyn-compression,web-asp-net,web-mgmt-console,web-asp-net45",
        "New-LocalUser -UserName 'Vagrant' -Description 'User is responsible for Ansible connection.' -Password '$(VagrantUserPassword)'"
      ]
    },
    {
      "type": "powershell",
      "environment_vars": ["VagrantUserPassword={{user `VagrantUserPassword`}}"],
      "scripts": [
        "scripts/DisableUAC.ps1",
        "scripts/iiscompression.ps1",
        "scripts/ChocoPackages.ps1",
        "scripts/PrepareAnsibleUser.ps1"
      ]
    },
    {
      "type": "windows-restart",
      "restart_check_command": "powershell -command \"& {Write-Output 'Machine restarted.'}\""
    },
    {
      "type": "powershell",
      "inline": [
        "C:\\ProgramData\\Amazon\\EC2-Windows\\Launch\\Scripts\\InitializeInstance.ps1 -Schedule",
        "C:\\ProgramData\\Amazon\\EC2-Windows\\Launch\\Scripts\\SysprepInstance.ps1 -NoShutdown"
      ]
    }
  ]
}
The "VagrantUserpassword": "$(VagrantUserPassword)" is what is not working, we've tried multiple options but none of them seem to be working.
Any idea's?
Kind regards,
Rick.
Based on my test, the pipeline variables indeed can't be passed directly to the PowerShell environment variables.
Workaround:
You could try using the Replace Tokens task to write the pipeline value into the JSON file.
Here are the steps:
1. Set the token in the JSON file.
{
  "variables": {
    ....
    "VagrantUserpassword": "#{VagrantUserPassword}#"
  },
2. Run the Replace Tokens task before the script task.
3. Set the value in the pipeline variables.
Then the value is set successfully.
On the other hand, I also found some issues in your sample file.
In "environment_vars": ["VagrantUserPassword={{user `VagrantUserPassword`}}"], the variable name inside the user function needs to be VagrantUserpassword, i.e. ["VagrantUserPassword={{user `VagrantUserpassword`}}"].
Note: this is case sensitive.
In the inline script you need to use $Env:VagrantUserPassword instead of $(VagrantUserPassword).
For example:
"inline": [
"Write-Host \"Automatically generated aws password is: $Env:VagrantUserPassword\"",
"Write-Host \"Automatically generated aws password is: $Env:VAR5\""
]

Rust cargo run task with arguments in VS Code

Is there a way to specify arguments to a Rust cargo command running as a VS Code task?
Or should I be trying this as an npm script? (Of course, this is Rust, so I am using Cargo, and creating a package.json just for npm would be odd.)
The Build task works fine:
"version": "2.0.0",
"tasks": [
{
"type": "cargo",
"subcommand": "build",
"problemMatcher": [
"$rustc"
],
"group": {
"kind": "build",
"isDefault": true
}
},
But I have no idea where to put the arguments, since I want it to be:
$ cargo run [filename]
{
  "type": "cargo",
  "subcommand": "run",
  "problemMatcher": [
    "$rustc"
  ]
}
There absolutely is: the args option allows you to pass additional arguments to your task, and you can use various ${template} variables to pass things like the currently opened file.
It's also worth calling out that the type of the task should probably be shell, with cargo specified as the command itself.
For your use case, you might consider the following (to execute cargo run $currentFile):
{
  "version": "2.0.0",
  "tasks": [
    {
      "label": "run",
      "type": "shell",
      "problemMatcher": [
        "$rustc"
      ],
      "command": "cargo",
      "args": [
        "run",
        "${file}"
      ]
    }
  ]
}

EMR cluster bootstrap + setting environment variables cluster-wide

I am trying to create an EMR cluster (through the command line) and give it some bootstrap actions and a configurations file.
The aim is to set some Spark/YARN variables, plus some other environment variables that should be used cluster-wide (so these env vars should be available on the master AND the slaves).
I am giving it a configurations file that looks like this:
[
  {
    "Classification": "yarn-env",
    "Properties": {},
    "Configurations": [
      {
        "Classification": "export",
        "Properties": {
          "appMasterEnv.SOME_VAR": "123",
          "nodemanager.vmem-check-enabled": "false",
          "executor.memoryOverhead": "5g"
        },
        "Configurations": []
      }
    ]
  },
  {
    "Classification": "spark-env",
    "Properties": {},
    "Configurations": [
      {
        "Classification": "export",
        "Properties": {
          "appMasterEnv.SOME_VAR": "123",
          "PYSPARK_DRIVER_PYTHON": "python36",
          "PYSPARK_PYTHON": "python36",
          "driver.memoryOverhead": "14g",
          "driver.memory": "14g",
          "executor.memory": "14g"
        },
        "Configurations": []
      }
    ]
  }
]
However, when I try to add some steps to the cluster, a step fails, claiming it does not know about the environment variable SOME_VAR.
Traceback (most recent call last):
File "..", line 9, in <module>.
..
raise EnvironmentError
OSError
(The line number is where I am trying to use the environment var SOME_VAR)
Am I doing this the right way, both for SOME_VAR and for the other Spark/YARN variables?
Thank you
Remove the appMasterEnv. prefix from appMasterEnv.SOME_VAR, as user lenin suggested.
Use the yarn-env classification to pass environment variables to the worker nodes.
Use the spark-env classification to pass environment variables to the driver with deploy mode client; when using deploy mode cluster, use yarn-env. A sketch of the corrected configurations file is below.
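A sketch of the configurations file with the prefix removed, keeping only the environment-variable entries from the question (the memory-related settings are Spark/YARN properties rather than environment variables and are left out here):

[
  {
    "Classification": "yarn-env",
    "Properties": {},
    "Configurations": [
      {
        "Classification": "export",
        "Properties": {
          "SOME_VAR": "123"
        },
        "Configurations": []
      }
    ]
  },
  {
    "Classification": "spark-env",
    "Properties": {},
    "Configurations": [
      {
        "Classification": "export",
        "Properties": {
          "SOME_VAR": "123",
          "PYSPARK_DRIVER_PYTHON": "python36",
          "PYSPARK_PYTHON": "python36"
        },
        "Configurations": []
      }
    ]
  }
]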