How to pass a pipeline secret to DockerCompose@0 - powershell

I'm writing a testing pipeline that dynamically creates a series of containers to set up the pieces.
I need to pass a secret variable from the pipeline into the Docker Compose task so that the container can connect to the database server.
I have a number of non-secret variables in the pipeline and they are all being passed through successfully.
I have mapped $env:addressdatabase_password in a PowerShell step to verify that the variable is available:
# To verify if mapped secret variables are coming through (reverse the string)
- powershell: |
    $a = $env:addressdatabase_password
    Write-Host "$a"
    $b = $a.ToCharArray()
    [Array]::Reverse($b)
    $c = -join($b)
    Write-Host "$c"
  env:
    addressdatabase_password: $(database-address-password) # pipeline secret
The task in my azure-pipelines.yml looks like this (not all arguments are shown)
- task: DockerCompose@0
  displayName: 'Container Start'
  inputs:
    containerregistrytype: 'Azure Container Registry'
    azureSubscription: '$(containerSubscription)'
    azureContainerRegistry: '{"loginServer":"$(containerLoginServer)", "id" : "$(containerRegistryId)"}'
    dockerComposeFile: '**/docker-compose.yml'
    action: 'Run a Docker Compose command'
    dockerComposeCommand: 'up -d'
    arguments: mycontainer
    containerName: 'cf_$(CreateDb.buildidentifier)'
    detached: true
    dockerComposeFileArgs: |
      addressdatabase_name=$(database-address-name)
      addressdatabase_user=$(database-address-user)
      addressdatabase_pass=$(addressdatabase_password)
  env:
    addressdatabase_password: $(database-address-password) # pipeline secret
The relevant parts of the docker-compose.yml file
mycontainer:
  image: mycontainer-runtime:latest
  ports:
    - "80:80"
  volumes:
    - ${mount_1}:C:/mount1
    - ${mount_2}:C:/mount2
  environment:
    ADDRESS_DATABASE_NAME: ${addressdatabase_name}
    ADDRESS_DATABASE_USERNAME: ${addressdatabase_user}
    ADDRESS_DATABASE_PASSWORD: ${addressdatabase_pass} # pipeline secret
The container starts up successfully, but when I examine the environment variables inside the container I see:
ADDRESS_DATABASE_NAME=pr_address
ADDRESS_DATABASE_USER=test-addressuser
ADDRESS_DATABASE_PASSWORD=$(addressdatabase_password)
I'm looking for a way to get this value securely to my container without exposing it in the Pipeline.
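One approach that may work (a sketch, not verified against this exact setup): the agent expands macro syntax in task inputs, including for secret variables, so the secret can be referenced directly in dockerComposeFileArgs instead of going through the env: mapping, which only creates an environment variable that the $( ) macro above cannot see. The value stays masked in the pipeline logs because it is a secret variable.

- task: DockerCompose@0
  displayName: 'Container Start'
  inputs:
    # ... same registry and compose inputs as above ...
    dockerComposeFileArgs: |
      addressdatabase_name=$(database-address-name)
      addressdatabase_user=$(database-address-user)
      addressdatabase_pass=$(database-address-password)  # secret expanded as a task input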

Related

Using --env-file in DockerCompose@0 task generates error "variable is not set. Defaulting to a blank string" despite success

I'm using a docker-compose task in a pipeline that looks like this (the environmentfile is a secure file that is retrieved successfully in a previous task):
- task: DockerCompose@0
  inputs:
    azureSubscription: $(azureSubscription)
    azureContainerRegistry: $(azureContainerRegistry)
    additionalDockerComposeFiles: docker-compose.$(environment).yml
    dockerComposeFile: "$(artifactPath)/compose/docker-compose.yml"
    action: "Run a Docker Compose command"
    dockerComposeCommand: "--env-file $(environmentfile.secureFilePath) up"
    projectName: $(Build.Repository.Name)-$(environment)
    arguments: "-d"
A docker-compose file that looks like this (shortened for brevity):
services:
  api:
    image: myapi
    environment:
      - MYVAR=${ENVVAR1}
And an env-file that looks like this (this is the environmentfile secure file):
ENVVAR1=MyVariable
The task and pipeline run successfully (and use the correct env vars), but spit out this error:
##[error]The ENVVAR1 variable is not set. Defaulting to a blank string.
C:\ProgramData\chocolatey\bin\docker-compose.exe -f D:\deployagent\A1\_work\1\drop\docker-compose.test.yml -f D:\deployagent\A1\.docker-compose.1643613696562.yml -p "API-test" --env-file D:\deployagent\A1\_work\_temp\api.env.test up -d
Can anyone help me get rid of this error without using the dockerComposeFileArgs parameter in the task, as that defeats the purpose of using the --env-file flag with the secure file?
dockerComposeFileArgs: |
  ENVVAR1=mysecretvar
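One possible workaround, sketched here under the assumption that the warning comes from the intermediate compose file the task generates before --env-file is applied: give the variable an empty default in the compose file using standard Compose substitution syntax, so docker-compose no longer reports it as unset. The value from the env-file still wins whenever ENVVAR1 is actually set.

services:
  api:
    image: myapi
    environment:
      # ${VAR:-default} falls back to the default only when VAR is unset;
      # an empty default keeps the behaviour but silences the warning
      - MYVAR=${ENVVAR1:-}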

DevOps Pipeline AzureCLI@2 with dynamic azureSubscription

I have a DevOps pipeline that gives me this error:
There was a resource authorization issue: "The pipeline is not valid. Job ExecutionTerraform: Step AzureCLI input connectedServiceNameARM references service connection Azure: $(subscriptionName) which could not be found. The service connection does not exist or has not been authorized for use. For authorization details, refer to https://aka.ms/yamlauthz."
The configuration I am using is looking up the Subscription name dynamically.
The step I use for that is:
- bash: |
    # pull the subscription data
    # ... read data into local variables
    # set the shared variables
    echo "##vso[task.setvariable variable=subscriptionId]${SUBSCRIPTION_ID}"
    echo "##vso[task.setvariable variable=subscriptionName]${SUBSCRIPTION_NAME}"
From there I attempt to call the Azure CLI via a template:
- template: execution-cli.yml
  parameters:
    azureSubscriptionId: $(subscriptionId)
    azureSubscriptionName: $(subscriptionName)
Inside the template my CLI step uses:
steps:
  - task: AzureCLI@2
    displayName: Test CLI
    inputs:
      azureSubscription: "ARMTest ${{ parameters.azureSubscriptionName }}"
      scriptType: bash
      scriptLocation: inlineScript
      inlineScript: |
        az --version
      addSpnToEnvironment: true
      useGlobalConfig: true
It looks like Pipelines is trying to preemptively check authorization without noticing that there's a variable in there. What am I doing wrong here that is causing Azure to attempt to resolve that at the wrong time?
I do this in other pipelines without issues and I am not sure what is different in this particular instance.
Update 1: Working Template I have Elsewhere
Full template:
parameters:
  - name: environment
    type: string

jobs:
  - job: AKSCredentials
    displayName: "AKS Credentials Pull"
    steps:
      - task: AzureCLI@2
        displayName: AKS Credentials
        inputs:
          azureSubscription: "Azure: testbed-${{ parameters.environment }}"
          scriptType: bash
          scriptLocation: inlineScript
          inlineScript: az aks get-credentials -g testbed-${{ parameters.environment }} -n testbed-${{ parameters.environment }}-aks
          addSpnToEnvironment: true
          useGlobalConfig: true
This is not possible, because the Azure subscription (the service connection) needs to be known at compile time, while you set your variable at runtime.
Here is an issue describing a similar case, where it is explained:
run time variables aren't supported for service connection OR azure subscription. In your code sample, you are referring to AzureSubscription variable which will get initialized at the run time (but not at save time). Your syntax is correct but you need to set AzureSubscription variable as part of variables.
If you define your variables like:
variables:
  subscriptionId: someValue
  subscriptionName: someValue
and then use them like this:
- template: execution-cli.yml
  parameters:
    azureSubscriptionId: $(subscriptionId)
    azureSubscriptionName: $(subscriptionName)
it should work. But since you set your variables at runtime, this causes your issue.
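If the value cannot be hard-coded in the variables block, a common workaround (sketched here; the parameter name is illustrative) is to make the subscription name a pipeline parameter, because parameters are resolved at compile time and can therefore feed the service connection:

parameters:
  - name: subscriptionName  # illustrative; supplied when the run is queued instead of computed at runtime
    type: string

steps:
  - template: execution-cli.yml
    parameters:
      azureSubscriptionId: $(subscriptionId)  # runtime is fine here if the template only uses it inside scripts
      azureSubscriptionName: ${{ parameters.subscriptionName }}  # compile-time template expression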

Use Azure pipeline secret variable to set environment variables on build agent

We have certain functional tests that rely on some secrets. Those secrets are obtained from an Azure Key Vault (AKV), and to connect from the build agent I am using environment variables and AzureIdentity. I set those env variables on the build agent machine using PowerShell. When I use non-secret pipeline variables everything works, but when I switch to a secret pipeline variable for AZURE_CLIENT_SECRET, the authentication starts to fail. I tried the approach of using a script to set the environment variable from the secret pipeline variable, but it does not work. I also tried the approach mentioned here, but that does not work either. Any suggestions on how to set an environment variable using secret pipeline variables?
If you set a secret variable in the pipeline, you can map it into the task's environment (or reference it from the variables block) to pass the secret to your script, as below. See: Set secret variables for details.
- task: PowerShell@2
  inputs:
    targetType: 'inline'
    script: |
      Write-Host "Using the mapped env var for this task works and is recommended: $env:MY_MAPPED_ENV_VAR"
  env:
    MY_MAPPED_ENV_VAR: $(PAT) # the recommended way to map to an env variable
If you use an Azure Key Vault variable, create the secret (PAT) in the key vault, link the key vault secrets into a variable group, and then reference that variable group in the pipeline, as in the script below. See: Reference secret variables in variable groups for details.
variables:
  - group: 'AKVgroup' # variable group

pool:
  vmImage: 'ubuntu-latest'

steps:
  - task: PowerShell@2
    inputs:
      targetType: 'inline'
      script: |
        Write-Host "Using the mapped env var for this task works and is recommended: $env:MY_MAPPED_ENV_VAR"
    env:
      MY_MAPPED_ENV_VAR: $(PAT) # the recommended way to map to an env variable
The other way is to use the Azure Key Vault task, as in the script below. See: Use secrets from Azure Key Vault in Azure Pipelines for details.
- task: AzureKeyVault@1
  inputs:
    azureSubscription: 'ARM'
    KeyVaultName: 'edwardkey'
    SecretsFilter: '*'
    RunAsPreJob: true

- task: PowerShell@2
  inputs:
    targetType: 'inline'
    script: |
      Write-Host "Using the mapped env var for this task works and is recommended: $env:MY_MAPPED_ENV_VAR"
  env:
    MY_MAPPED_ENV_VAR: $(PAT) # the recommended way to map to an env variable
If you explicitly pass the secret to the script as a parameter, then the script will have access to it.
If you then want to use it to set an environment variable for later scripts, you can publish it under a different environment variable name so it is available in subsequent steps. That somewhat defeats the purpose of it being secret, but it works if that's what you want.
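A sketch of that idea (the variable names are illustrative): map the secret into one step explicitly, then republish it as a secret variable with a logging command so later steps can map it themselves.

- powershell: |
    # $env:CLIENT_SECRET only has a value because it is mapped below;
    # issecret=true keeps the republished value masked in the logs
    Write-Host "##vso[task.setvariable variable=AZURE_CLIENT_SECRET;issecret=true]$env:CLIENT_SECRET"
  env:
    CLIENT_SECRET: $(mySecretPipelineVariable)  # illustrative secret pipeline variable
- powershell: |
    Write-Host "Later steps still have to map it explicitly: $env:AZURE_CLIENT_SECRET"
  env:
    AZURE_CLIENT_SECRET: $(AZURE_CLIENT_SECRET)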

Use a variable name that is stored in another variable in Azure Pipelines

I'm using the AzureKeyVault task to retrieve a secret from the Key Vault. The name of the secret is StorageAccountKey. This name is stored in the variable KeyName. I do it like this:
- task: AzureKeyVault@1
  displayName: 'Get key'
  name: GetKey
  inputs:
    azureSubscription: '${{ parameters.azureSubscription }}'
    KeyVaultName: '$(KeyVaultName)'
    SecretsFilter: '$(KeyName)'
Now, in a subsequent task, I would like to access the secret. How would I do that, given that the name of the secret is itself stored in a variable? The following seems not to work
- task: Bash@3
  displayName: Create container
  inputs:
    targetType: 'inline'
    script: |
      az storage container create \
        --name raw \
        --account-name storageaccountname \
        --account-key $($(dataLakeAccountKeyKeyName))
    failOnStderr: true
I'm getting the error
/mnt/azp/azp-linux1_5/_temp/6719378a-b3ee-45d8-aad8-4f6a5e8b581e.sh: line 1: StorageAccountKey: command not found
ERROR: az storage container create: error: argument --account-key: expected one argument
So, it does seem to resolve the inner variable but still fails.
I also struggled to get this done and this has worked for me:
steps:
  - task: AzureKeyVault@1
    inputs:
      azureSubscription: ${{ parameters.azureSubscription }}
      KeyVaultName: ${{ parameters.azureKeyVaultName }}
      SecretsFilter: '*'
      RunAsPreJob: true

  - bash: |
      # I can now use ${GCP_CREDS}
    displayName: GCP auth
    env:
      GCP_CREDS: $(${{ parameters.azureKeyVaultCredentailsKey }})
Try to use --account-key $(StorageAccountKey)
From "Azure Key Vault task" documentation.
"Values are retrieved as strings. For example, if there is a secret named connectionString, a task variable connectionString is created with the latest value of the respective secret fetched from Azure key vault. This variable is then available in subsequent tasks."
So if you fetch a secret named "StorageAccountKey" from Azure Key Vault, Azure DevOps creates a task variable called "StorageAccountKey" at that point.
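A sketch of that suggestion applied to the failing task above (assuming SecretsFilter actually fetched StorageAccountKey): the macro is expanded in the task input before bash runs, so there is no nested expansion left for bash to misread as a command substitution.

- task: Bash@3
  displayName: Create container
  inputs:
    targetType: 'inline'
    script: |
      az storage container create \
        --name raw \
        --account-name storageaccountname \
        --account-key "$(StorageAccountKey)"
    failOnStderr: true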
I have never used Azure Key Vault but hope it will help you : )

How to build Docker image then use built image to run tests

I want to use Azure Pipelines to build a Docker image, then run tests inside the built image with a container job.
The image should use the build id as the tag (or a combination of the build id, commit hash and branch name). If I used a static tag value (e.g. build), having two pipelines run in parallel could result in an unwanted race condition.
The build step is easy enough - login, build and push to Docker Hub.
However, when specifying the test job to use a container, I'm unable to use a variable.
Here is an example that works, but it doesn't use a private registry.
variables:
  - group: dockerCredentials
    # contains: dockerUsername, dockerPassword
  - name: imageName
    value: azure-pipelines-test
  - name: dockerRegistry
    value: krsb
  - name: fullImageName
    value:

jobs:
  - job: build
    pool:
      vmImage: 'Ubuntu-16.04'
    steps:
      - script: |
          docker login -u $(dockerUsername) -p $(dockerPassword)
          docker build -t '$(dockerRegistry)/$(imageName):$(build.buildId)' .
          docker push '$(dockerRegistry)/$(imageName):$(build.buildId)'
        displayName: 'docker build'

  - job: test
    dependsOn:
      - build
    pool:
      vmImage: ubuntu-16.04
    container:
      image: $[ format('{0}/{1}:{2}', variables['dockerRegistry'], variables['imageName'], variables['build.buildId']) ]
      endpoint: docker-hub-registry
    steps:
      - script: printenv
If I want to use a private registry, I don't specify container as a string, but use this syntax instead (the Docker Hub credentials are specified in the endpoint):
# ...
container:
  image: image-name:tag
  endpoint: docker-hub-registry
When I use this syntax, I cannot use the $[ variables['name'] ] syntax; it is not expanded, and when the pipeline runs it outputs an error:
##[command]/usr/bin/docker pull $[ format('{0}:{1}', variables['imageName'], variables['build.buildId']) ]
"docker pull" requires exactly 1 argument.
Same goes if I use $(imageName):$(build.buildId).
Is it possible to use a variable in the image name?
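Not with a value that is only produced while the run is executing: as the output above shows, neither the $[ ] runtime expression nor the $( ) macro is expanded in the container image field in this setup, because the image is resolved before the job's steps run. Compile-time template expressions are plain text substitution, so a sketch like the following (the parameter name is illustrative) should work for a tag that is already known when the run is queued. The trade-off is that the tag has to be decided up front rather than taken from $(build.buildId).

parameters:
  - name: imageTag  # illustrative; supplied at queue time instead of $(build.buildId)
    type: string
    default: latest

jobs:
  - job: test
    pool:
      vmImage: ubuntu-16.04
    container:
      image: krsb/azure-pipelines-test:${{ parameters.imageTag }}  # expanded at compile time
      endpoint: docker-hub-registry
    steps:
      - script: printenv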