I have created the below task to find the current tag and pass it on to the next task, which builds the Docker image with the new tag.
- task: Bash@3
  displayName: 'Fetch latest tag from ECR and Create Next Tag.'
  inputs:
    targetType: 'inline'
    script: |
      ecrURI=$(ecrURI)
      repoName="${ecrURI##*/}"
      latestECRTag=$(aws ecr describe-images --output json --repository-name ${repoName} --region $(awsDefaultRegion) --query 'sort_by(imageDetails,& imagePushedAt)[-1].imageTags[0]' | jq . --raw-output)
      if [[ -z ${latestECRTag} ]];
      then
          latestECRTag='0.0.0'
      fi
      major=$(echo ${latestECRTag} | cut -d'.' -f1)
      minor=$(echo ${latestECRTag} | cut -d'.' -f2)
      patch=$(echo ${latestECRTag} | cut -d'.' -f3)
      latestECRTag="$(expr $major + 1).${minor}.${patch}"
      echo $latestECRTag
      echo "##vso[task.setvariable variable=NEXT_ECR_TAG;isOutput=true]$latestECRtag"
- bash: |
    echo "Started Building Docker Image with tag $latestECRTag"
    docker build -t test:latest -f Dockerfile .
    docker tag test:latest $(ecrURI):$(NEXT_ECR_TAG)
  displayName: 'Build Docker Image with Tag.'
  workingDirectory: $(System.DefaultWorkingDirectory)/configCreate/
The step/task to fetch and create the new tag works fine, but as soon as I move to the next task to build the Docker tag based on NEXT_ECR_TAG, it shows me an empty value. Everything else is properly populated.
Can anyone help me find out why I'm not able to fetch the NEXT_ECR_TAG value in the next task? It could be a silly thing, but I don't know what.
Duh!! moment for me. There is a typo in my variable; fixing the typo fixed everything.
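For reference, a minimal sketch of the corrected logging command (the original wrote $latestECRtag with a lowercase t):

echo "##vso[task.setvariable variable=NEXT_ECR_TAG;isOutput=true]$latestECRTag"

If $(NEXT_ECR_TAG) still did not resolve in the following task, the usual variations are dropping isOutput=true, or giving the step a name (for example name: tagStep, a placeholder here) and referencing it as $(tagStep.NEXT_ECR_TAG).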
I have this GitHub Actions workflow for my build
...
- name: Building S3 Instance
  uses: charlie87041/s3-actions@main
  id: s3
  env:
    AWS_S3_BUCKET: 'xxx'
    AWS_ACCESS_KEY_ID: 'xxx'
    AWS_SECRET_ACCESS_KEY: 'xxxxx'
    AWS_REGION: 'xxx'
- name: Updating EC2 [Develop] instance
  uses: appleboy/ssh-action@master
  with:
    host: ${{secrets.EC2HOST}}
    key: ${{secrets.EC2KEY}}
    username: xxx
    envs: TESTING
    script: |
      cd ~/devdir
      export BUCKET_USER=${{steps.s3.outputs.user_id}}
      export BUCKET_USER_KEY=${{steps.s3.outputs.user_key}}
      docker login
      docker-compose down --remove-orphans
      docker system prune -a -f
      docker pull yyyy
      docker-compose up -d
And this is the important function in charlie87041/s3-actions@main
generate_keys () {
    RSP=$(aws iam create-access-key --user-name $USER);
    BUCKET_ACCESS_ID=$(echo $RSP | jq -r '.AccessKey.AccessKeyId');
    BUCKET_ACCESS_KEY=$(echo $RSP | jq -r '.AccessKey.SecretAccessKey');
    echo "user_id=$BUCKET_ACCESS_ID" >> $GITHUB_OUTPUT
    echo "user_key=$BUCKET_ACCESS_KEY" >> $GITHUB_OUTPUT
    echo "::set-output name=BUCKET_ACCESS_KEY::$BUCKET_ACCESS_KEY"
    echo "::set-output name=BUCKET_ACCESS_ID::$BUCKET_ACCESS_ID"
}
I need to set the BUCKET_USER and BUCKET_USER_KEY environment variables in the container, but these always return null when I echo them in the container. How do I do this?
Note that set-output was deprecated recently (Oct. 2022).
If you are using self-hosted runners, make sure they are updated to version 2.297.0 or greater.
If you are using a runner on github.com directly, you would need to change
echo "::set-output name=BUCKET_ACCESS_KEY::$BUCKET_ACCESS_KEY"
with
echo "BUCKET_ACCESS_KEY=$BUCKET_ACCESS_KEY" >> $GITHUB_OUTPUT
I am not sure an export within the script would work.
Using with directives, as in issue 154, might be more effective:
with:
  BUCKET_USER: ${{steps.s3.outputs.user_id}}
  ...
  script: |
    ...
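Alternatively, here is a hedged sketch of passing the outputs through the step-level env block together with the action's envs input (assuming the s3 step exposes user_id and user_key as outputs, as above):

- name: Updating EC2 [Develop] instance
  uses: appleboy/ssh-action@master
  env:
    BUCKET_USER: ${{ steps.s3.outputs.user_id }}
    BUCKET_USER_KEY: ${{ steps.s3.outputs.user_key }}
  with:
    host: ${{ secrets.EC2HOST }}
    key: ${{ secrets.EC2KEY }}
    username: xxx
    envs: BUCKET_USER,BUCKET_USER_KEY
    script: |
      cd ~/devdir
      echo "$BUCKET_USER"   # variables listed in envs are forwarded to the remote script
      docker-compose up -d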
Is there a way to use the GitHub CLI or API to view the inputs of an action while it is running?
I want to allow GitHub Actions workflows to run concurrently. The resources they will manage are determined by the input stack_name. I want to make sure two pipelines cannot run at the same time with the same stack_name input. If this happens, I want one of the pipeline runs to fail and stop immediately.
I am also taking the input and turning it into an environment variable for one of my jobs. After the job finishes, the values are available in the logs and I can grep through the following output to get a pipeline's stack_name:
$ gh run view $running_pipeline_id --repo=$GITHUB_SERVER_URL/$GITHUB_REPOSITORY --log
....
env-check env-check 2022-03-22T17:06:30.2615395Z STACK_NAME: foo
However, this is not available while a job is running and I instead get this error:
run 1234567890 is still in progress; logs will be available when it is complete
Here is my current attempt at a code block that can achieve this. I could also use suggestions on how to make better gh run list and/or gh run view calls that avoid grep and awk; clean JSON output I can parse with jq is preferable (a possible jq-based variant is sketched after the script below).
set +e
running_pipeline_ids=$(gh run list --workflow=$SLEEVE --repo=$GITHUB_SERVER_URL/$GITHUB_REPOSITORY \
  | grep 'in_progress' \
  | awk '{print $((NF-2))}' \
  | grep -v $GITHUB_RUN_ID)
set -e

for running_pipeline_id in $running_pipeline_ids; do
  # get the stack name for all other running pipelines
  running_pipeline_stack_name=$(gh run view $running_pipeline_id --repo=$GITHUB_SERVER_URL/$GITHUB_REPOSITORY --log \
    | grep 'STACK_NAME:' | head -n 1 \
    | awk -F "STACK_NAME:" '{print $2}' | awk '{print $1}')

  # fail if we detect another pipeline running against the same stack
  if [ "$running_pipeline_stack_name" == "$STACK_NAME" ]; then
    echo "ERROR: concurrent pipeline detected. $GITHUB_SERVER_URL/$GITHUB_REPOSITORY/actions/runs/$running_pipeline_id"
    echo "Please try again after the running pipeline has completed."
    exit 1
  fi
done
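For the run-listing part, a hedged sketch of a jq-friendly variant, assuming a gh version whose run list command supports the --json and --jq flags:

# list the IDs of other in-progress runs of the same workflow as clean JSON
running_pipeline_ids=$(gh run list --workflow="$SLEEVE" \
  --repo="$GITHUB_SERVER_URL/$GITHUB_REPOSITORY" \
  --json databaseId,status \
  --jq ".[] | select(.status == \"in_progress\") | select(.databaseId != $GITHUB_RUN_ID) | .databaseId")

The gh run view --log call still has to be text-parsed, since logs are not exposed as JSON.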
Perhaps you could use the concurrency feature of GitHub Actions?
Now you cannot directly bake this into an action, but if it's possible for you to extract your action into a reusable workflow then you could make use of the concurrency feature.
It would look something like this:
# .github/workflows/partial.yaml
on:
  workflow_call:
    inputs:
      stack-name:
        description: "name of the stack"
        required: true
        type: string

jobs:
  greet:
    runs-on: ubuntu-latest
    concurrency:
      group: ${{ inputs.stack-name }}
      cancel-in-progress: true
    steps:
      - uses: my/other-action
        with:
          stack_name: ${{ inputs.stack-name }}
And then where you're using it:
jobs:
  test:
    uses: my/app-repo/.github/workflows/partial.yaml@main
    with:
      stack-name: 'my-stack'
Our build agent is running Podman 3.4.2 and there is a global alias in place for each terminal session that simply replaces docker with podman, so the command docker --version yields podman version 3.4.2 as a result.
The goal is to use Podman for the Docker@2 task in an Azure DevOps pipeline:
steps:
- task: Docker@2
  displayName: Build and push an image to container registry
  inputs:
    command: buildAndPush
    repository: aspnet-web-mhi
    dockerfile: $(dockerfilePath)
    containerRegistry: $(dockerRegistryServiceConnection)
    tags: |
      $(tag)
Turns out I was a bit naive in my assumption that this would work, as the ADO agent is having none of it:
##[error]Unhandled: Unable to locate executable file: 'docker'. Please verify either the file path exists or the file can be found within a directory specified by the PATH environment variable. Also check the file mode to verify the file is executable.
Is there a way to make that replacement work without too much fuss? I'd rather not script everything myself to use Podman instead of Docker and push to a registry, if I can avoid it.
Since I needed to make progress on this, I decided to go down the Bash route and built, pushed, pulled and ran the images manually. This is the gist of it:
steps:
- task: Bash@3
  displayName: Build Docker Image for DemoWeb
  inputs:
    targetType: inline
    script: |
      podman build -f $(dockerfilePath) -t demoweb:$(tag) .
- task: Bash@3
  displayName: Login and Push to ACR
  inputs:
    targetType: inline
    script: |
      podman login -u $(acrServicePrincipal) -p $(acrPassword) $(acrName)
      podman push demoweb:$(tag) $(acrName)/demoweb:$(tag)
- task: Bash@3
  displayName: Pull image from ACR
  inputs:
    targetType: inline
    script: |
      podman pull $(acrName)/demoweb:$(tag) --creds=$(acrServicePrincipal):$(acrPassword)
- task: Bash@3
  displayName: Run container
  inputs:
    targetType: inline
    script: |
      podman run -p 8080:80 --restart unless-stopped $(acrName)/demoweb:$(tag)
If you decide to go down that route, please make sure not to expose your service principal and password as plain variables in your YAML file; create them as secret variables instead.
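As a hedged sketch of what that could look like: secret pipeline variables are not exposed to scripts automatically, so they would be mapped in through an env block ($(acrServicePrincipal) and $(acrPassword) here stand for whatever the secret variables are actually named):

- task: Bash@3
  displayName: Login and Push to ACR
  inputs:
    targetType: inline
    script: |
      podman login -u "$ACR_SP" -p "$ACR_PASSWORD" $(acrName)
      podman push demoweb:$(tag) $(acrName)/demoweb:$(tag)
  env:
    ACR_SP: $(acrServicePrincipal)    # secret variable mapped explicitly
    ACR_PASSWORD: $(acrPassword)      # secret variable mapped explicitly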
I'll keep this question open - maybe someone with more expertise in handling GNU/Linux finds a more elegant way.
You could install the podman-docker package as well. It installs a wrapper at /usr/bin/docker that points to /usr/bin/podman, so tasks that originally use the docker binary (or even the Docker socket) can run transparently with Podman, like the Docker@2 build and push.
cat /usr/bin/docker
#!/bin/sh
[ -e /etc/containers/nodocker ] || \
  echo "Emulate Docker CLI using podman. Create /etc/containers/nodocker to quiet msg." >&2
exec /usr/bin/podman "$@"
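If the agent allows package installation, a hedged sketch of installing the wrapper from a pipeline step (the package is named podman-docker on Debian/Ubuntu and Fedora/RHEL based distributions, but availability depends on the distribution and the repositories in use):

- task: Bash@3
  displayName: Install podman-docker wrapper
  inputs:
    targetType: inline
    script: |
      # Debian/Ubuntu based agents; use dnf/yum on Fedora/RHEL based agents
      sudo apt-get update
      sudo apt-get install -y podman-docker
      docker --version   # should now report the Podman version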
I am trying to test an app service using JMeter in Azure Pipelines. I run this code from my YAML file:
- job: JMeter
  pool:
    vmImage: 'Ubuntu-16.04'
  steps:
  - task: JMeterInstaller@0
    displayName: 'Install JMeter 5.3'
    inputs:
      jmeterVersion: '5.3'
  - task: Bash@3
    displayName: 'Run JMeter test'
    inputs:
      rootFolderOrFile: '$(System.DefaultWorkingDirectory)/JMeter'
      script:
        jmeter -n -t test.jmx -l myTtest.csv -e -o Result
But I am getting an error message: Unexpected property rootFolderOrFile. I put this property in to point at the folder which contains the test plan (test.jmx). I tried with targetType: inline and the pipeline shows the stage succeeded, but I cannot find myTest.csv or the results folder Result, so I thought changing the targetType to filePath could fix my code.
Apparently not!
Can anyone help me or guide me to find out why I am getting the error?
Or is this the wrong way?
Any help is very much appreciated.
The task version you used is version 3; it does not have the field rootFolderOrFile. You could try the script below.
steps:
- task: Bash@3
  displayName: 'Bash Script'
  inputs:
    targetType: filePath
    filePath: './$(System.DefaultWorkingDirectory)/JMeter'
    arguments: 'jmeter -n -t test.jmx -l myTtest.csv -e -o Result'
In addition, I found a blog post; you could also check it.
Update 1
If the JMeter folder is in the root, the field filePath should be $(System.DefaultWorkingDirectory)/JMeter.
YAML definition:
- task: Bash@3
  displayName: 'Bash Script'
  inputs:
    targetType: filePath
    filePath: '$(System.DefaultWorkingDirectory)/JMeter'
    arguments: 'jmeter -n -t test.jmx -l myTtest.csv -e -o Result'
By the way, we could add a Bash task and run the script ls '$(System.DefaultWorkingDirectory)' to check the files in the working directory.
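For example, such a debug step could look like this:

- task: Bash@3
  displayName: 'List working directory contents'
  inputs:
    targetType: inline
    script: ls '$(System.DefaultWorkingDirectory)'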
I cannot find myTest.csv or the folder Result
To solve this issue, you need to specify the output path for the myTest.csv file.
If you run the command: jmeter -n -t JMeter/test.jmx -l JMeter/myTest.csv -e -o Result
The myTest.csv file will be created in the JMeter folder instead of the Result folder.
So the correct command is:
jmeter -n -t JMeter/test.jmx -l JMeter/Result/myTest.csv -e -o Result
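Putting the pieces together, a hedged sketch of the corrected task, assuming the JMeter folder sits directly under $(System.DefaultWorkingDirectory):

- task: Bash@3
  displayName: 'Run JMeter test'
  inputs:
    targetType: inline
    workingDirectory: '$(System.DefaultWorkingDirectory)'
    script: |
      mkdir -p JMeter/Result   # create the results folder in case JMeter does not create it
      jmeter -n -t JMeter/test.jmx -l JMeter/Result/myTest.csv -e -o Result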
I have created a pipeline to import an existing Azure resource into Terraform. Terraform import requires provider details or environment variables, which have to be extracted from the service connection.
steps:
- task: AzureCLI@2
  displayName: Terraform Init
  inputs:
    azureSubscription: ${{ parameters.service_connection }}
    addSpnToEnvironment: true
    scriptType: bash
    scriptLocation: inlineScript
    inlineScript: |
      export ARM_CLIENT_ID=$servicePrincipalId
      export ARM_CLIENT_SECRET=$servicePrincipalKey
      export ARM_SUBSCRIPTION_ID=$(az account show --query id | xargs)
      export ARM_TENANT_ID=$(az account show --query tenantId | xargs)
      ls
      terraform init -upgrade -input=false \
        -backend-config="subscription_id=${{ parameters.tf_state_subscription_id }}" \
        -backend-config="tenant_id=$tenantId" \
        -backend-config="client_id=$servicePrincipalId" \
        -backend-config="client_secret=$servicePrincipalKey" \
        -backend-config="resource_group_name=${{ parameters.resource_group_name }}" \
        -backend-config="storage_account_name=${{ parameters.storage_account_name }}" \
        -backend-config="container_name=${{ parameters.tf_state_key }}" \
        -backend-config="key=${{ parameters.tf_state_key }}.tfstate"
      if [ $(az resource list --name pytestkeyvault --query '[].id' -o tsv) != null ]
      then
        echo "using Keyvault $(az resource list --name pytestkeyvault --query '[].id' -o tsv)"
        terraform import azurerm_key_vault.this $(az resource list --name pytestkeyvault --query '[].id' -o tsv)
      else
        echo "Keyvault does not exist"
      fi
      echo $ARM_CLIENT_ID
The exported environment variable ARM_CLIENT_ID is empty. The variables below are not being exported as environment variables:
echo $ARM_CLIENT_ID
echo $ARM_CLIENT_SECRET
echo $ARM_SUBSCRIPTION_ID
echo $ARM_TENANT_ID
For my setup, I could not access the service principal from Azure PowerShell, but I could from the Azure CLI.
This post pointed me in the right direction; check it out:
https://www.integration-playbook.io/docs/combining-az-cli-and-azure-powershell-az-modules-in-a-pipeline
In my experience of trying every possible variation of setting environment variables, it seems ADO build agents don't allow persisting ARM_CLIENT_SECRET as an environment variable.
So the workaround I used was to set the environment variables at the task level (instead of at the shell/machine level):
- script: |
    terraform init # ...rest of your CLI arguments/backend-config flags
  env:
    ARM_CLIENT_SECRET: $(client_secret)
  displayName: Terraform Init
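Extending that idea, a hedged sketch that maps all four ARM_* variables at the task level ($(client_id), $(client_secret), $(subscription_id) and $(tenant_id) are placeholder pipeline variables for however those values are stored in your pipeline):

- script: |
    terraform init # ...rest of your CLI arguments/backend-config flags
  displayName: Terraform Init
  env:
    ARM_CLIENT_ID: $(client_id)
    ARM_CLIENT_SECRET: $(client_secret)
    ARM_SUBSCRIPTION_ID: $(subscription_id)
    ARM_TENANT_ID: $(tenant_id)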
Edit:
IMO, just running terraform init yourself via the CLI is better than using the AzureCLI@2 task, which is a confusing black box that honestly makes it harder/more verbose to do the same thing than the plain CLI command.
Try using the system variables $env:servicePrincipalId, $env:servicePrincipalKey and $env:tenantId to get the SPN details.