Why am I getting "Unexpected property rootFolderOrFile" when trying to run JMeter? - azure-devops

I am trying to test an app service using JMeter in Azure Pipelines. I run this code from my YAML file:
- job: JMeter
  pool:
    vmImage: 'Ubuntu-16.04'
  steps:
  - task: JMeterInstaller@0
    displayName: 'Install JMeter 5.3'
    inputs:
      jmeterVersion: '5.3'
  - task: Bash@3
    displayName: 'Run JMeter test'
    inputs:
      rootFolderOrFile: '$(System.DefaultWorkingDirectory)/JMeter'
      script: |
        jmeter -n -t test.jmx -l myTtest.csv -e -o Result
But I am getting an error message: Unexpected property rootFolderOrFile. I added this property to point at the folder which contains the test plan (test.jmx). I tried targetType: inline and the pipeline showed the stage as succeeded, but I could not find myTest.csv or the results folder Result, so I thought changing targetType to filePath might fix my code.
Apparently not!
Can anyone help me or guide me to find out why I am getting the error? Or is this the wrong way altogether?
Any help is much appreciated.

The task version you are using is version 3 (Bash@3), and it does not have a rootFolderOrFile input. You could try the script below.
steps:
- task: Bash@3
  displayName: 'Bash Script'
  inputs:
    targetType: filePath
    filePath: './$(System.DefaultWorkingDirectory)/JMeter'
    arguments: 'jmeter -n -t test.jmx -l myTtest.csv -e -o Result'
In addition, I found a blog post that you could also check.
Update1
If the JMeter folder is in the root, the filePath field should be $(System.DefaultWorkingDirectory)/JMeter
Yaml definition:
- task: Bash@3
  displayName: 'Bash Script'
  inputs:
    targetType: filePath
    filePath: '$(System.DefaultWorkingDirectory)/JMeter'
    arguments: 'jmeter -n -t test.jmx -l myTtest.csv -e -o Result'
By the way, you could add a Bash task and run the script ls '$(System.DefaultWorkingDirectory)' to check the files in the working directory.

I cannot find myTest.csv or the folder Result

To solve this issue, you need to specify the output path for the myTest.csv file.
If you run the command jmeter -n -t JMeter/test.jmx -l JMeter/myTest.csv -e -o Result, the myTest.csv file will be created in the JMeter folder instead of the Result folder.
So the correct command is:
jmeter -n -t JMeter/test.jmx -l JMeter/Result/myTest.csv -e -o Result
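Combining the corrected paths with an inline Bash task, a minimal sketch of the run step might look like this (the folder layout and file names are assumed from the question, not confirmed):

```yaml
- task: Bash@3
  displayName: 'Run JMeter test'
  inputs:
    targetType: inline
    script: |
      # Both -l and -o are resolved against the working directory, so
      # point -l inside the folder where the CSV should end up.
      jmeter -n -t JMeter/test.jmx -l JMeter/Result/myTest.csv -e -o Result
```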

Related

Using podman instead of docker for the Docker#2 task in Azure DevOps

Our build agent is running Podman 3.4.2 and there is a global alias in place for each terminal session that simply replaces docker with podman, so the command docker --version yields podman version 3.4.2 as a result.
The goal is to use podman for the Docker@2 task in an Azure DevOps pipeline:
steps:
- task: Docker@2
  displayName: Build and push an image to container registry
  inputs:
    command: buildAndPush
    repository: aspnet-web-mhi
    dockerfile: $(dockerfilePath)
    containerRegistry: $(dockerRegistryServiceConnection)
    tags: |
      $(tag)
Turns out I was a bit naive in assuming this would work, as the ado_agent is having none of it:
##[error]Unhandled: Unable to locate executable file: 'docker'. Please verify either the file path exists or the file can be found within a directory specified by the PATH environment variable. Also check the file mode to verify the file is executable.
Is there a way to make that replacement work without too much fuss? I'd rather avoid scripting the build and push myself with podman, if I can.
Since I needed to make progress on this, I decided to go down the bash route and built, pushed, pulled and ran the images manually. This is the gist of it:
steps:
- task: Bash@3
  displayName: Build Docker Image for DemoWeb
  inputs:
    targetType: inline
    script: |
      podman build -f $(dockerfilePath) -t demoweb:$(tag) .
- task: Bash@3
  displayName: Login and Push to ACR
  inputs:
    targetType: inline
    script: |
      podman login -u $(acrServicePrincipal) -p $(acrPassword) $(acrName)
      podman push demoweb-mhi:$(tag) $(acrName)/demoweb:$(tag)
- task: Bash@3
  displayName: Pull image from ACR
  inputs:
    targetType: inline
    script: |
      podman pull $(acrName)/demoweb:$(tag) --creds=$(acrServicePrincipal):$(acrPassword)
- task: Bash@3
  displayName: Run container
  inputs:
    targetType: inline
    script: |
      podman run -p 8080:80 --restart unless-stopped $(acrName)/demoweb:$(tag)
If you decide to go down that route, please make sure not to expose your service principal and password as plain variables in your yml file, but create them as secrets.
I'll keep this question open - maybe someone with more expertise in handling GNU/Linux finds a more elegant way.
You could install the package podman-docker as well. It installs a wrapper at /usr/bin/docker that points to /usr/bin/podman, so tasks that originally use the docker binary (or even the docker socket), such as Docker@2 build and push, run transparently against podman.
cat /usr/bin/docker
#!/bin/sh
[ -e /etc/containers/nodocker ] || \
    echo "Emulate Docker CLI using podman. Create /etc/containers/nodocker to quiet msg." >&2
exec /usr/bin/podman "$@"
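If podman-docker is not packaged for your distro, the same shim can be hand-rolled; here is a minimal sketch (the paths are assumptions, adjust for your agent image):

```shell
# Drop a one-line wrapper named `docker` on the PATH that forwards
# every argument to podman, mimicking what podman-docker installs.
mkdir -p "$HOME/bin"
cat > "$HOME/bin/docker" <<'EOF'
#!/bin/sh
exec /usr/bin/podman "$@"
EOF
chmod +x "$HOME/bin/docker"
export PATH="$HOME/bin:$PATH"
```

After this, any task that shells out to `docker` will reach the wrapper first, provided $HOME/bin precedes the real docker (if any) on the PATH.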

Terraform Plan Fails in Azure Devops

I am trying to use Terraform to build the infrastructure in Azure, automated through Azure DevOps. We don't have the Terraform tasks in our org yet, so I am running CLI scripts instead. I can get through terraform init, but I am unable to run terraform plan. I am using a service principal to authenticate, as mentioned here, and I am following this guide to complete the setup.
Here is my pipeline.
- task: AzureCLI@1
  displayName: Terraform init
  inputs:
    azureSubscription: Subscription
    scriptLocation: inlineScript
    inlineScript: |
      set -eux # fail on error
      terraform init \
        -backend-config=storage_account_name=$(storageAccountName) \
        -backend-config=container_name=$(container_name) \
        -backend-config=key=$(key)/terraform.tfstate \
        -backend-config=sas_token=$(artifactsLocationSasToken) \
        -backend-config=subscription_id="$(ARM_SUBSCRIPTION_ID)" \
        -backend-config=tenant_id="$(ARM_TENANT_ID)" \
        -backend-config=client_id="$(ARM_CLIENT_ID)" \
        -backend-config=client_secret="$(ARM_CLIENT_SECRET)"
    addSpnToEnvironment: true
    workingDirectory: $(System.DefaultWorkingDirectory)/Modules
- bash: |
    set -eu # fail on error
    terraform plan -out=tfplan -input=false -detailed-exitcode
  displayName: Terraform apply
  workingDirectory: $(System.DefaultWorkingDirectory)/Modules
and in the tf file I have a very basic configuration to try:
provider "azurerm" {
  version = ">= 2.61.0"
  features {}
}

data "azurerm_resource_group" "main" {
  name = var.resource_group_name
}

terraform {
  backend "azurerm" {}
}
I am getting this error.
Error building AzureRM Client: obtain subscription(***) from Azure CLI: Error parsing json result from the Azure CLI: Error waiting for the Azure CLI: exit status 1: ERROR: Please run 'az login' to setup account.
Updated
- task: AzureCLI@2
  inputs:
    azureSubscription: $(scConn)
    scriptType: 'pscore'
    scriptLocation: 'inlineScript'
    inlineScript: |
      $sasToken = (az storage container generate-sas --account-name $(storageAccountName) --name $(container_name) --permissions rwdl --expiry $(date -u -d "30 minutes" +%Y-%m-%dT%H:%MZ))
      Write-Host($sasToken)
      Write-Output("##vso[task.setvariable variable=artifactsLocationSasToken;]$sasToken")
- task: AzureCLI@1
  displayName: Terraform credentials
  inputs:
    azureSubscription: $(scConn)
    scriptLocation: inlineScript
    inlineScript: |
      set -eu # fail on error
      echo "##vso[task.setvariable variable=ARM_CLIENT_ID]$(servicePrincipalId)"
      echo "##vso[task.setvariable variable=ARM_CLIENT_SECRET;issecret=true]$(servicePrincipalKey)"
      echo "##vso[task.setvariable variable=ARM_SUBSCRIPTION_ID]$(subscriptionId)"
      echo "##vso[task.setvariable variable=ARM_TENANT_ID]$(tenantId)"
    addSpnToEnvironment: true
- task: AzureCLI@1
  displayName: Terraform init
  inputs:
    azureSubscription: $(scConn)
    scriptLocation: inlineScript
    inlineScript: |
      set -eux # fail on error
      terraform init \
        -backend-config=storage_account_name=$(storageAccountName) \
        -backend-config=container_name=$(container_name) \
        -backend-config=key=$(key)/terraform.tfstate \
        -backend-config=sas_token=$(artifactsLocationSasToken) \
        -backend-config=subscription_id="$(ARM_SUBSCRIPTION_ID)" \
        -backend-config=tenant_id="$(ARM_TENANT_ID)" \
        -backend-config=client_id="$(ARM_CLIENT_ID)" \
        -backend-config=client_secret="$(ARM_CLIENT_SECRET)"
    addSpnToEnvironment: true
    workingDirectory: $(System.DefaultWorkingDirectory)/Modules
Error building AzureRM Client: obtain subscription(***) from Azure CLI: Error parsing json result from the Azure CLI: Error waiting for the Azure CLI: exit status 1: ERROR: Please run 'az login' to setup account.
The root cause of this issue is that the Azure CLI task runs the az account clear command at the end, so the az login session from one Azure CLI task is not retained for later steps.
You need to add an additional az login command before the terraform plan command.
You could enable the addSpnToEnvironment: true parameter in the Azure CLI task and set the login info as pipeline variables, then use that info in the az login command.
Here is an example:
- task: AzureCLI@1
  displayName: Terraform init
  inputs:
    azureSubscription: Subscription
    scriptType: bash
    scriptLocation: inlineScript
    inlineScript: |
      set -eux # fail on error
      echo "##vso[task.setvariable variable=ARM_CLIENT_ID]$servicePrincipalId"
      echo "##vso[task.setvariable variable=ARM_CLIENT_SECRET]$servicePrincipalKey"
      echo "##vso[task.setvariable variable=ARM_TENANT_ID]$tenantId"
      terraform init \
        -backend-config=storage_account_name=$(storageAccountName) \
        -backend-config=container_name=$(container_name) \
        -backend-config=key=$(key)/terraform.tfstate \
        -backend-config=sas_token=$(artifactsLocationSasToken) \
        -backend-config=subscription_id="$(ARM_SUBSCRIPTION_ID)" \
        -backend-config=tenant_id="$(ARM_TENANT_ID)" \
        -backend-config=client_id="$(ARM_CLIENT_ID)" \
        -backend-config=client_secret="$(ARM_CLIENT_SECRET)"
    addSpnToEnvironment: true
    workingDirectory: $(System.DefaultWorkingDirectory)/Modules
- bash: |
    set -eu # fail on error
    az login --service-principal --username $(ARM_CLIENT_ID) --password $(ARM_CLIENT_SECRET) --tenant $(ARM_TENANT_ID)
    terraform plan -out=tfplan -input=false -detailed-exitcode
  displayName: Terraform apply
  workingDirectory: $(System.DefaultWorkingDirectory)/Modules
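As a related sketch (an alternative, not taken from the answer above): the azurerm provider can also read the service principal credentials from ARM_* environment variables, so the plan step can authenticate without an az login session. The variable names assume a script running inside an AzureCLI task with addSpnToEnvironment: true; the fallback placeholders are only there so the sketch runs standalone.

```shell
# Inside an AzureCLI task with addSpnToEnvironment: true, the service
# principal details are exposed to the script as servicePrincipalId,
# servicePrincipalKey and tenantId; exporting them as ARM_* variables
# lets terraform authenticate directly.
export ARM_CLIENT_ID="${servicePrincipalId:-<client-id>}"
export ARM_CLIENT_SECRET="${servicePrincipalKey:-<client-secret>}"
export ARM_TENANT_ID="${tenantId:-<tenant-id>}"
# terraform plan -out=tfplan -input=false -detailed-exitcode
```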

How do I run below if/else steps for windows bash script in yaml azure devOps

steps:
- checkout: A
- script: dir
  workingDirectory: $(System.DefaultWorkingDirectory)
- script: |
    if [ -f abc.yaml ]; then
      # this is used to check if the file exists: if [ -f your-file-here ]
      echo "##vso[task.setVariable variable=FILEEXISTS;isOutput=true]true"
    else
      echo "##vso[task.setVariable variable=FILEEXISTS;isOutput=true]false"
    fi
  name: printvar
When I run the code below, I get the error: -f was unexpected at this time. ##[error]Cmd.exe exited with code '255'
The bash script looks fine; however, the script keyword is a shortcut that uses different script interpreters depending on the platform:
The script keyword is a shortcut for the command-line task. The task runs a script using cmd.exe on Windows and bash on other platforms.
Use the bash keyword or the shell script task directly instead to run a bash script on Windows agents as well:
The bash keyword is a shortcut for the shell script task. The task runs a script in bash on Windows, macOS, and Linux.
steps:
- checkout: A
- script: dir
  workingDirectory: $(System.DefaultWorkingDirectory)
- bash: |
    if [ -f abc.yaml ]; then
      # this is used to check if the file exists: if [ -f your-file-here ]
      echo "##vso[task.setVariable variable=FILEEXISTS;isOutput=true]true"
    else
      echo "##vso[task.setVariable variable=FILEEXISTS;isOutput=true]false"
    fi
  name: printvar
-f was unexpected at this time. ##[error]Cmd.exe exited with code '255'
Testing with your yaml sample, I could reproduce the same issue on Windows Server.
The root cause of this issue is that the script is run with the Command Line task instead of the Bash task.
To solve this issue, you could refer to the following two methods:
1. Use the Bash task to run the command:
- bash: |
    if [ -f abc.yaml ]; then
      echo "##vso[task.setVariable variable=FILEEXISTS;isOutput=true]true"
    else
      echo "##vso[task.setVariable variable=FILEEXISTS;isOutput=true]false"
    fi
  workingDirectory: '$(build.sourcesdirectory)'
  displayName: 'Bash Script'
2. If you still want to use the script keyword, you could try to run the following cmd-style script:
steps:
- script: |
    if exist abc.yaml (
      echo ##vso[task.setVariable variable=fileexist;isOutput=true]true
    ) else (
      echo ##vso[task.setVariable variable=fileexist;isOutput=true]false
    )
  workingDirectory: '$(build.sourcesdirectory)'
  displayName: 'Command Line Script'
  name: CMDLINE
- powershell: |
    # Write your PowerShell commands here.
    Write-Host "$(CMDLINE.FILEEXIST)"
  displayName: 'PowerShell Script'
Note: You could set the workingDirectory parameter in the task to control which path is checked.
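The bash form of the check can also be exercised locally; here is a minimal sketch using the file name from the question:

```shell
# Create the file, then run the same existence check the Bash task uses.
touch abc.yaml
if [ -f abc.yaml ]; then
  FILEEXISTS=true
else
  FILEEXISTS=false
fi
echo "##vso[task.setVariable variable=FILEEXISTS;isOutput=true]$FILEEXISTS"
```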

How do I use a Nuget package in the Artifacts page in my Docker Build step?

In my Azure Devops project, under the tab "Artifacts", I have a package MyPackage.
In my build pipeline, I have this step:
- stage: Build
  displayName: "Build"
  jobs:
  - job:
    steps:
    - task: Docker@2
      inputs:
        containerRegistry: 'TEST container registry'
        repository: 'mycontainerregistry/backend'
        command: 'buildAndPush'
        buildContext: '$(System.DefaultWorkingDirectory)'
        Dockerfile: '**/Dockerfile'
        tags: |
          $(Build.BuildId)
          latest
The Dockerfile being built is the standard one generated by Visual Studio:
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1-buster-slim AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443
FROM mcr.microsoft.com/dotnet/core/sdk:3.1-buster AS build
WORKDIR /src
COPY ["MyProject.API.csproj", "MyProject.API/"]
RUN dotnet restore "MyProject.API/MyProject.API.csproj"
COPY . .
WORKDIR "/src/MyProject.API"
RUN dotnet build "MyProject.API.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "MyProject.API.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "MyProject.API.dll"]
Now, the problem is with the dotnet restore command. This step fails because the restore command can't find the MyPackage nuget from the docker build context. How can I make dotnet restore find MyPackage when running through docker build?
If you have a private feed, you need to add it as a source using dotnet nuget add source:
dotnet nuget add source "https://pkgs.dev.azure.com/YourFeed/nuget/v3/index.json" --name "SomeName" --username anything --password $TOKEN --store-password-in-clear-text
And to pass System.AccessToken into the build, you need to use an ARG:
FROM alpine
ARG TOKEN
RUN dotnet nuget add source "https://pkgs.dev.azure.com/YourFeed/nuget/v3/index.json" --name "SomeName" --username anything --password $TOKEN --store-password-in-clear-text
and then in the YAML:
- task: Docker@2
  inputs:
    containerRegistry: 'devopsmanual-acr'
    command: 'build'
    Dockerfile: 'stackoverflow/85-docker/DOCKERFILE'
    arguments: '--build-arg TOKEN=$(System.AccessToken)'
Please split your buildAndPush into two separate tasks (build and push), as buildAndPush doesn't allow passing arguments. For more details please check this question.
Please also make sure that your Build Service has the Contributor role in the feed settings.
The solution of Krystof Madey did not work for me. In the end I followed the guides "How to use secrets inside your Docker build using Azure DevOps" and "Use your Azure DevOps System.AccessToken for Docker builds… safely" to finish the job successfully.
In the end my result looks like the following:
Job:
- job: create_image_and_push_to_acr
  displayName: "Create image and push to ACR"
  variables:
    DOCKER_BUILDKIT: 1
  steps:
  - script: echo $(System.AccessToken) >> azure_devops_pat
    displayName: Get PAT
  - task: Docker@2
    displayName: "Build"
    inputs:
      command: build
      containerRegistry: $(connection_name)
      Dockerfile: $(Build.SourcesDirectory)/Dockerfile
      repository: "my_repository"
      tags: $(applicationComponentVersion)
      arguments: '--secret id=AZURE_DEVOPS_PAT,src=./azure_devops_pat'
  - task: Docker@2
    displayName: "Push"
    inputs:
      command: push
      containerRegistry: $(connection_name)
      repository: "my_repository"
      tags: $(applicationComponentVersion)
And inside the Dockerfile:
RUN --mount=type=secret,id=AZURE_DEVOPS_PAT,dst=/azure_devops_pat \
dotnet nuget add source --username this_value_could_be_anything --password `cat /azure_devops_pat` --store-password-in-clear-text --name my_name "https://pkgs.dev.azure.com/.../nuget/v3/index.json" && \
dotnet restore "src/MyProject.csproj"

Failed to access variables among the tasks in AzureDevops

I have created the task below to find the current tag and pass it on to the next task, which builds the Docker image with the new tag.
- task: Bash@3
  displayName: 'Fetch latest tag from ECR and Create Next Tag.'
  inputs:
    targetType: 'inline'
    script: |
      ecrURI=$(ecrURI)
      repoName="${ecrURI##*/}"
      latestECRTag=$(aws ecr describe-images --output json --repository-name ${repoName} --region $(awsDefaultRegion) --query 'sort_by(imageDetails,& imagePushedAt)[-1].imageTags[0]' | jq . --raw-output)
      if [[ -z ${latestECRTag} ]]; then
        latestECRTag='0.0.0'
      fi
      major=$(echo ${latestECRTag} | cut -d'.' -f1)
      minor=$(echo ${latestECRTag} | cut -d'.' -f2)
      patch=$(echo ${latestECRTag} | cut -d'.' -f3)
      latestECRTag="$(expr $major + 1).${minor}.${patch}"
      echo $latestECRTag
      echo "##vso[task.setvariable variable=NEXT_ECR_TAG;isOutput=true]$latestECRtag"
- bash: |
    echo "Started Building Docker Image with tag $latestECRTag"
    docker build -t test:latest -f Dockerfile .
    docker tag test:latest $(ecrURI):$(NEXT_ECR_TAG)
  displayName: 'Build Docker Image with Tag.'
  workingDirectory: $(System.DefaultWorkingDirectory)/configCreate/
The step/task that fetches and creates the new tag works fine, but as soon as I move to the next task, which builds the Docker image based on NEXT_ECR_TAG, it shows me an empty value. Everything else is populated properly.
Can anyone help me find out why I am not able to fetch the NEXT_ECR_TAG value in the next task? It could be something silly, but I don't know what.
duh!! moment for me. There was a typo in my variable ($latestECRtag instead of $latestECRTag in the setvariable line). Fixing the typo fixed everything.
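For reference, shell variable names are case-sensitive, which is exactly why the misspelled reference expanded to an empty string while the correctly spelled one held the computed tag; a minimal sketch:

```shell
# Bash treats latestECRTag and latestECRtag as two different variables;
# the misspelled one was never set, so the setvariable logging command
# emitted an empty value for NEXT_ECR_TAG.
latestECRTag="1.0.0"
echo "misspelled: '${latestECRtag:-}'"   # the variable was never set
echo "correct:    '$latestECRTag'"
```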