Azure Pipelines: Where is the output of my script?

I tried to build my Angular 13 app on a self-hosted agent and created the following YAML snippet for this:
- task: NodeTool@0
  displayName: 'Install Node.js'
  inputs:
    versionSpec: '14.x'
- script: |
    npm install -g @angular/cli
    npm install
    ng build --configuration production --aot
  displayName: 'npm install and build'
  workingDirectory: '$(Build.SourcesDirectory)/src'
I can observe the /s directory of the agent's _work directory, and after my task has run there is no node_modules or dist folder inside.
There is also no console output.
If I remove the line "npm install -g @angular/cli" from the script, a node_modules folder gets created, but still no dist folder.
I am pretty sure that the installation of the Angular CLI fails, but I do not get any error output in my window.
How can I get more logs to find out why the Angular CLI is not installing correctly? I saw that the "script" file that is executed on the agent puts an @echo off by default in front of the script.
Why is that?
How can I get some output to find my problem?

To get a more detailed log from the pipeline, you can add the variable System.Debug and set its value to true in your pipeline.
For YAML pipelines, you can select Variables in the upper-right corner of the YAML edit page.
Add a new variable with the name System.Debug and the value true.
For more info about logs, please refer to Review logs to diagnose pipeline issues.
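You can also declare the variable in the YAML itself; a minimal sketch:
variables:
  system.debug: 'true'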

Related

Running Behave Tests in an Azure Pipeline

I am trying to run some Behave (Python Cucumber) tests in an Azure pipeline and I am getting this error:
/Users/runner/work/_temp/f1131f4b-92a8-4c36-92bc-c9cd539f281c.sh: line 1: behave: command not found
##[error]Bash exited with code '127'.
Finishing: Run behave tests
I am running the tests locally on my machine and they work fine. I have the tests in an Azure Git repo, and this is my Azure Pipeline YAML. I am a noobPotato and could use some help/guidance :)
trigger:
- main
pool:
  vmImage: 'macOS-latest'
steps:
- script: |
    python -m pip install --upgrade pip
  displayName: 'Install dependencies'
- script: |
    export PATH=$PATH:$(python -m site --user-base)/bin
    pip install --user behave
  displayName: 'Add behave to PATH and install'
- script: |
    behave
  displayName: 'Run behave tests'
I have tried various ways of installing behave with the -m flag etc. and also different ways of adding it to the PATH, but I am stumped and could use some help!
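One detail worth keeping in mind: each - script: step runs in its own shell, so the PATH export from the second step does not carry over to the step that runs behave. A minimal, untested sketch (an assumption, not a confirmed fix) that installs and runs behave in the same step:
- script: |
    pip install --user behave
    export PATH=$PATH:$(python -m site --user-base)/bin
    behave
  displayName: 'Install and run behave tests'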

Alternatives for GNU tar when using the Cache@2 task in Azure DevOps

I'm having a hard time getting the Cache@2 task working: when the build agent runs Windows, the task requires GNU tar to be installed on the machine, and that doesn't seem to be an option right now. This needs to run on the Windows machine because it is part of a stage that collects the build output to produce the artifact.
Somewhere I read that tool: '7zip' could be an option, but it doesn't work and it's not in Microsoft's documentation.
- task: Cache@2
  displayName: Restore from cache
  inputs:
    key: 'npm | "$(Agent.OS)" | $(projectDir)package-lock.json'
    #restoreKeys: |
    #  npm | "$(Agent.OS)"
    path: $(Pipeline.Workspace)/.npm
    cacheHitVar: CACHE_RESTORED
    verbose: true
- script: |
    npm install
  displayName: Install dependencies #, Build and Lint
  workingDirectory: $(projectDir)
  condition: ne(variables.CACHE_RESTORED, 'true')
The log says: ##[error]Failed to start the required dependency 'tar'. Please verify the correct version is installed and available on the path. This is accurate, but I was wondering whether there is any workaround, or whether I just can't have the cache and have to install all dependencies on each run.
By the way, I'm using the default pool, which runs Windows v10.0.14393 and agent version 2.206.1.
Thanks!
In case it helps anyone: I had to split my workflow into API and FE stages. The Angular stage runs on an Ubuntu agent, which has tar installed. That fixed my problem, and I also had the two stages running in parallel.
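A rough sketch of that kind of split (stage and job names are illustrative, not the exact pipeline that was used):
stages:
- stage: API
  pool:
    vmImage: 'windows-latest'
  jobs:
  - job: BuildApi
    steps:
    - script: echo "build and publish the API here"
- stage: FE
  dependsOn: []   # no dependency on API, so the two stages run in parallel
  pool:
    vmImage: 'ubuntu-latest'
  jobs:
  - job: BuildAngular
    steps:
    - task: Cache@2
      inputs:
        key: 'npm | "$(Agent.OS)" | $(projectDir)package-lock.json'
        path: $(Pipeline.Workspace)/.npm
    - script: npm install
      workingDirectory: $(projectDir)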

Running a bash script within a project folder in Azure DevOps

I have a script that nicely performs all kinds of dependency installation and manual setup work (npm installation, some manual steps needed while setting up the project) so that a project is able to run. The script runs perfectly fine in a local environment.
Now that I'm trying to build my pipeline in Azure DevOps, I realized I can't just fire the script right away. Running npm install inside the script does not actually run within my project folder; it always runs in the path /Users/runner/work.
Question:
How can I execute the script within my project folder?
Sample code in my script file
set -e
# Setup project dependencies
npm install
# some mandatory manual work
.....
# Pod installation
cd ios
pod install
My AzurePipelines.yml
- task: Bash@3
  inputs:
    targetType: 'inline'
    script: |
      sh $(System.DefaultWorkingDirectory)/projectFolder/setup.sh
    failOnStderr: true
Issue log from Azure (as you can see, the npm installation is not working due to the incorrect path, so further actions within the pipeline will fail):
npm WARN saveError ENOENT: no such file or directory, open '/Users/runner/work/package.json'
npm notice created a lockfile as package-lock.json. You should commit this file.
npm WARN enoent ENOENT: no such file or directory, open '/Users/runner/work/package.json'
npm WARN work No description
npm WARN work No repository field.
npm WARN work No README data
npm WARN work No license field.
First, I'd advise you to split your script into different steps of a single job, or into multiple jobs with several steps, because this makes it easier to parallelize them in the future and speed up the build time.
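For example, a minimal sketch of such a split for the setup.sh above (step contents are illustrative):
- task: Bash@3
  displayName: 'npm install'
  inputs:
    targetType: 'inline'
    script: npm install
    workingDirectory: "$(System.DefaultWorkingDirectory)/projectFolder/"
- task: Bash@3
  displayName: 'pod install'
  inputs:
    targetType: 'inline'
    script: pod install
    workingDirectory: "$(System.DefaultWorkingDirectory)/projectFolder/ios/"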
To execute your script directly from the project folder, you can leverage the workingDirectory option:
- task: Bash@3
  inputs:
    targetType: 'inline'
    script: |
      ./setup.sh
    failOnStderr: true
    workingDirectory: "$(System.DefaultWorkingDirectory)/projectFolder/"
However, in your case you could point directly to the script file, without the need to run it as an inline "script":
- task: Bash@3
  inputs:
    targetType: 'filePath'
    filePath: "$(System.DefaultWorkingDirectory)/projectFolder/setup.sh"
    failOnStderr: true
    workingDirectory: "$(System.DefaultWorkingDirectory)/projectFolder/"
ref.: https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/utility/bash?view=azure-devops

Random failures in build pipelines when running dotnet-ef

As part of the build I need to generate a DB migration script. I'm using a Microsoft-provided build agent.
(only the interesting part is shown below)
pool:
  vmImage: 'windows-2019'

- task: DotNetCoreCLI@2
  displayName: Install dotnet-ef
  inputs:
    command: 'custom'
    custom: 'tool'
    arguments: 'install dotnet-ef -g --version 5.0.0-preview.8.20407.4'
- task: DotNetCoreCLI@2
  displayName: Generate migrations sql script
  inputs:
    command: 'custom'
    custom: 'ef'
    arguments: 'migrations script --project Web/Dal --startup-project Web/WebApi --configuration $(buildConfiguration) --context EmailContext --no-build --output $(Build.ArtifactStagingDirectory)/emailcontext-migrations.sql --idempotent --verbose'
dotnet-ef installation seems to work fine:
Tool 'dotnet-ef' (version '5.0.0-preview.8.20407.4') was successfully installed.
but it still fails from time to time (more often recently) with:
"C:\Program Files\dotnet\dotnet.exe" ef migrations script --project Web/Dal --startup-project Web/WebApi --configuration Release --context EmailContext --no-build --output D:\a\1\a/emailcontext-migrations.sql --idempotent --verbose
Could not execute because the specified command or file was not found.
Is there a problem with my build pipeline configuration?
If it fails only from time to time, I would rather say this could be an issue with the preview version.
Please add a step after the install that lists all globally installed tools:
dotnet tool list -g
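As a pipeline step, that diagnostic could look like this (a sketch reusing the DotNetCoreCLI@2 task from above):
- task: DotNetCoreCLI@2
  displayName: List globally installed tools
  inputs:
    command: 'custom'
    custom: 'tool'
    arguments: 'list -g'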
You may also share the log of the tool installation from a run where your pipeline fails, so we can verify whether the install actually succeeded (we simply don't know, since we can't check your logs).
And if it still happens I would encourage you to create an issue on GitHub.
From your description, this is an intermittent issue. So your pipeline configuration could be correct.
Could not execute because the specified command or file was not found.
This issue seems to be related to the dotnet-ef package installed.
As Krzysztof Madej suggested, this package version could be causing the issue.
You could try to use the latest version: 5.0.0-rc.1.20451.13 or latest stable version: 3.1.8.
Here is a GitHub ticket with the same issue (Can't find the file after globally installing dotnet-ef). You could follow it and check for updates.
On the other hand, you could try to use the Command Line Task to install the dotnet-ef.
For example:
- task: CmdLine@2
  inputs:
    script: 'dotnet tool install --global dotnet-ef --version xxxx'

Where do the builds go after the pipeline is run in Azure DevOps?

I'm very new to Azure DevOps. I'm running npm run build in the pipeline.
I'm wondering where the dist folder goes and how I can get access to it for further processing.
The build completes without error.
trigger:
- master
pool:
  vmImage: 'ubuntu-latest'
steps:
- task: NodeTool@0
  inputs:
    versionSpec: '10.x'
  displayName: 'Install Node.js'
- script: |
    npm install
    npm run build
  displayName: 'npm install and build'
On the agent you have three folders: a for artifacts, s for sources and b for binaries.
When the build starts, all of the code is downloaded to the s folder, so if you run npm run build the dist folder is created there.
How do you access it? There are environment variables for all of these folders; for the s folder the variable is $(Agent.SourcesDirectory), so you can pick up dist from there in another task with $(Agent.SourcesDirectory)/Your App/dist (or deeper, depending on your app structure).
You can find the list of these environment variables here.
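If you need the dist folder in a later stage or a release, a common follow-up is to publish it as a pipeline artifact; a minimal sketch (assuming dist is created directly under the sources directory, which depends on your app structure):
- task: PublishPipelineArtifact@1
  inputs:
    targetPath: '$(Agent.SourcesDirectory)/dist'
    artifact: 'dist'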