Azure DevOps pipeline pytest collection failure ModuleNotFoundError: No module named - azure-devops

I get the following error when running an Azure pipeline.
Here is the pytest part of my pipeline's YAML file.
steps:
#test
- task: UsePythonVersion@0
  inputs:
    versionSpec: '$(python.version)'
  displayName: 'Use Python $(python.version)'
#test
- script: |
    python -m pip install --upgrade pip
    python -m pip install wheel
    pip install -r requirements.txt
  condition: ne(variables.CACHE_RESTORED, 'true')
  displayName: 'Install dependencies'
#test
- script: |
    python -m spacy download de_core_news_sm
    python -m spacy download de_core_news_md
#test
- script: |
    pip install pytest pytest-azurepipelines
    pytest
  displayName: 'pytest'
The file tat_core/criteria/checks/zw2n_test.py does not exist on my local copy of the repository. I deleted it.
How can I tell the pipeline that the file does not exist and the test does not have to be run? I assume there is some kind of caching, indicated by the path /opt/hostedtoolcache. Can I empty this cache?

You can try removing the '__init__.py' file from your project to see whether that helps, as mentioned in this topic.
In addition, please also try running pytest for the same project on your local machine to see if the same issue occurs.

I added the module 'zahlwort2num' to requirements.txt and the pipeline runs now. Since the import error came from collecting that test, the file apparently still exists in the repository the pipeline checks out, even though I deleted it locally. Having an unused dependency in requirements.txt is a drawback, though.
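If carrying the unused dependency bothers you, pytest can instead be told not to collect the file at all; --ignore is a standard pytest option. A minimal sketch of the test step, using the path from the question:
- script: |
    pip install pytest pytest-azurepipelines
    pytest --ignore=tat_core/criteria/checks/zw2n_test.py
  displayName: 'pytest'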

Related

Running Behave Tests in an Azure Pipeline

I am trying to run some Behave (Python Cucumber) tests in an Azure pipeline and I am getting this error:
/Users/runner/work/_temp/f1131f4b-92a8-4c36-92bc-c9cd539f281c.sh: line 1: behave: command not found
##[error]Bash exited with code '127'.
Finishing: Run behave tests
I run the tests locally on my machine and they work fine. I have the tests in an Azure Git repo and this is my Azure Pipeline YAML. I am a noobPotato and could use some help/guidance :)
trigger:
- main
pool:
  vmImage: 'macOS-latest'
steps:
- script: |
    python -m pip install --upgrade pip
  displayName: 'Install dependencies'
- script: |
    export PATH=$PATH:$(python -m site --user-base)/bin
    pip install --user behave
  displayName: 'Add behave to PATH and install'
- script: |
    behave
  displayName: 'Run behave tests'
I have tried various ways of installing behave (with the -m flag etc.) and different ways of adding it to the PATH, but I am stumped and could use some help!
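A likely cause: each - script: step runs in its own shell, so the PATH export in the second step does not carry over to the third step, where behave is invoked. A minimal sketch that persists the change using Azure Pipelines' documented task.prependpath logging command (the step layout is mine; the user-base path expression is taken from the question):
steps:
- script: |
    python -m pip install --upgrade pip
    pip install --user behave
    echo "##vso[task.prependpath]$(python -m site --user-base)/bin"
  displayName: 'Install behave and persist its bin directory on PATH'
- script: behave
  displayName: 'Run behave tests'
Alternatively, install and invoke behave within the same script step so the export and the call share one shell.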

Running a bash script within a project folder in Azure DevOps

I have a script that nicely performs all kinds of dependency installation and manual setup work (npm installation, some manual steps needed when setting up the project) before the project is able to run. The script runs perfectly fine in a local environment.
I'm now trying to build my pipeline in Azure DevOps, and I realized I can't just fire the script right away. npm install inside the script does not actually run within my project folder; it always runs at the path /Users/runner/work.
Question:
How can I execute the script within my project folder?
Sample code in my script file
set -e
# Setup project dependencies
npm install
# some mandatory manual work
.....
# Pod installation
cd ios
pod install
My AzurePipelines.yml
- task: Bash@3
  inputs:
    targetType: 'inline'
    script: |
      sh $(System.DefaultWorkingDirectory)/projectFolder/setup.sh
    failOnStderr: true
Issue log from Azure (as you can see, the npm installation is not working due to an incorrect path, hence further actions within the pipeline fail):
npm WARN saveError ENOENT: no such file or directory, open '/Users/runner/work/package.json'
npm notice created a lockfile as package-lock.json. You should commit this file.
npm WARN enoent ENOENT: no such file or directory, open '/Users/runner/work/package.json'
npm WARN work No description
npm WARN work No repository field.
npm WARN work No README data
npm WARN work No license field.
Firstly, I'd advise you to split your script into different steps of a single job, or into multiple jobs with several steps, because this makes it easier to parallelize them later and speed up the build.
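For example, a sketch of the question's setup.sh split into separate steps (the paths are assumed from the question):
steps:
- script: npm install
  displayName: 'Install npm dependencies'
  workingDirectory: '$(System.DefaultWorkingDirectory)/projectFolder'
- script: pod install
  displayName: 'Install CocoaPods dependencies'
  workingDirectory: '$(System.DefaultWorkingDirectory)/projectFolder/ios'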
In order to execute your script directly from the project folder you can leverage the option working directory:
- task: Bash@3
  inputs:
    targetType: 'inline'
    script: |
      ./setup.sh
    failOnStderr: true
    workingDirectory: "$(System.DefaultWorkingDirectory)/projectFolder/"
However, in your case you could point directly to the script file, without the need to run it as an inline "script":
- task: Bash@3
  inputs:
    targetType: 'filePath'
    filePath: "$(System.DefaultWorkingDirectory)/projectFolder/setup.sh"
    failOnStderr: true
    workingDirectory: "$(System.DefaultWorkingDirectory)/projectFolder/"
ref.: https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/utility/bash?view=azure-devops

How to run a pipeline on new PRs only if files in a certain directory are changed

I have a repository with two directories, one with Python code and one with C code.
I want to run a pipeline on all PRs only when the files in the Python folder (hello_app) change.
I have used the following YAML file, but the pipeline still runs when a new PR contains changes only outside of the hello_app directory:
trigger: none
pr:
  branches:
    include:
    - master
  paths:
    exclude:
    - '*'
    include:
    - hello_app/*
pool:
  vmImage: 'ubuntu-latest'
strategy:
  matrix:
    Python27:
      python.version: '2.7'
    Python36:
      python.version: '3.6'
steps:
- task: UsePythonVersion@0
  inputs:
    versionSpec: '$(python.version)'
  displayName: 'Use Python $(python.version)'
- script: |
    python -m pip install --upgrade pip
    pip install -r requirements.txt
  displayName: 'Install dependencies'
- script: |
    python -m pip install flake8
    flake8 .
  displayName: 'Run linter tests'
- script: |
    pip install pytest pytest-azurepipelines
    pytest
  displayName: 'pytest'
I searched online, and it seems like this should work. Is there something wrong with the YAML I am using?
Please refer to this doc:
YAML PR triggers are supported only in GitHub and Bitbucket Cloud. If you use Azure Repos Git, you can configure a branch policy for build validation to trigger your build pipeline for validation.
So if you are using Azure Repos, you need to configure a branch policy for build validation to trigger your build pipeline for validation.
You could navigate to branch policy -> build validation and set the path filter (/hello_app/*).
In my test, with the path filter set on the build validation policy, the pipeline ran only when files under hello_app changed, as expected.
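For comparison, if the repository were hosted on GitHub or Bitbucket Cloud, the YAML pr filter alone would suffice. Note that listing paths under include already restricts the trigger to those paths, so the exclude: '*' entry from the question is unnecessary (a sketch based on the question's YAML):
pr:
  branches:
    include:
    - master
  paths:
    include:
    - hello_app/*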

Random failures in build pipelines when running dotnet-ef

As part of the build I need to generate a DB migration script. I'm using a Microsoft-provided build agent (only the interesting part below):
pool:
  vmImage: 'windows-2019'
- task: DotNetCoreCLI@2
  displayName: Install dotnet-ef
  inputs:
    command: 'custom'
    custom: 'tool'
    arguments: 'install dotnet-ef -g --version 5.0.0-preview.8.20407.4'
- task: DotNetCoreCLI@2
  displayName: Generate migrations sql script
  inputs:
    command: 'custom'
    custom: 'ef'
    arguments: 'migrations script --project Web/Dal --startup-project Web/WebApi --configuration $(buildConfiguration) --context EmailContext --no-build --output $(Build.ArtifactStagingDirectory)/emailcontext-migrations.sql --idempotent --verbose'
dotnet-ef installation seems to work fine:
Tool 'dotnet-ef' (version '5.0.0-preview.8.20407.4') was successfully installed.
but it still fails from time to time (more often recently) with:
"C:\Program Files\dotnet\dotnet.exe" ef migrations script --project Web/Dal --startup-project Web/WebApi --configuration Release --context EmailContext --no-build --output D:\a\1\a/emailcontext-migrations.sql --idempotent --verbose
Could not execute because the specified command or file was not found.
Is there a problem with my build pipeline configuration?
If it fails only from time to time, I would rather say this is an issue with the preview version.
Please add a step after the install to list all globally installed tools:
dotnet tool list -g
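As a pipeline step, that check could look like this (a sketch mirroring the custom-command install task from the question):
- task: DotNetCoreCLI@2
  displayName: List installed global tools
  inputs:
    command: 'custom'
    custom: 'tool'
    arguments: 'list -g'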
You may also share the tool installation log for a run where your pipeline fails, so we can verify whether the install actually succeeded (we simply don't know, since we can't check your logs).
And if it still happens I would encourage you to create an issue on GitHub.
From your description, this is an intermittent issue. So your pipeline configuration could be correct.
Could not execute because the specified command or file was not found.
This issue seems to be related to the dotnet-ef package installed.
As Krzysztof Madej suggested, the package version could cause this issue.
You could try the latest version, 5.0.0-rc.1.20451.13, or the latest stable version, 3.1.8.
Here is a GitHub ticket with the same issue (Can't find the file after global installing dotnet-ef). You could follow it and check for updates.
On the other hand, you could try using the Command Line task to install dotnet-ef.
For example:
- task: CmdLine@2
  inputs:
    script: 'dotnet tool install --global dotnet-ef --version xxxx'

Azure Pipelines YAML permission denied

I'm getting an error when trying to deploy using Azure Pipelines.
Error: EACCES: permission denied, access '/usr/local/lib/node_modules'
I think it's because the node_modules folder is not being shared between stages, but I can't figure out the proper way to do that.
Here is my YAML file:
variables:
- group: netlify
trigger:
- master
pool:
  vmImage: 'ubuntu-latest'
stages:
- stage: Build
  jobs:
  - job: ARM
    steps:
    - task: NodeTool@0
      inputs:
        versionSpec: '10.x'
      displayName: 'Install Node.js'
    - script: |
        npm install
        npm run unit
      displayName: 'Setup and test'
    - script: npm run build
    - publish: $(System.DefaultWorkingDirectory)
      artifact: dist
- stage: Deploy
  dependsOn: Build
  condition: succeeded()
  jobs:
  - job: APP
    steps:
    - bash: |
        npm i -g netlify-cli
        netlify deploy --site $(NETLIFY_SITE_ID) --auth $(NETLIFY_AUTH_TOKEN) --prod
After running npm install, the node_modules folder should appear somewhere in the working directory, but it seems it is not properly shared.
You are using an Ubuntu image and trying to globally install netlify-cli on Linux without sudo.
If Ubuntu is the system you must use, you'd better add sudo before this command:
sudo npm i -g netlify-cli
The command succeeded on my pipeline.
In this doc, under "Upgrading on *nix (OSX, Linux, etc.)":
You may need to prefix these commands with sudo, especially on Linux, or OS X if you installed Node using its default installer.
The same applies in Azure DevOps: you must use sudo in the command, since the hosted Ubuntu agents give you password-less sudo rights.
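Applied to the Deploy stage from the question, the step would become (a sketch; only the sudo prefix is new):
- bash: |
    sudo npm i -g netlify-cli
    netlify deploy --site $(NETLIFY_SITE_ID) --auth $(NETLIFY_AUTH_TOKEN) --prod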
Another way is to change the image to vs2017-win2016 if you do not have any special requirements for the build environment:
pool:
  vmImage: 'vs2017-win2016'
When using this image, you can install anything without sudo.
In fact, many basic tools, including Node.js, are pre-installed on all hosted images.
The GitHub description of the hosted images lists all pre-installed tools for each image; you can check it to learn more.