I'm relatively new to Azure DevOps and I was wondering what would be the most practical way to publish Cypress test screenshots in Azure Pipelines (or maybe even somewhere external)?
The only way I found online is this:
http://codestyle.dk/2020/05/19/cypress-screenshots-are-missing-in-azure-pipelines/
But maybe there is a more "practical" solution?
To publish the screenshots of your failed Cypress tests, you can add the following task to your pipeline definition .yaml file after running your tests. This will publish all created screenshots to the pipeline artifacts of the current pipeline run.
- task: PublishBuildArtifacts@1
  displayName: 'Publish Cypress Screenshot Files'
  condition: failed()
  inputs:
    PathtoPublish: 'cypress/screenshots/'
    ArtifactName: 'screenshots'
Two notes about this:
If you want to publish screenshots not only when the tests fail, remove the line condition: failed().
The cypress/screenshots folder is only created automatically by Cypress if the test execution actually produces screenshots. If no screenshot was created, the folder does not exist and the above pipeline task would fail. Therefore I would also persist the empty screenshots folder in the repo by using a .gitkeep file.
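For illustration, here is a minimal sketch of the publish-always variant; it assumes the same default cypress/screenshots output path and uses the built-in succeededOrFailed() condition so the task runs on success and on failure (but not on cancellation):

- task: PublishBuildArtifacts@1
  displayName: 'Publish Cypress Screenshot Files'
  condition: succeededOrFailed()  # run whether the tests passed or failed
  inputs:
    PathtoPublish: 'cypress/screenshots/'  # default Cypress screenshot folder
    ArtifactName: 'screenshots'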
I'm currently working on a project that has an Azure Artifacts feed, specifically NuGet packages, and I'm using DotNetCoreCLI@2 for the dotnet restore and build. The dotnet restore succeeds, but the build always fails. Please see screenshot below. I don't know why it fails.
I've also included the vstsFeed in the build stage, however it is still failing.
and this is my yaml file
You might be missing some key concepts of a multi-stage pipeline:
Stages are a good way of thinking about your entire continuous-delivery process, e.g. BUILD -> DEV -> TEST -> PROD. Some teams use stages to represent environments. Stages run in sequence or in parallel and can have dependencies between them to control their order. A stage must contain at least one job. If you have approval gates applied, approvals are required for the entire stage.
Jobs are often used to group large related activities together, like constructing a build artifact that will be used in subsequent stages, deploying into an environment, running an automated regression suite, or performing a security scan. A job is composed of at least one step. The main advantage of having multiple jobs in a single stage is parallelism, plus the ability to re-run all jobs in the stage or just the failing ones.
Steps are the individual activities within a job.
The key thing you're missing here is that unless you are running in a self-hosted build-agent pool with only one build agent, each "job" runs on a different machine. So performing a restore on one machine and then compiling on another machine will always fail.
The process you want:
NuGetAuthenticate. This creates a nuget.config on the build agent that points to the vstsFeed
DotNet Restore. This pulls the packages from the vsts feed to the build agent so that the solution has all the dependencies it needs to compile.
DotNet Build. Compile the project file using the dependencies.
stages:
- stage: "BUILD"
  jobs:
  - job: "BUILD"
    steps:
    - task: NuGetAuthenticate@1
      displayName: 'Setup NuGet to use Azure Artifacts'
    - task: DotNetCoreCLI@2
      displayName: 'Restore NuGet Packages'
      inputs:
        command: restore
        projects: '**/*.csproj'
        vstsFeed: '<<GUID>>'
    - task: DotNetCoreCLI@2
      displayName: 'Compile'
      inputs:
        command: 'build'
        projects: '**/*.csproj'
Next, add some tests and code scanning, and then publish a 'build artifact' that can be downloaded at the start of the next stage.
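As a minimal sketch of that hand-off (the stage, job and artifact names below are illustrative assumptions, not part of the original pipeline), the BUILD stage ends by publishing a pipeline artifact and the next stage starts by downloading it:

stages:
- stage: "BUILD"
  jobs:
  - job: "BUILD"
    steps:
    # ...authenticate, restore and build as shown above...
    - task: PublishPipelineArtifact@1
      displayName: 'Publish Build Artifact'
      inputs:
        targetPath: '$(Build.ArtifactStagingDirectory)'
        artifact: 'drop'
- stage: "DEV"
  dependsOn: "BUILD"
  jobs:
  - job: "DEPLOY"
    steps:
    - task: DownloadPipelineArtifact@2
      displayName: 'Download Build Artifact'
      inputs:
        artifact: 'drop'
        path: '$(Pipeline.Workspace)/drop'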
I have an Azure DevOps pipeline step failing running the OWASP dependency check. I want to find what dependencies need to be updated.
The logs that are written during the dependency check pipeline step say:
[INFO] Writing report to: e:\vsts\a\7567\TestResults\dependency-check\dependency-check-report.html
I assume this dependency-check-report.html is where it will tell me what dependencies need to be updated. But I do not understand where this e:\vsts\a\7567\TestResults\ location is, as this step is being run in DevOps. Is this somewhere in DevOps? I cannot seem to find it anywhere. "Download logs" on the pipeline page doesn't seem to have it either.
where this e:\vsts\a\7567\TestResults\ location is
When you run the pipeline in Azure DevOps, this path is a local path on the machine where the agent runs.
In your case, the agent is a self-hosted agent. Go to the local machine where the agent runs and you will find the dependency-check-report.html in e:\vsts\a\7567\TestResults\dependency-check.
Alternatively, you can use the Publish Pipeline Artifact task to upload the target file to the pipeline artifacts.
For example:
steps:
- task: dependency-check-build-task@6
  displayName: 'Dependency Check'
  inputs:
    projectName: test
    scanPath: test
  continueOnError: true
- task: PublishPipelineArtifact@1
  displayName: 'Publish Pipeline Artifact'
  inputs:
    targetPath: '$(Common.TestResultsDirectory)'
    artifact: drop
Note: you need to set continueOnError: true on the OWASP dependency check task, so that the publish task still runs when the check fails.
In this case, the dependency-check-report.html on the agent machine will be uploaded to the pipeline artifacts.
I have an Azure DevOps Build pipeline that publishes the entire repository as an artifact to be used with the Release pipeline.
# Publish artifacts to be used in release
- task: PublishBuildArtifacts@1
  displayName: 'publish artifacts'
  inputs:
    PathtoPublish: '$(System.DefaultWorkingDirectory)'
    ArtifactName: 'TerraformModule'
    publishLocation: 'Container'
The build pipeline triggers the creation of a release pipeline where I try to deploy the terraform configuration.
I can successfully run terraform init in this pipeline but when I try to run plan or apply, I get the following error:
Looking at the screenshot, it looks like it tries to execute the command from /usr/local/bin instead of the working directory I specified in the step? I'm confused by this. Below is the yaml for my plan step:
steps:
- task: ms-devlabs.custom-terraform-tasks.custom-terraform-release-task.TerraformTaskV3@3
  displayName: 'terraform plan'
  inputs:
    provider: aws
    command: plan
    workingDirectory: '/home/vsts/work/r1/a/_terraform/TerraformModule/Projects/Potentium/Prod'
    environmentServiceNameAWS: 'AWS-Terraform-Build'
I manually changed workingDirectory to where the artifacts from the build pipeline were downloaded. See the log below for an example:
2022-08-14T23:41:31.3359557Z Downloaded TerraformModule/Projects/Potentium/Prod/main.tf to /home/vsts/work/r1/a/_terraform/TerraformModule/Projects/Potentium/Prod/main.tf
The plan step in my build pipeline executes without any issues, so I have a feeling it is something to do with the artefact extraction that occurs in the download step. Looking for any advice.
I've had similar issues with the extraction phase when using ExtractFiles@1 to do a similar thing with terraform. I think there's a bug in it: I could not get it to extract files back to the root of System.DefaultWorkingDirectory unless the root folder was included in the archive (I am using ArchiveFiles@2). So I was ending up with /opt/az_devops/_work/*/s/s.
My solution was to shell out a command to do the extraction; no problems extracting to the root of System.DefaultWorkingDirectory.
Just remember that if you're running a subsequent terraform plan, the working directory System.DefaultWorkingDirectory will by default change between runs, so ensure you use these variables rather than an explicit path.
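For what it's worth, a minimal sketch of that shell-out (the archive name and download location below are assumptions for illustration; adjust them to match your ArchiveFiles@2 settings):

- task: Bash@3
  displayName: 'Extract terraform artifact'
  inputs:
    targetType: inline
    script: |
      # Unzip straight into the working-directory root, overwriting stale files.
      # The source path/name is a placeholder for wherever your archive lands.
      unzip -o "$(System.ArtifactsDirectory)/drop/terraform.zip" -d "$(System.DefaultWorkingDirectory)"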
Our build pipeline includes a YML template that is used to build all of our WCF services and Web applications. For all of the WCF services but one, it works like a charm. For this one WCF service, however, the following output is generated in the build logs during the Publish Artifact stage:
##[warning]Directory 'D:\azagent\A2_work\1381\a' is empty. Nothing will be added to build artifact 'drop'.
Our Build stage invokes a separate YML file which includes the following to build and publish the project:
- task: VSBuild@1
  displayName: "Build ${{ parameters.solution }}"
  inputs:
    solution: ${{ parameters.solution }}
    msbuildArgs: >
      /p:DeployOnBuild=true
      /p:WebPublishMethod=Package
      /p:PackageAsSingleFile=true
      /p:SkipInvalidConfigurations=true
      /p:IgnoreDeployManagedRuntimeVersion=true
      /p:PackageLocation="$(build.ArtifactStagingDirectory)"
    platform: ${{ parameters.buildPlatform }}
    configuration: ${{ parameters.buildConfiguration }}
- task: PublishBuildArtifacts@1
  displayName: "Publish Build Artifact"
  inputs:
    PathtoPublish: "$(Build.ArtifactStagingDirectory)"
    ArtifactName: "drop"
    publishLocation: "Container"
As stated earlier, this works perfectly for all other WCF projects, and generates a build artifact. However, for the problematic WCF Service, no build artifact is generated. What we've observed is that no ZIP file is created in the D:\azagent\A2_work\1381\a folder (theoretically, Build.ArtifactStagingDirectory).
I have tried numerous recommended solutions to resolve this issue, all to no effect.
Adding a CopyFiles@2 task between the VSBuild@1 task and the PublishBuildArtifacts@1 task did place the files in the Build.ArtifactStagingDirectory, but they were not in a ZIP file. Further, the deployment task (later in the pipeline) failed because no ZIP file was present in the drop folder.
Adding /p:OutDir=$(Build.ArtifactStagingDirectory) did seem to produce some sort of artifact, but the deployment task still failed, claiming it could not find the ZIP file in the drop folder.
I created a test repo that contained only the WCF project (as it's normally contained in a solution containing it and a Web application) and ran the pipeline against that repo. No build artifact was created.
Ultimately, nothing I do seems to be able to get this project to generate a build artifact.
What am I missing here? What further information can I provide that will help you to help me resolve this issue?
According to Microsoft's Developer Community, this has been a known issue since August of 2019. WCF services built using a YAML pipeline do not produce build artifacts. Consequently, they cannot be deployed via YAML pipelines.
As of this date, there is neither a fix nor a workaround available from Microsoft.
I'm trying to deploy my project in Azure DevOps through an IIS website and SQL deployment. However, I am struggling with the SQL part, as I do not have a .dacpac file in my build artifacts. How do I generate it? Every option I have tried ended up failing.
P.S. I do not have access to the VM where I am deploying due to restrictions. I can access the database, as I am marked as DBO on the machine.
My question is also: do I need to generate this DACPAC file through a build every time, or can it be generated only once, stored on the machine, and pointed to from the deployment process?
Thank you for your help!
However, I am struggling with the SQL part, as I do not have a .dacpac file in my build artifacts. How do I generate it? Every option I have tried ended up failing. I can access the database, as I am marked as DBO on the machine.
Firstly, you have to create a SQL Server Database Project using SSDT (or the Azure Data Studio insiders preview) by importing the objects of the live database.
The database project is then placed into a repository.
The pipeline (classic or YAML) needs an MSBuild@1 build task. Here is a YAML example that generates the dacpac:
- task: MSBuild@1
  displayName: 'Build solution YourDatabase.sln'
  inputs:
    solution: 'src/YourDatabase.sln'
This task compiles the database project and produces dacpac file(s).
The produced files then have to be extracted:
- task: CopyFiles@2
  displayName: 'Extract DACPACs'
  inputs:
    CleanTargetFolder: false
    SourceFolder: '$(agent.builddirectory)\s\src\YourDatabase\bin\Debug\'
    Contents: '*.dacpac'
    TargetFolder: '$(build.artifactstagingdirectory)'
And finally, they are published as the artefact:
- task: PublishPipelineArtifact@1
  displayName: 'Publish Artifact'
  inputs:
    targetPath: '$(build.artifactstagingdirectory)'
    artifact: 'drop'
Deployment of the dacpac is the final goal and can be done using SqlDacpacDeploymentOnMachineGroup@0; however, this is out of the scope of the original question.
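For orientation only, a minimal sketch of what such a deployment step might look like (the server, database and file names are placeholders, and the inputs shown are a subset; check the task documentation before relying on this):

- task: SqlDacpacDeploymentOnMachineGroup@0
  displayName: 'Deploy dacpac'
  inputs:
    TaskType: 'dacpac'
    DacpacFile: '$(Pipeline.Workspace)/drop/YourDatabase.dacpac'  # placeholder path
    TargetMethod: 'server'
    ServerName: 'YourSqlServer'      # placeholder
    DatabaseName: 'YourDatabase'     # placeholder
    AuthScheme: 'windowsAuthentication'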
My question is also: do I need to generate this DACPAC file through a build every time, or can it be generated only once, stored on the machine, and pointed to from the deployment process?
It depends.
Classic pipelines have a separation of BUILD and RELEASE phases. In this case, you can build it once and reuse that dacpac for many future releases.
In the case of multi-stage YAML pipelines, it is common that every pipeline run triggers both the build and deployment stages, because they still belong to the same pipeline and run as a single unit of work.
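To make that concrete, here is a minimal multi-stage skeleton (stage and job names are illustrative assumptions) in which every run rebuilds the dacpac and then deploys it:

stages:
- stage: Build
  jobs:
  - job: BuildDacpac
    steps:
    - task: MSBuild@1
      inputs:
        solution: 'src/YourDatabase.sln'
    # ...CopyFiles@2 and PublishPipelineArtifact@1 as shown above...
- stage: Deploy
  dependsOn: Build
  jobs:
  - job: DeployDacpac
    steps:
    - download: current   # fetch the 'drop' artifact published by the Build stage
      artifact: drop
    # ...deployment step, e.g. SqlDacpacDeploymentOnMachineGroup@0...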