I have a project hosted in VSTS and I'm experimenting with the Build Definitions. I'm trying to execute a PowerShell script that updates the AssemblyVersion in AssemblyInfo.cs before the project is built. When I run the script locally it works fine, but when it runs during the build process, the script completes without error and yet AssemblyInfo.cs is not updated.
$regex = '\[assembly: AssemblyVersion\("(.*)"\)\]'
$assemblyInfoPath = "..\Properties\AssemblyInfo.cs"

(Get-Content $assemblyInfoPath) | ForEach-Object {
    if ($_ -match $regex)
    {
        # Get current version, and update revision number
        $version = [version]$matches[1]
        $newVersion = "{0}.{1}.{2}.{3}" -f $version.Major, $version.Minor, $version.Build, ($version.Revision + 1)
        '[assembly: AssemblyVersion("{0}")]' -f $newVersion
        Write-Host "Version updated to: $newVersion"
    }
    else
    {
        $_
    }
} | Set-Content $assemblyInfoPath
The build output states that the version has been updated, but when I view AssemblyInfo.cs in the file viewer it still shows the old version.
The build process doesn't automatically check in/commit changes made to the source code at the end of the process; you have to write a script to do that. You need to check in or push the changes to the remote repository.
If you are using TFVC, there is a TFVC Build Tasks extension that includes a TFVC - Check-in Changes step/task that can check in the changes.
If you are using Git, you can add these steps/tasks to push the changes (or combine them into a single script step, as sketched after the list):
Command Line (Tool: git; Arguments: add **\*.*; Working folder: $(build.sourcesdirectory))
Command Line (Tool: git; Arguments: commit -m "update"; Working folder: $(build.sourcesdirectory))
Command Line (Tool: git; Arguments: push origin HEAD:$(Build.SourceBranchName); Working folder: $(build.sourcesdirectory))
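If you prefer a single step, the same flow can be scripted in one PowerShell task. This is a minimal sketch, assuming "Allow scripts to access the OAuth token" is enabled on the job so the push can authenticate with the System.AccessToken; the commit identity values are placeholders:
cd $Env:BUILD_SOURCESDIRECTORY
# Placeholder identity for the build agent's commits
git config user.email "build@example.com"
git config user.name "Build Agent"
git add .
# ***NO_CI*** in the message prevents the push from queuing another build
git commit -m "Update AssemblyInfo.cs ***NO_CI***"
# SYSTEM_ACCESSTOKEN is only populated when OAuth token access is enabled
git -c http.extraheader="AUTHORIZATION: bearer $Env:SYSTEM_ACCESSTOKEN" push origin HEAD:$Env:BUILD_SOURCEBRANCHNAME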
Alternatively, you could build a custom build/release task and use it in your build or release. For more information about custom build/release tasks, see Add a build task.
Note: checking in/pushing changes to the remote repository from a build or release is generally not recommended.
Related
I have a release pipeline with a QA/Smoke Test stage, that generates XML files containing test results.
If I run this manually on my machine I obviously have access to the XML files and can see the details, but on the agent I cannot, since we don't have access to the Microsoft-hosted agents to view the files.
Is there a way to pipe the files "out" in the task for viewing? maybe there's a third marketplace task that can achieve that?
Here's the deployment result:
2021-06-06T23:34:19.1260519Z Results File: D:\a\r1\a\qa-automation\TestResults\CurrentReport\Logs\junit.xml
2021-06-06T23:34:19.2448029Z Results File: D:\a\r1\a\qa-automation\TestResults\.\CurrentReport\Logs\detailedLogs.xml
2021-06-06T23:34:19.2533810Z
2021-06-06T23:34:19.2596243Z Failed! - Failed: 22, Passed: 2, Skipped: 0, Total: 24, Duration: 52 m 11 s - EED.dll (netcoreapp3.1)
Here's the stage YAML:
steps:
- script: |
    git clone https://.../qa-automation.git -b master
    cd qa-automation
    testrun.bat --cat "EDSmoke" --env dev
  displayName: 'Clone qa-automation repo'
Is there a way to pipe the files "out" in the task for viewing? maybe there's a 3rd marketplace task that can achieve that?
You can try the following logging command in a script task:
Write-host "##vso[task.uploadfile]<PathOfTheFiles>\<filename>"
Like:
Write-host "##vso[task.uploadfile]$(System.DefaultWorkingDirectory)\qa-automation\TestResults\CurrentReport\Logs\junit.xml"
View and download attachments associated with releases
Would you like to upload additional logs or diagnostics or images when
running tasks in a release? This feature enables users to upload
additional files during deployments. To upload a new file, use the
following agent command in your script:
Write-host "##vso[task.uploadfile]"
The file is then available as part of the release logs. When you
download all the logs associated with the release, you will be able to
retrieve this file as well.
You can also add a PowerShell script task to your release definition that reads the smoke test output and writes it to the console. You will then see the content of the log files in the "Logs" tab under that PowerShell step, and you can also click "Download all logs as zip" to download the smoke test result files.
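For example, a minimal sketch of such a step, assuming the junit.xml path from your deployment log:
# Path taken from the deployment log above
$log = "$(System.DefaultWorkingDirectory)\qa-automation\TestResults\CurrentReport\Logs\junit.xml"
# Print the file content into the step's log
Get-Content $log | Write-Host
# Also attach the file to the release logs for download
Write-Host "##vso[task.uploadfile]$log"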
I have this structure of projects (folders) in a Git repository:
/src
/src/Sample.Backend.Common
/src/Sample.Backend.Common.Tests
/src/Sample.Backend.Common.Domain
/src/Sample.Backend.Common.Domain.Tests
/src/Sample.Backend.Pricing.Abstractions
/src/Sample.Backend.Pricing.Domain
/src/Sample.Backend.Pricing.Domain.Tests
/src/Sample.Backend.Pricing.Persistence
/src/Sample.Backend.Pricing.Persistence.Tests
/src/Sample.Backend.Accounting.Abstractions
/src/Sample.Backend.Accounting.Domain
/src/Sample.Backend.Accounting.Domain.Tests
/src/Sample.Backend.Accounting.Persistence
/src/Sample.Backend.Accounting.Persistence.Tests
/src/Sample.Backend.Api
/src/Sample.Common
/src/Sample.Frontend.Common
/src/Sample.Frontend.Web
/src/Sample.Tests.Common
(The sample is simplified; in reality there are many more projects/folders.)
I want different pipelines for different parts. For example, a pipeline that is triggered whenever any file is committed to the master branch in any Backend project. Something like this:
trigger:
  branches:
    include:
      - master
  paths:
    include:
      - src/Sample.Backend.*
      - src/Sample.Common
      - src/Sample.Tests.Common
The problem is that the filter src/Sample.Backend.* does not work. I have to add the exact name of each Backend folder to get it working. I could use exclude instead, but I would have the same problem: there are many other projects and I would have to name them all.
I found that wildcards are not supported: https://github.com/MicrosoftDocs/azure-devops-docs/issues/397#issuecomment-422958966
Is there any other way to achieve the same result?
Does an Azure YAML pipeline support wildcards in the path filter of a trigger?
This is a known request on the product's main feature forum:
Support wildcards (*) in Trigger > Path Filters
This feature has not yet been implemented; you can add your comment and vote for it on UserVoice.
As a workaround, we add an inline PowerShell task as the first task. It runs git diff HEAD HEAD~ --name-only to get the names of the files modified in the latest commit, filters those names, and uses a logging command to set a variable that is then referenced in custom conditions on the subsequent steps of the build pipeline:
and(succeeded(), eq(variables['CustomVar'], 'True'))
Our inline PowerShell script:
cd $(System.DefaultWorkingDirectory)
$editedFiles = git diff HEAD HEAD~ --name-only
echo "$($editedFiles.Length) files modified:"
$editedFiles | ForEach-Object {
    echo $_
    Switch -Wildcard ($_) {
        'XXXX/Src/Sample.Backend.*' {
            Write-Host "##vso[task.setvariable variable=CustomVar]True"
        }
        'XXXX/Src/Sample.Common*' {
            Write-Host "##vso[task.setvariable variable=CustomVar]True"
        }
        'XXXX/Src/Sample.Tests.Common' {
            Write-Host "##vso[task.setvariable variable=CustomVar]True"
        }
    }
}
Then add the condition for all remaining tasks:
In this case, if the changed files do not match our filters, all the remaining tasks will be skipped.
UPDATE: 09/09/2021
This is now possible, as described here:
Wild cards can be used when specifying inclusion and exclusion branches for CI or PR triggers in a pipeline YAML file. However, they cannot be used when specifying path filters. For instance, you cannot include all paths that match src/app/**/myapp*. This has been pointed out as an inconvenience by several customers. This update fills this gap. Now, you can use wild card characters (**, *, or ?) when specifying path filters.
Note: the documentation does not seem to be updated yet.
Old answer:
No, this is not possible at the moment. There is even a feature request for it here, which I would recommend upvoting (I already did). Rick, in the above-mentioned topic, shared his idea of how to overcome the issue:
I currently achieve this by having 3 files:
azure-pipelines.yml ( This calls some python on each commit )
azure-pipelines.py (This checks for changed folders and has some parameters to ignore certain folders, then calls the API directly)
azure-pipelines-trigger.yml ( This is called by the python based on the changed folders )
It works well enough, but it is unfortunate for the need to go through these loops.
But it requires extra work.
This feature will roll out over the next two to three weeks, according to the latest release notes.
Update on this.
It took a few weeks, but the change mentioned by pavlo in the comments above finally rolled out, and wildcards in path trigger filters are now supported in YAML.
Most of my builds are from either feature branches or develop, and so I tend to use a known build variable to track the build number such as:
variables:
  - group: BuildVars

name: $(BuildPrefix)$(Rev:r)
This works and provides me with a nicely formatted build version that I can then follow through into releases, etc:
However, when we're planning a release, we name our branches after the release, for example: release/1.1, and I'd like to have the build name reference that instead of the hardcoded (previous) version.
I know that I can reference the branch name via the Build.SourceBranch variable, but I don't see an obvious way to read and modify it outside of a build step, by which time I believe it's too late? I don't really want to have to change the BuildPrefix variable manually until the release has been deployed to production.
Building on from this would then be the ability to append appropriate pre-release tags, etc., based on the branch name...
You can always update the build name during the execution of a build using this:
- pwsh: |
    Write-Host "##vso[build.updatebuildnumber]value_goes_here"
So you could calculate the value in the same (or a previous) step and update the build name with that value.
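For example, a sketch of a step that derives the prefix from a release/* branch and otherwise falls back to a variable; the BUILDPREFIX environment-variable name is an assumption for how a BuildPrefix pipeline variable surfaces in the script:
# e.g. "refs/heads/release/1.1" -> prefix "1.1"
$branch = $Env:BUILD_SOURCEBRANCH
if ($branch -like "refs/heads/release/*")
{
    $prefix = $branch.Substring("refs/heads/release/".Length)
}
else
{
    # Assumed environment name of the BuildPrefix variable
    $prefix = $Env:BUILDPREFIX
}
# Build.BuildId keeps the generated name unique across runs
Write-Host "##vso[build.updatebuildnumber]$prefix.$Env:BUILD_BUILDID"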
Is it possible to change the name of a build based on the branch name in Azure Pipelines?
The answer is yes.
The solution we currently use is to add a Run Inline Powershell task that updates the build number based on Build_SourceBranchName:
# $DevBuildNumber, $BetaBuildNumber and $TestBuildNumber are assumed
# to be defined elsewhere in the inline script or task parameters
$branch = $Env:Build_SourceBranchName
Write-Host "Current branch is $branch"

if ($branch -eq "Dev")
{
    Write-Host "##vso[build.updatebuildnumber]$DevBuildNumber"
}
elseif ($branch -eq "Beta")
{
    Write-Host "##vso[build.updatebuildnumber]$BetaBuildNumber"
}
elseif ($branch -eq "Test")
{
    Write-Host "##vso[build.updatebuildnumber]$TestBuildNumber"
}
Check the Logging Commands documentation for more details on how this works during the build.
Hope this helps.
While trying to rig up a solution to build with Cake v0.19.1 on a machine that has only ever had Visual Studio 2017 installed, I can't seem to get NuGetRestore to accept a setting of MSBuildVersion = NuGetMSBuildVersion.MSBuild15.
Is there some magic step to getting a specific MSBuild version into NuGetRestore that I am missing?
Output
...
========================================
RestoreNuGet
========================================
Executing task: RestoreNuGet
Failed to load msbuild Toolset
Could not load file or assembly 'Microsoft.Build, Version=14.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a' or one of its dependencies. The system cannot find the file specified.
An error occurred when executing task 'RestoreNuGet'.
Error: NuGet: Process returned an error (exit code 1).
Trimmed-down build.cake
var target = Argument("target", "Default");
var solution = "./some-random.sln";

Task("Default")
    .Does(() => {
        NuGetRestore(
            solution,
            new NuGetRestoreSettings {
                MSBuildVersion = NuGetMSBuildVersion.MSBuild15,
            }
        );
    });

RunTarget(target);
Update: getting NuGet v4
Per @devlead's answer, I pointed the build.ps1 file at v4.0.0 of NuGet and got this output.
Cannot find the specified version of msbuild: '15'
An error occurred when executing task 'RestoreNuGet'.
Error: NuGet: Process returned an error (exit code 1).
In my full build.cake, I use vswhere for the later MSBuildSettings, and I can get it to dump out the MSBuild path it found (I confirmed that the exe exists in Explorer).
C:/Program Files (x86)/Microsoft Visual Studio/2017/Enterprise/MSBuild/15.0/Bin/amd64/MSBuild.exe
What you could try is to use the MSBuild alias with the restore target; the latest version of MSBuild should have built-in NuGet support.
MSBuild(
    "./some.sln",
    configurator => configurator.WithTarget("restore"));
Make sure you're using the latest version of NuGet.exe; currently that's v4.0.0, but you can also see a list of available versions at https://dist.nuget.org.
If you're using the default build.ps1, you could modify it to always download a specific version of NuGet.exe.
You can do this by removing the Test-Path parts, so it won't look for nuget.exe anywhere else but your tools folder.
Then change the download URI to use a specific version instead of the latest stable (currently v3.5.0) by changing, in build.ps1,
$NUGET_URL = "https://dist.nuget.org/win-x86-commandline/latest/nuget.exe"
to
$NUGET_URL = "https://dist.nuget.org/win-x86-commandline/v4.0.0/nuget.exe"
This will ensure you always download v4.0.0 of the exe.
It's also possible, with a little PowerShell, to verify that the correct version is in the tools folder, for example:
if ((Get-ChildItem $NUGET_EXE |
        ForEach-Object VersionInfo |
        ForEach-Object ProductVersion |
        Where-Object { $_ -eq '4.0.0' } |
        Measure-Object).Count -eq 1)
{
    'Correct version'
}
else
{
    'Incorrect version'
}
I am getting the error "Test reports were found but none of them are new. Did tests run?" when trying to send unit test results by email. The reason is that I have a dedicated Jenkins job that imports the artifacts from a test job into itself and sends the test results by email. I'm doing this because I don't want Jenkins to email all the developers during the night :) so I am postponing the email sending, since Jenkins itself sadly does not support delayed email notifications.
However, by the time the "send test results by email" job executes, the tests are hours old and I get the error from the question title. Any ideas on how to get around this problem?
You could try updating the timestamps of the test reports as a build step ("Execute shell script"). E.g.
cd path/to/test/reports
touch *.xml
mvn clean test
via the terminal or Jenkins. This generates new test reports.
The other answer's suggestion of cd path/to/test/reports and touch *.xml didn't work for me, but mvn clean test did.
Updating the last-modified date can also be achieved in Gradle itself if desired:
task jenkinsTest {
    inputs.files test.outputs.files
    doLast {
        def timestamp = System.currentTimeMillis()
        test.testResultsDir.eachFile { it.lastModified = timestamp }
    }
}

build.dependsOn(jenkinsTest)
As mentioned here: http://www.practicalgradle.org/blog/2011/06/incremental-tests-with-jenkins/
Here's an updated version for Jenkinsfile (Declarative Pipeline):
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'make build'
            }
        }
        stage('Test') {
            steps {
                sh 'make test'
                script {
                    def testResults = findFiles(glob: 'build/reports/**/*.xml')
                    for (xml in testResults) {
                        touch xml.getPath()
                    }
                }
            }
        }
    }
    post {
        always {
            archiveArtifacts artifacts: 'build/libs/**/*.jar', fingerprint: true
            junit 'build/reports/**/*.xml'
        }
    }
}
Because Gradle caches results from previous builds, I ran into the same problem.
I fixed it by adding this line to my publish stage:
sh 'find . -name "TEST-*.xml" -exec touch {} \\;'
So my file is like this:
....
stage('Unit Tests') {
    sh './gradlew test'
}
stage('Publish Results') {
    // Fool Jenkins into thinking the test results are new
    sh 'find . -name "TEST-*.xml" -exec touch {} \\;'
    junit '**/build/test-results/test/TEST-*.xml'
}
Had the same issue for jobs running repeatedly (every 30 minutes).
For the job, go to Configure > Build > Advanced and, within the Switches section, add:
--stacktrace
--continue
--rerun-tasks
This worked for me:
Navigate to the report directory: cd /report_directory
Delete all older reports: rm *.xml
Add junit report_directory/*.xml in the pipeline.
Rerun the test script, then navigate to Build Number → Test Result.
Make sure you have one successful build without any failures; only after that will you be able to see the reports.
Make sure that you have specified the correct path against "Test report XMLs" in the Jenkins configuration, such as "target/surefire-reports/*.xml".
There is no need to touch *.xml, as Jenkins won't complain even if the test results XML files have not changed.
If you use a Windows agent, you can 'touch' the results using a Groovy pipeline stage with PowerShell:
powershell 'ls "junitreports\\*.*" | foreach-object { $_.LastWriteTime = Get-Date }'
It happens if you are using a test report that is not modified by that job in that run.
If, for testing purposes, you are testing with an already-created file, add the commands below to the Jenkins job under Build > Execute Shell:
chmod -R 775 /root/.jenkins/workspace/JmeterTest/output.xml
echo " " >> /root/.jenkins/workspace/JmeterTest/output.xml
The commands above change the file's timestamp, so the error is not displayed.
Note: to achieve the same in Execute Shell, do not try renaming the file with the mv command etc.; it won't work. Only appending to the file (and deleting the appended content afterwards) changes the file timestamp.
For me, commands like chmod -R 775 test-results.xml or touch test-results.xml did not work due to a permission error. As a workaround, set a new file name in the test report settings and add a command that copies the old XML report file to the new file.
You can add the following shell command to your "Pre Steps" section when configuring your job in Jenkins:
mvn clean test
This cleans the previous test results and reruns the tests, generating fresh reports.
Here's an updated version of the Gradle task that touches each test result file.
From the Jenkins pipeline script, just call the "testAndTouchTestResult" task instead of the "test" task.
The code below is with Kotlin syntax:
tasks {
    register("testAndTouchTestResult") {
        setGroup("verification")
        setDescription("touch Test Results for Jenkins")
        inputs.files(test.get().outputs)
        doLast {
            val timestamp = System.currentTimeMillis()
            fileTree(test.get().reports.junitXml.destination).forEach { f ->
                f.setLastModified(timestamp)
            }
        }
    }
}
The solution for me was to delete node_modules and change the Node version (from 7.1 to 8.4) on Jenkins. That's it.