Why is my NuGet package push filtered out in VSTS (Azure DevOps)?

I have an Azure DevOps pipeline that compiles correctly, and the log files indicate a successful "pack" into "D:\a\1\a\Packages\Rvi.LA.ObjetsMetiers.1.1.0.nupkg".
For the NuGet push step, I can see the following in the debug log:
2019-02-14T14:19:43.5995520Z ##[debug]pattern: 'D:\a\1\a\Packages\RVI.LA.ObjetsMetiers*.nupkg'
2019-02-14T14:19:44.9183973Z ##[debug]expanding braces
2019-02-14T14:19:44.9184020Z ##[debug]pattern: 'D:/a/1/a/Packages/RVI.LA.ObjetsMetiers*.nupkg'
2019-02-14T14:19:44.9209179Z ##[debug]findPath: 'D:\a\1\a\Packages'
2019-02-14T14:19:44.9209259Z ##[debug]statOnly: 'false'
2019-02-14T14:19:44.9212246Z ##[debug]findPath: 'D:\a\1\a\Packages'
2019-02-14T14:19:44.9212452Z ##[debug]findOptions.allowBrokenSymbolicLinks: 'undefined'
2019-02-14T14:19:44.9212597Z ##[debug]findOptions.followSpecifiedSymbolicLink: 'undefined'
2019-02-14T14:19:44.9212885Z ##[debug]findOptions.followSymbolicLinks: 'undefined'
2019-02-14T14:19:44.9223644Z ##[debug] D:\a\1\a\Packages (directory)
2019-02-14T14:19:44.9225732Z ##[debug] D:\a\1\a\Packages\Rvi.LA.ObjetsMetiers.1.1.0.nupkg (file)
2019-02-14T14:19:44.9225814Z ##[debug]2 results
2019-02-14T14:19:44.9225888Z ##[debug]found 2 paths
So it finds two results, and reports "found 2 paths" because it counts the Packages directory as well as the file. In any case, it successfully finds the package that needs to be pushed and detects that it is a file.
The problem is in the following part of the log:
2019-02-14T14:19:44.9225984Z ##[debug]applying include pattern
2019-02-14T14:19:44.9235322Z ##[debug]0 matches
2019-02-14T14:19:44.9235403Z ##[debug]0 final results
2019-02-14T14:19:44.9247396Z ##[warning]No packages matched the search pattern.
2019-02-14T14:19:44.9247569Z ##[debug]Processed: ##vso[task.issue type=warning;]No packages matched the search pattern.
It seems to be excluded by the include pattern, which is "$(Build.ArtifactStagingDirectory)\Packages\$(NomNuspec)*.nupkg" and resolves to "D:\a\1\a\Packages\RVI.LA.ObjetsMetiers*.nupkg" in the log above.
I don't understand why it is not found. Is there something obvious I'm missing, even though two people have looked at this many times?

Got it.
The package was packed as "Rvi.LA.ObjetsMetiers.1.1.0.nupkg", but the filter used "RVI" in uppercase. I did see it, but thought "case sensitivity does not matter with a file name!" It does: the pattern matching here is case-sensitive. The last update told me this was a real possibility.
I modified my nuspec file to pack as "RVI.LA.ObjetsMetiers" instead of "Rvi.LA.ObjetsMetiers", and the original pattern "$(Build.ArtifactStagingDirectory)\Packages\$(NomNuspec)*.nupkg" now works, since $(NomNuspec) resolves to "RVI.LA.ObjetsMetiers" as stated above.
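For reference, here is roughly what such a push step looks like in YAML with the case-matching pattern — a sketch assuming the NuGetCommand task and a hypothetical feed name:
steps:
- task: NuGetCommand@2
  displayName: 'NuGet push'
  inputs:
    command: 'push'
    # Case matters: the pattern must match the packed file name exactly.
    packagesToPush: '$(Build.ArtifactStagingDirectory)\Packages\$(NomNuspec)*.nupkg'
    nuGetFeedType: 'internal'
    publishVstsFeed: 'MyFeed' # hypothetical feed name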
I will suggest that NuGet drop the case sensitivity, since on Windows there is no way to have two files named "fileA.txt" and "FileA.txt" side by side anyway.

Related

How to exclude artifact files after web deployment?

We have a build/release pipeline that is finally working correctly, but the developer asked that we exclude the stage config files (Web.Dev.config, Web.Test.config, Web.Prod.config) as well as the artifact archive itself from the site/wwwroot.
Every time we deploy, these zip files get stored in the site root as well. They aren't harmful, but it doesn't look good.
This is the Release App Service Web Deploy YAML:
steps:
- task: AzureRmWebAppDeployment@4
  displayName: 'Azure App Service Deploy: project-123'
  inputs:
    azureSubscription: 'Azure Dev Service Connection'
    WebAppName: 'project-123'
    packageForLinux: '$(System.DefaultWorkingDirectory)/Project123 Dev Build Artifact/Release'
    enableCustomDeployment: true
    enableXmlTransform: true
How do we exclude those files after successful deployment?
Kudu dir structure:
Building on @theWinterCoder's answer: unfortunately, there doesn't appear to be a way to honor the MSDeploySkipRules defined in the csproj file. Instead, files and folders can be skipped by defining the AdditionalArguments parameter of the Azure App Service Deploy (AzureRmWebAppDeployment) task.
Since there doesn't appear to be any official documentation for the -skip rules, and the MSDeploy.exe documentation that Azure Pipelines references is out of date, richard-szalay's 2012 article "Demystifying MSDeploy skip rules" provides a lot of useful detail for anyone requiring additional control.
Brief Explanation:
dirPath is the Web Deploy provider used to skip a directory, whilst the filePath provider is used to skip an individual file.
The dirPath starts at wwwroot.
For ASP.NET Core applications, there’s another wwwroot under wwwroot; as such, the absolutePath in that case would look like this: absolutePath=wwwroot\\somefoldername which would map to D:\home\site\wwwroot\wwwroot\somefoldername
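Putting those two pieces together, a skip rule for that nested layout would look like this (somefoldername being a placeholder):
-skip:objectName=dirPath,absolutePath=wwwroot\\somefoldername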
Solution:
Therefore, since I'm skipping files, I set the Web Deploy provider to filePath, and since we're not using .NET Core, absolutePath is set to Web.Dev.config. That maps to D:\home\site\wwwroot\Web.Dev.config.
The same applies to the zip artifact; however, if we don't prepend \\ before the wildcard, it fails with the following error:
Error: Error: The regular expression '*.zip' is invalid. Error: parsing "*.zip" - Quantifier {x,y} following nothing. Error count: 1.
-skip:objectName=filePath,absolutePath=Web.Dev.config
-skip:objectName=filePath,absolutePath=Web.Prod.config
-skip:objectName=filePath,absolutePath=Web.Test.config
-skip:objectName=filePath,absolutePath=\\*.zip
or combined into a single regular expression:
-skip:objectName=filePath,absolutePath="Web.Dev.config|Web.Prod.config|Web.Test.config|\\*.zip"
That's it 😃
You can add an AdditionalArguments line to the YAML that tells it to skip certain files. It will look something like this:
AdditionalArguments: '-skip:objectName=dirPath,absolutePath=wwwroot\\Uploads'
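Combined with the task from the question, the inputs might look like this (a sketch; the skip rules are the ones listed in the answer above):
steps:
- task: AzureRmWebAppDeployment@4
  displayName: 'Azure App Service Deploy: project-123'
  inputs:
    azureSubscription: 'Azure Dev Service Connection'
    WebAppName: 'project-123'
    packageForLinux: '$(System.DefaultWorkingDirectory)/Project123 Dev Build Artifact/Release'
    enableCustomDeployment: true
    enableXmlTransform: true
    AdditionalArguments: '-skip:objectName=filePath,absolutePath="Web.Dev.config|Web.Prod.config|Web.Test.config|\\*.zip"'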
More details can be found in this thread

Do Azure YAML pipelines support wildcards in trigger path filters?

I have this structure of projects (folders) in git repository:
/src
/src/Sample.Backend.Common
/src/Sample.Backend.Common.Tests
/src/Sample.Backend.Common.Domain
/src/Sample.Backend.Common.Domain.Tests
/src/Sample.Backend.Pricing.Abstractions
/src/Sample.Backend.Pricing.Domain
/src/Sample.Backend.Pricing.Domain.Tests
/src/Sample.Backend.Pricing.Persistence
/src/Sample.Backend.Pricing.Persistence.Tests
/src/Sample.Backend.Accounting.Abstractions
/src/Sample.Backend.Accounting.Domain
/src/Sample.Backend.Accounting.Domain.Tests
/src/Sample.Backend.Accounting.Persistence
/src/Sample.Backend.Accounting.Persistence.Tests
/src/Sample.Backend.Api
/src/Sample.Common
/src/Sample.Frontend.Common
/src/Sample.Frontend.Web
/src/Sample.Tests.Common
(The sample is simplified; in reality there are many more projects/folders.)
I want different pipelines for different parts, for example a pipeline triggered whenever any file is committed to the master branch in any Backend project. Something like this:
trigger:
  branches:
    include:
      - master
  paths:
    include:
      - src/Sample.Backend.*
      - src/Sample.Common
      - src/Sample.Tests.Common
The problem is that the filter src/Sample.Backend.* does not work. I would have to add the exact name of every Backend folder to get it working. I could use exclude instead, but I'd have the same problem: there are many other projects and I would have to name them all.
I found that wildcards are not supported: https://github.com/MicrosoftDocs/azure-devops-docs/issues/397#issuecomment-422958966
Is there any other way to achieve the same result?
Do Azure YAML pipelines support wildcards in trigger path filters?
This is a known request on our main product forum:
Support wildcards (*) in Trigger > Path Filters
This feature has not yet been implemented; you can add your comment and vote for it on UserVoice.
As a workaround, we add an inline PowerShell task as the first task; it runs git diff HEAD HEAD~ --name-only to get the names of the files modified in the latest commit, filters those names, and uses a logging command to set variables, which are then referenced in custom conditions on the subsequent steps of the build pipeline:
and(succeeded(), eq(variables['CustomVar'], 'True'))
Our inline PowerShell script:
cd $(System.DefaultWorkingDirectory)
$editedFiles = git diff HEAD HEAD~ --name-only
echo "$($editedFiles.Length) files modified:"
$editedFiles | ForEach-Object {
    echo $_
    Switch -Wildcard ($_) {
        'XXXX/Src/Sample.Backend.*' {
            Write-Host ("##vso[task.setvariable variable=CustomVar]True")
        }
        'XXXX/Src/Sample.Common*' {
            Write-Host ("##vso[task.setvariable variable=CustomVar]True")
        }
        'XXXX/Src/Sample.Tests.Common' {
            Write-Host ("##vso[task.setvariable variable=CustomVar]True")
        }
    }
}
Then add the condition for all remaining tasks:
In this case, if the changed files do not meet our filters, then all remaining tasks will be skipped.
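In YAML, attaching that condition to a step might look like this (a sketch using an inline PowerShell step as a stand-in for the real work):
steps:
- task: PowerShell@2
  displayName: 'Runs only for backend changes'
  condition: and(succeeded(), eq(variables['CustomVar'], 'True'))
  inputs:
    targetType: 'inline'
    script: Write-Host 'CustomVar was set by the git diff filter step'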
UPDATE: 09/09/2021
This is possible now, as announced here:
Wild cards can be used when specifying inclusion and exclusion branches for CI or PR triggers in a pipeline YAML file. However, they cannot be used when specifying path filters. For instance, you cannot include all paths that match src/app/**/myapp*. This has been pointed out as an inconvenience by several customers. This update fills this gap. Now, you can use wild card characters (**, *, or ?) when specifying path filters.
Note: the documentation does not seem to be updated yet.
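With that update, the trigger from the question should work as written, e.g.:
trigger:
  branches:
    include:
      - master
  paths:
    include:
      - src/Sample.Backend.*
      - src/Sample.Common
      - src/Sample.Tests.Common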
Old answer:
No, this is not possible at the moment. There is even a feature request for it, which I would recommend upvoting (I already did). Rick, in the above-mentioned topic, shared his idea of how to work around the issue:
I currently achieve this by having 3 files:
azure-pipelines.yml ( This calls some python on each commit )
azure-pipelines.py (This checks for changed folders and has some parameters to ignore certain folders, then calls the API directly)
azure-pipelines-trigger.yml ( This is called by the python based on the changed folders )
It works well enough, but it is unfortunate for the need to go through these loops.
But it needs extra work.
This feature will roll out over the next two to three weeks, according to the latest release notes.
Update on this.
It took a few weeks, but the change mentioned by pavlo in the comments above finally rolled out, and wildcards in path filters are now supported in YAML.

Replace values in multiple appsettings files stored in different artifacts during VSTS Release pipeline

I am trying to replace variables in appsettings.json and e2e-appsettings.json, which are stored in two different artifacts, during the release pipeline.
With the configuration below, ONLY appsettings.json is updated; for the second file it gives an error:
error: NO JSON file matched with specific pattern: e2e/XXX.EndToEnd.Integration.Tests/e2e-appsettings.json
According to the documentation, the pattern should be a path relative to the root. In this case I'm not sure what the root is, since there are two build artifacts.
Further, the artifact download log says:
2019-04-09T02:33:55.9132583Z Downloaded e2e/XXX.EndToEnd.Integration.Tests/e2e-appsettings.json to D:\a\r1\a\EstimationCore\e2e\XXX.EndToEnd.Integration.Tests\e2e-appsettings.json
The artifact containing the other appsettings.json file, which works fine, is a zip file; its log line is: Downloading app/app.zip to D:\a\r1\a\EstimationCore\app\app.zip
I have already tried the following patterns, which all gave the same error:
- NO JSON file matched with specific pattern: e2e/XXX.EndToEnd.Integration.Tests/e2e-appsettings.json.
- NO JSON file matched with specific pattern: **/*e2e-appsettings.json
- NO JSON file matched with specific pattern: d:\a\r1\a\EstimationCore\e2e\XXX.EndToEnd.Integration.Tests\e2e-appsettings.json.
- NO JSON file matched with specific pattern: d:\a\r1\a\EstimationCore\**\**\e2e-appsettings.json.

How can I get "HelloWorld - BitBake Style" working on a newer version of Yocto?

In the book "Embedded Linux Systems with the Yocto Project", Chapter 4 contains a sample called "HelloWorld - BitBake style". I encountered a bunch of problems trying to get the old example working against the "Sumo" release 2.5.
If you're like me, the first error you encountered following the book's instructions was that you copied across bitbake.conf and got:
ERROR: ParseError at /tmp/bbhello/conf/bitbake.conf:749: Could not include required file conf/abi_version.conf
And after copying over abi_version.conf as well, you kept finding more and more cross-connected files that needed to be moved, and then some relative-path errors after that... Is there a better way?
Here's a series of steps which can allow you to bitbake nano based on the book's instructions.
Unless otherwise specified, these samples and instructions are all based on the online copy of the book's code-samples. While convenient for copy-pasting, the online resource is not totally consistent with the printed copy, and contains at least one extra bug.
Initial workspace setup
This guide assumes that you're working with Yocto release 2.5 ("sumo"), installed into /tmp/poky, and that the build environment will go into /tmp/bbhello. If you don't have the Poky tools and libraries already, the easiest way is to clone them with:
$ git clone -b sumo git://git.yoctoproject.org/poky.git /tmp/poky
Then you can initialize the workspace with:
$ source /tmp/poky/oe-init-build-env /tmp/bbhello/
If you start a new terminal window, you'll need to repeat the previous command, which will get your shell environment set up again; it should not replace any of the files created inside the workspace the first time.
Wiring up the defaults
The oe-init-build-env script should have just created these files for you:
bbhello/conf/local.conf
bbhello/conf/templateconf.cfg
bbhello/conf/bblayers.conf
Keep these; they supersede some of the book's instructions, meaning that you should not create the following files:
bbhello/classes/base.bbclass
bbhello/conf/bitbake.conf
Similarly, do not overwrite bbhello/conf/bblayers.conf with the book's sample. Instead, edit it to add a single line pointing to your own meta-hello folder, e.g.:
BBLAYERS ?= " \
${TOPDIR}/meta-hello \
/tmp/poky/meta \
/tmp/poky/meta-poky \
/tmp/poky/meta-yocto-bsp \
"
Creating the layer and recipe
Go ahead and create the following files from the book-samples:
meta-hello/conf/layer.conf
meta-hello/recipes-editor/nano/nano.bb
We'll edit these files gradually as we hit errors.
Can't find recipe error
The error:
ERROR: BBFILE_PATTERN_hello not defined
This is caused by the book-website's bbhello/meta-hello/conf/layer.conf being internally inconsistent: it uses the collection name "hello" but on the next two lines uses _test suffixes. Just change them to _hello to match:
# Set layer search pattern and priority
BBFILE_COLLECTIONS += "hello"
BBFILE_PATTERN_hello := "^${LAYERDIR}/"
BBFILE_PRIORITY_hello = "5"
Interestingly, this error is not present in the printed copy of the book.
No license error
The error:
ERROR: /tmp/bbhello/meta-hello/recipes-editor/nano/nano.bb: This recipe does not have the LICENSE field set (nano)
ERROR: Failed to parse recipe: /tmp/bbhello/meta-hello/recipes-editor/nano/nano.bb
This can be fixed by adding a license setting with one of the values that BitBake recognizes. In this case, add this line to nano.bb:
LICENSE="GPLv3"
Recipe parse error
ERROR: ExpansionError during parsing /tmp/bbhello/meta-hello/recipes-editor/nano/nano.bb
[...]
bb.data_smart.ExpansionError: Failure expanding variable PV_MAJOR, expression was ${@bb.data.getVar('PV',d,1).split('.')[0]} which triggered exception AttributeError: module 'bb.data' has no attribute 'getVar'
This is fixed by updating the special Python expressions used in the recipe, because bb.data was deprecated and has now been removed. Replace it with d, e.g.:
PV_MAJOR = "${@d.getVar('PV',d,1).split('.')[0]}"
PV_MINOR = "${@d.getVar('PV',d,1).split('.')[1]}"
License checksum failure
ERROR: nano-2.2.6-r0 do_populate_lic: QA Issue: nano: Recipe file fetches files and does not have license file information (LIC_FILES_CHKSUM) [license-checksum]
This can be fixed by adding a directive to the recipe telling it what license-info-containing file to grab, and what checksum we expect it to have.
We can follow the way the recipe generates the SRC_URI, and modify it slightly to point at the COPYING file in the same web-directory. Add this line to nano.bb:
LIC_FILES_CHKSUM = "${SITE}/v${PV_MAJOR}.${PV_MINOR}/COPYING;md5=f27defe1e96c2e1ecd4e0c9be8967949"
The MD5 checksum in this case came from manually downloading and inspecting the matching file.
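For example, assuming SITE resolves to the nano download directory used by the recipe's SRC_URI (check your own recipe), one way to compute it:
$ wget http://www.nano-editor.org/dist/v2.2/COPYING
$ md5sum COPYING
f27defe1e96c2e1ecd4e0c9be8967949  COPYING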
Done!
Now bitbake nano ought to work, and when it completes you should see that it has built nano:
/tmp/bbhello $ find ./tmp/deploy/ -name "*nano*.rpm*"
./tmp/deploy/rpm/i586/nano-dbg-2.2.6-r0.i586.rpm
./tmp/deploy/rpm/i586/nano-dev-2.2.6-r0.i586.rpm
I recently worked through that hands-on hello-world project. In my view, the source code in the book contains some bugs. Below is a list of suggested fixes:
Inheriting native class
When you build with the bitbake you got from Poky, it builds only for the target unless you state in your recipe that you are building for the host machine (native). You can do that by adding this line at the end of your recipe:
inherit native
Adding license information
It is worth mentioning that the LICENSE variable must be set in every recipe, otherwise BitBake raises an error. In our case, we are building version 2.2.6 of the nano editor, whose license is GPLv3, hence it should be declared as follows:
LICENSE = "GPLv3"
Using os.system calls
As the book states, you cannot dereference metadata directly from a Python function, which means it is mandatory to access metadata through the d dictionary. Below is a suggestion for the do_unpack Python function; you can apply the same concept to the next tasks (do_configure, do_compile) — see the sketch after the listing:
python do_unpack() {
    workdir = d.getVar("WORKDIR", True)
    dl_dir = d.getVar("DL_DIR", True)
    p = d.getVar("P", True)
    tarball_name = os.path.join(dl_dir, p + ".tar.gz")
    bb.plain("Unpacking tarball")
    os.system("tar -x -C " + workdir + " -f " + tarball_name)
    bb.plain("tarball unpacked successfully")
}
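Following the same pattern, a do_compile task might look roughly like this (my own sketch, not from the book; the unpack location and the configure/make invocation are assumptions about nano's autotools build):
python do_compile() {
    # Assumption: do_unpack above extracted the sources to ${WORKDIR}/${P}
    workdir = d.getVar("WORKDIR", True)
    p = d.getVar("P", True)
    src_dir = os.path.join(workdir, p)
    bb.plain("Configuring and compiling nano")
    # nano uses an autotools build: run configure, then make, in the source tree
    os.system("cd " + src_dir + " && ./configure && make")
    bb.plain("compilation finished")
}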
Launching the nano editor
After successfully building your nano editor package, you can find the nano executable in the following directory, in case you are using Ubuntu (arch x86_64):
./tmp/work/x86_64-linux/nano/2.2.6-r0/src/nano
Should you have any comments or questions, don't hesitate!

Copy all files with given extension to output directory using CMake

I've seen that I can use this command in order to copy a directory using CMake:
file(COPY "myDir" DESTINATION "myDestination")
(from this post)
My problem is that I don't want to copy all of myDir, but only the .h files that are in there. I've tried with
file(COPY "myDir/*.h" DESTINATION "myDestination")
but I obtain the following error:
CMake Error at CMakeLists.txt:23 (file):
  file COPY cannot find
  "/full/path/to/myDIR/*.h".
How can I filter the files that I want to copy to a destination folder?
I found the solution myself:
file(GLOB MY_PUBLIC_HEADERS
"myDir/*.h"
)
file(COPY ${MY_PUBLIC_HEADERS} DESTINATION myDestination)
this also works for me:
install(DIRECTORY "myDir/"
        DESTINATION "myDestination"
        FILES_MATCHING PATTERN "*.h")
The alternative approach provided by jepessen does not take into account that the number of files to copy can be too high. I encountered the issue when copying a large number of headers (more than 110 files).
Due to the Windows limit on the number of characters in a single command line (2047 or 8191), this approach may fail depending on how many headers are in the folder. More info here: https://support.microsoft.com/en-gb/help/830473/command-prompt-cmd-exe-command-line-string-limitation
Here is my solution:
file(GLOB MY_HEADERS myDir/*.h)
foreach(CurrentHeaderFile IN LISTS MY_HEADERS)
  add_custom_command(
    TARGET MyTarget PRE_BUILD
    COMMAND ${CMAKE_COMMAND} -E copy_if_different ${CurrentHeaderFile} ${myDestination}
    COMMENT "Copying header: ${CurrentHeaderFile}")
endforeach()
This works like a charm on macOS. However, if you have another target that depends on MyTarget and needs these headers, you may get compile errors on Windows due to includes not being found. Therefore you may prefer the following option, which defines an intermediate target.
function(CopyFile ORIGINAL_TARGET FILE_PATH COPY_OUTPUT_DIRECTORY)
  # Copy to disk at build time so that when the header file changes,
  # it is detected by the build system.
  set(input ${FILE_PATH})
  get_filename_component(file_name ${FILE_PATH} NAME)
  set(output ${COPY_OUTPUT_DIRECTORY}/${file_name})
  set(copyTarget ${ORIGINAL_TARGET}-${file_name})
  add_custom_target(${copyTarget} DEPENDS ${output})
  add_dependencies(${ORIGINAL_TARGET} ${copyTarget})
  add_custom_command(
    DEPENDS ${input}
    OUTPUT ${output}
    COMMAND ${CMAKE_COMMAND} -E copy_if_different ${input} ${output}
    COMMENT "Copying file to ${output}."
  )
endfunction()

foreach(HeaderFile IN LISTS MY_HEADERS)
  CopyFile(MyTarget ${HeaderFile} ${myDestination})
endforeach()
The downside is that you end up with multiple targets (one per copied file), but they should all appear together (alphabetically) since they share the same ORIGINAL_TARGET prefix ("MyTarget").