Deploy a .NET Core application with AppVeyor: file locked by external process (AppVeyor agent)

I'm trying to deploy a project to a server using the AppVeyor agent. However, if I do not restart or stop the application before deployment, it does not work:
Web Deploy cannot modify the file 'TestProject.Application.dll' on the destination because it is locked by an external process. In order to allow the publish operation to succeed, you may need to either restart your application to release the lock, or use the AppOffline rule handler for .Net applications on your next publish attempt. Learn more at: http://go.microsoft.com/fwlink/?LinkId=221672#ERROR_FILE_IN_USE.
Is there an easy way to work with an app_offline.htm file? An appveyor.yml configuration using the "app_offline" feature does not work in this kind of environment.
I was looking for something in the "before/after" section. Here's my appveyor.yml:
version: '1.0.{build}'
os: Visual Studio 2015

install:
  - "SET PATH=C:\\Program Files\\dotnet\\bin;%PATH%"

branches:
  only:
    - master

assembly_info:
  patch: true
  file: '**\AssemblyInfo.*'
  assembly_version: '{version}'
  assembly_file_version: '{version}'
  assembly_informational_version: '{version}'

build_script:
  - nuget sources add -name "VNext" -source https://dotnet.myget.org/F/cli-deps/api/v3/index.json
  - nuget sources add -name "nugetv3" -source https://api.nuget.org/v3/index.json
  - dotnet restore
  - dotnet build */*/project.json

after_build:
  - ps: Remove-Item -Path src\TestProject.Web\web.config
  - ps: Move-Item -Path src\TestProject.Web\web.$env:APPVEYOR_REPO_BRANCH.config -Destination src\TestProject.Web\web.config
  - dotnet publish src\TestProject.Web\ --output %appveyor_build_folder%\publish

artifacts:
  - path: .\publish
    name: TestProject.Web

test: off

deploy:
  - provider: Environment
    name: east-webhost
    artifact: TestProject.Web
    remove_files: false
    on:
      branch: master

Please look at before/after deploy scripts. Also check this sample on how you can ensure that the file is released.
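For illustration, a sketch of that idea, with placeholder names that are not from the project: ship hook scripts with the artifact so the deployment agent can take the site offline around the file copy (if I read the agent docs right, scripts named before-deploy.ps1 and after-deploy.ps1 in the artifact root are executed on the target server):

# appveyor.yml fragment -- copy the (hypothetical) hook scripts into the
# publish folder so they travel with the TestProject.Web artifact.
#
# deploy\before-deploy.ps1 would contain, for example:
#   Stop-WebAppPool -Name "TestProjectPool"    # releases the lock on the DLLs
# deploy\after-deploy.ps1:
#   Start-WebAppPool -Name "TestProjectPool"   # bring the site back up
after_build:
  - ps: Copy-Item deploy\*.ps1 "$env:APPVEYOR_BUILD_FOLDER\publish"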
--ilya.


Is it possible to split up a GitHub workflow such that each step has a separate badge?

I am relatively new to GitHub workflows and testing. I am working in a private GitHub repository with a dozen colleagues. We want to avoid using services like CircleCI for the time being and see how much we can do with just the integrated GitHub actions, since we are unsure about the kind of access a third party service would be getting to the repo.
Currently, we have two workflows (each one tests the same code for a separate Python environment) that get triggered on push or pull request in the master branch.
The steps of the workflow are as follows (the full workflow yml file is given at the bottom):
Install Anaconda
Create the conda environment (installing dependencies)
Patch libraries
Build a 3rd party library
Run python unit tests
It would be amazing to know immediately which part of the code failed given some new pull requests. Right now, every aspect of the codebase gets tested by a single python file run_tests.py. I was thinking of splitting up this file and creating a workflow per aspect I want to test separately, but then I would have to create a whole new environment, patch the libraries and build the 3rd party library every time I want to conduct a single test. These tests already take quite some time.
My question is now: is there any way to avoid doing that? Is there a way to build everything on the Linux server once and re-use it, so that it doesn't need to be rebuilt for every test? Is there a way to display a badge per Python test that fails/succeeds, so that we can give more information than just "everything passed" or "everything failed"? Is such a thing better suited to a service like CircleCI (other recommendations are also welcome)?
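For completeness, a rough sketch of the usual pattern (the cache key, path, and file names below are assumptions, not taken from the setup above): each separate workflow file gets its own status badge, and actions/cache can restore a previously built environment instead of recreating it.

# Hypothetical fragment of a per-aspect workflow, e.g. .github/workflows/test-aspect-a.yml;
# its badge would live at https://github.com/<owner>/<repo>/actions/workflows/test-aspect-a.yml/badge.svg
steps:
  - uses: actions/checkout@v2
  # Restore the conda environment from a previous run; the key is derived
  # from the environment file, so a dependency change invalidates the cache.
  - name: Cache conda environment
    uses: actions/cache@v2
    with:
      path: ~/conda3-env-py3
      key: conda-py3-${{ hashFiles('etc/env-py3.yml') }}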
Here is the full yml file for the workflow for the Python 3 environment. The Python2 one is identical except for the anaconda environment steps.
name: (Python 3) install and test

# Controls when the workflow will run
on:
  # Triggers the workflow on push or pull request events but only for the master branch
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]
  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:

# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
  # This workflow contains a single job called "build"
  build:
    # The type of runner that the job will run on
    runs-on: ubuntu-latest
    defaults:
      run:
        shell: bash -l {0}
    # Steps represent a sequence of tasks that will be executed as part of the job
    steps:
      # Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
      - uses: actions/checkout@v2
      # Install Anaconda3 and update conda package manager
      - name: Install Anaconda3
        run: |
          wget https://repo.anaconda.com/archive/Anaconda3-2020.11-Linux-x86_64.sh --quiet
          bash Anaconda3-2020.11-Linux-x86_64.sh -b -p ~/conda3-env-py3
          source ~/conda3-env-py3/bin/activate
          conda info
      # Updating the root environment. Install dependencies (YAML)
      # NOTE: The environment file (yaml) is in the 'etc' folder
      - name: Install ISF dependencies
        run: |
          source ~/conda3-env-py3/bin/activate
          conda-env create --name isf-py3 --file etc/env-py3.yml --quiet
          source activate env-py3
          conda list
      # Patch Dask library
      - name: Patch dask library
        run: |
          echo "Patching dask library."
          source ~/conda3-env-py3/bin/activate
          source activate env-py3
          cd installer
          python patch_dask_linux64.py
          conda list
      # Install pandas-msgpack
      - name: Install pandas-msgpack
        run: |
          echo "Installing pandas-msgpack"
          git clone https://github.com/abast/pandas-msgpack.git
          # Applying patch to pandas-msgpack (generating files using newer Cython)
          git -C pandas-msgpack apply ../installer/pandas_msgpack.patch
          source ~/conda3-env-py3/bin/activate
          source activate env-py3
          cd pandas-msgpack; python setup.py install
          pip list --format=freeze | grep pandas
      # Compile neuron mechanisms
      - name: Compile neuron mechanisms
        run: |
          echo "Compiling neuron mechanisms"
          source ~/conda3-env-py3/bin/activate
          source activate env-py3
          pushd .
          cd mechanisms/channels_py3; nrnivmodl
          popd
          cd mechanisms/netcon_py3; nrnivmodl
      # Run tests
      - name: Testing
        run: |
          source ~/conda3-env-py3/bin/activate
          source activate env-py3
          export PYTHONPATH="$(pwd)"
          dask-scheduler --port=38786 --dashboard-address=38787 &
          dask-worker localhost:38786 --nthreads 1 --nprocs 4 --memory-limit=100e15 &
          python run_tests.py
Many thanks in advance
Tried:
Building everything in a single github workflow, testing everything in the same workflow.
Expected:
Gaining information on specific steps that failed or worked. Displaying this information as a badge on the readme page.
Actual result:
Only the overall success status can be displayed as badge. Only the success status of "running all tests" is available.

PowerShell command Compress-Archive incredibly slow

I'm new to Windows development and I'm attempting to use GitHub Actions to do a build/deploy. In the build step I compress my project and upload it:
jobs:
  build:
    runs-on: windows-latest
    steps:
      - uses: actions/checkout@v2
      - name: Set up Node.js version
        uses: actions/setup-node@v1
        with:
          node-version: '16.x'
      - name: npm install, build, and test
        run: |
          npm install
          npm run build --if-present
          npm run test --if-present
      - name: Zip contents for upload
        shell: powershell
        run: |
          Compress-Archive -Path . -DestinationPath nextjs-app.zip
      - name: Upload artifact for deployment job
        uses: actions/upload-artifact@v2
        with:
          name: nextjs-app
          path: nextjs-app.zip
and then in my deploy step I download it, expand it, and then deploy it.
deploy:
  runs-on: windows-latest
  needs: build
  environment:
    name: 'Production'
    url: ${{ steps.deploy-to-webapp.outputs.webapp-url }}
  steps:
    - name: Download artifact from build job
      uses: actions/download-artifact@v2
      with:
        name: nextjs-app
    - name: Unzip archive file
      shell: powershell
      run: |
        Expand-Archive nextjs-app.zip -DestinationPath .
The issue is that it takes an incredible amount of time to do the compression/expansion. I had previously done this on Linux and it took around 2 minutes for compression/expansion, but using PowerShell it's taking about 15 minutes to compress and 12 minutes to expand. Why are the Compress/Expand commands so slow? Am I doing something wrong? The expanded folder size is around 200 MB.
You may experiment with compression parameters, e.g. -CompressionLevel Fastest.
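Applied to the zip step from the question, that is just the extra flag:

- name: Zip contents for upload
  shell: powershell
  run: |
    Compress-Archive -Path . -DestinationPath nextjs-app.zip -CompressionLevel Fastest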
Alternatively, use the .NET API directly, as Santiago Squarzon suggests. This requires PowerShell (Core) 7+:
[IO.Compression.ZipFile]::CreateFromDirectory( $sourceDirectory, $zipFileName, 'Fastest', $false )
Note that the .NET API has a different current directory than PowerShell, so best practice is to pass only absolute paths to the .NET API. The simplest way to do this is to prepend $PWD (PowerShell's current directory) to any path, e.g. "$PWD\SomeFile.xyz".
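Putting both points together, a sketch of the same step using the .NET API under pwsh (PowerShell 7 is available on windows-latest); the out folder is an assumed location for the build output, adjust to the real layout:

- name: Zip contents for upload
  shell: pwsh
  run: |
    # absolute paths via $PWD, since the .NET API ignores PowerShell's current directory
    [IO.Compression.ZipFile]::CreateFromDirectory(
      "$PWD\out", "$PWD\nextjs-app.zip", 'Fastest', $false)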

Sonar scanner command not found on GitLab CI/CD

I am trying to integrate SonarQube into my project's CI/CD pipeline on GitLab. I have followed the documentation on GitLab and SonarQube to the best of my understanding to get the job included in my yml file.
I am currently experiencing the error shown in the image below.
This is my yml file script
build_project:
  stage: build
  script:
    - xcodebuild clean -workspace TinggIOS/TinggIOS.xcworkspace -scheme TinggIOS | xcpretty
    - xcodebuild test -workspace TinggIOS/TinggIOS.xcworkspace -scheme TinggIOS -destination 'platform=iOS Simulator,name=iPhone 11 Pro Max,OS=15' | xcpretty -s
  tags:
    - stage
  image: macos-11-xcode-12

sonarqube-check:
  stage: analyze
  image:
    name: sonarsource/sonar-scanner-cli:latest
    entrypoint: [""]
  variables:
    SONAR_USER_HOME: "${CI_PROJECT_DIR}/.sonar"  # Defines the location of the analysis task cache
    GIT_DEPTH: "0"  # Tells git to fetch all the branches of the project, required by the analysis task
  cache:
    key: "${CI_JOB_NAME}"
    paths:
      - .sonar/cache
  script:
    - sonar-scanner -Dsonar.qualitygate.wait=true
  allow_failure: true
  only:
    - merge_requests
    - feature/unit-test # or the name of your main branch
    - develop
  tags:
    - stage
It looks like the
sonar-scanner -Dsonar.qualitygate.wait=true
command is not found. Try running that command on the machine your pipeline runs on (for example, SSH into it and run it manually); the issue might be that the scanner isn't installed there. Note also that the job is tagged stage, so it may be picked up by the same shell runner as the build job, and a shell runner ignores the image: keyword, meaning the sonarsource/sonar-scanner-cli image never actually runs.
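If the runner behind the stage tag is indeed a shell runner on a macOS host, one hedged option is to install the scanner on that host rather than relying on the Docker image; Homebrew availability below is an assumption:

sonarqube-check:
  stage: analyze
  before_script:
    # assumption: Homebrew is available on the macOS runner host
    - which sonar-scanner || brew install sonar-scanner
  script:
    - sonar-scanner -Dsonar.qualitygate.wait=true
  tags:
    - stage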

Avoid git clean with Azure Devops self-hosted Build Agent

I have a YAML build script in an Azure-hosted git repository which gets triggered across 7 build agents running on a local VM. Every time this runs, the build performs a git clean, which takes a significant amount of time due to a large node_modules folder.
The MSDN page here seems to suggest this is configurable but shows no detail of how to configure it. I can't tell whether this is a setting that should be specified on the agent, the YAML script, within DevOps on the pipeline, or where.
Is there any other documentation I'm missing or is this not possible?
Update:
The start of the YAML file is here:
variables:
  BUILD_VERSION: 1.0.0.$(Build.BuildId)
  buildConfiguration: 'Release'
  process.clean: false

jobs:
#############################################################
###### 1 - Build and publish .NET
#############################################################
- job: net_build_publish
  displayName: .NET build and publish
  pool:
    name: default
  steps:
  - script: echo $(BUILD_VERSION)
  - task: DotNetCoreCLI@2
    displayName: dotnet build $(buildConfiguration)
    inputs:
      command: 'build'
      projects: |
        myrepo/**/API/*.csproj
      arguments: '-c $(buildConfiguration) /p:Version=$(BUILD_VERSION)'
The complete yaml is a lot longer, but the output from the first job includes this output in a Checkout task
Starting: Checkout myrepo#master to s
==============================================================================
Task : Get sources
Description : Get sources from a repository. Supports Git, TfsVC, and SVN repositories.
Version : 1.0.0
Author : Microsoft
Help : [More Information](https://go.microsoft.com/fwlink/?LinkId=798199)
==============================================================================
Syncing repository: myrepo (Git)
Prepending Path environment variable with directory containing 'git.exe'.
git version
git version 2.26.2.windows.1
git lfs version
git-lfs/2.11.0 (GitHub; windows amd64; go 1.14.2; git 48b28d97)
git config --get remote.origin.url
git clean -ffdx
Removing myrepo/Data/Core/API/bin/
Removing myrepo/Data/Core/API/customersettings.json
Removing myrepo/Data/Core/API/obj/
Removing myrepo/Data/Core/Shared/bin/
Removing myrepo/Data/Core/Shared/obj/
....
We have another job further down which runs npm install and npm build for an Angular project, and every build in the pipeline is taking 5 minutes to perform the npm install step, possibly because of this git clean when retrieving the repository?
Click on your pipeline to show the run history
Click Edit
Click the 3 dot kebab menu
Click Triggers
Click YAML
Click Get Sources
Set Clean to False and Save
To say this is obfuscated is an understatement!
I can't say what effect this will have, though. I think the agent reuses the same folder each time a pipeline runs, and I'm not a Node.js developer, so I don't know what leaving old node_modules hanging around will do!
P.S. What people were saying about pipeline caching isn't, I think, what you were asking. Also, pipeline caching zips up the cached folder, uploads it to your artifact storage, and then downloads it again each time; if you only have one build agent, then not doing a git clean might actually be more efficient, though I'm not 100% sure.
As I mentioned, you need to calculate a hash before you run npm install. If the hash is the same as the one kept next to node_modules, you can skip installing dependencies. Something like this may help you achieve it:
steps:
- task: PowerShell@2
  displayName: 'Calculate and save package-lock.json hash'
  inputs:
    targetType: 'inline'
    pwsh: true
    script: |
      # generate a hash of package-lock.json
      $newHash = (Get-FileHash -Algorithm MD5 -Path (Get-ChildItem package-lock.json)).Hash
      $hashPath = "$(System.DefaultWorkingDirectory)/cache-npm/hash.txt"
      if (Test-Path -Path $hashPath) {
        if ($newHash -eq (Get-Content $hashPath)) {
          # hashes match: node_modules is already up to date
          Write-Host "no need to install node_modules"
          Write-Host "##vso[task.setvariable variable=NodeModulesAreUpToDate;]true"
        } else {
          # hashes differ: store the new hash, npm install will run below
          $newHash > $hashPath
          Write-Host ("Hash file saved to " + $hashPath)
        }
      } else {
        # first run: store the hash, npm install will run below
        $newHash > $hashPath
        Write-Host ("Hash file saved to " + $hashPath)
      }
      Write-Host (Get-Content $hashPath)
    workingDirectory: '$(System.DefaultWorkingDirectory)/cache-npm'
- script: npm install
  workingDirectory: '$(Build.SourcesDirectory)/cache-npm'
  condition: ne(variables['NodeModulesAreUpToDate'], true)
git clean -ffdx will remove anything untracked by source control from the working tree. You may try Pipeline caching, which can help reduce build time by allowing the outputs or downloaded dependencies from one run to be reused in later runs, reducing or avoiding the cost to recreate or redownload the same files. Check the following link:
https://learn.microsoft.com/en-us/azure/devops/pipelines/release/caching?view=azure-devops#nodejsnpm
variables:
  npm_config_cache: $(Pipeline.Workspace)/.npm

steps:
- task: Cache@2
  inputs:
    key: 'npm | "$(Agent.OS)" | package-lock.json'
    restoreKeys: |
      npm | "$(Agent.OS)"
    path: $(npm_config_cache)
  displayName: Cache npm
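With the cache task in place, the install step itself stays as-is; npm picks up the cache directory through the npm_config_cache variable. A sketch of the step that would follow:

- script: npm ci
  displayName: npm ci (package downloads resolve against the restored cache)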
In the checkout step, we can set the boolean option clean to true or false. The default is true, so git clean runs by default.
Below is a minimal example with clean set to false.
jobs:
- job: Build_Job
  timeoutInMinutes: 0
  pool: 'PoolOne'
  steps:
  - checkout: self
    clean: false
    submodules: recursive
  - task: PowerShell@2
    displayName: Make build
    inputs:
      targetType: 'inline'
      script: |
        bash -c 'make'
More documentation and related options can be found here

Create ZIP Files and Deploy as GitHub-Release on AppVeyor

I want to create ZIP files on AppVeyor and publish them on GitHub as a release.
Currently, the build process performs the following steps:
Install Node.js v7
Start the .\Build-All.bat
Build-All.bat then performs the following steps:
Create Temp and Build directories
Move Source to Temp
Install dependencies with npm install
Start electron-packager to create binary files (see directory structure of the /Build/ directory)
Directory Structure:
/Source/
/Build/
  └ /DSTEd-darwin-x64/
  └ /DSTEd-linux-armv7l/
  └ /DSTEd-linux-ia32/
  └ /DSTEd-linux-x64/
  └ /DSTEd-mas-x64/
  └ /DSTEd-win32-ia32/
  └ /DSTEd-win32-x64/
/Temp/
/Build.bat
Here is what I want:
Package each build directory (for example /Build/DSTEd-win32-x64/) into a ZIP archive like /Build/DSTEd-win32-x64.zip
Add all ZIP archives (/Build/DSTEd-*-*.zip) to the release
I manually created a sample release on GitHub; that is what I want:
https://github.com/DST-Tools/DSTEd/releases/tag/1.0.0
Here is my appveyor.yml:
version: 1.0.0-{build}

# Set the Node Version
environment:
  matrix:
    - nodejs_version: "7"

# Install scripts. (runs after repo cloning)
install:
  - ps: Install-Product node $env:nodejs_version
  - npm -g install electron-packager
  - .\Build-All.bat

# Caching
cache:
  - node_modules

# Deployment Options
deploy:
  tag: $(appveyor_build_version)
  release: 'DSTEd v${appveyor_build_version} - Pre-Release (Preview)'
  description: ' ![Preview](https://github.com/DST-Tools/DSTEd/raw/master/Screenshots/preview.png) ## Pre-Release v1.0.0 (Preview) Builded binarys for `Windows` (`32bit` & `64bit`), `Linux` (`32bit`, `64bit` & `armv7`) and `Mac OS X` (`darwin` & `mas`, only `64bit`).'
  provider: GitHub
  auth_token:
    secure: b202f536350628ff69af69d08daee9f76a9cff20
  artifact: '**\*.zip'
  draft: false
  prerelease: true
  on:
    branch: master
    appveyor_repo_tag: true

matrix:
  fast_finish: true

build: OFF
test: OFF
The missing part is artifact packaging. You can list all those folders as artifacts, and AppVeyor will zip them for you; after that, the deployment will "see" them.
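A sketch of that suggestion using the folder names from the question (AppVeyor zips a folder artifact automatically):

artifacts:
  - path: Build\DSTEd-win32-x64
    name: DSTEd-win32-x64
  - path: Build\DSTEd-linux-x64
    name: DSTEd-linux-x64
  # ...one entry per build folder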
Side note: you might want to remove the on/branch: master part, because in most cases the tag name replaces the branch name in the incoming webhook. More details are here. In general, I would recommend starting with the simplest possible deployment configuration and adding settings one by one once the basic one works.
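That is, the on: section reduced to the tag condition only:

deploy:
  [...]
  on:
    appveyor_repo_tag: true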
Packaging the artifacts turned out to be complex; per the docs you can define filters there, but they did not work correctly.
I implemented my own solution that hooks into before_deploy. Before the deployment phase starts, a script packages the files as ZIP archives and adds them as artifacts:
# Deployment Options
before_deploy:
  - node .\Tools\PackageBuild.js
  - ps: Get-ChildItem .\Build\*.zip | % { Push-AppveyorArtifact $_.FullName -FileName $_.Name }
In the deployment process, we include all available artifacts by leaving the artifact property blank:
deploy:
  [...]
  artifact: # leave blank