I want to create ZIP files on AppVeyor and publish them on GitHub as a release.
Currently, the build process performs the following steps:
Install Node.js v7
Start the .\Build-All.bat
The Build.bat performs the following steps:
Create Temp and Build Directory
Move Source to Temp
Install dependencies with npm install
Start electron-packager to create the binary files (see the directory structure of the /Build/ directory)
Directory Structure:
/Source/
/Build/
L /DSTEd-darwin-x64/
L /DSTEd-linux-armv7l/
L /DSTEd-linux-ia32/
L /DSTEd-linux-x64/
L /DSTEd-mas-x64/
L /DSTEd-win32-ia32/
L /DSTEd-win32-x64/
/Temp/
/Build.bat
Here is what I want:
Package each build directory (for example /Build/DSTEd-win32-x64/) into a ZIP archive like /Build/DSTEd-win32-x64.zip
Add all ZIP archives (/Build/DSTEd-*-*.zip) to the release
I have manually created a sample release on GitHub; that is what I want:
https://github.com/DST-Tools/DSTEd/releases/tag/1.0.0
Here is my appveyor.yml:
version: 1.0.0-{build}

# Set the Node Version
environment:
  matrix:
    - nodejs_version: "7"

# Install scripts. (runs after repo cloning)
install:
  - ps: Install-Product node $env:nodejs_version
  - npm -g install electron-packager
  - .\Build-All.bat

# Caching
cache:
  - node_modules

# Deployment Options
deploy:
  tag: $(appveyor_build_version)
  release: 'DSTEd v${appveyor_build_version} - Pre-Release (Preview)'
  description: ' ![Preview](https://github.com/DST-Tools/DSTEd/raw/master/Screenshots/preview.png) ## Pre-Release v1.0.0 (Preview) Builded binarys for `Windows` (`32bit` & `64bit`), `Linux` (`32bit`, `64bit` & `armv7`) and `Mac OS X` (`darwin` & `mas`, only `64bit`).'
  provider: GitHub
  auth_token:
    secure: b202f536350628ff69af69d08daee9f76a9cff20
  artifact: '**\*.zip'
  draft: false
  prerelease: true
  on:
    branch: master
    appveyor_repo_tag: true

matrix:
  fast_finish: true

build: OFF
test: OFF
The missing part is artifact packaging. You can list all those folders as artifacts, and AppVeyor will zip them for you. After that, the deployment will "see" them.
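For illustration, a minimal sketch of such an artifacts section (folder names taken from the question; AppVeyor zips directory artifacts automatically):

artifacts:
  - path: Build\DSTEd-win32-x64
    name: DSTEd-win32-x64
  - path: Build\DSTEd-linux-x64
    name: DSTEd-linux-x64
  # ...one entry per build directory

The deploy section can then reference the resulting .zip artifacts.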
Side note: you might want to remove the on/branch: master part, because in most cases the tag name replaces the branch name in the incoming webhook. More details are here. In general, I would recommend starting with the simplest possible deployment configuration and adding settings one by one once the basic one works.
Packaging the artifacts turned out to be tricky: the filters you can define according to the docs don't work correctly.
I've implemented my own solution using the before_deploy hook. Before the deployment phase starts, a script packages the files as ZIP archives and adds them as artifacts:
# Deployment Options
before_deploy:
  - node .\Tools\PackageBuild.js
  - ps: Get-ChildItem .\Build\*.zip | % { Push-AppveyorArtifact $_.FullName -FileName $_.Name }
In the deployment configuration, we include all available artifacts by leaving the artifact property blank:
deploy:
  [...]
  artifact: # leave blank
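If you'd rather not maintain a custom Node script, a hedged alternative is to zip each build folder with the 7z command-line tool, which is preinstalled on AppVeyor's Windows images:

before_deploy:
  # zip every folder under .\Build into .\Build\<name>.zip
  - ps: Get-ChildItem .\Build -Directory | % { 7z a ".\Build\$($_.Name).zip" $_.FullName }
  # register each archive as an artifact so the deployment can pick it up
  - ps: Get-ChildItem .\Build\*.zip | % { Push-AppveyorArtifact $_.FullName -FileName $_.Name }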
I am relatively new to GitHub workflows and testing. I am working in a private GitHub repository with a dozen colleagues. We want to avoid using services like CircleCI for the time being and see how much we can do with just the integrated GitHub actions, since we are unsure about the kind of access a third party service would be getting to the repo.
Currently, we have two workflows (each one tests the same code for a separate Python environment) that get triggered on push or pull request in the master branch.
The steps of the workflow are as follows (the full workflow yml file is given at the bottom):
Install Anaconda
Create the conda environment (installing dependencies)
Patch libraries
Build a 3rd party library
Run python unit tests
It would be amazing to know immediately which part of the code failed for a given pull request. Right now, every aspect of the codebase gets tested by a single Python file, run_tests.py. I was thinking of splitting up this file and creating a workflow per aspect I want to test separately, but then I would have to create a whole new environment, patch the libraries, and build the 3rd party library every time I want to conduct a single test. These tests already take quite some time.
My question is now: is there any way to avoid doing that? Is there a way to build everything on the Linux server and reuse it, so that it doesn't need to be rebuilt for every test? Is there a way to display a badge per Python test that fails/succeeds, so that we can give more information than just "everything passed" or "everything failed"? Is such a thing better suited for a service like CircleCI (or other recommendations are also welcome)?
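For reference, one way to reuse builds between runs is GitHub's actions/cache action; a minimal sketch, where the cache path and key file are assumptions based on the workflow below:

# restores ~/conda3-env-py3 from the cache when etc/env-py3.yml is unchanged,
# so the environment only has to be rebuilt when the dependencies change
- name: Cache conda environment
  uses: actions/cache@v2
  with:
    path: ~/conda3-env-py3
    key: conda-py3-${{ hashFiles('etc/env-py3.yml') }}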
Here is the full yml file for the workflow for the Python 3 environment. The Python2 one is identical except for the anaconda environment steps.
name: (Python 3) install and test

# Controls when the workflow will run
on:
  # Triggers the workflow on push or pull request events but only for the master branch
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]

  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:

# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
  # This workflow contains a single job called "build"
  build:
    # The type of runner that the job will run on
    runs-on: ubuntu-latest
    defaults:
      run:
        shell: bash -l {0}

    # Steps represent a sequence of tasks that will be executed as part of the job
    steps:
      # Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
      - uses: actions/checkout@v2

      # Install Anaconda3 and update conda package manager
      - name: Install Anaconda3
        run: |
          wget https://repo.anaconda.com/archive/Anaconda3-2020.11-Linux-x86_64.sh --quiet
          bash Anaconda3-2020.11-Linux-x86_64.sh -b -p ~/conda3-env-py3
          source ~/conda3-env-py3/bin/activate
          conda info

      # Updating the root environment. Install dependencies (YAML)
      # NOTE: The environment file (yaml) is in the 'etc' folder
      - name: Install ISF dependencies
        run: |
          source ~/conda3-env-py3/bin/activate
          conda-env create --name isf-py3 --file etc/env-py3.yml --quiet
          source activate env-py3
          conda list

      # Patch Dask library
      - name: Patch dask library
        run: |
          echo "Patching dask library."
          source ~/conda3-env-py3/bin/activate
          source activate env-py3
          cd installer
          python patch_dask_linux64.py
          conda list

      # Install pandas-msgpack
      - name: Install pandas-msgpack
        run: |
          echo "Installing pandas-msgpack"
          git clone https://github.com/abast/pandas-msgpack.git
          # Applying patch to pandas-msgpack (generating files using newer Cython)
          git -C pandas-msgpack apply ../installer/pandas_msgpack.patch
          source ~/conda3-env-py3/bin/activate
          source activate env-py3
          cd pandas-msgpack; python setup.py install
          pip list --format=freeze | grep pandas

      # Compile neuron mechanisms
      - name: Compile neuron mechanisms
        run: |
          echo "Compiling neuron mechanisms"
          source ~/conda3-env-py3/bin/activate
          source activate env-py3
          pushd .
          cd mechanisms/channels_py3; nrnivmodl
          popd
          cd mechanisms/netcon_py3; nrnivmodl

      # Run tests
      - name: Testing
        run: |
          source ~/conda3-env-py3/bin/activate
          source activate env-py3
          export PYTHONPATH="$(pwd)"
          dask-scheduler --port=38786 --dashboard-address=38787 &
          dask-worker localhost:38786 --nthreads 1 --nprocs 4 --memory-limit=100e15 &
          python run_tests.py
Many thanks in advance
Tried:
Building everything in a single github workflow, testing everything in the same workflow.
Expected:
Gaining information on specific steps that failed or worked. Displaying this information as a badge on the readme page.
Actual result:
Only the overall success status can be displayed as a badge; only the success status of "running all tests" is available.
I have a YAML build script in an Azure-hosted git repository which gets triggered across 7 build agents running on a local VM. Every time this runs, the build performs a git clean, which takes a significant amount of time due to a large node_modules folder.
The MSDN page here seems to suggest this is configurable but shows no detail of how to configure it. I can't tell whether this is a setting that should be specified on the agent, in the YAML script, within DevOps on the pipeline, or somewhere else.
Is there any other documentation I'm missing, or is this not possible?
Update:
The start of the YAML file is here:
variables:
  BUILD_VERSION: 1.0.0.$(Build.BuildId)
  buildConfiguration: 'Release'
  process.clean: false

jobs:
  ###### ######################################################
  ###### 1 - Build and publish .NET
  #############################################################
  - job: net_build_publish
    displayName: .NET build and publish
    pool:
      name: default
    steps:
      - script: echo $(BUILD_VERSION)
      - task: DotNetCoreCLI@2
        displayName: dotnet build $(buildConfiguration)
        inputs:
          command: 'build'
          projects: |
            myrepo/**/API/*.csproj
          arguments: '-c $(buildConfiguration) /p:Version=$(BUILD_VERSION)'
The complete YAML is a lot longer, but the output from the first job includes this output in a Checkout task:
Checkout myrepo@master to s
Starting: Checkout myrepo@master to s
==============================================================================
Task : Get sources
Description : Get sources from a repository. Supports Git, TfsVC, and SVN repositories.
Version : 1.0.0
Author : Microsoft
Help : [More Information](https://go.microsoft.com/fwlink/?LinkId=798199)
==============================================================================
Syncing repository: myrepo (Git)
Prepending Path environment variable with directory containing 'git.exe'.
git version
git version 2.26.2.windows.1
git lfs version
git-lfs/2.11.0 (GitHub; windows amd64; go 1.14.2; git 48b28d97)
git config --get remote.origin.url
git clean -ffdx
Removing myrepo/Data/Core/API/bin/
Removing myrepo/Data/Core/API/customersettings.json
Removing myrepo/Data/Core/API/obj/
Removing myrepo/Data/Core/Shared/bin/
Removing myrepo/Data/Core/Shared/obj/
....
We have another job further down which runs npm install and npm build for an Angular project, and every build in the pipeline takes 5 minutes to perform the npm install step, possibly because of this git clean when retrieving the repository.
Click on your pipeline to show the run history
Click Edit
Click the 3 dot kebab menu
Click Triggers
Click YAML
Click Get Sources
Set Clean to False and Save
To say this is obfuscated is an understatement!
I can't say what effect this will have, though. I think the agent reuses the same folder each time a pipeline runs, and I'm not a Node.js developer, so I don't know what leaving an old node_modules folder hanging around will do!
P.S. What people were saying about pipeline caching is not, I think, what you were asking. Also, pipeline caching zips up the cached folder, uploads it to your artifact storage, and then downloads it each time; if you only have one build agent, not doing a git clean might actually be more efficient, but I'm not 100% sure.
As I mentioned below, you need to calculate a hash before you run npm install. If the hash is the same as the one kept next to node_modules, you can skip installing the dependencies. This may help you achieve that:
steps:
  - task: PowerShell@2
    displayName: 'Calculate and save package-lock.json hash'
    inputs:
      targetType: 'inline'
      pwsh: true
      workingDirectory: '$(System.DefaultWorkingDirectory)/cache-npm'
      script: |
        # generate a hash of package-lock.json
        $newHash = (Get-FileHash -Algorithm MD5 -Path (Get-ChildItem package-lock.json)).Hash
        $hashPath = "$(System.DefaultWorkingDirectory)/cache-npm/hash.txt"
        if (Test-Path -Path $hashPath) {
          if (Compare-Object -ReferenceObject (Get-Content $hashPath) -DifferenceObject $newHash) {
            # the hashes differ: save the new one; npm install runs below
            $newHash > $hashPath
            Write-Host ("Hash file saved to " + $hashPath)
          } else {
            # the hashes are the same: no need to install node_modules
            Write-Host "no need to install node_modules"
            Write-Host "##vso[task.setvariable variable=NodeModulesAreUpToDate;]true"
          }
        } else {
          # no stored hash yet: save it; npm install runs below
          $newHash > $hashPath
          Write-Host ("Hash file saved to " + $hashPath)
        }
        $storedHash = Get-Content $hashPath
        Write-Host $storedHash

  - script: npm install
    workingDirectory: '$(Build.SourcesDirectory)/cache-npm'
    condition: ne(variables['NodeModulesAreUpToDate'], true)
git clean -ffdx will remove any files untracked by source control from the sources directory. You may try Pipeline caching, which can help reduce build time by allowing the outputs or downloaded dependencies from one run to be reused in later runs, reducing or avoiding the cost to recreate or redownload the same files. Check the following link:
https://learn.microsoft.com/en-us/azure/devops/pipelines/release/caching?view=azure-devops#nodejsnpm
variables:
  npm_config_cache: $(Pipeline.Workspace)/.npm

steps:
  - task: Cache@2
    inputs:
      key: 'npm | "$(Agent.OS)" | package-lock.json'
      restoreKeys: |
        npm | "$(Agent.OS)"
      path: $(npm_config_cache)
    displayName: Cache npm
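With the cache restored, the install step still runs, but npm can pull packages from the local cache instead of the network; a sketch of the follow-up step (the display name is assumed):

- script: npm ci
  displayName: Install dependencies from the npm cache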
The checkout step allows us to set the boolean option clean to true or false. The default is true, so it runs git clean by default.
Below is a minimal example with clean set to false.
jobs:
  - job: Build_Job
    timeoutInMinutes: 0
    pool: 'PoolOne'
    steps:
      - checkout: self
        clean: false
        submodules: recursive
      - task: PowerShell@2
        displayName: Make build
        inputs:
          targetType: 'inline'
          script: |
            bash -c 'make'
More documentation and related options can be found here
I need to run jobB only when jobA passes. I have to create a release tag only when my test stage passes successfully (I have added code for that to happen here). My repository already has a README.md file; I am just checking its existence in my test stage, so my test stage will always pass. Please let me know how to write the code to create a release tag. A tag can be, for example, v1.1.
stages:
  - build
  - test
  - release

jobA:
  stage: test
  script:
    - test -e README.md && exit 0

jobB:
  stage: release
  when: on_success
  script:
    # code for creating a release tag
In addition to Greg Woz's answer, in which GITLAB_API_TOKEN (a PAT, Personal Access Token) is used, you can now also use a dedicated private project token:
See GitLab 13.9 (February 2021)
Support PRIVATE-TOKEN to create releases using the Release-CLI
In this milestone, we added the ability to use the release-cli with a PRIVATE-TOKEN as defined in the Create a release API documentation.
This enables overriding the user that creates a release, and supports automation by allowing connection of a project-level PRIVATE-TOKEN or by using a bot user account’s PRIVATE-TOKEN.
See Documentation and Issue.
It may not fully answer your question, but hopefully it helps. Perhaps try the below. My solution assumes you have a package.json with a version, as that's the most popular case, but there is nothing wrong with using any other way to define the version:
version:
  stage: version
  only:
    - master
  script:
    - VERSION_NUMBER=$(node -p "require('./package.json').version")
    - "curl --header \"PRIVATE-TOKEN: $GITLAB_API_TOKEN\" --request POST \"https://gitlab.com/api/v4/projects/$CI_PROJECT_ID/repository/tags?tag_name=v${VERSION_NUMBER}&ref=$CI_COMMIT_SHA&message=Tagged%20with%20version%20v${VERSION_NUMBER}\""
Then you can use:
publish:
  stage: publish
  only:
    - tags
  script:
    - npm version --no-git-tag-version ${CI_COMMIT_TAG:1}
    - npm publish --access public
for example, in a subsequent step.
GitLab 13.2 introduced the release keyword in .gitlab-ci.yml. There is no longer any need to create a "jobB" job; you can add the keyword to the main job as long as it can access the release-cli:
stages:
  - build
  - test
  - release

jobA:
  stage: test
  image: registry.gitlab.com/gitlab-org/release-cli:latest
  script:
    - test -e README.md && exit 0
  release:
    tag_name: $TAG
    description: './path/to/CHANGELOG.md'
The GitLab docs have some examples of how to provide $TAG and the description dynamically.
The dependencies mechanism is probably the right tool for this. You can expose README.md (or a test results folder) as an artifact in jobA, and add a dependency on jobA in jobB:
jobB:
  ....
  dependencies:
    - jobA
If jobA fails, it won't provide the dependent artifact for jobB; thus, jobB will also fail.
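For completeness, a sketch of the jobA side, so there is an artifact for jobB to depend on (the artifact path is an assumption):

jobA:
  stage: test
  script:
    - test -e README.md && exit 0
  artifacts:
    paths:
      - README.md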
(Edit)
Have you checked the Protected tags settings in GitLab? (Settings -> Repository -> Protected tags)
By default, protected tags are designed to:
Prevent tag creation by everybody except Maintainers
Prevent anyone from updating the tag
Prevent anyone from deleting the tag
Currently I'm working on a pipeline script for Azure DevOps. I want to provide a Maven settings file as a secure file for the pipeline. The problem is, when I define a job only for providing the file, the file isn't there anymore when the next job starts.
I tried to define a job with a DownloadSecureFile task and a copy command to get the settings file. But when the next job starts, the file isn't there anymore and therefore can't be used.
I already checked that by using pwd and ls in the pipeline.
This is part of my current YAML file (that actually works):
some variables
...

trigger:
  branches:
    include:
      - stable
      - master

jobs:
  - job: Latest_Release
    condition: eq(variables['Build.SourceBranchName'], 'master')
    steps:
      - task: DownloadSecureFile@1
        name: settingsxml
        displayName: Download maven settings xml
        inputs:
          secureFile: settings.xml
      - script: |
          cp $(settingsxml.secureFilePath) ./settings.xml
          docker login -u $(AzureRegistryUser) -p $(AzureRegistryPassword) $(AzureRegistryUrl)
          docker build -t $(AzureRegistryUrl)/$(projectName):$(projectVersionNumber-Latest) .
          docker push $(AzureRegistryUrl)/$(projectName):$(projectVersionNumber-Latest)

....
other jobs
I wanted to put the DownloadSecureFile task and the "cp $(settingsxml.secureFilePath) ./settings.xml" step into a job of their own, because more jobs need this file for other branches/releases and I don't want to copy the exact same code into all of them.
This is the YAML file as I wanted it:
some variables
...

trigger:
  branches:
    include:
      - stable
      - master

jobs:
  - job: provide_maven_settings
    # no condition because all branches need the file
    steps:
      - task: DownloadSecureFile@1
        name: settingsxml
        displayName: Download maven settings xml
        inputs:
          secureFile: settings.xml
      - script: |
          cp $(settingsxml.secureFilePath) ./settings.xml

  - job: Latest_Release
    condition: eq(variables['Build.SourceBranchName'], 'master')
    steps:
      - script: |
          docker login -u $(AzureRegistryUser) -p $(AzureRegistryPassword) $(AzureRegistryUrl)
          docker build -t $(AzureRegistryUrl)/$(projectName):$(projectVersionNumber-Latest) .
          docker push $(AzureRegistryUrl)/$(projectName):$(projectVersionNumber-Latest)

....
other jobs
In my Dockerfile, the settings file is used like this:
FROM maven:3.6.1-jdk-8-alpine AS MAVEN_TOOL_CHAIN
COPY pom.xml /tmp/
COPY src /tmp/src/
# can't find the file when executing this:
COPY settings.xml /root/.m2/
WORKDIR /tmp/
RUN mvn install
...
The error happens when docker build is started, because it can't find the settings file. It can find it, though, when I use my first YAML example. I have a feeling it has something to do with each job having a "Checkout" phase, but I'm not sure about that.
Each job in Azure DevOps runs on its own agent, so when you use Microsoft-hosted agents and split the pipeline into several jobs, a file you copy in one job won't exist in the next: that job runs on a fresh new agent which, of course, doesn't have the file.
You can solve your issue by using a self-hosted agent (copy the file to your machine, and the second job runs on the same machine).
Or you can upload the file somewhere else (secured) and download it in the second job (in which case, why not just download the secure file there from the start...).
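If the goal is only to avoid repeating the same steps in every job, a step template can factor them out while still downloading the secure file in each job. A hedged sketch (the template file name is an assumption):

# templates/maven-settings.yml
steps:
  - task: DownloadSecureFile@1
    name: settingsxml
    inputs:
      secureFile: settings.xml
  - script: cp $(settingsxml.secureFilePath) ./settings.xml

# in the pipeline, every job that needs the file includes the template:
jobs:
  - job: Latest_Release
    steps:
      - template: templates/maven-settings.yml
      # ...docker build steps as before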
I'm trying to deploy a project to a server using the AppVeyor agent. However, if I do not restart or stop the application before deployment, it does not work:
Web Deploy cannot modify the file 'TestProject.Application.dll' on the destination because it is locked by an external process. In order to allow the publish operation to succeed, you may need to either restart your application to release the lock, or use the AppOffline rule handler for .Net applications on your next publish attempt. Learn more at: http://go.microsoft.com/fwlink/?LinkId=221672#ERROR_FILE_IN_USE.
Is there an easy way to work with an app_offline.htm file? An appveyor.yml configuration using the "app_offline" feature does not work in this kind of environment.
I was looking for something in the "before/after" section. Here's my appveyor.yml:
version: '1.0.{build}'
os: Visual Studio 2015

install:
  - "SET PATH=C:\\Program Files\\dotnet\\bin;%PATH%"

branches:
  only:
    - master

assembly_info:
  patch: true
  file: '**\AssemblyInfo.*'
  assembly_version: '{version}'
  assembly_file_version: '{version}'
  assembly_informational_version: '{version}'

build_script:
  - nuget sources add -name "VNext" -source https://dotnet.myget.org/F/cli-deps/api/v3/index.json
  - nuget sources add -name "nugetv3" -source https://api.nuget.org/v3/index.json
  - dotnet restore
  - dotnet build */*/project.json

after_build:
  - ps: Remove-Item -Path src\TestProject.Web\web.config
  - ps: Move-Item -Path src\TestProject.Web\web.$env:APPVEYOR_REPO_BRANCH.config -Destination src\TestProject.Web\web.config
  - dotnet publish src\TestProject.Web\ --output %appveyor_build_folder%\publish

artifacts:
  - path: .\publish
    name: TestProject.Web

test: off

deploy:
  - provider: Environment
    name: east-webhost
    artifact: TestProject.Web
    remove_files: false
    on:
      branch: master
Please look at before/after deploy scripts. Also check this sample for how you can ensure that the file is released.
--ilya.
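For illustration, a hedged sketch of such a before-deploy script. It assumes the Deployment Agent's convention of running a before-deploy.ps1 shipped in the artifact root, and that APPLICATION_PATH points at the deployment folder on the server; check the agent docs for the exact names:

# before-deploy.ps1 -- runs on the target server before files are copied.
# Drop app_offline.htm into the site folder so IIS takes the app offline
# and releases the file locks; remove it again in after-deploy.ps1.
Copy-Item (Join-Path $PSScriptRoot 'app_offline.htm') (Join-Path $env:APPLICATION_PATH 'app_offline.htm')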