I have been trying to get CI working with gitlab-runner using the shell executor and a YAML config. My tests run and work without a problem. The only problem I have is that when a test concludes, the test results file gets deleted from the directory and I don't know why. While the test is running the file exists for about a minute, and then it disappears. There are no errors shown, and gitlab-runner.exe --debug run doesn't show me anything useful either. My code is as below.
stages:
  - edit
  - play
  - build

unit-test:
  script: C:/"Unity projects"/2019.3.0f6/Editor/Unity.exe -batchmode -projectPath=. -runTests -testPlatform editmode -logFile -testResults "./unit-tests-edit.xml" | Out-Default
  stage: edit
  tags:
    - Desktop

play-test:
  script: C:/"Unity projects"/2019.3.0f6/Editor/Unity.exe -batchmode -projectPath=. -runTests -testPlatform editmode -logFile -testResults "./unit-tests-play.xml" | Out-Default
  stage: play
  tags:
    - Desktop

unity-build:
  script: C:/"Unity projects"/2019.3.0f6/Editor/Unity.exe -batchmode -logFile -projectPath=. -executeMethod WindowsBuildScript.PerformBuild -quit
  stage: build
  tags:
    - Desktop
From what I understand this code should be correct. I had the same problem with automatically building the Unity project: after adding more to my project it doesn't build at all anymore, and I am unsure what happened. Does anybody have an idea why the files automatically delete themselves? The files in question are unit-tests-edit.xml and unit-tests-play.xml. I reckon that if I am able to fix those problems, the build should show up correctly as well.
That's because every job gets its own shell environment. You need to use artifacts to pass a file from one job to another.
https://docs.gitlab.com/ee/ci/pipelines/job_artifacts.html
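For example, the unit-test job from the question could declare its results file as an artifact so later stages (and the pipeline web UI) can see it. A sketch, reusing the job's own names; adjust paths to your layout:

```yaml
unit-test:
  script: C:/"Unity projects"/2019.3.0f6/Editor/Unity.exe -batchmode -projectPath=. -runTests -testPlatform editmode -logFile -testResults "./unit-tests-edit.xml" | Out-Default
  stage: edit
  tags:
    - Desktop
  artifacts:
    when: always          # keep the results file even when tests fail
    paths:
      - unit-tests-edit.xml
```

A job in a later stage then gets the file restored into its working directory automatically.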
Related
Problem
I am running an inline shell script on a remote machine via SSH in an Azure DevOps pipeline. Depending on some conditions, running the pipeline should throw a custom warning.
The feature works, albeit in a very twisted way: when running the pipeline, the warning always appears. To be more precise:
If the condition is not met, the warning appears once.
If the condition is met, the warning appears twice.
The example below should give a clear illustration of the issue.
Example
Let's say we have the following .yaml pipeline template. Please adapt the pool and sshEndpoint settings for your setup.
pool: 'Default'

steps:
  - checkout: none
    displayName: Suppress regular checkout
  - task: SSH@0
    displayName: 'Run shell inline on remote machine'
    inputs:
      sshEndpoint: 'user@machine'
      runOptions: inline
      inline: |
        if [[ 3 -ne 0 ]]
        then
          echo "ALL GOOD!"
        else
          echo "##vso[task.logissue type=warning]Something is fishy..."
        fi
      failOnStdErr: false
Expected behavior
So, the above bash script should echo ALL GOOD! as 3 is not 0. Therefore, the warning message should not be triggered.
Current behavior
However, when I run the above pipeline in Azure DevOps, the step log overview says there is a warning, and the log (connection details blacked out) shows the warning entry even though the code takes the ALL GOOD! code path. I assume it is because the whole bash script is echoed to the log - but that is just an assumption.
Question
How can I make sure that the warning only appears in the executed pipeline when the conditions for it are satisfied?
Looks like the SSH task logs the script to the console prior to executing it. You can probably trick the log parser:
HASHHASH="##"
echo $HASHHASH"vso[task.logissue type=warning]Something is fishy..."
I'd consider it a bug that the script is appended to the log as-is (though some parsing does seem to take place as well). I've filed an issue.
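Applied to the inline script from the question, the rework might look like this (a sketch; the variable name is arbitrary):

```shell
# Build the "##" prefix at runtime so the literal "##vso" token never
# appears in the script text that the SSH task echoes to the log.
HASH="##"
if [[ 3 -ne 0 ]]
then
  echo "ALL GOOD!"
else
  echo "${HASH}vso[task.logissue type=warning]Something is fishy..."
fi
```

The agent still parses the *executed* output normally, so a genuine warning emitted at runtime is picked up as before.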
1. How can I prevent Gitlab from adding -NoProfile when creating the docker container?
2. What's the difference between using powershell with -NoProfile and without it?
3. Is there any way if I run powershell -NoProfile to somehow load/run the default profile and to revert the effect of setting -NoProfile flag?
And now the story behind these questions:
MSBuild (the Visual Studio build tool) fails to build a Xamarin app when used in a docker container started by GitLab with powershell -NoProfile.
I created a docker image for CI purposes and everything works properly if I run the container manually, but when it runs during a GitLab runner job, it fails (more exactly, msbuild /t:SignAndroidPackage fails because some file does not get generated). I inspected this (put a sleep of 1 hour in gitlab-ci.yml and attached to the container on the runner machine) and found out that GitLab starts the container with powershell -NoProfile. I tried starting the container with -NoProfile manually and reproduced the issue.
the error is:
Could not find a part of the path 'C:\builds\gitlab\HMI.Framework\KDI\sub\HostApplicationToolkit\sub\KBle\KBle\sub\XamarinBluetooth\Source\Plugin.BLE.Android\Resources\Resource.Designer.cs'
Here the Resource.Designer.cs is missing (and it should be auto-generated during the build process)
This is the dockerFile:
# escape=`
# Use the latest Windows Server Core image with .NET Framework 4.8.
FROM mcr.microsoft.com/dotnet/framework/sdk:3.5-windowsservercore-ltsc2019
ENV VS_STUDIO_INSTALL_LINK=https://download.visualstudio.microsoft.com/download/pr/befdb1f9-8676-4693-b031-65ee44835915/c541feeaa77b97681f7693fc5bed2ff82b331b168c678af1a95bdb1138a99802/vs_Community.exe
ENV VS_INSTALLER_PATH=C:\TEMP\vs2019.exe
ENV ANDROID_COMPILE_SDK=29
# Restore the default Windows shell for correct batch processing.
SHELL ["cmd", "/S", "/C"]
ADD $VS_STUDIO_INSTALL_LINK $VS_INSTALLER_PATH
RUN %VS_INSTALLER_PATH% --quiet --wait --norestart --nocache --includeRecommended --includeOptional `
--add Microsoft.VisualStudio.Workload.NetCrossPlat `
--add Microsoft.VisualStudio.Workload.XamarinBuildTools `
|| IF "%ERRORLEVEL%"=="3010" EXIT 0
RUN del %VS_INSTALLER_PATH%
# set some util paths
RUN setx JAVA_HOME "c:\Program Files\Android\jdk\microsoft_dist_openjdk_1.8.0.25\"
RUN setx path "%path%;c:\Program Files (x86)\Android\android-sdk\tools\bin;c:\Program Files (x86)\Android\android-sdk\platform-tools"
# update android SDK with API 29
RUN echo y| sdkmanager "platforms;android-%ANDROID_COMPILE_SDK%"
#ENTRYPOINT ["powershell.exe", "-NoLogo", "-ExecutionPolicy", "Bypass"]
This is the gitlab-ci.yml
image: visualstudio2019-xamarin-ci:1.0-windowsservercore-ltsc2019

stages:
  - build
  - test

variables:
  GIT_SUBMODULE_STRATEGY: 'recursive'

build_kdi:
  stage: build
  only:
    - CI_CD
  tags:
    - docker-windows
  script:
    - '& "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\MSBuild\Current\Bin\MSBuild.exe" ./src/KDI/KDI.sln /t:Restore /p:AndroidBuildApplicationPackage=true /p:Configuration=Release'
    - '& "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\MSBuild\Current\Bin\MSBuild.exe" /p:Configuration=Release /t:SignAndroidPackage /property:SolutionDir="C:\builds\gitlab\HMI.Framework\KDI\src\KDI\" /p:AndroidSigningKeyStore="C:\builds\gitlab\HMI.Framework\KDI\distribution\Android\KrohneAndroidKeystore.keystore" .\src\KDI\KDI\KDI.Android\KDI.Android.csproj'

run_bb_tests:
  stage: test
  only:
    - CI_CD
  tags:
    - docker-windows
  script:
    - docker-ci/install_and_run_bb_tests.bat "phone_ip" "5555" arg1 arg2 || true
    - adb pull /storage/emulated/0/Android/data/myApp/files/Exports/BBT_Exports/BBTests_CI C:\test_results
  artifacts:
    paths:
      - c:\test_results\BBTests_CI # expire in 1 month by default
If I start this image using docker run -it myImage powershell, everything works well, but if I run it with docker run -it myImage powershell -NoProfile, msbuild fails at some step in building the Xamarin app.
How can I prevent Gitlab from adding -NoProfile when creating the docker container?
Apparently you can't... yet. But there is a feature request in place.
What's the difference between using powershell with -NoProfile and without it?
A PowerShell profile is just a script file that contains things, like custom functions, for a specific user. For instance, suppose I had Function Banana(){ Write-Host "Awesome" } in my PowerShell profile (e.g. $Home\[My ]Documents\PowerShell\Profile.ps1). Whenever I open PowerShell, it loads that function automatically, so I can type banana and have the string Awesome written to stdout. If I were to start PowerShell with -NoProfile and enter banana, I'd get a CommandNotFoundException.
source
Is there any way if I run powershell -NoProfile to somehow load/run the default profile and to revert the effect of setting -NoProfile flag?
Maybe. And I only say maybe because I haven't personally tested it, but in theory it should work. If the profile is in the container, all you have to do is dot source it.
. "C:\path\to\powershell\profile.ps1"
Dot sourcing the .ps1 file will run anything within it inside the current scope, which includes loading any functions it defines.
dot source docs
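The same mechanism exists in POSIX shells, which may help illustrate it. This sketch writes a throwaway stand-in "profile" and dot-sources it (the file path and function name are made up for the demo):

```shell
# Write a stand-in "profile" file that defines a function.
cat > /tmp/demo_profile.sh <<'EOF'
banana() { echo "Awesome"; }
EOF

# Dot-source it: the file runs in the *current* shell scope, so the
# function it defines is available afterwards in this session.
. /tmp/demo_profile.sh
banana   # prints: Awesome
```

Running the file as a child process instead (sh /tmp/demo_profile.sh) would define the function only in the child, which is the whole point of dot sourcing.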
My expertise with GitLab is limited, but given my overall experience with Windows containers, PowerShell, and the nature of PowerShell profiles, I doubt that the profile is the root cause of this issue; I just don't have enough information/evidence to say definitively. Hope this helps in some way!
I am using a Cake script to build my application. During the build, all the logging information is displayed in the console. I want to write all of this console output into a log file in the same path where build.ps1 is located.
The build process is as follows: from gitlab-ci a particular bat file is called; that bat file gathers the necessary build information and calls build.ps1 as below.
call start /wait /i cmd /c powershell.exe -Command %PSFILE_PATH% --rebrand="app_name"
pause
[PSFILE_PATH will contain the build.ps1 path and file name, e.g. "F:\Build\app_name\build.ps1"]
Info: I have tried ".\build.ps1 > output.log"; this works when running the build on my local machine, but in my application's build process (via the gitlab-ci runner) I'm unable to use this command.
Please suggest a way (other than ".\build.ps1 > output.log") to log everything printed to the console into a file while running build.ps1.
Thanks in advance.
You can persist this information from the Gitlab-ci build by saving it as an artifact. Just add this to your job's description in the gitlab-ci.yml:
artifacts:
  name: 'Build output' # or whatever
  when: always # or on_success/on_failure depending on your use case
  paths:
    - output.log
  expire_in: 1 day # or 4 hours or however long you want to keep it stored
This way you can look at the output.log file via pipelines > your pipeline > artifacts.
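Putting the two together, the job could capture the build output itself and then publish it. A sketch, under the assumption that build.ps1 can be invoked directly from the job script; Tee-Object keeps the output visible in the job log as well as writing the file:

```yaml
build:
  stage: build
  script:
    # *>&1 merges all PowerShell output streams before teeing to the file;
    # the --rebrand value is the placeholder from the question.
    - powershell.exe -Command "& .\build.ps1 --rebrand='app_name' *>&1 | Tee-Object -FilePath output.log"
  artifacts:
    name: 'Build output'
    when: always
    paths:
      - output.log
```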
I have created a toolchain, which downloads the code from the bitbucket repository and builds the docker image in IBM Cloud.
After the code builds the image, the build stage fails while preparing the build artifacts.
Error:
Preparing the build artifacts...
Customer script does not exist for the job, exitting
I have specified the Build archive directory as the folder name. Do I need to write any scripts for archiving?
That particular error occurs when one of our checks -- the existence of /home/pipeline/$TASK_ID/_customer_script.sh -- fails.
Archiving happens automatically, but that file needs to be present as we use it as part of the traceability around how the artifact was created. Is it possible that file is getting removed? (We will also look into removing the check or making it non-fatal, but that will take time.)
This issue appears to be caused by setting a working directory for the job. _customer_script.sh gets dropped into the working directory, but the script Simon is referring to (/opt/IBM/pipeline/bin/ids-buildables-notify.sh) only checks the top-level directory the code input is at (/home/pipeline/$TASK_ID/).
Three options to fix this, assuming you're doing a container registry job:
Run cp _customer_script.sh /home/pipeline/$TASK_ID in your script. The ids-buildables-notify.sh script does some grepping for your bx cr build call, so make sure that's still in there.
touch /home/pipeline/$TASK_ID/_customer_script.sh and export PIPELINE_IMAGE_URL=<your image url>. If PIPELINE_IMAGE_URL is set, the notify script doesn't bother with being clever, which I prefer.
Don't change the working directory.
A script which works for me:
#!/bin/bash
echo -e "Build environment variables:"
echo "REGISTRY_URL=${REGISTRY_URL}"
echo "REGISTRY_NAMESPACE=${REGISTRY_NAMESPACE}"
echo "IMAGE_NAME=${IMAGE_NAME}"
echo "BUILD_NUMBER=${BUILD_NUMBER}"
echo -e "Building container image"
set -x
export PIPELINE_IMAGE_URL=$REGISTRY_URL/$REGISTRY_NAMESPACE/$IMAGE_NAME:$BUILD_NUMBER
bx cr build -t $PIPELINE_IMAGE_URL .
set +x
touch /home/pipeline/$TASK_ID/_customer_script.sh
I'm completely new to Bamboo, so thank you in advance for the help.
I'm trying to create a Bamboo plan that zips files from a git repo and uploads them to Artifactory. Currently my build contains two tasks: source code checkout and a simple PowerShell script. The first time I run it, it builds perfectly fine, but without any modifications, any consecutive runs fail.
The error I'm getting in the log is the following:
Failing task since return code of [powershell -ExecutionPolicy bypass -Command /bin/sh /opt/bamboo/agent/temp/OR-J8U-JOB1-4-ScriptBuildTask-539645121146088515.ps1] was -1 while expected 0
Replacing the PowerShell script with empty space does not resolve the issue; only removing the script completely allows the build to succeed, but then I cannot reinsert a new script without it failing again. I read other online questions suggesting that I "merge the user-level PATH environment information into the system-level PATH", but I cannot find the user-level environment information; my environment variables section is completely empty.
Like Vlad, I found that it was more efficient to reimplement my PowerShell script in batch.