How to build a GitLab pipeline if my code needs to be run on a Windows server? - powershell

I implemented a bunch of infrastructure checks (PowerShell scripts) that need to be run on Windows Servers (most of them use the Get-WmiObject cmdlet). I put them, along with their Pester tests, on GitLab and am trying to build a pipeline.
I have read creating-your-first-windows-container-with-docker-for-windows and building-a-simple-release-pipeline-in-powershell-using-psake-pester-and-psdeploy, but I am still confused. Is my understanding correct that, to have the code run on GitLab CI, I will need to build a Windows Server Docker image?
The following is my .gitlab-ci.yml file, but it fails with authentication errors; the image can be found here:
image: ltsc2019

stages:
  - build
  - test
  - deploy

build:
  stage: build
  script:
    # run PowerShell script
    - powershell -File "\Deploy\Build.ps1"

test:
  stage: test
  script:
    - powershell -File "\Deploy\CodeCoverage.ps1"

deploy:
  stage: deploy
  script:
    - powershell -File "\Deploy\Deploy_Local.ps1"
It wouldn't pass the initial build, and here are the errors I got:
# Error 1
ERROR: Job failed: Error response from daemon: pull access denied for ltsc2019, repository does not exist or may require 'docker login' (executor_docker.go:168:3s)
# Error 2 (this happened because I added 'shell: "powershell"'
# after executor in the gitlab-runner config file)
ERROR: Preparation failed: Docker doesn't support shells that require script file

ltsc2019 is one tag of the mcr.microsoft.com/windows/servercore image.
You need to reference this image at the beginning of your .gitlab-ci.yml:
image: mcr.microsoft.com/windows/servercore:ltsc2019
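
With that change applied, a corrected version of the .gitlab-ci.yml from the question could look like the sketch below. This is only a sketch: it assumes the job runs on a runner that can execute Windows containers, and the script paths have been made relative to the checked-out repository, which is an assumption about where the Deploy folder lives.

image: mcr.microsoft.com/windows/servercore:ltsc2019

stages:
  - build
  - test
  - deploy

build:
  stage: build
  script:
    # run the PowerShell build script from the repository checkout
    - powershell -File ".\Deploy\Build.ps1"

test:
  stage: test
  script:
    - powershell -File ".\Deploy\CodeCoverage.ps1"

deploy:
  stage: deploy
  script:
    - powershell -File ".\Deploy\Deploy_Local.ps1"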

For anyone who struggles to get Docker images working with Docker for Windows: please note that the Docker executor currently doesn't support Docker for Windows. Check out the available executors if you are building a pipeline that needs a container to run it.
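
If you register a runner with the docker-windows executor for this, one way to make sure Windows jobs land on that runner is a tag on both the runner and the job. A minimal sketch, where the windows tag name is purely an assumption about how the runner was registered:

build:
  stage: build
  tags:
    # assumed tag; must match a tag set when registering the Windows runner
    - windows
  script:
    - powershell -File ".\Deploy\Build.ps1"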

Related

With PowerShell, a GitLab CI job passed while a failure occurred

My gitlab-ci.yml contains the following code:
stages:
  - build

build:
  stage: build
  script:
    - exit 1
When running, the job doesn't fail!
Running with gitlab-runner 13.10.0 (54944146)
Preparing the "shell" executor
Using Shell executor...
Preparing environment
Getting source from Git repository
Fetching changes with git depth set to 50...
Reinitialized existing Git repository
Checking out b70613dd
git-lfs/2.13.2 (GitHub; windows amd64; go 1.14.13; git fc664697)
Skipping Git submodules setup
Executing "step_script" stage of the job script
00:03
$ exit 1
Cleaning up file based variables
00:03
Job succeeded
How can I avoid false successes when the job should actually fail?
PowerShell 5 (the default Windows PowerShell instance) reports a false success here. When using PowerShell Core, the problem no longer appears.

Using AWS CLI from Azure pipeline

I'm trying to use the AWS CLI within a script section of an Azure pipeline. The script section is in a template file and is accessed from the main pipeline.
steps:
  - bash: |
      step_function_state=`aws stepfunctions list-executions --state-machine-arn $(stateMachineArn) --status-filter RUNNING | jq -r '.executions[]|.status' | head -1`
      echo "State machine RUNNING status: ${step_function_state}"
      # Rest of the script
    displayName: "Test Script"
    env:
      AWS_ACCESS_KEY_ID: $(AWS_ACCESS_KEY_ID)
      AWS_DEFAULT_REGION: $(AWS_DEFAULT_REGION)
      AWS_SECRET_ACCESS_KEY: $(AWS_SECRET_ACCESS_KEY)
stateMachineArn, AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_DEFAULT_REGION are stored in a variable group. When running the pipeline, it gives the following error:
An error occurred (UnrecognizedClientException) when calling the ListExecutions operation: The security token included in the request is invalid.
Using the same credentials, I am able to run the CLI locally and get results.
I tried the printenv command, and all the AWS variables are present in the environment too. What could I possibly be doing wrong?
I realized that this issue occurred due to a credential mismatch.
After adding the correct credentials (the same ones as my local CLI), the CLI in the pipeline also started to work.
Based on the error log it felt like aws_session_token could be the issue, but the actual problem was in aws_access_key_id and aws_secret_access_key.
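
A quick way to spot this kind of mismatch earlier is to print the identity behind the credentials the pipeline is actually using, for example with aws sts get-caller-identity, before the rest of the script runs. A rough sketch of such a step, reusing the variable mapping from the question:

steps:
  - bash: |
      # prints the account ID and ARN of the credentials in use,
      # so a mismatch with the local CLI becomes obvious
      aws sts get-caller-identity
    displayName: "Verify AWS credentials"
    env:
      AWS_ACCESS_KEY_ID: $(AWS_ACCESS_KEY_ID)
      AWS_DEFAULT_REGION: $(AWS_DEFAULT_REGION)
      AWS_SECRET_ACCESS_KEY: $(AWS_SECRET_ACCESS_KEY)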

How to keep secure files after a job finishes in Azure Devops Pipeline?

Currently I'm working on a pipeline script for Azure DevOps. I want to provide a Maven settings file as a secure file for the pipeline. The problem is that when I define a job only for providing the file, the file isn't there anymore when the next job starts.
I tried defining a job with a DownloadSecureFile task and a copy command to get the settings file, but when the next job starts, the file isn't there anymore and therefore can't be used.
I already checked that by using pwd and ls in the pipeline.
This is part of my current YAML file (that actually works):
some variables
...
trigger:
  branches:
    include:
      - stable
      - master

jobs:
  - job: Latest_Release
    condition: eq(variables['Build.SourceBranchName'], 'master')
    steps:
      - task: DownloadSecureFile@1
        name: settingsxml
        displayName: Download maven settings xml
        inputs:
          secureFile: settings.xml
      - script: |
          cp $(settingsxml.secureFilePath) ./settings.xml
          docker login -u $(AzureRegistryUser) -p $(AzureRegistryPassword) $(AzureRegistryUrl)
          docker build -t $(AzureRegistryUrl)/$(projectName):$(projectVersionNumber-Latest) .
          docker push $(AzureRegistryUrl)/$(projectName):$(projectVersionNumber-Latest)
....
other jobs
I wanted to put the DownloadSecureFile task and "cp $(settingsxml.secureFilePath) ./settings.xml" into their own job, because there are more jobs that need this file for other branches/releases and I don't want to copy the exact same code into all jobs.
This is the YAML file as I wanted it:
some variables
...
trigger:
  branches:
    include:
      - stable
      - master

jobs:
  - job: provide_maven_settings
    # no condition because all branches need the file
    steps:
      - task: DownloadSecureFile@1
        name: settingsxml
        displayName: Download maven settings xml
        inputs:
          secureFile: settings.xml
      - script: |
          cp $(settingsxml.secureFilePath) ./settings.xml
  - job: Latest_Release
    condition: eq(variables['Build.SourceBranchName'], 'master')
    steps:
      - script: |
          docker login -u $(AzureRegistryUser) -p $(AzureRegistryPassword) $(AzureRegistryUrl)
          docker build -t $(AzureRegistryUrl)/$(projectName):$(projectVersionNumber-Latest) .
          docker push $(AzureRegistryUrl)/$(projectName):$(projectVersionNumber-Latest)
....
other jobs
In my Dockerfile the settings file is used like this:
FROM maven:3.6.1-jdk-8-alpine AS MAVEN_TOOL_CHAIN
COPY pom.xml /tmp/
COPY src /tmp/src/
COPY settings.xml /root/.m2/ # can't find file when executing this
WORKDIR /tmp/
RUN mvn install
...
The error happens when docker build is started, because it can't find the settings file. It does work, though, when I use my first YAML example. I have a feeling that it has something to do with each job having a "Checkout" phase, but I'm not sure about that.
Each job in Azure DevOps runs on a different agent. When you use Microsoft-hosted agents and split the pipeline into several jobs, each job runs on a fresh agent, so if you copy the secure file in one job, the next job of course doesn't have the file.
You can solve the issue by using a self-hosted agent (copy the file onto your machine once, and the second job then runs on the same machine).
Or you can upload the file to somewhere else (secured) and download it again in the second job (but then why not just download it where it is needed in the first place...).
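
Since the secure file has to be downloaded in every job that needs it anyway, one way to avoid copying the same steps into each job is an Azure DevOps step template. A rough sketch; the template file name maven-settings-steps.yml is just an example, not from the original question:

# maven-settings-steps.yml (example name) - reusable steps
steps:
  - task: DownloadSecureFile@1
    name: settingsxml
    displayName: Download maven settings xml
    inputs:
      secureFile: settings.xml
  - script: cp $(settingsxml.secureFilePath) ./settings.xml
    displayName: Copy settings.xml into the workspace

# main pipeline - each job pulls in the same steps
jobs:
  - job: Latest_Release
    condition: eq(variables['Build.SourceBranchName'], 'master')
    steps:
      - template: maven-settings-steps.yml
      - script: |
          docker login -u $(AzureRegistryUser) -p $(AzureRegistryPassword) $(AzureRegistryUrl)
          docker build -t $(AzureRegistryUrl)/$(projectName):$(projectVersionNumber-Latest) .
          docker push $(AzureRegistryUrl)/$(projectName):$(projectVersionNumber-Latest)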

How do I use PowerShell with GitLab CI in GitLab Pages?

How do I use PowerShell commands/scripts with GitLab CI in a .gitlab-ci.yml file which is used to deploy to GitLab Pages?
I am trying to execute the build.ps1 file from .gitlab-ci.yml, but when it reaches the build.ps1 line, it gives an error saying
/bin/bash: line 5: .build.ps1: command not found
I am trying to use the PowerShell script to convert a file in my repo and have the converted file deployed to GitLab Pages using .gitlab-ci.yml.
Here is my code:
.gitlab-ci.yml
pages:
  stage: deploy
  script:
    - mkdir .public
    - .\build.ps1
    - cp -r * .public
    - mv .public public
  artifacts:
    paths:
      - public
  only:
    - master
I have been able to figure out a solution to my own question.
Solution
To run a PowerShell command/script from a .gitlab-ci.yml file on gitlab.com using GitLab CI, make sure the contents of your .gitlab-ci.yml file are as shown below.
Note: The .gitlab-ci.yml below works without having to install a GitLab Runner on your own machine and has been tested on the http://gitlab.com website.
image: philippheuer/docker-gitlab-powershell

pages:
  stage: deploy
  script:
    - mkdir .public
    # run PowerShell Script
    - powershell -File build.ps1
    # run PowerShell Command
    - powershell -Command "Get-Date"
    - cp -r * .public
    - mv .public public
  artifacts:
    paths:
      - public
  only:
    - master
The Docker image philippheuer/docker-gitlab-powershell is outdated. The source on GitHub has also been deleted.
In my gitlab-ci.yml I use the image mcr.microsoft.com/powershell:latest instead; more information is available here.
scriptjob:
  stage: script
  image:
    name: "mcr.microsoft.com/powershell:latest"
  script:
    - pwsh ./myscript.ps1
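
Adapted to the pages job from the original question, a minimal sketch could look like the following. Note that this image is Linux-based, so build.ps1 has to run under PowerShell Core (pwsh) rather than Windows PowerShell; that is an assumption about what the script supports.

image: mcr.microsoft.com/powershell:latest

pages:
  stage: deploy
  script:
    - mkdir .public
    # run the script with PowerShell Core
    - pwsh -File build.ps1
    - cp -r * .public
    - mv .public public
  artifacts:
    paths:
      - public
  only:
    - master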
For anyone who is having trouble launching grunt within their GitLab CI/CD via a PowerShell file, add this line to the top of your file:
$env:path += ";" + (Get-Item "Env:AppData").Value + "\npm"

Pipeline fails due to `hijack: Backend error`

I'm following the Stark & Wayne tutorial and ran into a problem:
Pipeline fails with
hijack: Backend error: Exit status: 500, message {"Type":"","Message":"runc exec: exit status 1: exec failed: container_linux.go:247: starting container process caused \"exec format error\"\n","Handle":""}
I have one git resource and one job with one task:
- task: test
  file: resource-ci/ci/test.yml
test.yml file:
platform: linux

image_resource:
  type: docker-image
  source:
    repository: busybox
    tag: latest

inputs:
  - name: resource-tool

run:
  path: resource-tool/scripts/deploy.sh
deploy.sh is a simple dummy file with one echo command:
echo [+] Testing in the process ...
So what could it be?
This error means that the shell your script is trying to invoke is unavailable in the container running your task.
Busybox doesn't come with bash; it only comes with /bin/sh. Check the shebang in deploy.sh, making sure it looks like:
#!/bin/sh
# rest of script
I also ran into this error when I forgot the ! at the top of my pipeline's shell script, i.e. I had written this instead of #!/bin/bash:
#/bin/bash
# rest of script