GitLab Runner cannot retrieve a dependency repo in a PowerShell shell executor

CI Runner Context
GitLab version: 13.12.2 (private server)
GitLab Runner version: 14.9.1
Executor: shell executor (PowerShell)
Operating system: Windows 10
Project in Python (may be unrelated)
(using Poetry for dependency management)
The Problem
I am setting up an automated integration system for a project that has several internal dependencies hosted on the same server as the project being integrated. If I run the CI with a poetry update in the .yml file, the job console exits with error code 128 when it calls git clone on my internal dependency.
To isolate the problem, I tried simply calling git clone on that same repo. The response is that the runner cannot authenticate itself to the GitLab server.
What I Have Tried
Reading through the GitLab docs, I found that runners need authorization to pull any private dependencies, and that GitLab provides deploy keys for this.
So I followed the instructions to create a deploy key for the dependency and added it to the sub-project's deploy key list. I then ran into the exact same permissions problem.
What am I missing?
(For anyone looking at this case for Windows PowerShell: the user the runner runs as is NT AUTHORITY\SYSTEM, a system-only account that I have not found a way to log into as a human. I had to make the CI runner perform the SSH key creation steps itself.)
Example .gitlab-ci.yml file:
#Commands in PowerShell
but_first:
  #The initial stage, always happens first
  stage: .pre
  script:
    # Start ssh agent for deploy keys
    - Start-Service ssh-agent
    # Check if ssh-agent is running
    - Get-Service ssh-agent
    - git clone ssh://git@PRIVATE_REPO/software/dependency-project.git
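For reference, the SSH key creation mentioned in the note above also has to run as a CI job, since the SYSTEM profile is hard to reach interactively. A minimal sketch of what such a job could look like, assuming the Windows 10 OpenSSH client's ssh-keygen is on the PATH (job name and key path are illustrative, not from the original setup):

make_deploy_key:
  stage: .pre
  script:
    # Create the .ssh directory under the runner user's profile if it does not exist yet
    - New-Item -ItemType Directory -Force -Path "$env:USERPROFILE\.ssh"
    # Generate a passphrase-less key pair ('""' passes an empty passphrase from PowerShell)
    - ssh-keygen -t ed25519 -N '""' -f "$env:USERPROFILE\.ssh\id_ed25519"
    # Print the public key so it can be registered as a deploy key on the dependency project
    - Get-Content "$env:USERPROFILE\.ssh\id_ed25519.pub"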

I solved my problem of pulling internal dependencies by completely bypassing the SSH pull of the source code and by switching from Poetry to Hatch for dependency management (I'll explain why further down).
Hosting the compiled dependencies
For this, I compiled my dependency project's source code into a distribution-ready package (in this context, a Python wheel).
I then used GitLab's Packages and Registries feature to host the package. Instead of keeping packages in each source-code project, I pushed the packages of all my dependencies to a single project created for that purpose.
My .gitlab-ci.yaml file looks like this when publishing to that project:
deploy:
  # Could be used to build the code into an installer
  stage: Deploy
  script:
    - echo "deploying"
    - hatch version micro
    # only the wheel is built (without a target, both wheel and sdist are built)
    - hatch build -t wheel
    - echo "Build done ..."
    - hatch publish --repo http://<private gitlab repo>/api/v4/projects/<project number>/packages/pypi --user gitlab-ci-token --auth $CI_JOB_TOKEN
    - echo "Publishing done!"
Pulling those hosted dependencies (& why I ditched poetry)
My first problem was having pip find the extra PyPI repository with all my packages. But pip already has a solution for that!
In its pip.ini file (to find where it is, you can run pip config -v list), two entries need to be added:
[global]
extra-index-url = http://__token__:<your api token>@<private gitlab repo>/api/v4/projects/<project number>/packages/pypi/simple

[install]
trusted-host = <private gitlab repo>
This is functionally the same as passing the --extra-index-url and --trusted-host flags when calling pip install.
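The same two entries can also be written from a CI job instead of editing pip.ini by hand; a minimal sketch (the job name is illustrative):

configure_pip:
  stage: .pre
  script:
    # pip config set writes the same keys that live in pip.ini
    - pip config set global.extra-index-url "http://__token__:<your api token>@<private gitlab repo>/api/v4/projects/<project number>/packages/pypi/simple"
    - pip config set install.trusted-host "<private gitlab repo>"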
Since I was using a dependency manager, I was not calling pip directly but through the manager's wrapper around it. And here comes the main reason why I decided to change dependency managers: Poetry does not read or recognize pip.ini, so any changes made in that file are ignored.
With the pip.ini file configured, any dependencies I have in the private package repo are also found when installing projects. So the line:
- git clone ssh://git@PRIVATE_REPO/software/dependency-project.git
changes to a simple line:
- pip install dependency-project
Or a line in pyproject.toml:
dependencies = [
    "dependency-project",
    "second_project",
]
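(For context, in a PEP 621 pyproject.toml, which is what Hatch uses, this list sits under the [project] table; the surrounding fields below are placeholders:)

[project]
name = "my-project"
version = "0.1.0"
dependencies = [
    "dependency-project",
    "second_project",
]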

Related

Trying to Yarn add a private Github repo and get 'couldn't find the binary git' error

I have a really simple repo in GitHub (/Hooks/), currently containing just one file, a simple Hooks.ts TypeScript file. On my local machine, in my workspace, I've created a project folder and I can yarn add normal packages like yarn add fuse.js, but when I try to yarn add my private repo using the format yarn add git+ssh://git@github.com:OrganisationName/Hooks.git I just get "Error: couldn't find the binary git". I have permissions to the Hooks repo because I can push/pull from it. I'm on OSX Mojave (10.14.16) and installed Yarn via brew. My yarn version (yarn -v) is 1.22.10. This is the latest brew will install after running brew upgrade yarn.
The error "couldn't find the binary git" is related to git not being installed in the environment where the installation runs. Are you running these commands inside a container?
You may also need to install openssh; it is necessary too.
For example, in an Alpine container:
apk add --no-cache git openssh
yarn install
If you don't want to access the repo through SSH, you can access it through HTTPS plus a deploy token; here's a GitLab example:
git+https://<token-name>:<token>@gitlab.com/Username/Repository#<branch|tag>
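For example, with an illustrative token name, secret and branch, the install would look like:
yarn add git+https://gitlab+deploy-token-123:s3cr3t@gitlab.com/Username/Repository#main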

GitHub PR doesn't trigger GitLab pipeline

I'm trying to use GitHub to trigger a GitLab pipeline on PR.
Practically, when a developer creates a PR in GitHub, his/her code gets tested against a GitLab pipeline.
I'm trying to follow this user guide: https://docs.gitlab.com/ee/ci/ci_cd_for_external_repos/github_integration.html
and we have a Silver account, but it doesn't work. When creating the PR, the GitLab pipeline is not triggered.
Anyone with this kind of experience who can help?
Thanks
Joe
I've found the cause of the issue.
In order for GitHub to trigger GitLab CI/CD, mainly on PRs, you need to have a Silver/Premium account AND, very importantly, to be the root owner.
In any other case you won't be able to see GitHub in the integration list on GitLab. The people at GitLab had the brilliant idea of hiding it instead of showing it disabled (which would have been a hint that you needed an upgraded license).
This is not explained in the video above.
Firstly, you need to give us the content of your .gitlab-ci.yaml file. In your question you asked about GitHub, but you're following GitLab documentation, which is completely different. Both use git commands to commit and push repos, but GitHub and GitLab are different products.
For GitHub pipelines, you create a repository and then go to Actions. GitHub will propose that you configure a .github/workflows directory containing a .yaml file, and in that .yaml file you code your pipelines. Depending on your project, GitHub will propose Linux machines with the adequate configuration to run your files (a Java project gets Maven machines, Python gets Python machines, React/Angular gets machines with npm installed, plus Docker and Kubernetes for deployments), and you're limited to four private projects as far as I know (check this last piece of information).
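A minimal illustrative workflow file at .github/workflows/ci.yml could look like this (the name, trigger, and steps are placeholders, not from this answer):

name: CI
on:
  pull_request:
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      # Check out the PR's code, then run whatever build/test commands the project needs
      - uses: actions/checkout@v3
      - run: echo "run your build and test commands here"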
For GitLab you have two options: you can use preconfigured machines as on GitHub, which you request by adding, for example, a tag: npm in your .gitlab-ci.yaml file to get a machine with npm installed, but you need to pay for this; or you can configure your own runners by following the GitLab documentation (which is the best option), but you'll need decent machines and servers to run npm, mvn, python3, ... commands.
Finally, to answer your question, here is an example .gitlab-ci.yaml file with two simple stages, build & test; the only: statement specifies that these pipelines run when there is a merge request (I use GitLab's preconfigured machines as a sample here). More details are in my Python GitHub project https://github.com/mehdimaaref7/Scrapping-Sentiment-Analysis and in the GitLab Runner docs https://docs.gitlab.com/runner/
stages:
  - build
  - test

build:
  tags:
    - shell
    - linux
  stage: build
  script:
    - echo "Building"
    - mkdir build
    - touch build/info.txt
  artifacts:
    paths:
      - build/
  only:
    - merge_requests

test:
  tags:
    - shell
    - linux
  stage: test
  script:
    - echo "Testing"
    - test -f "build/info.txt"
  only:
    - merge_requests

mkdocs site doesn't exist after build on codeship

I'm trying to use Codeship to automate building docs from a repository.
After executing the command mkdocs build --clean, I get a path to where my site folder is supposed to be:
INFO - Cleaning site directory
INFO - Building documentation to directory: /home/rof/src/bitbucket.org/josephkobti/test/site
The thing is that I can't find that folder using the ssh console for debugging.
The reason the folder doesn't exist is a misunderstanding of Codeship's SSH Debug Build feature, documented here: https://documentation.codeship.com/basic/builds-and-configuration/ssh-access/
The VMs started for the debug feature are not the actual VMs that run the automated builds. They are new VMs running the same initialization steps as the automated builds (i.e. cloning the repository, configuring project-specific environment variables, ...) but none of the actual setup or test commands.
Because of this, the mkdocs build --clean command wasn't run either when Joseph connected to the debug VM, and so the generated site wasn't available.
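In other words, to get the folder in a debug session you have to re-run the build step by hand first, e.g. (command and path taken from the log above):

cd /home/rof/src/bitbucket.org/josephkobti/test
mkdocs build --clean
ls site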

LPRUN on AppVeyor with a reference to a custom NUGET repo reference

I am using a LinqPad script to automate an internal health check via AppVeyor.
The script references a custom NuGet package hosted on our AppVeyor account.
The build does the following:
Pulls down a GitHub repo
via LPRUN executes health-check.linq
Locally this works.
On AppVeyor it does not.
I have the following build process
SETUP
Via chocolatey --> install Linqpad5
choco install linqpad5
BUILD
nuget install Example.Package
[this is our own NuGet package hosted on AppVeyor] [SUCCESS]
xcopy "c:\projects\example_project\utilities" %AppData%\LINQPad\ /i
[copy our custom NuGetSources.xml file containing our NuGet repo location to LINQPad's folder]
cd "C:\Program Files (x86)\LINQPad5\"
lprun "C:\projects\example_project\utilities\health-check.linq"
ERROR
Error downloading 'Example.Package' - An error occurred while retrieving package metadata for 'Example.Package' from source 'Example Company Repo'.
Does anyone have any hints on how to reference a custom NuGet repo from a LINQPad script on AppVeyor?
More Info
We use AppVeyor for our CI. It allows us to write our own custom NuGet packages for internal use within our own projects.
We have a repository ('FinPad') that contains numerous .linqpad files that automate our processes & housekeeping.
Each FinPad script contains a reference to a package called 'FairGo.FinPower' on our own AppVeyor NuGet repo. This custom NuGet package contains numerous 3rd-party .NET DLLs & our own custom code to connect to a 3rd-party financial loan management system we use as a backend - http://www.finpower.com.au/ (hosted by us on Azure)
One such script is a 'Health Check' - this confirms a specific environment is operating OK.
For our TEST environment I wanted to schedule our 'health check' LINQPad script to run every 15 minutes (and on failure alert Stackify & Slack)
The process works as follows (using a custom Azure build machine from AppVeyor); a rough appveyor.yml consolidating these steps is sketched after the list:
every 15 minutes
run a custom AppVeyor build called 'Health Check TEST' -->
pull down the GitHub project 'FinPad' to c:\projects\finpad
before build --> choco install linqpad5
Run the command below to copy our own nuget.config (with our own FairGo NuGet repo references) to the location LINQPad expects:
copy "c:\projects\finpad\utilities\nuget.config" "C:\Users\appveyor\AppData\Roaming\NuGet"
*** the above nuget.config contains the URL / username / password for our AppVeyor NuGet repo in plain text
build --> cd "C:\Program Files (x86)\LINQPad5\"
after build --> lprun "C:\projects\FinPad\utilities\health-check-test.linq"
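Put together, the steps above map onto an appveyor.yml roughly like the following. This is only a sketch (install and build_script are standard AppVeyor config sections, but this exact combination is untested and the paths are simply the ones from the list):

install:
  # install LINQPad 5 via Chocolatey and put our own nuget.config in place
  - choco install linqpad5
  - copy "c:\projects\finpad\utilities\nuget.config" "C:\Users\appveyor\AppData\Roaming\NuGet"
build_script:
  # run the health check with lprun from the LINQPad install directory
  - cd "C:\Program Files (x86)\LINQPad5" && lprun "C:\projects\FinPad\utilities\health-check-test.linq"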
For reference the Health check linqpad file is
http://share.linqpad.net/tuthme.linq
Locally this works fine (I manually configured our NuGet repo via the LINQPad GUI locally & assumed it only updated the nuget.config in 'C:\Users\ME\AppData\Roaming\NuGet\nuget.config', so I added this file to the repo & copy it on every build).
On AppVeyor the build gives the below error
Downloading NuGet package FairGo.FinPower and dependencies from https://api.nuget.org/v3/index.json
Error downloading 'FairGo.FinPower' - Unable to find package 'FairGo.FinPower'.
Command exited with code 1
I have this running through a Console App now - but would really like to get this working through a LinqPad script if possible.

Setting Environment with Puppet and test it with TravisCI

I want to use Travis CI for testing pull requests on my GitHub repo, but I use Puppet for setting up the environment and installing dependencies. Is there any way to build dependencies with Puppet in .travis.yml?
You need to customize the build environment by writing a shell script that installs and runs puppet.
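A rough illustration of that approach in .travis.yml (assumptions: an Ubuntu build image with Puppet available from the distribution packages, and a manifests/site.pp in the repo; adjust names to your layout):

language: python
before_install:
  # Install Puppet and apply the manifest so the build VM gets the project's dependencies
  - sudo apt-get update
  - sudo apt-get install -y puppet
  - sudo puppet apply manifests/site.pp
script:
  # Placeholder for the project's actual test command
  - ./run_tests.sh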