Why is "/" not owned by root during Azure Pipelines builds? - azure-devops

I'm trying to debug why snaps do not run in Azure Pipelines builds, and what I have found is that "/" is not owned by root during these builds; it is owned by uid 500 (not 0).
Does anybody know why "/" is not owned by root? Is this a bug in Azure Pipelines?
For example, the following pipeline does not work:
pr:
- 1.*
jobs:
- job: ldc2_snap
  timeoutInMinutes: 0
  pool:
    vmImage: ubuntu-16.04
  steps:
  - script: |
      set -x
      snap version
      lxd --version
      sudo apt-get update
      sudo snap install --classic --candidate snapcraft
      export PATH="${PATH}:/snap/bin"
      snapcraft --version
      snapcraft
    displayName: Build ldc2 snap package
This fails because snap-confine (which is run by snapcraft / snapd) won't run if "/" is not owned by root. We (the snapd developers) do not want to allow snap-confine to run with a non-root-owned "/" without understanding why this is the case, as it seems like a bug in Azure Pipelines.
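The ownership is easy to confirm with one extra pipeline step (a minimal sketch using GNU coreutils stat; the displayName is just illustrative):
- script: |
    # On an unaffected image this prints uid 0 (root); here it prints uid 500
    stat -c 'owner of / is %U (uid %u)' /
  displayName: Check ownership of /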

You can try running your pipeline on the ubuntu-18.04 agent. I can reproduce the same issue on the ubuntu-16.04 agent, but the issue seems to be gone on ubuntu-18.04.
If you want to configure your own self-hosted agent, you can refer to the detailed steps here.

Agents run out of a working directory (defined by the system variable Pipeline.Workspace / environment variable PIPELINE_WORKSPACE). You only have access to that working directory on the hosted agent. It's not a bug, it's an intentional limitation.
If you need something to access the root of the file system, provision your own private agents.
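If you want to see exactly what the hosted agent does give you, a step like this prints the workspace location and contents (a sketch; PIPELINE_WORKSPACE is the standard environment form of Pipeline.Workspace):
- script: |
    # The hosted agent only guarantees access under this directory
    echo "Workspace: $PIPELINE_WORKSPACE"
    ls -la "$PIPELINE_WORKSPACE"
  displayName: Inspect the agent workspace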

Related

Gitlab runner and flutter_dotenv in CI/CD

I want to build a desktop application for Linux with Flutter that will be used by clients in a controlled environment; I mean, it is not a public app that will be downloaded by anyone.
I've thought about building the app from the GitLab runners and storing it in the GitLab packages. Then the clients, managed by me, download the new versions through the GitLab API.
I'm using the flutter_dotenv package to load environment variables in my Flutter app. The documentation recommends adding the .env file to the assets and also adding it to the .gitignore.
The trouble is that in CI/CD, when I build the Flutter app that is going to be used in production, it gives an error because the .env file does not exist and yet it is being referenced in the assets. This is the error it shows.
The only thing that occurs to me is for the runner itself to know whether I am building for test or for prod and create the .env as appropriate. Obviously there is no other option for testing, but I'm not sure if it's good practice and if it's the responsibility of the GitLab runners to create the prod env.
Any ideas?
You don't say what your CI/CD environment is - edit - I see it, GitLab. If it were Azure, you could upload your .env to the secure files storage, and in a step of your CI/CD deployment you could copy it from secure files into the working directory where your app is being built. That is how I have dealt with a missing local.properties file for Android deployments from Azure.
Here is a sample for Azure, which can maybe be adapted to the GitLab format:
- task: DownloadSecureFile@1
  name: local_properties
  displayName: 'Download local properties'
  inputs:
    secureFile: 'local.properties'
- script: |
    echo Installing $(local_properties.secureFilePath) to the android build folder $(Build.SourcesDirectory)/android ...
    sudo cp $(local_properties.secureFilePath) $(Build.SourcesDirectory)/android
    sudo chown -R $(whoami) /home/vsts/work/1/s/android/local.properties

Azure pipelines on a self hosted agent gives error NU1301: Unable to load the service index for source during dotnet restore

I'm having the same issue on a self-hosted agent, but I'm not specifying a password in the yml, just specifying the vstsFeed:
- checkout: self
  submodules: true
  persistCredentials: true
- task: NuGetToolInstaller@1
  inputs:
    versionSpec: 6.2.1
- task: UseDotNet@2
  displayName: Using Dotnet Version 6.0.400
  inputs:
    packageType: 'sdk'
    version: '6.0.400'
- task: DotNetCoreCLI@2
  displayName: Restore Nuget packages
  inputs:
    command: 'restore'
    projects: '**/*.sln'
    feedsToUse: 'select'
    vstsFeed: 'ba05a72a-c4fd-43a8-9505-a97db9bf4d00/6db9ddb0-5c18-4a24-a985-75924292d079'
and it fails with the following error: error NU1301: Unable to load the service index for source.
The NuGet feed is in another project of the same organization. I can see that the pipeline produces a temporary NuGet config in which it specifies a username and password for this feed during the run. I've been breaking my head for the last 72 hours non-stop to find the issue. Azure Pipelines and NuGet suck: 99% of the problems we have had so far were with NuGet not working smoothly with Azure Pipelines. Microsoft has to take a step back and resolve these pipeline and NuGet issues.
Just to make sure: the NuGet feed is on the same Azure instance the agent is registered with, right?
I remember similar issues on my on-premise Azure DevOps server, but also sometimes on the paid cloud variant. Sometimes it was flaky service state, sometimes the agent itself.
Kevin made a good point with the permissions: if those are set, you're good to go from a permissions point of view. Actually, Reader permission is enough for a restore; make sure to check the views panel too.
If after the permissions check you still have issues, you might try my "just-making-sure" lines for your .yml file:
# NuGet Authentication (safety step, normally not required as all within the same organization/project)
- task: NuGetAuthenticate@1
  displayName: "Nuget Authentication"
It shouldn't be required, but I have it on all my pipes since I had such issues, and it reduced the occurrence of the error line you posted in my cases (hybrid DevOps architecture).
Another thing I ended up doing is specifying the feeds explicitly in a repository-wide NuGet.Config file, and using that file within my yml files or with script lines instead of tasks.
If nothing helps, enable diagnostics/verbose logging to get more error details. In the worst case: log in to your agent machine, open a terminal in the same agent work folder, and manually issue a dotnet restore command to see what's going on.
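For that manual check, something along these lines should do (the work folder path and solution name are illustrative, not from the original post):
# from a terminal on the agent machine, inside the agent's work folder
cd /path/to/agent/_work/1/s
dotnet restore MySolution.sln --verbosity detailed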
Post the additional results if still no progress.
Good luck
From your description, you are using a NuGet feed from another project of the same organization.
You need to check the following points:
Check the permissions of the build service account.
Here are the steps:
Step 1: Navigate to Artifacts -> Target feed -> Feed settings -> Permissions.
Step 2: Grant the build service account the Contributor role. The build service account name has the format: {Project name where the pipeline is located} Build Service ({Organization name}).
Also check whether the Limit job authorization scope to current project for non-release pipelines option is enabled in Project Settings -> Settings.
If it is, you need to disable the option so the pipeline can use resources outside the project.
Note: to disable this option, you need to disable it in Organization Settings -> Settings first. Then you can disable it at the project level.

Gitlab Runner cannot retrieve dependency repo in a Powershell executor

CI Runner Context
Gitlab version : 13.12.2 (private server)
Gitlab Runner version : 14.9.1
Executor : shell executor (PowerShell)
Operating system : Windows 10
Project in Python (may be unrelated)
(using Poetry for dependency management)
The Problem
I am setting up an automated integration system for a project that has several internal dependencies hosted on the same server as the project being integrated. If I run the CI with a poetry update in the yml file, the job console reports an exit with error code 128 on the call to git clone for my internal dependency.
To isolate the problem, I tried simply calling git clone on that same repo. The response is that the runner cannot authenticate itself to the GitLab server.
What I Have Tried
Reading through the Gitlab docs, I found that the runners need authorization to pull any private dependencies. For that, Gitlab has created deploy keys.
So I followed the instructions to create the deploy key for the dependency and added it to the sub-project's deploy key list. I then ran into the exact same permissions problem.
What am I missing?
(For anyone looking at this case on Windows PowerShell: the user that the runner runs as is NT AUTHORITY\SYSTEM, a system-only user that I have not found a way to log in as interactively. I had to make the CI runner perform the ssh key creation steps itself.)
Example .gitlab-ci.yml file:
#Commands in PowerShell
but_first:
  #The initial stage, always happens first
  stage: .pre
  script:
    # Start ssh agent for deploy keys
    - Start-Service ssh-agent
    # Check if ssh-agent is running
    - Get-Service ssh-agent
    - git clone ssh://git@PRIVATE_REPO/software/dependency-project.git
I solved my problem of pulling internal dependencies by completely bypassing the ssh pull of the source code and by switching from Poetry to Hatch for dependency management (I'll explain why further down).
Hosting the compiled dependencies
For this, I compiled my dependency project's source code into a distribution-ready package (in this context, a Python wheel).
I then used GitLab's Packages and Registries offering to host my package. Instead of having packages in each source code project, I pushed the packages of all my dependencies to a single project created for that purpose.
My .gitlab-ci.yaml file looks like this when publishing to that project:
deploy:
  # Could be used to build the code into an installer
  stage: Deploy
  script:
    - echo "deploying"
    - hatch version micro
    # only the wheel is built (without a target, both wheel and sdist are built)
    - hatch build -t wheel
    - echo "Build done ..."
    - hatch publish --repo http://<private gitlab repo>/api/v4/projects/<project number>/packages/pypi --user gitlab-ci-token --auth $CI_JOB_TOKEN
    - echo "Publishing done!"
Pulling those hosted dependencies (& why I ditched poetry)
My first problem was getting pip to find the extra PyPI repository with all my packages. But pip already has a solution for that!
In its pip.ini file (to find where it is, you can run pip config -v list), two entries need to be added:
[global]
extra-index-url = http://__token__:<your api token>@<private gitlab repo>/api/v4/projects/<project number>/packages/pypi/simple

[install]
trusted-host = <private gitlab repo>
This makes it functionally the same as adding the --extra-index-url and --trusted-host tags while calling pip install.
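For comparison, the equivalent one-off pip invocation would look roughly like this (same placeholders as in the pip.ini above):
pip install dependency-project \
    --extra-index-url http://__token__:<your api token>@<private gitlab repo>/api/v4/projects/<project number>/packages/pypi/simple \
    --trusted-host <private gitlab repo>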
Since I was using a dependency manager, I was not directly using pip but the manager's wrapper for pip. And here comes the main reason why I decided to change dependency managers: Poetry does not read or recognize pip.ini, so any changes made in that file will be ignored by it.
With the pip.ini file configured, the private package repo will also be searched for dependencies when installing projects. So the line:
- git clone ssh://git@PRIVATE_REPO/software/dependency-project.git
changes to a simple line:
- pip install dependency-project
Or a line in pyproject.toml:
dependencies = [
    "dependency-project",
    "second_project",
]

GitHub PR doesn't trigger GitLab pipeline

I'm trying to use GitHub to trigger a GitLab pipeline on PR.
Practically, when a developer creates a PR in GitHub, his/her code gets tested against a GitLab pipeline.
I'm trying to follow this user guide: https://docs.gitlab.com/ee/ci/ci_cd_for_external_repos/github_integration.html
and we have a Silver account, but it won't work: when creating the PR, the GitLab pipeline is not triggered.
Anyone with this kind of experience who can help?
Thanks
Joe
I've found the cause of the issue.
In order for GitHub to trigger GitLab as CI/CD, mostly on PRs, you need to have a Silver/Premium account AND, very important, be the root owner.
In any other case, you won't be able to see GitHub in the integration list on GitLab. The people from GitLab had the brilliant idea to hide it instead of showing it disabled (which would have been a tip to understand that you needed an upgraded license).
This is not explained in the video above.
Firstly, you need to give us the content of your .gitlab-ci.yaml file. In your question you asked about GitHub, but you're following GitLab documentation, which is completely different. Both use git commands to commit and push repos, but GitHub and GitLab are different products.
For GitHub pipelines, you need to create a repository, then go to Actions. GitHub will propose that you configure a .github/workflows directory containing a file.yaml; in this .yaml file you can code your pipelines. According to your project, GitHub will propose several Linux machines with the adequate configuration to run your files (if it's a Java project you'll be offered Maven machines; Python, Python machines; React/Angular, machines with npm installed; Docker, Kubernetes for deployments...), and you're limited to 4 private projects as far as I know (check this last information).
For GitLab you have two options. You can use preconfigured machines like GitHub's, which you call by adding, for example, a tag: npm in your .gitlab-ci.yaml file to get a machine with npm installed, but you need to pay an amount of money. Or you can configure your own runners by following the GitLab documentation (which is the best option), but you'll need good machines and servers to run npm, mvn, python3, ... commands.
Finally, to answer your question, here is an example .gitlab-ci.yaml file with two simple stages, build & test. The only statement specifies that these pipelines will run if there is a merge request (I use the preconfigured machines of GitLab as a sample here). More details are in my Python GitHub project https://github.com/mehdimaaref7/Scrapping-Sentiment-Analysis and for GitLab https://docs.gitlab.com/runner/
stages:
  - build
  - test

build:
  tags:
    - shell
    - linux
  stage: build
  script:
    - echo "Building"
    - mkdir build
    - touch build/info.txt
  artifacts:
    paths:
      - build/
  only:
    - merge_requests

test:
  tags:
    - shell
    - linux
  stage: test
  script:
    - echo "Testing"
    - test -f "build/info.txt"
  only:
    - merge_requests

Gitlab Runner - New folder for each build

I'm using GitLab CI for my project. When I push to the develop branch, it runs tests and updates the code on my test environment (a remote server).
But the GitLab runner is always using the same build folder: builds/a3ac64e9/0/myproject/myproject
I would like it to create a new folder every time:
builds/a3ac64e9/1/myproject/myproject
builds/a3ac64e9/2/myproject/myproject
builds/a3ac64e9/3/myproject/myproject
and so on.
Using this, I could update my website just by changing a symbolic link to point to the latest runner directory.
Is there a way to configure the GitLab runner this way?
While it doesn't make sense to use your build directory as your deployment directory, you can set up a custom build directory:
Open config.toml in a text editor (more info on where to find it here).
Set enabled = true under [runners.custom_build_dir] (more info here):
[runners.custom_build_dir]
  enabled = true
In your .gitlab-ci.yml file, under variables, set GIT_CLONE_PATH. It must start with $CI_BUILDS_DIR/, e.g. $CI_BUILDS_DIR/$CI_JOB_ID/$CI_PROJECT_NAME, which will probably give you what you're looking for, although if you have multiple stages, they will have different job IDs. Alternatively, you could try $CI_BUILDS_DIR/$CI_COMMIT_SHA, which would give you a unique folder for each commit. (More info here)
variables:
  GIT_CLONE_PATH: '$CI_BUILDS_DIR/$CI_JOB_ID/$CI_PROJECT_NAME'
Unfortunately there is currently an issue with using $CI_BUILDS_DIR in GIT_CLONE_PATH if you're using Windows and PowerShell, so you may have to use something like this as a workaround, if all your runners have the same build directory: GIT_CLONE_PATH: 'C:\GitLab-Runner/builds/$CI_JOB_ID/$CI_PROJECT_NAME'
You may want to take a look at the variables available to you (predefined variables) to find the most suitable variables for your path.
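As one concrete variant, a per-commit clone path combined with the symlink flip from the question could look like this (the deploy step and web-root path are illustrative, not tested):
variables:
  GIT_CLONE_PATH: '$CI_BUILDS_DIR/$CI_COMMIT_SHA/$CI_PROJECT_NAME'

deploy:
  stage: deploy
  script:
    # atomically repoint the web root at the freshly cloned checkout
    - ln -sfn "$CI_PROJECT_DIR" /var/www/current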
You might want to read the following answer: Changing the build intermediate paths for gitlab-runner.
I'll repost my answer here:
Conceptually, this approach is not the way to go; the build directory is not a deployment directory but a temporary directory to build in or to deploy from, although on a shell executor it could be fixed.
So what you need is to deploy from that directory, with a script as in the .gitlab-ci.yml below, to the correct deployment directory:
stages:
  - deploy

variables:
  TARGET_DIR: /home/ab12/public_html/$CI_PROJECT_NAME

deploy:
  stage: deploy
  script:
    - mkdir -pv $TARGET_DIR
    - rsync -r --delete ./ $TARGET_DIR
  tags:
    - myrunner
This will copy your project files into /home/ab12/public_html/$CI_PROJECT_NAME.
Naming your projects project1 .. projectn, all of them can use this same .gitlab-ci.yml file.
You cannot achieve this with the GitLab CI runner configuration alone, but you can create two runners and assign them exclusively to each branch by using a combination of the only and tags keywords.
Assuming your two branches are named master and develop, and the two runners have been tagged with master_runner and develop_runner, your .gitlab-ci.yml can look like this:
master_job:
  <<: *your_job
  only:
    - master
  tags:
    - master_runner

develop_job:
  <<: *your_job
  only:
    - develop
  tags:
    - develop_runner
(<<: *your_job is your actual job, which you can factorize)
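For completeness, a sketch of how the shared job could be defined with a YAML anchor (the job body here is purely illustrative):
# hidden job template; '&your_job' defines the anchor that '<<: *your_job' merges in
.your_job: &your_job
  stage: deploy
  script:
    - echo "deploying $CI_PROJECT_NAME"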