GitLab K8s Runner fails for get_sources - kubernetes

We are trying to move our gitlab-runners from standard CentOS VMs to Kubernetes, but after setup and registration the pipeline fails with an unknown error:
Running with gitlab-runner 15.7.0 (259d2fd4)
on Kubernetes-local JXRw3mH1
Preparing the "kubernetes" executor
00:00
Using Kubernetes namespace: gitlab-runner
Using Kubernetes executor with image gitlab-test.domain:5005/image:latest ...
Using attach strategy to execute scripts...
Preparing environment
00:04
Waiting for pod gitlab-runner/runner-jxrw3mh1-project-290-concurrent-0dpd88 to be running, status is Pending
Running on runner-jxrw3mh1-project-290-concurrent-0dpd88 via gitlab-runner-d7df6c548-hsgxg...
Getting source from Git repository
00:00
error: could not lock config file /root/.gitconfig: Read-only file system
Cleaning up project directory and file based variables
00:01
ERROR: Job failed: command terminated with exit code 1
Inside the log of the job pod we found:
helper Running on runner-jxrw3mh1-project-290-concurrent-0dpd88 via gitlab-runner-d7df6c548-hsgxg...
helper
helper {"command_exit_code": 0, "script": "/scripts-290-207166/prepare_script"}
helper error: could not lock config file /root/.gitconfig: Read-only file system
helper
helper {"command_exit_code": 1, "script": "/scripts-290-207166/get_sources"}
helper
helper {"command_exit_code": 0, "script": "/scripts-290-207166/cleanup_file_variables"}
Inside the log of the gitlab-runner pod we found:
Starting in container "helper" the command ["gitlab-runner-build" "<<<" "/scripts-290-207167/get_sources" "2>&1 | tee -a /logs-290-207167/output.log"] with script: #!/usr/bin/env bash
if set -o | grep pipefail > /dev/null; then set -o pipefail; fi; set -o errexit
set +o noclobber
: | eval $'export FF_CMD_DISABLE_DELAYED_ERROR_LEVEL_EXPANSION=$\'false\'\nexport FF_NETWORK_PER_BUILD=$\'false\'\nexport FF_USE_LEGACY_KUBERNETES_EXECUTION_STRATEGY=$\'false\'\nexport FF_USE_DIRECT_DOWNLOAD
exit 0
job=207167 project=290 runner=JXRw3mH1
Remote process exited with the status: CommandExitCode: 1, Script: /scripts-290-207167/get_sources job=207167 project=290 runner=JXRw3mH1
Container "helper" exited with error: command terminated with exit code 1 job=207167 project=290 runner=JXRw3mH1
Notes:
the error "error: could not lock config file /root/.gitconfig: Read-only file system" occurs because the current user inside the container is not root
the file /logs-290-207167/output.log contains the log of the job pod
Inside the job pod shell we also tested some git commands and successfully performed fetch and clone using our personal credentials (the same user that runs the pipeline from the GitLab GUI).
We think the problem may be related to the gitlab-ci-token, but we have exhausted our investigation... :frowning:
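One possible workaround, given that git resolves HOME to the read-only /root while the pod runs as a non-root user: point HOME at a writable directory, and optionally pin the pod's user explicitly, in the runner's config.toml. A minimal sketch; the UID and path below are illustrative assumptions, not values from this setup:

[[runners]]
  environment = ["HOME=/tmp"]   # give git a writable location for .gitconfig
  [runners.kubernetes]
    [runners.kubernetes.pod_security_context]
      run_as_user = 1000        # run the build pod as a known non-root UID

With HOME pointing somewhere writable, get_sources can create its lock file and the gitlab-ci-token credentials should be picked up as usual.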

Related

My pipeline for Auto DevOps in GitLab is not working properly

This is my test:
I run the job with the Auto DevOps runner, without a .gitlab-ci.yml file, but I receive an error like this:
$ if [[ -z "$CI_COMMIT_TAG" ]]; then # collapsed multi-line command
$ /build/build.sh
Building Heroku-based application using gliderlabs/herokuish docker image...
Attempting to pull a previously built image for use with --cache-from...
invalid reference format
invalid reference format
No previously cached image found. The docker build will proceed without using a cached image
invalid argument "/master:fa1708343b13496937aa567a1aecdc184f43d197" for "-t, --tag" flag: invalid reference format
See 'docker build --help'.
Cleaning up file based variables
30:00
ERROR: Job failed: command terminated with exit code 1
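A hedged reading of the malformed tag "/master:fa1708343b13496937aa567a1aecdc184f43d197": the registry part of the image name is empty, which usually means the project's Container Registry is disabled, so CI_REGISTRY_IMAGE is unset when Auto DevOps assembles the image name. Besides enabling the registry, one sketch of a workaround is to set the repository explicitly in a .gitlab-ci.yml that includes the Auto DevOps template (the registry host below is a placeholder):

variables:
  CI_APPLICATION_REPOSITORY: registry.example.com/group/project   # placeholder registry path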

Snakemake: Singularity parameters --home and --bind set by default but disallowed on HPC

I have already posted this as an issue on GitHub at https://github.com/snakemake/snakemake/issues/279 but haven't received any response yet. I hope to find help here.
Version
I am using the following versions on our HPC cluster:
Snakemake v5.4.4
singularity version 3.5.3
Minimal example
singularity: "docker://bash"

rule test:
    shell: "echo test"
Describe the bug
snakemake --use-singularity --debug
returns this message:
Building DAG of jobs...
Pulling singularity image docker://bash.
Using shell: /bin/bash
Provided cores: 1
Rules claiming more threads will be scaled down.
Job counts:
count jobs
1 test
1
[Fri Mar 13 15:59:30 2020]
rule test:
jobid: 0
Activating singularity image /data/nanopore/test/.snakemake/singularity/36b22e49e8a03fd08160e9345dd1034e.simg
FATAL: container creation failed: not mounting user requested home: user bind control is disallowed
[Fri Mar 13 15:59:30 2020]
Error in rule test:
jobid: 0
RuleException:
CalledProcessError in line 4 of /data/nanopore/test/Snakefile:
Command ' singularity exec --home /data/nanopore/test --bind /opt/snakemake/v5.4.4/lib/python3.5/site-packages/snakemake-5.4.4-py3.5.egg:/mnt/snakemake /data/nanopore/test/.snakemake/singularity/36b22e49e8a03fd08160e9345dd1034e.simg bash -c 'set -euo pipefail; echo test'' returned non-zero exit status 255
File "/data/nanopore/test/Snakefile", line 4, in __rule_test
File "/usr/lib/python3.5/concurrent/futures/thread.py", line 55, in run
Shutting down, this might take some time.
Exiting because a job execution failed. Look above for error message
Complete log: /data/nanopore/test/.snakemake/log/2020-03-13T155917.601627.snakemake.log
Apparently, Snakemake runs Singularity with default values for --home and --bind; however, these options have been disallowed by the administrator.
Executing
singularity exec --home /data/nanopore/test --bind /opt/snakemake/v5.4.4/lib/python3.5/site-packages/snakemake-5.4.4-py3.5.egg:/mnt/snakemake /data/nanopore/test/.snakemake/singularity/36b22e49e8a03fd08160e9345dd1034e.simg bash -c 'set -euo pipefail;'
returns:
FATAL: container creation failed: not mounting user requested home: user bind control is disallowed
Additional context
Is there a way to disable Snakemake's default Singularity parameters? Inside the Singularity container the /data directory is readable and writable anyway.
Thanks a lot
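For context, the FATAL message corresponds to restrictions an administrator can set on the Singularity side; a sketch of the relevant singularity.conf entries (values inferred from the error text, not from this cluster's actual config):

# /etc/singularity/singularity.conf (admin-controlled)
user bind control = no    # rejects user-supplied --bind / --home arguments
mount home = no           # do not mount the user's home directory automatically

So unless the administrator re-enables user bind control, Snakemake would need to stop passing --home and --bind for the container to start.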

VSCode throws error when setting PATH environment variable in devcontainer.json

I have the following devcontainer.json file in a project.
When I try to open VSCode in a container, it crashes. The container builds successfully, but the following logs are emitted during startup. When I remove the environment variable configuration, the container starts up and stays running just fine.
I followed the example for configuring environment variables inside the dev container, according to the Visual Studio Code documentation for Advanced Container Configuration.
Question: How do I properly configure the PATH environment variable in my devcontainer.json file?
devcontainer.json
{
  "name": "Ubuntu 18.04 & Git",
  "dockerFile": "Dockerfile",
  "settings": {
    "terminal.integrated.shell.linux": "/bin/bash"
  },
  "containerEnv": {
    "PATH": "${containerEnv:PATH}:/root/.customfolder/bin/"
  }
}
Logs
[6499 ms] Successfully built 096d41dceada
[6503 ms] Successfully tagged vsc-asdf-73cee28d5205fdd4a6063fc596248885:latest
[6506 ms] Start: Run: git rev-parse --show-toplevel
[6533 ms] Start: Starting container
[6533 ms] Start: Run: docker run -a STDOUT -a STDERR --mount type=bind,source=/Users/username/git/asdf,target=/workspaces/asdf,consistency=cached --mount source=/Users/username/.aws/credentials,target=/root/.aws/credentials,type=bind -l vsch.quality=stable -l vsch.remote.devPort=0 -l vsch.local.folder=/Users/username/git/asdf -e PATH=${containerEnv:PATH}:/root/.customfolder/bin/ --entrypoint /bin/sh vsc-pulumi-73cee28d5205fdd4a6063fc596248885 -c echo Container started ; while sleep 1; do :; done
[6852 ms] /bin/sh: 1: sleep: not found
[6852 ms] Container started
[6873 ms] Start: Inspecting container
[6879 ms] Start: Run in container: uname -m
[7031 ms] Start: Run in container: cat /etc/passwd
[7035 ms] Shell server terminated (code: 1, signal: null)
Error response from daemon: Container 8e0f6eeb22c358b0dfd8f1c1410c10b382ea66aa432e7e400a4564671619046f is not running
An error occurred setting up the container
Environment
MacOS Catalina
Docker Desktop 2.2.0.0
Microsoft Visual Studio Code 1.42.0
VSCode Remote-Containers extension 0.101.0
You should be able to change the property from containerEnv to remoteEnv to resolve the issue.
Only the remoteEnv property supports referencing existing container env vars. The containerEnv property is like -e for the Docker CLI and is therefore evaluated before the container is created. This is mainly useful when your Dockerfile itself depends on certain env vars being set (though you can modify the PATH inside your Dockerfile if you so desire).
For everything else, remoteEnv is the way to go, since VS Code and all sub-processes like terminals use it. Since it is evaluated after the container is created, you can update the PATH as the example illustrates.
"remoteEnv": {
"PATH": "${containerEnv:PATH}:/some/other/path",
"MY_REMOTE_VARIABLE": "some-other-value-here",
"MY_REMOTE_VARIABLE2": "${localEnv:SOME_LOCAL_VAR}"
}
"containerEnv": {
"PATH": "${localEnv:PATH}:/workspaces/v8/depot_tools"
}
I think that is what you need. (Note: localEnv resolves against the local host environment, the machine running VS Code, not the container.)
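Applying the first suggestion to the original file, a sketch of the corrected devcontainer.json (all other fields kept as posted):

{
  "name": "Ubuntu 18.04 & Git",
  "dockerFile": "Dockerfile",
  "settings": {
    "terminal.integrated.shell.linux": "/bin/bash"
  },
  "remoteEnv": {
    "PATH": "${containerEnv:PATH}:/root/.customfolder/bin/"
  }
}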

PowerShell command is not recognized in .gitlab-ci.yml

I am trying to execute a PowerShell script from my .gitlab-ci.yml file, but the powershell command is not recognized. The PowerShell script I am trying to trigger just has a single line that prints a message.
Running with gitlab-runner 11.5.0 (3afdaba6)
on docker-auto-scale fa6cab46
Using Docker executor with image ruby:2.5 ...
Pulling docker image ruby:2.5 ...
Using docker image sha256:7834f561ba80e65515163209a3f952fcd1d11f9ce4420ba63d952e5b52b77e1 for ruby:2.5 ...
Running on runner-fa6cab46-project-9772339-concurrent-0 via runner-srm-1545289526-a4a36a59...
Cloning repository...
Cloning into 'builds/leo.danny/jenkinsintegration'...
Checking out 4b9beba2 as master...
Skipping Git submodules setup
$ powershell "./JenkinsPowershell.ps1"
/bin/bash: line 70: powershell: command not found
ERROR: Job failed: exit code 1
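The likely cause, given the log: the job runs in the ruby:2.5 Linux image, which does not ship PowerShell, so bash cannot find the powershell command. One sketch of a fix is to run the job in Microsoft's PowerShell Core image and invoke the script with pwsh (the job name below is illustrative):

run_powershell:
  image: mcr.microsoft.com/powershell:latest   # Linux image that ships pwsh
  script:
    - pwsh -File ./JenkinsPowershell.ps1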

correct way to create symlink in deploy.rb

I get an error when I deploy an application:
[neon.locum.ru] executing command
*** [err :: neon.locum.ru] find: `/home/hosting_grandinvest/projects/demo/releases/20130116145843/public/images /home/hosting_grandinvest/projects/demo/releases/20130116145843/public/stylesheets /home/hosting_grandinvest/projects/demo/releases/20130116145843/public/javascripts': No such file or directory
command finished in 91ms
triggering after callbacks for `deploy:update_code'
* 2013-01-16 16:58:45 executing `make_images_link'
* executing "ln -s /home/hosting_grandinvest/projects/demo/shared/public/images /home/hosting_grandinvest/projects/demo/releases/20130116145843/public/images"
As you can see, this is because it first tries to find the public/images directory and only then creates the symlink for it.
The beginning of my deploy.rb:
require 'bundler/capistrano'

after "deploy:update_code", :make_images_link
task :make_images_link, :roles => :app do
  images_dir = "#{shared_path}/public/images"
  run "ln -s #{images_dir} #{release_path}/public/images"
end
The deploy finishes:
Gem.source_index called from /home/hosting_grandinvest/projects/demo/shared/gems/ruby/1.8/gems/rails-2.3.15/lib/rails/gem_dependency.rb:21.
master process ready
worker=0 ready
reaped #<Process::Status: pid=18656,exited(0)> worker=0
master complete
The public/images dir contains some files used by the CSS (background: url(/images/front/logo.gif) no-repeat 0 0;) and they are not displayed! But when I try to access these files directly (http://hosting.net/images/front/logo.gif) I can see them!
Any suggestions on how to solve this error and make Capistrano work?
UPDATE 1
I've included public/images/front in the repo, and after the code deployment I swap the empty folder with a link:
after "deploy:update_code", :make_images_link
task :make_images_link, roles => :app do
images_dir = "#{shared_path}/public/images"
realease_images = "#{release_path}/public/images"
run "rm -rf #{realease_images}"
run "ln -s #{images_dir} #{realease_images}"
end
When I deploy, the error still exists, but the images appear!
In the end I included the public/images dir in my repository,
and as step 2 I run the callback specified in UPDATE 1.
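For the remaining find error: it appears to come from Capistrano 2's asset-timestamp normalization, which touches public/images, public/stylesheets, and public/javascripts right after deploy:update_code, before the callback has created the symlink. A sketch of the usual way to turn that step off in deploy.rb, assuming Capistrano 2:

# Skip touching public/{images,stylesheets,javascripts} after update_code
set :normalize_asset_timestamps, false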