Creating linked_dirs in Capistrano 3 fails

I am attempting to set up Capistrano with a SilverStripe build and am running into some trouble setting up the shared directories.
I set the linked_dirs in deploy.rb with the following:
set :linked_dirs, %w{assets vendor}
Since adding this line I get the following error:
[617afa7f] Command: /usr/bin/env mkdir -p /var/www/website/releases/20160215083713 /var/www/website/releases/20160215083713
INFO [617afa7f] Finished in 0.250 seconds with exit status 0 (successful).
DEBUG [88c3de20] Running /usr/bin/env [ -L /var/www/website/releases/20160215083713/assets ] as capistrano@128.199.231.152
DEBUG [88c3de20] Command: [ -L /var/www/website/releases/20160215083713/assets ]
DEBUG [88c3de20] Finished in 0.258 seconds with exit status 1 (failed).
DEBUG [3d61c1c4] Running /usr/bin/env [ -d /var/www/website/releases/20160215083713/assets ] as capistrano@128.199.231.152
DEBUG [3d61c1c4] Command: [ -d /var/www/website/releases/20160215083713/assets ]
DEBUG [3d61c1c4] Finished in 0.254 seconds with exit status 1 (failed).
INFO [3016a8cd] Running /usr/bin/env ln -s /var/www/website/shared/assets /var/www/website/releases/20160215083713/assets as capistrano@128.199.231.152
I am a mega noob when it comes to Capistrano and a semi noob when it comes to server configuration and permissions, so any pointers would be appreciated.

It probably hasn't actually failed. One thing to know about Capistrano is that (success) and (failed) simply report the command's exit status: (success) if it is 0 and (failed) if it is non-zero.
If we look at the command in question, it says that /usr/bin/env [ -L /var/www/website/releases/20160215083713/assets ] failed. This command says "return 0 if /var/www/website/releases/20160215083713/assets exists and is a symlink (-L)". It fails, but that just means it returns non-zero, so the link still needs to be created. Note that the next command (-d), which asserts that the path is a directory, also fails. The last line in your output is the one actually creating the link in question.
You can see the test in the Capistrano codebase here: https://github.com/capistrano/capistrano/blob/master/lib/capistrano/tasks/deploy.rake#L128
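In shell terms, the check-then-link sequence performed by that task is roughly the following (an illustrative sketch using the paths from your log, not the actual rake code):

target=/var/www/website/releases/20160215083713/assets
source=/var/www/website/shared/assets

if [ -L "$target" ]; then
  : # already a symlink, nothing to do
else
  # a plain directory in the way is removed, then the link to shared/ is created
  [ -d "$target" ] && rm -rf "$target"
  ln -s "$source" "$target"
fi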
You can clean up and simplify the output with https://github.com/mattbrictson/airbrussh. This is developed by one of the primary Capistrano devs.
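If you do want the tidier output, the setup is small; a minimal sketch, assuming a Capistrano version older than 3.5 (where airbrussh is not yet bundled as the default formatter):

# Gemfile
gem "airbrussh", require: false

# Capfile
require "airbrussh/capistrano"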
As a side note, the green text in your terminal is stdout and the red text is stderr; that colouring can also be confusing at first.

Related

GitLab K8s Runner fails for get_sources

We are trying to move our gitlab-runners from standard CentOS VMs to Kubernetes.
But after setup and registration, the pipeline fails with an unknown error:
Running with gitlab-runner 15.7.0 (259d2fd4)
on Kubernetes-local JXRw3mH1
Preparing the "kubernetes" executor
00:00
Using Kubernetes namespace: gitlab-runner
Using Kubernetes executor with image gitlab-test.domain:5005/image:latest ...
Using attach strategy to execute scripts...
Preparing environment
00:04
Waiting for pod gitlab-runner/runner-jxrw3mh1-project-290-concurrent-0dpd88 to be running, status is Pending
Running on runner-jxrw3mh1-project-290-concurrent-0dpd88 via gitlab-runner-d7df6c548-hsgxg...
Getting source from Git repository
00:00
error: could not lock config file /root/.gitconfig: Read-only file system
Cleaning up project directory and file based variables
00:01
ERROR: Job failed: command terminated with exit code 1
Inside the log of the job pod we found:
helper Running on runner-jxrw3mh1-project-290-concurrent-0dpd88 via gitlab-runner-d7df6c548-hsgxg...
helper
helper {"command_exit_code": 0, "script": "/scripts-290-207166/prepare_script"}
helper error: could not lock config file /root/.gitconfig: Read-only file system
helper
helper {"command_exit_code": 1, "script": "/scripts-290-207166/get_sources"}
helper
helper {"command_exit_code": 0, "script": "/scripts-290-207166/cleanup_file_variables"}
Inside the log of the gitlab-runner pod we found:
Starting in container "helper" the command ["gitlab-runner-build" "<<<" "/scripts-290-207167/get_sources" "2>&1 | tee -a /logs-290-207167/output.log"] with script: #!/usr/bin/env bash
if set -o | grep pipefail > /dev/null; then set -o pipefail; fi; set -o errexit
set +o noclobber
: | eval $'export FF_CMD_DISABLE_DELAYED_ERROR_LEVEL_EXPANSION=$\'false\'\nexport FF_NETWORK_PER_BUILD=$\'false\'\nexport FF_USE_LEGACY_KUBERNETES_EXECUTION_STRATEGY=$\'false\'\nexport FF_USE_DIRECT_DOWNLOAD
exit 0
job=207167 project=290 runner=JXRw3mH1
Remote process exited with the status: CommandExitCode: 1, Script: /scripts-290-207167/get_sources job=207167 project=290 runner=JXRw3mH1
Container "helper" exited with error: command terminated with exit code 1 job=207167 project=290 runner=JXRw3mH1
notes:
the error "error: could not lock config file /root/.gitconfig: Read-only file system" is due to the current user inside container is different by root
the file /logs-290-207167/output.log contains the log of the job pod
Inside the job pod shell we also tested some git commands and successfully performed fetch and clone using our personal credentials (the same user that runs the pipeline from the GitLab GUI).
We think the problem may be related to the gitlab-ci-token, but that is as far as our investigation has gotten... :frowning:
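If it turns out that the helper container's /root is simply not writable (for example because the image's root filesystem is mounted read-only), one workaround that is sometimes suggested is to mount a writable emptyDir over /root in the runner's config.toml. This is only a sketch under that assumption, not something verified against this cluster:

[[runners]]
  [runners.kubernetes]
    # give git a writable home so it can create /root/.gitconfig and its lock file
    [[runners.kubernetes.volumes.empty_dir]]
      name       = "root-home"
      mount_path = "/root"
      medium     = "Memory"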

Snakemake: Singularity parameters --home and --bind set by default but disallowed on HPC

I have already posted this as an issue on Github at https://github.com/snakemake/snakemake/issues/279 but haven't got any response yet. I hope to find help here.
Version
I am using the following versions on our HPC cluster:
Snakemake 5.4.4
singularity version 3.5.3
Minimal example
singularity: "docker://bash"
rule test:
shell: "echo test"
Describe the bug
snakemake --use-singularity --debug
returns this message:
Building DAG of jobs...
Pulling singularity image docker://bash.
Using shell: /bin/bash
Provided cores: 1
Rules claiming more threads will be scaled down.
Job counts:
count jobs
1 test
1
[Fri Mar 13 15:59:30 2020]
rule test:
jobid: 0
Activating singularity image /data/nanopore/test/.snakemake/singularity/36b22e49e8a03fd08160e9345dd1034e.simg
FATAL: container creation failed: not mounting user requested home: user bind control is disallowed
[Fri Mar 13 15:59:30 2020]
Error in rule test:
jobid: 0
RuleException:
CalledProcessError in line 4 of /data/nanopore/test/Snakefile:
Command ' singularity exec --home /data/nanopore/test --bind /opt/snakemake/v5.4.4/lib/python3.5/site-packages/snakemake-5.4.4-py3.5.egg:/mnt/snakemake /data/nanopore/test/.snakemake/singularity/36b22e49e8a03fd08160e9345dd1034e.simg bash -c 'set -euo pipefail; echo test'' returned non-zero exit status 255
File "/data/nanopore/test/Snakefile", line 4, in __rule_test
File "/usr/lib/python3.5/concurrent/futures/thread.py", line 55, in run
Shutting down, this might take some time.
Exiting because a job execution failed. Look above for error message
Complete log: /data/nanopore/test/.snakemake/log/2020-03-13T155917.601627.snakemake.log
Apparently, snakemake runs singularity with default values for --home and --bind. These were disallowed by the administrator, however.
Executing
singularity exec --home /data/nanopore/test --bind /opt/snakemake/v5.4.4/lib/python3.5/site-packages/snakemake-5.4.4-py3.5.egg:/mnt/snakemake /data/nanopore/test/.snakemake/singularity/36b22e49e8a03fd08160e9345dd1034e.simg bash -c 'set -euo pipefail;'
returns:
FATAL: container creation failed: not mounting user requested home: user bind control is disallowed
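That FATAL message is what Singularity prints when the administrator has switched off user bind control, so the cluster's singularity.conf presumably contains something like the following (an illustrative excerpt, assuming the default /etc/singularity/singularity.conf location):

# /etc/singularity/singularity.conf (excerpt)
# rejecting user-requested --bind/--home mounts produces the FATAL error above
allow user bind control = no
mount home = no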
Additional context
Is there a way to disable the Singularity default parameter setting in snakemake? Inside the singularity container the /data directory is still writeable and readable anyway.
Thanks a lot

Continue after a failing command in appveyor

In appveyor I use the statement:
- initexmf --admin --force --mklinks
but due to a problem it gives the message:
initexmf --admin --force --mklinks
Sorry, but "MiKTeX Configuration Utility" did not succeed for the following reason:
Script configuration file not found.
The log file hopefully contains the information to get MiKTeX going again:
C:\ProgramData\MiKTeX\2.9\miktex\log\initexmf_admin.log
The system cannot find the path specified.
Command exited with code 1
Due to the error code the process terminates and I can no longer print C:\ProgramData\MiKTeX\2.9\miktex\log\initexmf_admin.log with type, so it is a bit hard to debug ...
questions:
How to continue after an error?
How to stop after outputting the file (exit 1?)
To run a script on failure, use the on_failure section; for example, to push initexmf_admin.log to the build artifacts:
on_failure:
- appveyor PushArtifact C:\ProgramData\MiKTeX\2.9\miktex\log\initexmf_admin.log
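In context, that section sits at the top level of appveyor.yml next to the build steps; a sketch using the command from the question (the exit 0 trick for continuing past the failure is an untested assumption, not an AppVeyor-specific feature):

install:
  # the failing command from the question; to keep the build going despite a
  # non-zero exit code, one untested option is to run it as a cmd line that
  # always exits 0, e.g.  - cmd: initexmf --admin --force --mklinks & exit 0
  - initexmf --admin --force --mklinks

on_failure:
  # runs only when the build has failed, so the log still gets captured
  - appveyor PushArtifact C:\ProgramData\MiKTeX\2.9\miktex\log\initexmf_admin.log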

INSTRUMENTATION_FAILED when running android automation test from sh

I'm doing automation testing for an Android app. My test code is ready and runs fine from Android Studio and from a manual command on the Android device. But when I move the command into a sh script, it fails with INSTRUMENTATION_FAILED. Can anyone help me fix it? I just don't understand why it works when run directly from the terminal but fails when run from sh.
Manual input command, which is working:
am instrument -w -r -e debug false -e class com.amap.auto.androidautomation.testcases.basemap.SmokeTest com.amap.auto.androidautomation.test/android.support.test.runner.AndroidJUnitRunner
Result:
INSTRUMENTATION_STATUS: numtests=9
INSTRUMENTATION_STATUS: stream=
com.amap.auto.androidautomation.testcases.basemap.SmokeTest:
INSTRUMENTATION_STATUS: id=AndroidJUnitRunner
INSTRUMENTATION_STATUS: test=smoke01
INSTRUMENTATION_STATUS: class=com.amap.auto.androidautomation.testcases.
Run from sh (the command is the same as the manual one, just put into a sh file):
sh r.sh
Result:
INSTRUMENTATION_STATUS: id=ActivityManagerService
INSTRUMENTATION_STATUS: Error=Unable to find instrumentation info for: Component
Info{com.amap.auto.androidautomation.test/android.support.test.runner.AndroidJUn
}tRunner
INSTRUMENTATION_STATUS_CODE: -1
android.util.AndroidException: INSTRUMENTATION_FAILED: com.amap.auto.androidauto
mation.test/android.support.test.runner.AndroidJUnitRunner
at com.android.commands.am.Am.runInstrument(Am.java:1093)
at com.android.commands.am.Am.onRun(Am.java:371)
at com.android.internal.os.BaseCommand.run(BaseCommand.java:47)
at com.android.commands.am.Am.main(Am.java:100)
at com.android.internal.os.RuntimeInit.nativeFinishInit(Native Method)
at com.android.internal.os.RuntimeInit.main(RuntimeInit.java:251)
: not foundnload/r.sh[2]: exit
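For reference, the r.sh wrapper presumably just contains the same command; a minimal sketch is shown below. The interleaved ": not found" fragment in the output is the kind of garbling that Windows (CRLF) line endings produce, so checking the script with dos2unix is worth a try, though that is only an assumption:

#!/system/bin/sh
# r.sh - the same instrumentation command as the manual run, split for readability
am instrument -w -r -e debug false \
  -e class com.amap.auto.androidautomation.testcases.basemap.SmokeTest \
  com.amap.auto.androidautomation.test/android.support.test.runner.AndroidJUnitRunner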

correct way to create symlink in deploy.rb

I get an error when I deploy an application:
[neon.locum.ru] executing command
*** [err :: neon.locum.ru] find: `/home/hosting_grandinvest/projects/demo/releases/20130116145843/public/images /home/hosting_grandinvest/projects/demo/releases/20130116145843/public/stylesheets /home/hosting_grandinvest/projects/demo/releases/20130116145843/public/javascripts': No such file or directory
command finished in 91ms
triggering after callbacks for `deploy:update_code'
* 2013-01-16 16:58:45 executing `make_images_link'
* executing "ln -s /home/hosting_grandinvest/projects/demo/shared/public/images /home/hosting_grandinvest/projects/demo/releases/20130116145843/public/images"
As you can see, it's because it first tries to find the /public/images dir and only then creates a symlink for that directory.
The beginning of my deploy.rb:
require 'bundler/capistrano'

after "deploy:update_code", :make_images_link

task :make_images_link, :roles => :app do
  images_dir = "#{shared_path}/public/images"
  run "ln -s #{images_dir} #{release_path}/public/images"
end
The deploy finishes:
Gem.source_index called from /home/hosting_grandinvest/projects/demo/shared/gems/ruby/1.8/gems/rails-2.3.15/lib/rails/gem_dependency.rb:21.
master process ready
worker=0 ready
reaped #<Process::Status: pid=18656,exited(0)> worker=0
master complete
In the public/images dir there are some files used by the CSS (background: url(/images/front/logo.gif) no-repeat 0 0;) and they are not displayed! But when I try to access these files directly
(http://hosting.net/images/front/logo.gif) I can see them!
Any suggestions on how to solve this error and make Capistrano work?
UPDATE 1
I've included public/images/front in the repo, and after the code deployment I swap the empty folder with a link:
after "deploy:update_code", :make_images_link
task :make_images_link, roles => :app do
images_dir = "#{shared_path}/public/images"
realease_images = "#{release_path}/public/images"
run "rm -rf #{realease_images}"
run "ln -s #{images_dir} #{realease_images}"
end
When I deploy, the error still exists, but the images appeared!
In the end I've included the public/images dir in my repository,
and as step 2 I run the callback that I've specified in update 1.
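For what it's worth, the two run calls from update 1 can be collapsed into a single remote command, and ln -nfs makes the task safe to rerun because it replaces an existing link in place (a sketch of the same approach, not a different fix):

after "deploy:update_code", :make_images_link

task :make_images_link, :roles => :app do
  images_dir      = "#{shared_path}/public/images"
  realease_images = "#{release_path}/public/images"
  # drop the placeholder directory that came from the repository, then link to shared/
  run "rm -rf #{realease_images} && ln -nfs #{images_dir} #{realease_images}"
end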