Rails: Queue Classic with Capistrano

I want to start Queue Classic (QC) within my Capistrano recipe:
namespace :queue_classic do
  desc "Start QC worker"
  task :start, roles: :web do
    run "cd #{release_path} && RAILS_ENV=production bundle exec rake qc:work"
  end

  after "deploy:restart", "queue_classic:restart"
end
Capistrano runs the command and the QC worker starts, but not as a daemon: the rake task never returns, so Capistrano blocks and never gets to its remaining tasks.
How can I start the QC worker in the background so that Capistrano can finish its run?
Thank you!

I use foreman for this:
after "deploy:update", "foreman:export" # Export foreman scripts
after "deploy:restart", "foreman:restart" # Restart application scripts
after "deploy:stop", "foreman:stop" # Restart application scripts
after "deploy:start", "foreman:start"
# Foreman tasks
namespace :foreman do
  desc 'Export the Procfile to Ubuntu upstart scripts'
  task :export, :roles => :queue do
    run "cd #{release_path}; #{sudo} $(rbenv which foreman) export upstart /etc/init -f ./Procfile -a #{application} -u #{user} -l #{release_path}/log/foreman"
  end

  desc "Start the application services"
  task :start, :roles => :queue do
    run "#{sudo} start #{application}"
  end

  desc "Stop the application services"
  task :stop, :roles => :queue do
    run "#{sudo} stop #{application}"
  end

  desc "Restart the application services"
  task :restart, :roles => :queue do
    run "#{sudo} stop #{application}"
    run "#{sudo} start #{application}"
    # run "sudo start #{application} || sudo restart #{application}"
  end
end
Then in your Procfile put something like:
worker: RAILS_ENV=production bundle exec rake qc:work
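If you later run more worker processes, foreman's export can scale a process type at export time via its concurrency flag; a sketch of the same export task (the -c worker=2 value is just an example):

run "cd #{release_path}; #{sudo} $(rbenv which foreman) export upstart /etc/init -f ./Procfile -a #{application} -u #{user} -c worker=2 -l #{release_path}/log/foreman"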

Use a trailing & to put the process in the background:
run "cd #{release_path} && RAILS_ENV=production bundle exec rake qc:work &"
Killing and restarting is then a bit tricky, but it can be done using ps, grep and kill.
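A bare & can still die when the SSH session closes, and open file descriptors may keep Capistrano waiting anyway, so it is safer to detach with nohup and redirect output. A sketch along those lines (the restart task mirrors the hook from the question; the log path and pkill pattern are illustrative, not from the original answer):

namespace :queue_classic do
  desc "Start QC worker detached from the SSH session"
  task :start, roles: :web do
    # nohup survives the session; redirecting stdout/stderr lets ssh return immediately
    run "cd #{release_path} && RAILS_ENV=production nohup bundle exec rake qc:work >> log/qc.log 2>&1 &"
  end

  desc "Stop QC worker by matching its command line"
  task :stop, roles: :web do
    # || true so the task does not fail when no worker is running
    run "pkill -f 'rake qc:work' || true"
  end

  desc "Restart QC worker"
  task :restart, roles: :web do
    stop
    start
  end

  after "deploy:restart", "queue_classic:restart"
end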

Related

GitLab K8s Runner fails for get_sources

We are trying to move our gitlab-runners from standard CentOS VMs to Kubernetes, but after setup and registration the pipeline fails with an unknown error:
Running with gitlab-runner 15.7.0 (259d2fd4)
on Kubernetes-local JXRw3mH1
Preparing the "kubernetes" executor
00:00
Using Kubernetes namespace: gitlab-runner
Using Kubernetes executor with image gitlab-test.domain:5005/image:latest ...
Using attach strategy to execute scripts...
Preparing environment
00:04
Waiting for pod gitlab-runner/runner-jxrw3mh1-project-290-concurrent-0dpd88 to be running, status is Pending
Running on runner-jxrw3mh1-project-290-concurrent-0dpd88 via gitlab-runner-d7df6c548-hsgxg...
Getting source from Git repository
00:00
error: could not lock config file /root/.gitconfig: Read-only file system
Cleaning up project directory and file based variables
00:01
ERROR: Job failed: command terminated with exit code 1
Inside the log of the job pod we found:
helper Running on runner-jxrw3mh1-project-290-concurrent-0dpd88 via gitlab-runner-d7df6c548-hsgxg...
helper
helper {"command_exit_code": 0, "script": "/scripts-290-207166/prepare_script"}
helper error: could not lock config file /root/.gitconfig: Read-only file system
helper
helper {"command_exit_code": 1, "script": "/scripts-290-207166/get_sources"}
helper
helper {"command_exit_code": 0, "script": "/scripts-290-207166/cleanup_file_variables"}
Inside the log of the gitlab-runner pod we found:
Starting in container "helper" the command ["gitlab-runner-build" "<<<" "/scripts-290-207167/get_sources" "2>&1 | tee -a /logs-290-207167/output.log"] with script: #!/usr/bin/env bash
if set -o | grep pipefail > /dev/null; then set -o pipefail; fi; set -o errexit
set +o noclobber
: | eval $'export FF_CMD_DISABLE_DELAYED_ERROR_LEVEL_EXPANSION=$\'false\'\nexport FF_NETWORK_PER_BUILD=$\'false\'\nexport FF_USE_LEGACY_KUBERNETES_EXECUTION_STRATEGY=$\'false\'\nexport FF_USE_DIRECT_DOWNLOAD
exit 0
job=207167 project=290 runner=JXRw3mH1
Remote process exited with the status: CommandExitCode: 1, Script: /scripts-290-207167/get_sources job=207167 project=290 runner=JXRw3mH1
Container "helper" exited with error: command terminated with exit code 1 job=207167 project=290 runner=JXRw3mH1
Notes:
- the error "error: could not lock config file /root/.gitconfig: Read-only file system" occurs because the current user inside the container is not root, while HOME still points at /root, so git tries to write /root/.gitconfig
- the file /logs-290-207167/output.log contains the log of the job pod
From a shell inside the job pod we also tested some git commands, and fetch and clone succeed when using our personal credentials (the same user that runs the pipeline from the GitLab GUI).
We think the problem may be related to the gitlab-ci-token, but our investigation has hit a dead end... :frowning:

Creating linked_dirs in Capistrano 3 fails

I am attempting to set up Capistrano with a SilverStripe build and am running into a few troubles setting up the shared directories.
I set the linked_dirs in deploy.rb with the following:
set :linked_dirs, %w{assets vendor}
Since adding this line I get the following error:
[617afa7f] Command: /usr/bin/env mkdir -p /var/www/website/releases/20160215083713 /var/www/website/releases/20160215083713
INFO [617afa7f] Finished in 0.250 seconds with exit status 0 (successful).
DEBUG [88c3de20] Running /usr/bin/env [ -L /var/www/website/releases/20160215083713/assets ] as capistrano@128.199.231.152
DEBUG [88c3de20] Command: [ -L /var/www/website/releases/20160215083713/assets ]
DEBUG [88c3de20] Finished in 0.258 seconds with exit status 1 (failed).
DEBUG [3d61c1c4] Running /usr/bin/env [ -d /var/www/website/releases/20160215083713/assets ] as capistrano@128.199.231.152
DEBUG [3d61c1c4] Command: [ -d /var/www/website/releases/20160215083713/assets ]
DEBUG [3d61c1c4] Finished in 0.254 seconds with exit status 1 (failed).
INFO [3016a8cd] Running /usr/bin/env ln -s /var/www/website/shared/assets /var/www/website/releases/20160215083713/assets as capistrano@128.199.231.152
I am a mega noob when it comes to Capistrano and a semi noob when it comes to server configuration and permissions, so any pointers would be appreciated.
It probably hasn't actually failed. One thing to know about Capistrano is that (successful) and (failed) simply report the command's exit status: (successful) for 0 and (failed) for non-zero.
If we look at the command in question, it says that /usr/bin/env [ -L /var/www/website/releases/20160215083713/assets ] failed. This command says "return 0 if /var/www/website/releases/20160215083713/assets exists and is a symlink (-L)". It fails, but that just means it returned non-zero, i.e. the link does not exist yet and needs to be created; for example, [ -L /tmp/does-not-exist ]; echo $? prints 1. The next command, the -d check asserting that the path is a directory, fails the same way. And the last line in your output is actually creating the link in question.
You can see the test in the Capistrano codebase here: https://github.com/capistrano/capistrano/blob/master/lib/capistrano/tasks/deploy.rake#L128
You can clean up and simplify the output with https://github.com/mattbrictson/airbrussh. This is developed by one of the primary Capistrano devs.
As a side note, the colours are a similar trap: green text in your terminal is stdout and red text is stderr, so red output does not necessarily mean an error.
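If you do try airbrussh, the setup is small; a sketch based on its README (check the project page for the current instructions):

# Gemfile
gem "airbrussh", :require => false

# Capfile
require "airbrussh/capistrano"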

Capistrano error on rake assets:precompile:all but still working (using RVM)

I've made a new Rails 3.2 application. When I deploy it with Capistrano, I get an error while compiling assets. But the assets ARE compiled, and the application is deployed as it should be.
On the server I've installed RVM system-wide and then created:
User: skolemapicture (added to group rvm)
Deploy folder: /home/skolemapicture/site
.rvmrc in /home/skolemapicture/site/.rvmrc
My deploy.rb config looks like this (lines that have nothing to do with the problem omitted):
set :application, "skolemapicture"
set :deploy_to , "/home/skolemapicture/site"
set :user , "skolemapicture"
set :use_sudo , false
ssh_options[:forward_agent] = true
require "bundler/capistrano"
require "rvm/capistrano"
set(:ruby_version) { '1.9.3' }
set(:rvm_ruby_string) { "#{ruby_version}##{application}" }
set(:rvm_path) { "/usr/local/rvm" }
set(:rvm_type) { :system }
namespace :deploy do
  task :precompile, :role => :app do
    run "cd #{release_path}/ && bundle exec rake assets:precompile"
  end
end

after "deploy:finalize_update", "deploy:precompile"
The error I get at "cap deploy" is:
* 2013-02-13 10:36:21 executing `deploy:precompile'
* executing "cd /home/skolemapicture/site/releases/20130213093619/ && bundle exec rake assets:precompile"
servers: ["web01.mapicture.com"]
[web01.mapicture.com] executing command
*** [err :: web01.mapicture.com] /usr/local/rvm/rubies/ruby-1.9.3-p374/bin/ruby /home/skolemapicture/site/shared/bundle/ruby/1.9.1/bin/rake assets:precompile:all RAILS_ENV=production RAILS_GROUPS=assets
*** [err :: web01.mapicture.com]
But the assets ARE compiled. So why this error?
/ Carsten
It is probably actually precompiling the assets successfully with your custom deploy:precompile task.
It is failing on the default Capistrano assets:precompile task.
You will notice that the failed command is
/usr/local/rvm/rubies/ruby-1.9.3-p374/bin/ruby /home/skolemapicture/site/shared/bundle/ruby/1.9.1/bin/rake assets:precompile:all RAILS_ENV=production RAILS_GROUPS=assets
not your custom precompile task:
cd #{release_path}/ && bundle exec rake assets:precompile
Try removing your deploy:precompile task and adding
load 'deploy/assets'
to your Capfile if it is not already there.
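For reference, a stock Capistrano 2 Capfile for a Rails 3.2 app looks roughly like this (a sketch; the deploy/assets line is the part that matters here):

# Capfile
load 'deploy'
load 'deploy/assets' # enables the built-in assets:precompile task and its hooks
load 'config/deploy'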
If that doesn't fix it, can you post your entire Capfile and deploy.rb?

Correct way to create a symlink in deploy.rb

I get an error when I deploy an application:
[neon.locum.ru] executing command
*** [err :: neon.locum.ru] find: `/home/hosting_grandinvest/projects/demo/releases/20130116145843/public/images /home/hosting_grandinvest/projects/demo/releases/20130116145843/public/stylesheets /home/hosting_grandinvest/projects/demo/releases/20130116145843/public/javascripts': No such file or directory
command finished in 91ms
triggering after callbacks for `deploy:update_code'
* 2013-01-16 16:58:45 executing `make_images_link'
* executing "ln -s /home/hosting_grandinvest/projects/demo/shared/public/images /home/hosting_grandinvest/projects/demo/releases/20130116145843/public/images"
As you can see, it's because the deploy first tries to find the public/images dir and only then creates the symlink for that directory.
The beginning of my deploy.rb:
require 'bundler/capistrano'

after "deploy:update_code", :make_images_link
task :make_images_link, :roles => :app do
  images_dir = "#{shared_path}/public/images"
  run "ln -s #{images_dir} #{release_path}/public/images"
end
The deploy finishes:
Gem.source_index called from /home/hosting_grandinvest/projects/demo/shared/gems/ruby/1.8/gems/rails-2.3.15/lib/rails/gem_dependency.rb:21.
master process ready
worker=0 ready
reaped #<Process::Status: pid=18656,exited(0)> worker=0
master complete
Some files in the public/images dir are referenced from CSS (background: url(/images/front/logo.gif) no-repeat 0 0;) and they are NOT displayed! But when I try to access these files directly (http://hosting.net/images/front/logo.gif) I can see them!
Any suggestions on how to solve this error and make Capistrano work?
UPDATE 1
I've included public/images/front in the repo, and after the code deployment I swap the empty folder with a link:

after "deploy:update_code", :make_images_link
task :make_images_link, :roles => :app do
  images_dir     = "#{shared_path}/public/images"
  release_images = "#{release_path}/public/images"
  run "rm -rf #{release_images}"
  run "ln -s #{images_dir} #{release_images}"
end
When I deploy, the error still exists, but the images appeared!
In the end I've included the public/images dir in my repository, and as step 2 I run the callback that I've specified in update 1.
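As an aside, Capistrano 2 has built-in support for this pattern through shared_children; a sketch (note that, depending on your Capistrano 2.x version, the shared side may be created as shared/images rather than shared/public/images, so check what deploy:setup actually creates):

# deploy.rb -- let Capistrano create and symlink shared directories itself
set :shared_children, %w(public/system log tmp/pids public/images)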

Capistrano cannot find SVN client

I have an SVN client locally and on the Solaris production server, and both are in my PATH, so when I type svn something the command is found (on my PC and on Solaris).
This is the error:
C:\dev\apps>cap deploy:migrations
  * executing `deploy:migrations'
  * executing `deploy:update_code'
    executing locally: "svn info https://svn.domain.co.uk/svn/apps -rHEAD"
*** executable 'svn' not present or not in $PATH on the local system!
  * executing "svn checkout -q -r6 https://svn.domain.co.uk/svn/apps /sites/rails-data/apps/releases/20100120114312 && (echo 6 > /sites/rails-data/apps/releases/20100120114312/REVISION)"
    servers: ["solaris001.ds.domain.com"]
Password:
    [solaris001.ds.domain.com] executing command
 ** [solaris001.ds.domain.com :: err] ld.so.1: svn: fatal: libaprutil-1.so.0: open failed: No such file or directory
 ** [solaris001.ds.domain.com :: err] Killed
    command finished
failed: "sh -c 'svn checkout -q -r6 https://svn.domain.co.uk/svn/apps /sites/rails-data/apps/releases/20100120114312 && (echo 6 > /sites/rails-data/apps/releases/20100120114312/REVISION)'" on solaris001.ds.domain.com
On my PC and on Solaris I can run these commands successfully; it is only Capistrano that cannot find the executable (locally) and the library (on the server).
This is my recipe:
set :application, "apps"
set :user, 'me'
set :domain, "solaris001.ds.domain.com"
set :repository, "https://svn.domain.co.uk/svn/apps"
set :use_sudo, false
set :deploy_to, "/sites/rails-data/#{application}"

role :app, domain
role :web, domain

namespace :deploy do
  task :start, :roles => :app do
    run "touch #{current_release}/tmp/restart.txt"
  end

  task :stop, :roles => :app do
    # Do nothing.
  end

  desc "Restart Application"
  task :restart, :roles => :app do
    run "touch #{current_release}/tmp/restart.txt"
  end
end
The solution: http://riccardotacconi.blogspot.com/2010/01/rails-deployment-with-capistrano-and.html
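In case that link goes stale: the two failures above are a missing svn on the local machine's non-interactive PATH and a missing shared library for svn on the Solaris host. In Capistrano 2 both are usually addressed with settings like these (the paths are examples for an OpenCSW-style Solaris layout; adjust them to where your svn and libraries actually live):

# deploy.rb
set :scm_command, "/opt/csw/bin/svn"   # svn binary on the remote host (example path)
set :local_scm_command, "svn"          # or an absolute path on the local PC
set :default_environment, {
  'LD_LIBRARY_PATH' => '/opt/csw/lib:/usr/local/lib' # so svn can find libaprutil-1.so.0
}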