Capistrano 3 version of on_rollback? - capistrano

Upgrading to capistrano 3, the following no longer seems to work:
namespace :project do
  desc "Prevents executing by creating lockfile"
  task :disable do
    on roles(:app) do
      execute "touch /tmp/proj_lockfile"
    end
    on_rollback do
      execute "rm /tmp/proj_lockfile"
    end
  end
end
...
NoMethodError: undefined method `on_rollback' for main:Object
config/deploy.rb:34:in `block (2 levels) in <top (required)>'
Tasks: TOP => deploy:starting => transformer:disable
(See full trace by running task with --trace)
Is there a new task or mechanism to do this?

There is no Capistrano 3 equivalent of on_rollback.
In Capistrano 3, if any command fails, the deployment stops, and the release that was being deployed is left in place (possibly working and deployed, possibly not working and deployed, or not deployed - Capistrano no longer attempts to clean up at all).
Note that this also means that :on_error => :continue is unsupported with no replacement; you have to begin/rescue any exceptions yourself.
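For example, a minimal sketch of handling the failure yourself inside the task (the lockfile path is taken from the question; rescuing SSHKit::Command::Failed is an assumption about the exception class that execute raises on a non-zero exit):
namespace :project do
  desc "Prevents executing by creating lockfile"
  task :disable do
    on roles(:app) do
      begin
        execute "touch /tmp/proj_lockfile"
      rescue SSHKit::Command::Failed
        # Assumption: clean up the lockfile ourselves, since Capistrano 3
        # will not roll anything back for us, then re-raise the error.
        execute "rm -f /tmp/proj_lockfile"
        raise
      end
    end
  end
end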
The rollbacks section of the documentation is completely empty.

Take a look at the "Rollback flow" documentation: http://capistranorb.com/documentation/getting-started/flow/
deploy:starting
deploy:started
deploy:reverting - revert server(s) to previous release
deploy:reverted - reverted hook
deploy:publishing
deploy:published
deploy:finishing_rollback - finish the rollback, clean up everything
deploy:finished
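If the goal of the original on_rollback block was to clean up the lockfile, a rough equivalent under the new flow is to hook a cleanup task into deploy:failed (the closest analogue of Capistrano 2's transaction rollback) and/or deploy:reverted from the flow above. A sketch only, reusing the path from the question:
namespace :project do
  desc "Remove lockfile after a failed or reverted deploy"
  task :remove_lock do
    on roles(:app) do
      execute "rm -f /tmp/proj_lockfile"
    end
  end
end

# Hook it into the failure and rollback flows (task names from the flow above).
after 'deploy:failed',   'project:remove_lock'
after 'deploy:reverted', 'project:remove_lock'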

Related

Undefined method [] for nil when trying to run cap deploy:restart

I have a Rails 5.2 app, and with cap 3.4.1 we are suddenly getting this weird error:
[b35efe76] Phusion Passenger(R) 6.0.8
DEBUG [b35efe76] Finished in 0.305 seconds with exit status 0 (successful).
(Backtrace restricted to imported tasks)
cap aborted!
SSHKit::Runner::ExecuteError: Exception while executing as deploy#host.com: undefined method `[]' for nil:NilClass
Trying to narrow it down, it happens when attempting to restart, because this is the command that fails:
cap production deploy:restart
The question is, how do I go about finding which file is trying to call [] on a nil value? Running cap with --trace is of no value because it just gives me internal errors - nothing in my code. Basically, how do I find out what is nil?
One more clue: I have a bunch of servers. If I run the cap restart command on server A, it restarts fine; on server B, it throws this error. So I'm guessing there's an environment variable set on server A but not on B, but the error is so opaque I don't know where to begin.
Thanks for any help,
kevin
A wild guess: I had a similar problem and was able to solve it by upgrading capistrano-passenger to >= 0.2.1.
It looks like the version change of Passenger from 6.0.7 to 6.0.8 introduced a problem. I see you are also on 6.0.8, so it might affect you too!
Link to the capistrano-passenger issue
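If that is the cause, a minimal Gemfile sketch to pick up the fixed version (the version constraint comes from the answer above; the group and require options are just the usual way Capistrano plugins are declared):
group :development do
  gem 'capistrano-passenger', '>= 0.2.1', require: false
end
Then run bundle update capistrano-passenger and redeploy.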

Capistrano3 - Delete failed release on deploy:failed

I'm using Capistrano 3. How can I delete failed releases on deploy:failed without writing my own task? The point is that failed releases stay in the releases directory, and I can accidentally roll back to a broken release by typing cap deploy:rollback.
Update:
For now I'm using this task, but I'm still looking for a better solution.
namespace :deploy do
  desc 'Delete failed release'
  task :rm_failed do
    on roles(:all) do
      execute(:rm, "-rf #{release_path}")
    end
  end
end
after 'deploy:failed', 'deploy:rm_failed'
My limited experience with Capistrano indicates that there is no built-in mechanism to perform the action you want. Writing a task that is triggered using deploy:failed is the proper way to handle the situation as it can be application specific.
A better solution, which addresses the potential for a failure after deploy:symlink:release (in which case release_path is already the current release and should not be deleted):
namespace :deploy do
  desc 'Delete failed release'
  task :rm_failed do
    on roles(:web) do
      if test "[ -d #{current_path} ]"
        current_release = capture(:readlink, current_path).to_s
        if current_release != release_path.to_s
          execute :rm, "-rf #{release_path}"
        end
      end
    end
  end
end
after 'deploy:failed', 'deploy:rm_failed'
The Capistrano page on rollbacks has a larger discussion on handling rollbacks and failures.

gitlab vagrant vm rake:test aborted

I'm trying to install the gitlab-vagrant-vm from this repository: https://github.com/gitlabhq/gitlab-vagrant-vm
If I run
bundle exec rake gitlab:test
it runs the tests with all their cases and checks,
but the summary gives me the following error output:
Error summary:
Errors (1)
Project Issue Tracker :: I set the issue tracker to "Redmine" :: And change the issue tracker to "Redmine"
Steps Summary: (1379) Successful, (0) Undefined, (0) Pending, (0) Failed, (1) Error
Coverage report generated for RSpec to /vagrant/gitlabhq/coverage. 4937 / 7059 LOC (69.94%) covered.
rake aborted!
Command failed with status (1): [/opt/rbenv/versions/2.0.0-p247/bin/ruby -S...]
Tasks: TOP => spinach
(See full trace by running task with --trace)
rake aborted!
rake spinach failed!
/vagrant/gitlabhq/lib/tasks/gitlab/test.rake:15:in `block (3 levels) in '
/vagrant/gitlabhq/lib/tasks/gitlab/test.rake:12:in `each'
/vagrant/gitlabhq/lib/tasks/gitlab/test.rake:12:in `block (2 levels) in '
Tasks: TOP => gitlab:test
(See full trace by running task with --trace)
vagrant#precise32:/vagrant/gitlabhq$
What is this error about, and how do I fix it? I really want to work with GitLab, and I have been trying for several days now...
I am using:
OS X 10.9 with Server 3
Vagrant 1.3.5
VirtualBox 4.3
In the Vagrant VM I ran apt-get update / upgrade to get the latest Ubuntu packages.
greetz!

msdeploy - stop deploy in postsync if presync fails

I am using msdeploy -preSync to back up the current deployment of a website in IIS before the -postSync deploys it; however, I recently had a situation where the -preSync failed (it raised a warning due to a missing DLL) and the -postSync continued and overwrote the code.
Both the preSync and postSync run batch files.
Obviously this is bad, as the backup failed, so there is no backout route if the deployment has bugs or fails.
Is there any way to stop the postSync if the preSync raises warnings with msdeploy?
Perhaps the issue here is that the preSync failure was raised as a warning, not an error.
Supply the successReturnCodes parameter set to 0 (the success return code convention) to the preSync option, for example:
-preSync:runCommand="your script",successReturnCodes=0
More info at: http://technet.microsoft.com/en-us/library/ee619740(v=ws.10).aspx

What am I doing wrong for Sphinx to fail to start during cap deploy?

I'm struggling to get Sphinx back up and running after deploying a rails app to my VPS.
Specifically, I'm thrown this error:
** [out :: myapp.com] => Mixing in Lockdown version: 1.6.4
** [out :: myapp.com]
** [out :: myapp.com] Failed to start searchd daemon. Check /var/www/myapp/releases/20100227224936/log/searchd.log.
** [out :: myapp.com] Failed to start searchd daemon. Check /var/www/myapp/releases/20100227224936/log/searchd.log
However, a log file isn't created!
This is the deploy.rb I am using (with thanks to Updrift :) )
namespace :deploy do
  desc "Restart the app"
  task :restart, :roles => :app do
    # This regen's the config file, stops Sphinx if running, then starts it.
    # No indexing is done, just a restart of the searchd daemon
    #   thinking_sphinx.running_start
    # The above does not re-index. If any of your define_index blocks
    # in your models have changed, you will need to perform an index.
    # If these are changing frequently, you can use the following
    # in place of running_start
    thinking_sphinx.stop
    thinking_sphinx.index
    thinking_sphinx.start

    # Restart the app
    run "touch #{current_path}/tmp/restart.txt"
  end

  desc "Cleanup older revisions"
  task :after_deploy do
    cleanup
  end
end
I'm using the Thinking Sphinx gem, v 1.3.16, passenger 2.2.10. Any thoughts you have would be greatly appreciated.
Many thanks!
Greg
UPDATE: After some more Google searching, I've found a couple of other people with similar errors - seemingly related to port listening errors, e.g. here and [I'm not allowed to link to the other one]. My production.sphinx.conf similarly uses port 9312, despite my specifying port 3312 in sphinx.yml.
Does anyone have any idea what might be causing this? Thanks.
I should have rung up the IT Crowd: "Have you tried turning it off and on again?"
http://groups.google.com/group/thinking-sphinx/browse_thread/thread/dde565ea40e31075
Rebooting the server released the port.