Capistrano 3 - Delete failed release on deploy:failed

I'm using Capistrano 3. How can I delete failed releases on deploy:failed without writing my own task? The point is that failed releases stay in the releases directory, and I can accidentally roll back to a broken release by typing cap deploy:rollback.
UPD:
For now I'm using this task, but I'm still looking for a better solution.
namespace :deploy do
  desc 'Delete failed release'
  task :rm_failed do
    on roles(:all) do
      execute(:rm, "-rf #{release_path}")
    end
  end
end
after 'deploy:failed', 'deploy:rm_failed'

My limited experience with Capistrano indicates that there is no built-in mechanism to perform the action you want. Writing a task that is triggered by deploy:failed is the proper way to handle the situation, as the cleanup can be application specific.
A better solution, which guards against the case where a later step (such as deploy:clean) fails after deploy:symlink:release has already made the new release current:
namespace :deploy do
  desc 'Delete failed release'
  task :rm_failed do
    on roles(:web) do
      if test "[ -d #{current_path} ]"
        # Only delete the failed release if it never became the
        # current release, so a live deploy is never removed.
        current_release = capture(:readlink, current_path).to_s
        if current_release != release_path.to_s
          execute :rm, "-rf #{release_path}"
        end
      end
    end
  end
end
after 'deploy:failed', 'deploy:rm_failed'
The Capistrano page on rollbacks has a larger discussion on handling rollbacks and failures.

Related

ERROR Failed to perform transaction: Transaction failed with vm status: Validation(UnknownScript)

I am following the Diem (Libra) documentation on My First Transaction; everything worked fine until Submit a Transaction. As mentioned in the following document, I tried to submit a transaction: https://developers.libra.org/docs/my-first-transaction#submit-a-transaction
But it fails with the error below; I tried going through the troubleshooting articles, but they were not much help.
libra% transfer 0 1 10
>> Transferring
[ERROR] Failed to perform transaction: Transaction failed with vm status: Validation(UnknownScript)
libra%
I'm wondering if anyone was able to successfully submit the transaction.
OS: macOS
It seems you are building the client CLI off the master branch; you may need to build from the testnet branch instead. I faced the same issue, and it worked after switching to the testnet branch:
$ git checkout testnet
$ ./scripts/cli/start_cli_testnet.sh
.
.
.
libra% transfer 0 1 10
>> Transferring
Transaction submitted to validator
To query for transaction status, run: query txn_acc_seq 0 0 <fetch_events=true|false>

Capistrano 3 version of on_rollback?

Upgrading to capistrano 3, the following no longer seems to work:
namespace :project do
  desc "Prevents executing by creating lockfile"
  task :disable do
    on roles(:app) do
      execute "touch /tmp/proj_lockfile"
    end
    on_rollback do
      execute "rm /tmp/proj_lockfile"
    end
  end
end
...
NoMethodError: undefined method `on_rollback' for main:Object
config/deploy.rb:34:in `block (2 levels) in <top (required)>'
Tasks: TOP => deploy:starting => transformer:disable
(See full trace by running task with --trace)
Is there a new task or mechanism to do this?
There is no Capistrano 3 equivalent of on_rollback.
In Capistrano 3, if any command fails, the deployment stops, and the release that was being deployed is left in place (possibly working and deployed, possibly not working and deployed, or not deployed - Capistrano no longer attempts to clean up at all).
Note that this also means that :on_error => :continue is unsupported with no replacement; you have to begin/rescue any exceptions yourself.
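A common workaround is to hook a cleanup task to deploy:failed, which Capistrano 3 does fire when a deploy errors out. A minimal sketch, assuming the lockfile path from the question (the task name is otherwise hypothetical):
namespace :project do
  desc "Remove lockfile if the deploy fails"
  task :remove_lockfile do
    on roles(:app) do
      # -f so the task succeeds even if the lockfile was never created
      execute :rm, "-f", "/tmp/proj_lockfile"
    end
  end
end

after "deploy:failed", "project:remove_lockfile"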
The rollbacks section of the documentation is completely empty.
Take a look at the "Rollback flow" documentation: http://capistranorb.com/documentation/getting-started/flow/
deploy:starting
deploy:started
deploy:reverting - revert server(s) to previous release
deploy:reverted - reverted hook
deploy:publishing
deploy:published
deploy:finishing_rollback - finish the rollback, clean up everything
deploy:finished
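If the cleanup should also run on an explicit cap deploy:rollback, the same task can be hooked into one of these stages, e.g. (a sketch, reusing the hypothetical task from above):
after "deploy:reverted", "project:remove_lockfile"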

msdeploy - stop deploy in postsync if presync fails

I am using msdeploy -presync to back up the current deployment of a website in IIS before the -postsync deploys it. However, I recently had a situation where the -presync failed (it raised a warning due to a missing dll) and the -postsync continued and overwrote the code.
Both the presync and postsync run batch files.
Obviously this is bad: the backup failed, so there is no backout route if the deployment has bugs or fails.
Is there any way to stop the postsync if the presync raises warnings with msdeploy?
Perhaps the issue here is that the presync failure was raised as a warning not an error.
Supply the successReturnCodes parameter, set to 0 (the conventional success return code), to the preSync option, for example:
-preSync:runCommand="your script",successReturnCodes=0
More info at: http://technet.microsoft.com/en-us/library/ee619740(v=ws.10).aspx
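For context, a fuller invocation might look like the following sketch (the paths, scripts, and -source/-dest settings are hypothetical placeholders, not from the question):
msdeploy -verb:sync ^
  -source:contentPath="C:\build\MySite" ^
  -dest:auto ^
  -preSync:runCommand="C:\scripts\backup.bat",successReturnCodes=0 ^
  -postSync:runCommand="C:\scripts\deploy.bat"
With successReturnCodes=0, a non-zero exit from the preSync command should be treated as a failure, aborting the operation before the sync and postSync run. Note that the batch file itself must propagate a non-zero exit code on failure (e.g. exit /b 1) for this to take effect.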

Inspect and retry resque jobs via redis-cli

I am unable to run resque-web on my server due to some issues I still have to work on, but I still have to check and retry failed jobs in my resque queues.
Does anyone have experience with how to peek at the failed-jobs queue to see what the error was, and then how to retry a job, using the redis-cli command line?
thanks,
Found a solution on the following link:
http://ariejan.net/2010/08/23/resque-how-to-requeue-failed-jobs
In the rails console we can use these commands to check and retry failed jobs:
1 - Get the number of failed jobs:
Resque::Failure.count
2 - Check the error's exception class and backtrace:
Resque::Failure.all(0, 20).each do |job|
  puts "#{job["exception"]} #{job["backtrace"]}"
end
The job object is a hash with information about the failed job; you may inspect it for more detail. Also note that this only lists the first 20 failed jobs, so vary the offset and limit arguments (0, 20) to page through the list; a sketch that lists them all in one go follows below.
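One approach that should list every failure in a single pass is to use the failure count itself as the limit (a sketch, assuming the same Resque 1.x API as above):
Resque::Failure.all(0, Resque::Failure.count).each do |job|
  puts "#{job["exception"]} #{job["backtrace"]}"
end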
3 - Retry all failed jobs:
(Resque::Failure.count-1).downto(0).each { |i| Resque::Failure.requeue(i) }
4 - Reset the failed jobs count:
Resque::Failure.clear
Retrying all the jobs does not reset the counter; we must clear it so it goes back to zero.
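If you really only have Redis access and no Rails console, the same data can be read straight out of Redis: Resque keeps failures as JSON blobs in a list, which lives under the key resque:failed with the default namespace. A minimal sketch using the redis gem, with the equivalent redis-cli commands noted in the comments:
require 'redis'
require 'json'

redis = Redis.new # assumes the default host and port

# redis-cli equivalent: LLEN resque:failed
puts redis.llen('resque:failed')

# redis-cli equivalent: LRANGE resque:failed 0 0
# (assumes at least one failed job exists)
payload = redis.lrange('resque:failed', 0, 0).first
job = JSON.parse(payload)
puts "#{job['exception']} #{job['backtrace']}"
Requeueing via raw Redis is trickier, since you would have to rewrite the failure payload and push the job back onto the right queue yourself, which is why the Rails console approach above is simpler.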

What am I doing wrong for Sphinx to fail to start during cap deploy?

I'm struggling to get Sphinx back up and running after deploying a rails app to my VPS.
Specifically, I'm thrown this error:
** [out :: myapp.com] => Mixing in Lockdown version: 1.6.4
** [out :: myapp.com]
** [out :: myapp.com] Failed to start searchd daemon. Check /var/www/myapp/releases/20100227224936/log/searchd.log.
** [out :: myapp.com] Failed to start searchd daemon. Check /var/www/myapp/releases/20100227224936/log/searchd.log
However, a log file isn't created!
This is the deploy.rb I am using (with thanks to Updrift :) )
namespace :deploy do
  desc "Restart the app"
  task :restart, :roles => :app do
    # This regen's the config file, stops Sphinx if running, then starts it.
    # No indexing is done, just a restart of the searchd daemon
    # thinking_sphinx.running_start
    # The above does not re-index. If any of your define_index blocks
    # in your models have changed, you will need to perform an index.
    # If these are changing frequently, you can use the following
    # in place of running_start
    thinking_sphinx.stop
    thinking_sphinx.index
    thinking_sphinx.start

    # Restart the app
    run "touch #{current_path}/tmp/restart.txt"
  end

  desc "Cleanup older revisions"
  task :after_deploy do
    cleanup
  end
end
I'm using the Thinking Sphinx gem, v 1.3.16, passenger 2.2.10. Any thoughts you have would be greatly appreciated.
Many thanks!
Greg
UPDATE: After some more Google searching, I've found a couple of other people with similar errors, seemingly related to port listening problems, e.g. here and [I'm not allowed to link to the other one]. My production.sphinx.conf has likewise ended up using port 9312, despite me specifying 3312 in sphinx.yml.
Does anyone have any idea what might be causing this? Thanks.
I should have rung up the IT Crowd: "Have you tried turning it off and on again?"
http://groups.google.com/group/thinking-sphinx/browse_thread/thread/dde565ea40e31075
Rebooting the server released the port.
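For what it's worth, a full reboot shouldn't normally be necessary: if a stale searchd is holding the port, you can usually find and stop it by hand. A sketch (the port assumes the 9312 default mentioned above, and <pid> is a placeholder for whatever lsof reports):
$ lsof -i :9312              # find the pid of the stale searchd
$ kill <pid>                 # ask it to shut down
$ rake thinking_sphinx:stop  # or use the gem's own rake tasks
$ rake thinking_sphinx:start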