Capistrano - cleaning up old releases issue

I followed the advice in another post about how to clean up old Capistrano releases, but I've realised that the way I've implemented it has messed up the paths in my application (only on a staging site, thankfully!). I'm using the code below in my config/deploy/staging.rb script, but it can't be running at the correct point: after deployment the application fails because it's trying to load classes from earlier releases. If I remove the keep_releases line and the one below it and redeploy, everything works again. Has anyone come across this issue?
set :use_sudo, false
set :keep_releases, 1
after "deploy:update", "deploy:cleanup"

namespace :deploy do
  task :symlink_shared do
    # run some commands I need
  end
end

before "deploy:restart", "deploy:symlink_shared"

So it seems the order wasn't right. I changed keep_releases to 2, removed the "deploy:cleanup" line underneath it, and then changed the hooks to the following:
after "deploy:update", "deploy:symlink_shared"
after "deploy:restart", "deploy:cleanup"
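Putting those changes together, the whole staging config ends up looking something like this (a sketch; the task body stands in for the poster's own commands):

```ruby
set :use_sudo, false
set :keep_releases, 2

namespace :deploy do
  task :symlink_shared do
    # run the shared-path symlink commands here
  end
end

# Symlink shared paths once the new release is in place...
after "deploy:update", "deploy:symlink_shared"
# ...and only prune old releases after the app has restarted,
# so nothing still running points at a deleted release.
after "deploy:restart", "deploy:cleanup"
```

The key point is that deploy:cleanup must run after deploy:restart, so the release the running app points at is never deleted out from under it.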

Related

Deployment with Chef

I am using the Chef resource "deploy_revision" to deploy Python code to my nodes. I started this for a dev environment, but now there is slowly a need for it to expand and I'm not sure whether it's a good choice. Below is the code.
data_bag = Chef::EncryptedDataBagItem.load(node.chef_environment, node.chef_environment)

deploy_revision "/opt/mount/#{node[:application_name]}" do
  repo node[:application_repo]
  user "deployer"
  keep_releases 10
  action :deploy
  migrate false
  symlink_before_migrate.clear
  create_dirs_before_symlink.clear
  purge_before_symlink.clear
  symlinks.clear
  symlinks({})
  notifies :restart, "service[abc]"
end
This pulls down new code whenever there is any, during the automatic chef-client run every 30 minutes on the nodes. That's fine in development, but not so fine on nodes that are not part of the development environment. I have 4 environments:
dev
test
stage
prod
If I create 4 remote branches in git, is there a way to make this deploy from a specific branch in a specific environment? Something like: the dev nodes deploy the dev remote branch, test deploys the test remote branch, and so on. That way I can put a gate on the auto-deploys happening every 30 minutes. I referred to the Chef docs; there is a "deploy_branch" resource, but I'm not sure about it - the docs just say it's the same as "deploy_revision".
https://docs.chef.io/resource_deploy.html#layout-modifiers
There is a branch attribute available according to the Chef documentation. So is adding the attribute like this what I need?
deploy_revision "/opt/mount/#{node[:application_name]}" do
  repo node[:application_repo]
  user "deployer"
  branch "node.chef_environment"
  keep_releases 10
  action :deploy
  migrate false
  symlink_before_migrate.clear
  create_dirs_before_symlink.clear
  purge_before_symlink.clear
  symlinks.clear
  symlinks({})
  notifies :restart, "service[abc]"
end
Then I came across this (closed) bug report: https://tickets.opscode.com/browse/CHEF-5084. It seems branches are specified with the "revision" attribute. So, can I use that attribute with the node environment as the parameter? Like this:
revision "node.chef_environments"
If you think deployment using Chef is not a good idea, should I look at Capistrano instead?
We do this all the time. Just add:
deploy_revision '/path' do
  # ...
  revision node['chef_environment']
  # ...
end
Or, in our case, we calculate the branch from the chef_environment:
deploy_revision '/path' do
  # ...
  revision node['chef_environment'].match(/develop|staging|production|test/)[0]
  # ...
end
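That regex trick lets richer environment names (e.g. per-region variants) all map onto one of a few deploy branches. The same mapping as plain Ruby, outside the Chef DSL, with hypothetical environment names:

```ruby
# Branch names are matched in order, leftmost match wins.
BRANCH_RE = /develop|staging|production|test/

def branch_for(environment)
  match = environment.match(BRANCH_RE)
  raise ArgumentError, "no deploy branch for #{environment}" unless match
  match[0]
end

puts branch_for("staging-eu-west")  # staging
puts branch_for("production")       # production
```

Note that with this approach an environment name that matches none of the branches raises, rather than silently deploying the wrong branch.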
This is how it worked for me:
deploy_revision "/opt/mount/#{node[:application_name]}" do
  repo node[:application_repo]
  user "deployer"
  revision node.chef_environment
  keep_releases 10
  action :deploy
  migrate false
  symlink_before_migrate.clear
  create_dirs_before_symlink.clear
  purge_before_symlink.clear
  symlinks.clear
  symlinks({})
  notifies :restart, "service[abc]"
end

Capistrano - "no . floating literal anymore; put 0 before dot" (SyntaxError)

I have a little experience with Capistrano, but only on projects that were already using it, and I am currently trying to set it up on a new project. I have created my config/deploy.rb file, added the appropriate configuration, and am now trying to run "cap deploy:setup" to set up the correct Capistrano structure on my remote server. However, when I run this or any other Capistrano command I get:
/Library/Ruby/Gems/2.0.0/gems/capistrano-2.15.5/lib/capistrano/configuration/loading.rb:93:in `instance_eval': ./config/deploy.rb:8: no . floating literal anymore; put 0 before dot (SyntaxError)
role :web, xxx.x.xx.xx
It looks like an issue with the format of the IP address that I have provided for my app, but no variation I've tried seems to work.
Am I missing something obvious here?
Has anyone else come across this issue?
Thanks
James
I found a resolution for this in the end through trial and error. It turns out that I needed to wrap the role :web and role :app values in quotes:
role :web, "xx.xx.xx.xx"
Then it worked correctly. (Unquoted, Ruby tries to parse the address as a numeric literal, which is where the floating-literal syntax error comes from.)

Webistrano - how to clear global HTML cache after deployment

I am new to webistrano so apologies if this is a trivial matter...
I am using webistrano to deploy php code to several production servers, this is all working great. My problem is that I need to clear HTML cache on my cache servers (varnish cache) after the code update. I can't figure out how to build a recipe that will be executed on the webistrano machine (and will run the relevant shell script that will clear the cache) and not on each of the deployment target machines.
Thanks for the help,
Yariv
The simplest method is to execute the varnishadm tool with the proper parameters inside deploy:restart:
set :varnish_ban_pattern, "req.url ~ ^/"
set :varnish_terminal_address_port, "127.0.0.1:6082"
set :varnish_varnishadm, "/usr/bin/varnishadm"

task :restart, :roles => :web do
  run "#{varnish_varnishadm} -T #{varnish_terminal_address_port} ban \"#{varnish_ban_pattern}\""
end
Thanks for the answer. I actually need to do more than just clear the cache, so I will execute a bash script locally as described in:
How do I execute a Capistrano task locally?
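In Capistrano 2 that local execution can be done with run_locally, which runs a command on the machine running cap (here, the Webistrano host) rather than on the deploy targets. A minimal sketch, assuming the cache-clearing script lives at a hypothetical path /usr/local/bin/clear_html_cache.sh:

```ruby
namespace :deploy do
  desc "Clear the Varnish HTML cache from the Webistrano machine itself"
  task :clear_html_cache do
    # run_locally executes on the deploying machine,
    # not on each deployment target.
    run_locally "/usr/local/bin/clear_html_cache.sh"
  end
end

after "deploy:restart", "deploy:clear_html_cache"
```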

Capistrano in Open Source Projects and different environments

I'm considering using Capistrano to deploy my Rails app to my server. Currently I use a script which does all the work for me, but Capistrano looks pretty nice and I want to give it a try.
My first problem/question: How do I use Capistrano properly in open source projects? I don't want to publish my deploy.rb for several reasons:
It contains sensitive information about my server, which I don't want to publish :)
It contains the configuration for MY server. For other people who deploy this open source project to their own servers, the configuration may differ, so publishing mine is pointless - it's useless to anyone else.
Second problem/question: How do I manage different environments?
Background: On my server I provide two different environments for my application: the stable system, using the current stable release branch and located at www.domain.com, and an integration environment for the development team at dev.domain.com, running the master branch.
How do I tell Capistrano to deploy the stable system or the dev system?
The way I handle sensitive information (passwords etc.) in Capistrano is the same way I handle them in general: I use an APP_CONFIG hash that comes from a YAML file that isn't checked into version control. This is a classic technique that's covered e.g. in RailsCast #226, or see this StackOverflow question.
There are a few things you have to do a little differently when using this approach with Capistrano:
Normally APP_CONFIG is loaded from your config/application.rb (so it happens early enough to be usable everywhere else), but Capistrano cap tasks won't load that file. You can just load it from config/deploy.rb too; here's the top of a contrived config/deploy.rb file using an HTTP repository that requires a username/password:
require 'bundler/capistrano'
APP_CONFIG = YAML.load_file("config/app_config.yml")
set :repo_user, APP_CONFIG['repo_user']
set :repo_password, APP_CONFIG['repo_password']
set :repository, "http://#{repo_user}:#{repo_password}@hostname/repositoryname.git/"
set :scm, :git
# ...
The config/app_config.yml file is not checked into version control (put that path in your .gitignore or similar); I normally check in a config/app_config.yml.sample that shows the parameters that need to be configured:
repo_user: 'usernamehere'
repo_password: 'passwordhere'
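Loaded with Ruby's stdlib YAML parser, that sample file becomes an ordinary hash, which is exactly what the set :repo_user lines above index into:

```ruby
require 'yaml'

# Same two keys as config/app_config.yml.sample above,
# inlined here so the snippet is self-contained.
sample = <<~YAML
  repo_user: 'usernamehere'
  repo_password: 'passwordhere'
YAML

APP_CONFIG = YAML.load(sample)
puts APP_CONFIG['repo_user']  # usernamehere
```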
If you're using the APP_CONFIG for your application, it probably needs to have different values on your different deploy hosts. So have your Capistrano setup make a symlink from the shared/ directory to each release after it's checked out. You want to do this early in the deploy process, because applying migrations might need a database password. So in your config/deploy.rb put this:
after 'deploy:update_code', 'deploy:symlink_app_config'

namespace :deploy do
  desc "Symlinks the app_config.yml"
  task :symlink_app_config, :roles => [:web, :app, :db] do
    run "ln -nfs #{deploy_to}/shared/config/app_config.yml #{release_path}/config/app_config.yml"
  end
end
Now, for the second part of your question (about deploying to multiple hosts), you should configure separate Capistrano "stages" for each host. You put everything that's common across all stages in your config/deploy.rb file, and then you put everything that's unique to each stage into config/deploy/[stagename].rb files. You'll have a section in config/deploy.rb that defines the stages:
# Capistrano settings
require 'bundler/capistrano'
require 'capistrano/ext/multistage'
set :stages, %w(preproduction production)
set :default_stage, 'preproduction'
(You can call the stages whatever you want; the Capistrano stage name is separate from the Rails environment name, so the stage doesn't have to be called "production".) Now when you use the cap command, insert the stage name between cap and the target name, e.g.:
$ cap preproduction deploy  # deploys to the 'preproduction' environment
$ cap production deploy     # deploys to the 'production' environment
$ cap deploy                # deploys to whatever you defined as the default
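Each stage file then holds only what differs per host. A sketch of a hypothetical config/deploy/production.rb for the stable site described in the question (server address, branch, and deploy path are placeholders):

```ruby
# config/deploy/production.rb
server "www.domain.com", :web, :app, :db, :primary => true
set :branch, "stable"            # the stable release branch
set :deploy_to, "/var/www/myapp" # placeholder path
set :rails_env, "production"
```

The dev stage file would look the same but point at dev.domain.com and the master branch.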

Unicorn restart issue with capistrano

We're deploying with cap and using a script that sends USR2 to the unicorn process to reload it. It usually works, but every once in a while it fails. When that happens, the unicorn log reveals that it's looking for a Gemfile in an old release directory that no longer exists.
Exception :
/usr/local/lib/ruby/gems/1.9.1/gems/bundler-1.0.21/lib/bundler/definition.rb:14:in `build': /var/www/railsapps/inventory/releases/20111128233407/Gemfile not found (Bundler::GemfileNotFound)
To clarify that's not the current release but an older one that's since been removed.
When it works it does seem to work correctly - ie it does pickup the new code - so I don't think it's somehow stuck referring to the old release.
Any ideas?
In your unicorn.rb, add a before_exec block:
current_path = "/var/www/html/my project/current"

before_exec do |server|
  ENV['BUNDLE_GEMFILE'] = "#{current_path}/Gemfile"
end
Read more about it here: http://blog.willj.net/2011/08/02/fixing-the-gemfile-not-found-bundlergemfilenotfound-error/
You should set the BUNDLE_GEMFILE environment variable before you start the server, pointing it at current/Gemfile.
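Concretely, launching unicorn that way might look like this (the deploy path is taken from the error message above; the config file and flags are placeholder assumptions):

```shell
# Point bundler at the symlinked current release, not the concrete
# release directory unicorn happened to be started from.
export BUNDLE_GEMFILE=/var/www/railsapps/inventory/current/Gemfile
bundle exec unicorn -c config/unicorn.rb -E production -D
```

Because current/ is a symlink that always points at the latest release, the Gemfile path stays valid even after old releases are cleaned up.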