I'm using Capistrano 2 to deploy my application. According to the documentation, the deploy:cold task will call deploy:update, followed by deploy:migrate, and finally deploy:start.
In my application I don't have a migration step, and when I run cap deploy:cold, it does the update but then shows this:
* 2014-04-09 14:57:45 executing `deploy:migrate'
`deploy:migrate' is only run for servers matching {:roles=>:db, :only=>{:primary=>true}}, but no servers matched
and deploy:start is not being called. If I call it manually it works fine, but never through deploy:cold. What am I missing?
The problem is that Capistrano 2 defines a default task called deploy:migrate that is specified to run only for the role :db. If you don't have that role in your application (which most do not), then when deploy:migrate runs it will error out on the missing role, and that kind of error normally terminates the deployment scenario.
This can happen for other tasks, if you have a complex setup where some tasks are only run for specific servers. This behavior is not documented at all at Capistrano's site, which is a shame.
Starting with Capistrano 2.7 there is a way to circumvent the "is only run for servers matching role" error, by specifying a task property that tells Capistrano to skip the task if no servers match any of its specified roles.
For example:
namespace :deploy do
  task :stuff, :roles => :somerole, :on_no_matching_servers => :continue do
    # stuff
  end
end
Now this works well for custom tasks, but how do you set it for the default migrate task? You have to override the task with your own definition and add the :on_no_matching_servers option. I've added this to my global.rb file:
namespace :deploy do
  task :migrate, :on_no_matching_servers => :continue do end
end
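An alternative, if you'd rather keep the stock task, is to declare a primary :db role pointing at one of your existing servers so deploy:migrate has a match. A sketch (the hostname is a placeholder):

```ruby
role :app, "app.example.com"
# Give the stock deploy:migrate something to match, even though
# there are no migrations to run in this app.
role :db, "app.example.com", :primary => true
```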
I'm new to Chef and trying to understand why this code does not return any error, while if I do the same with 'start' I get an error because the service does not exist.
service 'non-existing-service' do
  action :stop
end
# chef-apply test.rb
Recipe: (chef-apply cookbook)::(chef-apply recipe)
* service[non-existing-service] action stop (up to date)
I don't know which platform you are running on, but if you are running on Windows it should at least log
Chef::Log.debug "#{@new_resource} does not exist - nothing to do"
given that you have debug as your log level.
You could argue this is the wrong behaviour, but if the service does not exist, it certainly isn't running.
Source code
https://github.com/chef/chef/blob/master/lib/chef/provider/service/windows.rb#L147
If you are getting one of the variants of the init.d provider, they default to getting the current status of a service by grepping the process table. Because Chef does its own idempotence checks internally before calling the provider's stop method, it would see there is no such process in the table and assume it was already stopped.
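That check can be sketched in plain Ruby. This is illustrative only: the helper names below are hypothetical, not Chef's real API.

```ruby
# Illustrative sketch of the behaviour described above; these helpers
# are hypothetical, not Chef's actual provider API.

def service_running?(name, process_table)
  # init.d-style providers often determine status by grepping the process table
  process_table.any? { |line| line.include?(name) }
end

def stop_service(name, process_table)
  # The idempotence check runs before the provider's stop action:
  # no matching process means the service counts as already stopped.
  return :up_to_date unless service_running?(name, process_table)
  :stopped
end

def start_service(name, known_services)
  # Starting, by contrast, needs an actual init script to invoke,
  # so a missing service is an error.
  raise "No such service: #{name}" unless known_services.include?(name)
  :started
end
```

So stopping a non-existent service reports "up to date", while starting one raises.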
If I understand correctly, the standard git deploy implementation with Capistrano v3 deploys the same repository to all roles. I have a more complex app that has several types of servers, and each type has its own code base in its own repository. My database server, for example, does not need any code deployed.
How do I tackle such a problem in capistrano v3?
Should I write my own deployment tasks for each of the roles?
How do I tackle such a problem in capistrano v3?
All servers get the code, as in certain environments the code is needed to perform some actions. For example in a typical setup the web server needs your static assets, the app server needs your code to serve the app, and the db server needs your code to run migrations.
If that's not true in your environment and you don't want the code on the servers in some roles, you could easily send a pull request to add the no_release feature from Cap2 back into Cap3.
You can of course take the .rake files out of the Gem, and load those in your Capfile, which is a perfectly valid way to use the tool, and modify them for your own needs.
The general approach is that if you don't need code on your DB server, for example, why is it listed in your deployment file?
I can confirm you can use no_release: true to disable a server from deploying the repository code.
I needed to do this so I could specifically run a restart task for a different server.
Be sure to give your server a role so that you can target it. There is a handy function called release_roles() you can use to target servers that have your repository code.
Then you can separate any tasks (like my restart) to be independent from the deploy procedure.
For Example:
server '10.10.10.10', port: 22, user: 'deploy', roles: %w{web app db assets}
server '10.10.10.20', port: 22, user: 'deploy', roles: %w{frontend}, no_release: true
namespace :nginx do
  desc 'Reloading PHP will clear OpCache. Remove Nginx Cache files to force regeneration.'
  task :reload do
    on roles(:frontend) do
      execute "sudo /usr/sbin/service php7.1-fpm reload"
      execute "sudo /usr/bin/find /var/run/nginx-cache -type f -delete"
    end
  end
end

after 'deploy:finished', 'nginx:reload'
after 'deploy:rollback', 'nginx:reload'
# Example of a task for release_roles() only
namespace :composer do
  desc 'Update composer'
  task :update do
    on release_roles(:all) do
      execute "cd #{release_path} && composer update"
    end
  end
end

before 'deploy:publishing', 'composer:update'
I can think of many scenarios where this would come in handy.
FYI, this link has more useful examples:
https://capistranorb.com/documentation/advanced-features/property-filtering/
I am new to Webistrano, so apologies if this is a trivial matter...
I am using Webistrano to deploy PHP code to several production servers, and this is all working great. My problem is that I need to clear the HTML cache on my cache servers (Varnish) after the code update. I can't figure out how to build a recipe that is executed on the Webistrano machine (running the relevant shell script that clears the cache) rather than on each of the deployment target machines.
Thanks for the help,
Yariv
The simplest method is to execute the varnishadm tool with the proper parameters inside deploy:restart:
set :varnish_ban_pattern, "req.url ~ ^/"
set :varnish_terminal_address_port, "127.0.0.1:6082"
set :varnish_varnishadm, "/usr/bin/varnishadm"
task :restart, :roles => :web do
  run "#{varnish_varnishadm} -T #{varnish_terminal_address_port} ban \"#{varnish_ban_pattern}\""
end
Thanks for the answer. I actually need to do more than just clear the cache, so I will execute a bash script locally as described below:
How do I execute a Capistrano task locally?
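For reference, a sketch of that approach, assuming Capistrano 2's run_locally helper and a hypothetical clear_cache.sh script on the Webistrano machine:

```ruby
namespace :cache do
  desc "Run the local cache-clearing script on the deploy machine"
  task :clear do
    # run_locally executes the command on the machine running cap
    # (here, the Webistrano host), not on the deployment targets.
    run_locally "/path/to/clear_cache.sh"
  end
end

after "deploy:restart", "cache:clear"
```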
I'm considering using Capistrano to deploy my Rails app on my server. Currently I'm using a script which does all the work for me, but Capistrano looks pretty nice and I want to give it a try.
My first problem/question: How do I use Capistrano properly in open source projects? I don't want to publish my deploy.rb, for several reasons:
It contains sensitive information about my server. I don't want to publish that :)
It contains the configuration for MY server. For other people, who deploy that open source project to their own servers, the configuration may differ. So it's pretty pointless to publish my configuration, because it's useless to other people.
Second problem/question: How do I manage different environments?
Background: On my server I provide two different environments for my application: the stable system, using the current stable release branch and located at www.domain.com, and an integration environment for the development team at dev.domain.com, running the master branch.
How do I tell Capistrano to deploy the stable system or the dev system?
The way I handle sensitive information (passwords etc.) in Capistrano is the same way I handle them in general: I use an APP_CONFIG hash that comes from a YAML file that isn't checked into version control. This is a classic technique that's covered e.g. in RailsCast #226, or see this StackOverflow question.
There are a few things you have to do a little differently when using this approach with Capistrano:
Normally APP_CONFIG is loaded from your config/application.rb (so it happens early enough to be usable everywhere else), but Capistrano cap tasks won't load that file. You can, however, load it from config/deploy.rb too; here's the top of a contrived config/deploy.rb file using an HTTP repository that requires a username/password.
require 'bundler/capistrano'
APP_CONFIG = YAML.load_file("config/app_config.yml")
set :repo_user, APP_CONFIG['repo_user']
set :repo_password, APP_CONFIG['repo_password']
set :repository, "http://#{repo_user}:#{repo_password}@hostname/repositoryname.git/"
set :scm, :git
# ...
The config/app_config.yml file is not checked into version control (put that path in your .gitignore or similar); I normally check in a config/app_config.yml.sample that shows the parameters that need to be configured:
repo_user: 'usernamehere'
repo_password: 'passwordhere'
If you're using the APP_CONFIG for your application, it probably needs to have different values on your different deploy hosts. So have your Capistrano setup make a symlink from the shared/ directory to each release after it's checked out. You want to do this early in the deploy process, because applying migrations might need a database password. So in your config/deploy.rb put this:
after 'deploy:update_code', 'deploy:symlink_app_config'

namespace :deploy do
  desc "Symlinks the app_config.yml"
  task :symlink_app_config, :roles => [:web, :app, :db] do
    run "ln -nfs #{deploy_to}/shared/config/app_config.yml #{release_path}/config/app_config.yml"
  end
end
Now, for the second part of your question (about deploying to multiple hosts), you should configure separate Capistrano "stages" for each host. You put everything that's common across all stages in your config/deploy.rb file, and then you put everything that's unique to each stage into config/deploy/[stagename].rb files. You'll have a section in config/deploy.rb that defines the stages:
# Capistrano settings
require 'bundler/capistrano'
require 'capistrano/ext/multistage'
set :stages, %w(preproduction production)
set :default_stage, 'preproduction'
(You can call the stages whatever you want; the Capistrano stage name is separate from the Rails environment name, so the stage doesn't have to be called "production".) Now when you use the cap command, insert the stage name between cap and the target name, e.g.:
$ cap preproduction deploy  # deploys to the 'preproduction' environment
$ cap production deploy     # deploys to the 'production' environment
$ cap deploy                # deploys to whatever you defined as the default
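The per-stage files then hold only the differences between environments. A sketch using the domains from the question (the branch names and role assignments are assumptions):

```ruby
# config/deploy/production.rb -- the stable site
set :branch, "stable"   # assumed name of the stable release branch
server "www.domain.com", :web, :app, :db, :primary => true
```

```ruby
# config/deploy/preproduction.rb -- the development team's integration site
set :branch, "master"
server "dev.domain.com", :web, :app, :db, :primary => true
```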
I have a system in production that has several servers in several roles. I would like to test a new app server by deploying to that specific server, without having to redeploy to every server in production. Is there a way to ask Capistrano to deploy to a specific server? Ideally I'd like to be able to run something like
cap SERVER=app2.example.com ROLE=app production deploy
if I just wanted to deploy to app2.example.com.
Thanks!
[update]
I tried the solution suggested by wulong by executing:
cap HOSTS=app2.server.hostname ROLE=app qa deploy
but Capistrano seemed to be trying to execute tasks for other roles on that server in addition to the app tasks. Maybe I need to update my version of cap (I'm running v2.2.0)?
I ended up posting a question on the capistrano users list here, and got the following response from Jamis (edited a bit by me here for clarity):
Try the HOSTS environment variable:
cap HOSTS=app2.example.com production deploy
Note that doing this will treat app2 as being in every role, not just
whichever role(s) it happens to be declared in.
If what you want is to do a regular deploy, but only act on app2, and
only as app2 is declared in your recipe file, you can use the HOSTFILTER
variable instead:
cap HOSTFILTER=app2.example.com production deploy
[...]
Consider this concrete example. Suppose your
script defines three servers, A, B, and C. And it defines a task, "foo",
that (by default) wants to run on A and B, but not C. Like this:
role :app, "A", "B"
role :web, "C"

task :foo, :roles => :app do
  run "echo hello"
end
Now, if you do cap foo, it will run the echo command on both A and B.
If you do cap HOSTS=C foo, it will run the echo command on C,
regardless of the :roles parameter to the task.
If you do cap HOSTFILTER=C foo, it will not run the echo command at
all, because the intersection of (A B) and (C) is an empty set. (There
are no hosts in foo's host list that match C.)
If you do cap HOSTFILTER=A foo, it will run the echo command on only
A, because (A B) intersected with (A) is (A).
Lastly, if you do cap HOSTFILTER=A,B,C foo, it will run the echo
command on A and B (but not C), because (A B) intersected with (A B C)
is (A B).
To summarize: HOSTS completely overrides the hosts or roles declaration
of the task, and forces everything to run against the specified host(s).
The HOSTFILTER, on the other hand, simply filters the existing hosts
against the given list, choosing only those servers that are already in
the tasks server list.
The following should work out of the box:
cap HOSTS=app2.example.com ROLE=app deploy
If you want to deploy to >1 server with the same role:
cap HOSTS=app2.example.com,app3.example.com,app4.example.com ROLE=app deploy
I had a similar problem and tried the following; it works:
cap production ROLES=web HOSTS=machine1 stats
You should be able to do something like this in deploy.rb:
task :production do
  if ENV['SERVER'] && ENV['ROLE']
    role ENV['ROLE'], ENV['SERVER']
  else
    # your full config
  end
end
You can also specify a task-level :hosts parameter this way:
task :ship_artifacts, :hosts => ENV['DEST_HOST'] do
end