I have configured a Chef environment and I am able to deploy my application using Capistrano. Now I want Chef to handle Capistrano to deploy my apps. How can this be done?
I do the opposite, i.e. I deploy my Chef recipes via Capistrano, and I recommend it.
#config/deploy.rb
...
before 'bundle:install', "provision:default", "deploy:config_db_yml_symlink"
...
This will execute the Chef configuration for a server right before bundle install, which is important because many gems rely on packages being installed at the OS level.
#config/deploy/provision.rb
Capistrano::Configuration.instance(:must_exist).load do
  namespace :provision do
    task :default do
      provision.setup
      provision.web
      provision.db
      provision.db_slave
    end

    task :setup, once: true do
      provision.get_environment_variables
      provision.update_cookbooks
    end

    task :db, :roles => :db do
      next if find_servers_for_task(current_task).empty?
      if rails_env == 'staging'
        run %{cd #{release_path}/provision; sudo chef-solo -c solo.rb -j db.json -l debug}
      else
        run %{cd #{release_path}/provision; sudo chef-solo -c solo.rb -j db_master.json -l debug}
      end
    end

    task :db_slave, :roles => :db_slave do
      next if find_servers_for_task(current_task).empty?
      run %{cd #{release_path}/provision; sudo chef-solo -c solo.rb -j db_slave.json -l debug}
    end

    task :web, :roles => :web do
      next if find_servers_for_task(current_task).empty?
      run %{cd #{release_path}/provision; sudo chef-solo -c solo.rb -j web.json -l debug}
    end

    task :get_environment_variables do
      run "if [ -d ~/.config ]; then " +
          "cd ~/.config && git fetch && git reset origin/master --hard; " +
          "else git clone git@github.com:mycompany/config.git .config; fi"
      run "sudo cp ~/.config/secureshare/#{rails_env}/environment /etc/environment"
    end

    task :update_cookbooks do
      run "if [ -d /u/chef ]; then " +
          "cd /u/chef && git fetch && git reset origin/master --hard; " +
          "else git clone git@github.com:mycompany/chef.git /u/chef; fi"
    end
  end

  namespace :deploy do
    task :setup, :except => { :no_release => true } do
      dirs = [deploy_to, releases_path, shared_path]
      dirs += shared_children.map { |d| File.join(shared_path, d.split('/').last) }
      dirs += [File.join(shared_path, 'sockets')]
      run "#{try_sudo} mkdir -p #{dirs.join(' ')}"
      run "#{try_sudo} chmod g+w #{dirs.join(' ')}" if fetch(:group_writable, true)
      run "#{try_sudo} chown -R ubuntu:ubuntu #{dirs.join(' ')}" if fetch(:group_writable, true)
    end

    task :config_db_yml_symlink do
      run "ln -s #{shared_path}/database.yml #{release_path}/config/database.yml"
    end
  end
end
I have a folder in my project named provision, to handle the definition of chef roles, though the recipes are in a different repository.
#provision/solo.rb
root = File.absolute_path(File.dirname(__FILE__))
cookbook_path '/u/chef'
role_path root + "/roles"
log_level :debug
log_location STDOUT
Nodes are defined in the project:
#provision/db_slave.json
{
"run_list": ["role[db_slave]"]
}
And so are the roles:
#provision/roles/db_slave.rb
name "db_slave"
description 'A postgresql slave.'
run_list(["recipe[base]", "recipe[postgresql::slave]", "recipe[rails]", "recipe[papertrail]", "recipe[fail2ban]"])
override_attributes(
  'kernel' => {
    'shmmax' => ENV['KERNEL_SHMMAX'],
    'shmall' => ENV['KERNEL_SHMALL'],
    'msgmax' => ENV['KERNEL_MSGMAX'],
    'msgmnb' => ENV['KERNEL_MSGMNB']
  },
  'postgresql' => {
    'user' => ENV['PG_USER'],
    'password' => ENV['PG_PASSWORD'],
    'database' => ENV['PG_DATABASE'],
    'master_host' => ENV['PG_HOST']
  },
  'app_dir' => ENV['APP_DIR'],
  'papertrail' => {
    'port' => ENV['PAPERTRAIL_PORT'],
    'log_files' => [
      "#{ENV['APP_DIR']}/shared/log/*.log",
      "/var/log/*.log",
      "/var/log/syslog",
      "/var/log/upstart/*.log",
      "/var/log/postgresql/*.log"
    ]
  },
  'new_relic' => {
    'key' => ENV['NEW_RELIC_LICENSE_KEY']
  }
)
All without keeping any sensitive information within the app. I also use capistrano-ec2group in order to map servers to roles using EC2 security groups.
group :myapp_web, :web
group :myapp_web, :app
group :myapp_db, :db, :primary=>true
group :myapp_db_slave, :db_slave
So basically you keep your chef recipes in one repo, your environment variables in another repo, and your app in another repo - and use Capistrano to both provision servers and deploy your app.
You could also keep your Chef recipes in your application repo, but that inhibits reuse between projects. The key is to put everything that changes into environment variables and store them separately from both the app and the recipes.
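A minimal sketch of the "everything that changes lives in environment variables" idea, using illustrative variable names (PG_USER, APP_DIR) that mirror the role file above — the defaults shown are placeholders, not values from the setup described:

```ruby
# Resolve a deploy-time setting from the environment, with an optional
# fallback. Required settings with no default fail fast and loudly.
def setting(key, default = nil)
  value = ENV.fetch(key, default)
  raise "Missing required setting: #{key}" if value.nil?
  value
end

pg_user = setting('PG_USER', 'postgres')       # optional, has a default
app_dir = setting('APP_DIR', '/u/apps/myapp')  # illustrative path
puts "#{pg_user} #{app_dir}"
```

Failing fast on a missing required setting is what keeps a misconfigured server from provisioning half-way with nil values.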
When this is configured correctly, to add a new server you simply spin one up in EC2, apply the desired security group, and then run:
cap deploy
You could watch the Food Fight Show episode about application deployment.
For example, you can put the configuration files (such as the database credentials) on the server with Chef, while pushing the source code with Capistrano.
You can't. Or at least it won't be very straightforward.
Chef is a pull system -- the client pulls information from the Chef server, and takes action upon it.
Capistrano is a push system -- you tell it to log into the server and perform tasks there.
The only way I see for you to integrate them would be to run Capistrano locally on each machine, but I fail to see a reason for that.
Chef's deploy resource can probably do everything you need without the need to have Capistrano integrated. If you still want to push your deploys to the servers independently from the chef-client runs, you're better off not deploying via Chef and keeping your current system.
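For reference, a minimal sketch of what a Chef `deploy` resource might look like for this — the repository URL, paths, and restart command here are placeholders, not taken from the question:

```ruby
# Hypothetical Chef recipe fragment; repo URL and paths are illustrative.
deploy '/u/apps/myapp' do
  repo 'git@github.com:mycompany/myapp.git'
  branch 'master'
  symlink_before_migrate 'config/database.yml' => 'config/database.yml'
  restart_command 'touch /u/apps/myapp/current/tmp/restart.txt'
  action :deploy
end
```

This gives you the same clone/symlink/restart cycle Capistrano performs, but driven from chef-client runs instead of a push from your workstation.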
If you want continuous delivery, hook up your Capistrano scripts to your CI server and run them at the end of your pipeline.
The podcast referred to by @StephenKing is a great source of information on this matter.
Related
I have a user that just has access to pull from GitHub. In my Dockerfile I have added the plugins for Jenkins, such as github:1.22.4, but I want to configure the plugins, since some of the people who will build the image won't know how to do the configuration and don't care to learn.
So, I have some plugins for Jenkins and I want to be able to configure them using the Dockerfile. How can I do that?
My Dockerfile is pretty basic right now:
FROM jenkins
COPY plugins.txt /plugins.txt
RUN /usr/local/bin/plugins.sh /plugins.txt
and I have several plugins in plugins.txt, but the one I want to configure is to pull the code from github.
Did you check this git repository?
Let's say you have a plugins.txt like:
github:1.22.4
maven-plugin:2.7.1
ant:1.3
and a Dockerfile like the one in your question.
You can take a look at the example plugins.sh; here is the part that installs plugins. Since you want to configure some plugins, you can add your configuration while each plugin is being installed:
if ! grep -q "${plugin[0]}:${plugin[1]}" "$TEMP_ALREADY_INSTALLED"
then
  echo "Downloading ${plugin[0]}:${plugin[1]}"
  curl --retry 3 --retry-delay 5 -sSL -f "${JENKINS_UC_DOWNLOAD}/plugins/${plugin[0]}/${plugin[1]}/${plugin[0]}.hpi" -o "$REF/${plugin[0]}.jpi"
  unzip -qqt "$REF/${plugin[0]}.jpi"
  # if [ some plugin ] then
  #   here your configuration
  # fi
  (( COUNT_PLUGINS_INSTALLED += 1 ))
else
  echo " ... skipping already installed: ${plugin[0]}:${plugin[1]}"
fi
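To make the plugins.txt format explicit, here is a small Ruby sketch (not Jenkins code) of the same name/version split the shell loop above performs with `${plugin[0]}` and `${plugin[1]}`:

```ruby
# Parse "name:version" lines (blank lines ignored) into [name, version]
# pairs, mirroring the shell loop's array split.
def parse_plugins(text)
  text.each_line
      .map(&:strip)
      .reject(&:empty?)
      .map { |line| line.split(':', 2) }  # "github:1.22.4" -> ["github", "1.22.4"]
end

p parse_plugins("github:1.22.4\nmaven-plugin:2.7.1\nant:1.3")
# => [["github", "1.22.4"], ["maven-plugin", "2.7.1"], ["ant", "1.3"]]
```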
I have set up an app to deploy to my site, which is hosted on a Digital Ocean CentOS 6 server, and I am using Capistrano to deploy the app from my development machine. I have a repo set up that I push to and that my Capistrano config references when I run cap development deploy.
The issue I am having is that it throws this error:
[a7406f5e] Command: ( GIT_ASKPASS=/bin/echo GIT_SSH=/tmp/PopupHub/git-ssh.sh /usr/bin/env git ls-remote git@repo-url-is-here/popup-hub.git )
DEBUG [a7406f5e] Permission denied (publickey).
DEBUG [a7406f5e] fatal: The remote end hung up unexpectedly
In my Capfile I have this:
# Load DSL and Setup Up Stages
require 'capistrano/setup'
# Includes default deployment tasks
require 'capistrano/deploy'
# Includes tasks from other gems included in your Gemfile
#
# For documentation on these, see for example:
#
# https://github.com/capistrano/rvm
# https://github.com/capistrano/rbenv
# https://github.com/capistrano/chruby
# https://github.com/capistrano/bundler
# https://github.com/capistrano/rails
#
# require 'capistrano/rvm'
# require 'capistrano/rbenv'
# require 'capistrano/chruby'
require 'capistrano/bundler'
require 'capistrano/rails/assets'
require 'capistrano/rails/migrations'
require 'capistrano/sitemap_generator'
# Loads custom tasks from `lib/capistrano/tasks' if you have any defined.
Dir.glob('lib/capistrano/tasks/*.cap').each { |r| import r }
In my config/deploy.rb I have:
lock '3.1.0'
server "0.0.0.0.0"
set :application, "NameOfApp"
set :scm, "git"
set :repo_url, "git@the-repo-url-is-here/popup-hub.git"
# set :scm_passphrase, ""
# set :user, "deploy"
# files we want symlinking to specific entries in shared.
set :linked_files, %w{config/database.yml}
# dirs we want symlinking to shared
set :linked_dirs, %w{bin log tmp/pids tmp/cache tmp/sockets vendor/bundle public/system}
SSHKit.config.command_map[:rake] = "bundle exec rake" #8
SSHKit.config.command_map[:rails] = "bundle exec rails"
set :branch, ENV["REVISION"] || ENV["BRANCH_NAME"] || "master"
set :keep_releases, 20
namespace :deploy do
  desc 'Restart passenger without service interruption (keep requests in a queue while restarting)'
  task :restart do
    on roles(:app) do
      execute :touch, release_path.join('tmp/restart.txt')
      unless execute :curl, '-s -k --location localhost | grep "Pop" > /dev/null'
        exit 1
      end
    end
  end

  after :finishing, "deploy:cleanup"
  after :finishing, "deploy:sitemap:refresh"
end
after "deploy", "deploy:migrate"
after 'deploy:publishing', 'deploy:restart'
# deploy:sitemap:create #Create sitemaps without pinging search engines
# deploy:sitemap:refresh #Create sitemaps and ping search engines
# deploy:sitemap:clean #Clean up sitemaps in the sitemap path
# start new deploy.rb stuff for the beanstalk repo
Then in my config/development.rb I have got:
set :stage, :development
set :ssh_options, {
  forward_agent: true,
  password: 'thepassword',
  user: 'deployer',
}
server "0.0.0.0", user: "deployer", roles: %w{web app db}
set :deploy_to, "/home/deployer/development"
set :rails_env, 'development' # If the environment differs from the stage name
set :branch, ENV["REVISION"] || ENV["BRANCH_NAME"] || "master"
When I run cap development deploy in bash, the error above happens.
Can anyone tell me why this is happening? I have carried out everything fine up to now, and I have this same setup working on another Digital Ocean droplet.
Thanks,
I think you don't have SSH access to your remote server using your local system's SSH keys.
If you don't have SSH keys on your local system, generate them:
ssh-keygen -t rsa
Upload your local public key to the remote server:
cat ~/.ssh/id_rsa.pub | ssh user@hostname 'cat >> .ssh/authorized_keys'
Source: HowToGeek.com
You need to set up your SSH key in Digital Ocean.
I have written a script named mailman_server using the "mailman" gem, placed in script/mailman_server:
#!/usr/bin/env ruby
require "rubygems"
require "bundler/setup"
require "mailman"
#Mailman.config.logger = Logger.new("log/mailman.log")
Mailman.config.poll_interval = 3
Mailman.config.pop3 = {
  server: 'server', port: 110,
  username: "loginid",
  password: "password"
}

Mailman::Application.run do
  default do
    p "Found a new message"
    # 'perform some action here'
  end
end
It fetches all the emails from my account, and then I process them.
My deploy.rb file is:
set :stages, %w(production) #various environments
load "deploy/assets" #precompile all the css, js and images... before deployment..
require "bundler/capistrano" # install all the new missing plugins...
require 'delayed/recipes' # load this for delayed job..
require 'capistrano/ext/multistage' # deploy on all the servers..
require "rvm/capistrano" # if you are using rvm on your server..
require './config/boot'
require 'airbrake/capistrano' # using airbrake in your application for crash notifications..
set :delayed_job_args, "-n 2" # number of delayed job workers
before "deploy:assets:symlink", "deploy:copy_database_file"
before "deploy:update_code", "delayed_job:stop" # stop the previous deployed job workers...
after "deploy:start", "delayed_job:start" #start the delayed job
after "deploy:restart", "delayed_job:restart" # restart it..
after "deploy:update", "deploy:cleanup" #clean up temp files etc.
set :rvm_ruby_string, '1.9.3' # ruby version you are using...
set :rvm_type, :user
server "my_server_ip", :app, :web, :db, :primary => true
set(:application) { "my_application_name" }
set (:deploy_to) { "/home/user/#{application}/#{stage}" }
set :user, 'user'
set :keep_releases, 3
set :repository, "git#bitbucket.org:my_random_git_repo_url"
set :use_sudo, false
set :scm, :git
default_run_options[:pty] = true
ssh_options[:forward_agent] = true
set :deploy_via, :remote_cache
set :git_shallow_clone, 1
set :git_enable_submodules, 1
namespace :deploy do
  task :start do ; end
  task :stop do ; end

  task :restart, :roles => :app, :except => { :no_release => true } do
    run "#{try_sudo} touch #{File.join(current_path,'tmp','restart.txt')}"
  end

  task :copy_database_file do
    run "ln -sf #{shared_path}/database.yml #{release_path}/config/database.yml"
  end
end
I want to start this script every time I deploy to the server, and I also need to stop it while the code is being deployed.
I cannot figure out how to start or stop this script on the server using Capistrano.
You could try saving the PID of the process on start with something like this:
run "cd #{deploy_to}/current; ./script/mailman_server & echo $! > /var/run/mailman_server.pid" #untested
and stop it with
run "kill `cat /var/run/mailman_server.pid`; rm /var/run/mailman_server.pid"
But I think you should check out Foreman: it provides a handy way to run jobs in development and supports exporting your jobs to upstart or init.d scripts for production, so you will just need to start or stop the corresponding service with
run "sudo /etc/init.d/mailman_server start"
run "sudo /etc/init.d/mailman_server stop"
I am trying to deploy a Rails 3 app with Capistrano (2.5.19). I have successfully run:
cap deploy:setup
and the correct directories were created on the server. But when I run cap deploy:cold or cap deploy the script hangs halfway through.
shell$ cap deploy:cold
* executing `deploy:cold'
* executing `deploy:update'
** transaction: start
* executing `deploy:update_code'
executing locally: "git ls-remote git#server.foo.com:test.git master"
* executing "git clone -q git#server.foo.com:test.git /home/deployer/www/apps/test/releases/20101223162936 && cd /home/deployer/www/apps/test/releases/20101223162936 && git checkout -q -b deploy be3165b74d52540742873c125fb85d04e1ee3063 && git submodule -q init && git submodule -q sync && git submodule -q update && (echo be3165b74d52540742873c125fb85d04e1ee3063 > /home/deployer/www/apps/test/releases/20101223162936/REVISION)"
servers: ["server.foo.com"]
[server.foo.com] executing command
Here is my deploy.rb:
$:.unshift(File.expand_path("~/.rvm/lib"))
require 'rvm/capistrano'
set :rvm_ruby_string, 'jruby'
# main details
set :application, "test_sqlserver"
role :web, "server.foo.com"
role :app, "server.foo.com"
role :db, "server.foo.com", :primary => true
# server details
default_run_options[:pty] = true
ssh_options[:forward_agent] = true
set :deploy_to, "/home/deployer/www/apps/#{application}"
set :deploy_via, :checkout
set :user, :deployer
set :use_sudo, false
# repo details
set :scm, :git
set :repository, "git#server.foo.com:test.git"
set :branch, "master"
set :git_enable_submodules, 1
I believe my file permissions are set up correctly.
There appears to be a bug in JRuby since at least 1.6.5 -- see the issue "SSH Agent forwarding does not work with jRuby (which lets capistrano ssh-deployments fail)".
Apparently one "workaround" is to not use SSH agent forwarding, though that may not be acceptable. If you want the issue noticed and fixed faster (I know I do), watching the issue might help.
Looks like you can't run Capistrano under JRuby, as jruby-openssl doesn't support Net::SSH, which underlies Capistrano.
http://jruby-extras.rubyforge.org/jruby-openssl/
I am trying to create a Capistrano task to set up my log rotation file.
namespace :app do
  task :setup_log_rotation, :roles => :app do
    rotate_script = %Q{#{shared_path}/log/#{stage}.log {
      daily
      rotate #{ENV['days'] || 7}
      size #{ENV['size'] || "5M"}
      compress
      create 640 #{user} #{ENV['group'] || user}
      missingok
    }}
    put rotate_script, "#{shared_path}/logrotate_script"
    run "sudo cp #{shared_path}/logrotate_script /etc/logrotate.d/#{application}"
    run "rm #{shared_path}/logrotate_script"
  end
end
At the very top of my deploy.rb file I had the following line:
set :use_sudo, false
I completely missed that my cp command was silently failing; from my terminal, everything looked fine. That is not good. How should I write the cp code so that if cp fails for some reason (in this case the sudo command was failing), I get feedback on my terminal?
Solved my own problem.
Look at this for something similar.
I added this line in deploy.rb:
default_run_options[:pty] = true
And changed from
run "sudo cp #{shared_path}/logrotate_script /etc/logrotate.d/#{application}"
to this
sudo "cp #{shared_path}/logrotate_script /etc/logrotate.d/#{application}"