Is there any hook like "pre vagrant up"? - deployment

I'm trying to automate my development boxes with Vagrant. I need to share the Vagrant setup with other developers, so we need to be sure that some boundary conditions are fulfilled before the normal vagrant up process starts.
Is there any hook in Vagrant (like pre-commit or the other pre-* scripts in git)? The provisioning scripts run much too late.
My current setup looks like this:
Vagrantfile
vagrant-templates/
vagrant-templates/apache.conf
vagrant-templates/...
sub-project1/
sub-project2/
I need to be sure that sub-project{1..n} exist, and if not, there should be an error message.
I would prefer a bash-like solution, but I'm open to other solutions as well.

You could give this Vagrant plugin I've written a try:
https://github.com/emyl/vagrant-triggers
Once installed, you could put in your Vagrantfile something like:
config.trigger.before :up, :execute => "..."
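A fuller sketch of how that might look for this use case (hypothetical: the directory names and the shell command are illustrative, and the `:execute` form follows the plugin's old 0.x-style option shown above):

```ruby
# Hypothetical Vagrantfile sketch using vagrant-triggers: run a shell
# check before `vagrant up`; the checked directory names are examples.
Vagrant.configure('2') do |config|
  config.trigger.before :up,
    :execute => "test -d sub-project1 && test -d sub-project2"
  # ... the rest of the usual configuration ...
end
```

The idea is that a non-zero exit status from the triggered command surfaces before any provisioning work happens.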

One option is to put the logic straight into the Vagrantfile. Then it gets executed on all vagrant commands in the project. For example, something like this:
def ensure_sub_project(name)
  if !File.exist?(File.expand_path("../#{name}", __FILE__))
    # you could raise or do other Ruby magic, or shell out (for a bash script)
    system('clone-the-project.sh', name)
  end
end

ensure_sub_project('some-project')
ensure_sub_project('other-project')

Vagrant.configure('2') do |config|
  # ...
end
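A variation of the same idea that collects all missing directories and prints one explicit message (plain Ruby; the directory names and the message are examples, not taken from the original project):

```ruby
# Sketch: list required sibling directories that are missing, then warn
# (or abort) before Vagrant configures anything. Names are illustrative.
REQUIRED_PROJECTS = %w[sub-project1 sub-project2]

def missing_sub_projects(base, names)
  names.reject { |name| File.directory?(File.join(base, name)) }
end

missing = missing_sub_projects(File.dirname(__FILE__), REQUIRED_PROJECTS)
unless missing.empty?
  warn "Missing sub-projects: #{missing.join(', ')} -- clone them first"
  # in a real Vagrantfile you would `abort` (or raise) here so that
  # every vagrant command stops before doing any work
end
```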

It's possible to write your own Vagrant plugin and use an action_hook on machine_action_up, something like:
require 'vagrant-YOURPLUGINNAME/YOURACTIONCLASS'

module VagrantPlugins
  module YOURPLUGINNAME
    class Plugin < Vagrant.plugin('2')
      name 'YOURPLUGINNAME'
      description <<-DESC
      Some description of your plugin
      DESC

      config(:YOURPLUGINNAME) do
        require_relative 'config'
        Config
      end

      action_hook(:YOURPLUGINNAME, :machine_action_up) do |hook|
        hook.prepend(YOURACTIONCLASS.METHOD)
      end
    end
  end
end

Another plugin to check out is vagrant-host-shell, which only runs when provisioning the box. Just add it before other provisioners in the Vagrantfile:
config.vm.provision :host_shell do |shell|
  shell.inline = './clone-projects.sh'
  shell.abort_on_nonzero = true
end

Adding to tmatilai's answer, you can put something like this:
case ARGV[0]
when "provision", "up"
  system "./prepare.sh"
else
  # do nothing
end
into your Vagrantfile so that it will only be run on specific commands.


puppet 4.0 vagrant modules missing

I am trying to use Puppet modules in Vagrant. My box is running Puppet 4.0.
I am installing modules using:
if [ ! -d /etc/puppet/modules/ ]; then
  puppet module install puppetlabs-java
fi
in site.pp
I have:
class { 'java':
  distribution => 'jdk',
}
I keep getting an error: could not find declared class java.
Why can't Puppet find my module? /etc/puppet/modules/ is the default path, isn't it?
Vagrantfile:
Vagrant.configure(2) do |config|
  config.vm.box = "bento/centos-7.2"

  config.vm.provider "virtualbox" do |vb|
    vb.gui = true
    vb.memory = "8192"
  end

  config.vm.provision :shell, :path => "upgrade_puppet.sh"
  config.vm.provision :shell, :path => "puppet_modules.sh"
  config.vm.provision :puppet do |puppet|
    puppet.options = '--verbose --debug'
    puppet.environment_path = "puppet/environments"
    puppet.environment = "production"
  end
end
Updated answer now that Vagrantfile has been provided
Locations have changed in Puppet 4, and directory environments are now in use by default.
So how you are using the Puppet provisioner is correct. However, Vagrant will upload all the directories it needs to the guest, based on your Vagrantfile, to:
/tmp/vagrant-puppet/environments/production
When Vagrant calls puppet apply, it will look for the modules it requires in:
/tmp/vagrant-puppet/environments/production/modules
and that module directory does not exist on your host.
You can change your if block to be:
if [ ! -d /vagrant/puppet/environments/production/modules ]; then
  puppet module install puppetlabs-java --modulepath /vagrant/puppet/environments/production/modules
fi
/vagrant is shared between host and guest. This would install the java module and its dependencies on your host machine under:
puppet
└── environments
    └── production
        ├── manifests
        │   └── site.pp
        └── modules
            ├── java
            └── stdlib
When you do your vagrant up, this content gets uploaded to the guest under:
/tmp/vagrant-puppet
Tested and confirmed based on your Vagrantfile.
As Jaxim mentions, it's because the default directory locations have changed in the newer version of Puppet.
If you're interested in installing modules automatically with Puppet, I'd recommend the R10K Vagrant plugin: you can specify versions of modules, make updating them much easier, and download modules that are not on the Forge, such as git repos.
https://github.com/jantman/vagrant-r10k
A little bit late, but I am switching from Chef over to Puppet (company policy, do not ask! :) ) and ran into the exact same situation. Coming from a Chef background, I was refusing to "pollute" my project folder with so much Puppet-specific stuff. In my opinion, I should only need a Vagrantfile and nothing else.
I was also getting the "Could not find declared class java at /tmp/vagrant-puppet/environments/production" error message. After much messing around, I found that puppet.options accepts any arguments you would normally pass to puppet apply on the command line.
So, in case it helps, try modifying puppet.options in your Vagrantfile as follows:
config.vm.provision :puppet do |puppet|
  puppet.options = '--verbose --modulepath=/etc/puppetlabs/code/environments/production/modules'
  puppet.environment_path = "puppet/environments"
  puppet.environment = "production"
end
This helps Puppet find its own nose: instead of assuming everything is available under the /tmp folder, it looks for the modules already installed in its own module directory.

capistrano upload! thinks ~ referenced local directory is on remote server

Every example I've looked up indicates this is how one is supposed to do it, but I think I may have found a bug, unless there's another way to do this.
I'm using upload! to upload assets to a remote list of servers. The task looks like this:
desc "Upload grunt compiled css/js."
task :upload_assets do
  on roles(:all) do
    %w{/htdocs/css /htdocs/js}.each do |asset|
      upload! "#{fetch(:local_path) + asset}", "#{release_path.to_s + '/' + asset}", recursive: true
    end
  end
end
If local_path is defined as an absolute path such as:
set :local_path, '/home/dcmbrown/projects/ABC'
this works fine. However, if I do the following:
set :local_path, '~/projects/ABC'
I end up getting the error:
The deploy has failed with an error: Exception while executing on ec2-54-23-88-125.us-west-2.compute.amazon.com: No such file or directory - ~/projects/ABC/htdocs/css
It's not a ' vs " issue as I've tried both (and I didn't think capistrano paid attention to that anyway).
Is this a bug? Is there a work around? Am I just doing it wrong?
I ended up discovering the best way to do this is to actually use path expansion! (headsmack)
irb> File.expand_path('~dcmbrown/projects/ABC')
=> "/home/dcmbrown/projects/ABC"
Of course what I'd like is automatic path expansion, but you can't have everything. I was mostly dumbstruck that it didn't expand automatically; so much so that I spent a couple of hours trying to figure out why it didn't work and ended up wasting time asking here. :(
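For reference, the fix boils down to expanding the local path once before handing it to upload!; a minimal sketch (the helper name is made up, File.expand_path is the whole point):

```ruby
# Sketch: expand "~" on the local side before handing the path to
# upload!, since neither upload! nor File.stat expands tildes itself.
def expand_local(path)
  File.expand_path(path) # resolves "~" against the local home directory
end
```

In the task from the question this would read `upload! expand_local("#{fetch(:local_path)}#{asset}"), ...`.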
I don't think the error is coming from the remote server, it just looks like it since it's running that upload command in the context of a deploy.
I just created a single cap task that does an upload using the "~" character, and it also fails with:
cap aborted!
SSHKit::Runner::ExecuteError: Exception while executing as deploy@XXX: No such file or directory @ rb_file_s_stat - ~/Projects/testapp/public/404.html
It appears to be a Ruby issue, not Capistrano, as this also fails in a Ruby console:
~/Projects/testapp $ irb
2.2.2 :003 > File.stat('~/Projects/testapp/public/404.html')
Errno::ENOENT: No such file or directory @ rb_file_s_stat - ~/Projects/testapp/public/404.html
    from (irb):3:in `stat'
    from (irb):3
    from /Users/supairish/.rvm/rubies/ruby-2.2.2/bin/irb:11:in `<main>'

Packer with chef-solo provisioning does nothing

I'm starting up with Packer using chef-solo to provision. I have a very simple recipe (default.rb) that contains the lines below:
package "git"
package "ruby"
package "rubygems"
I was able to provision an image using Vagrant with this successfully. I'm now trying to move this provision step to Packer but when I execute packer build it doesn't seem to run the recipe.
virtualbox-iso: Running handlers complete
virtualbox-iso: Chef Client finished, 0/0 resources updated in 1.818437359 seconds
My Packer template's provision section is:
{
  "type": "chef-solo",
  "cookbook_paths": ["/cookbooks"]
}
My second part to this question (I'm assuming it's going to be related) is what is the run_list configuration option?
The Packer documentation says it goes in the same file, is called run_list, and is empty by default. You should give the name of your cookbook as a single-element string array, in a param called run_list.
For beginners like myself who are looking for an answer, I addressed this by adding a run_list like the following:
"run_list": ["git::default"]
'git' is the name in the metadata.rb file and 'default' is the recipe filename (if my terminology is correct). My cookbook directory structure is as follows:
~/Projects/Packer-Templates/Cookbooks $ find .
.
./ruby-environment
./ruby-environment/metadata.rb
./ruby-environment/recipes
./ruby-environment/recipes/default.rb

Capistrano how to access a serverDefinition option in the code

I am defining my server setup like this:
task :test do
  role(:frontend) {[server1, server2, server3, {:user => "frontend-user", :options => {:log_location => "HOW DO I READ THIS??"}}]}
  role(:backend) {...}
  role(:db) {...}
  role(:mq) {...}
end

task :staging do
  role(:frontend) {[server1, server2, server3, {:user => "frontend-user", :options => {:log_location => "HOW DO I READ THIS??"}}]}
  role(:backend) {...}
  role(:db) {...}
  role(:mq) {...}
end

task :prod do
  role(:frontend) {[server1, server2, server3, {:user => "frontend-user", :options => {:log_location => "HOW DO I READ THIS??"}}]}
  role(:backend) {...}
  role(:db) {...}
  role(:mq) {...}
end
This is to embrace all the complexity of a legacy enterprisey system.
Now, from a task, I want to read the log_location.
Task example:
namespace :log do
  desc "list all log files"
  task :list do
    run %(ls -1 #{log_location}/*/*.log)
  end
end
The problem is that the variable log_location is undefined.
/.rvm/gems/ruby-2.0.0-p0/gems/capistrano-2.14.2/lib/capistrano/configuration/namespaces.rb:193:in `method_missing': undefined local variable or method `log_location' for # (NameError)
How do I access that variable?
Is there a smarter/simpler way of setting this custom variable?
I'm sorry to say you can't read that. The blocks passed to task() aren't executed in a server context, thus the block in effect doesn't know what server it's operating on.
The classical workaround for this over the years has been to upload a config file which looks something like this:
---
hostname1:
  log_file_location: "/var/log/hostname1/foo/bar"
hostname2:
  log_file_location: "/var/log/hostname2/foo/bar"
(or similar) and use the machine's hostname when loading the configuration.
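The lookup side of that workaround could be sketched like this (the function name, config path, and the raise message are illustrative; the YAML layout matches the example above):

```ruby
require 'socket'
require 'yaml'

# Sketch: read the uploaded per-host config file and pick the entry for
# the machine we are running on, keyed by hostname.
def log_file_location(config_path)
  config = YAML.load_file(config_path)
  entry = config.fetch(Socket.gethostname) do
    raise "no log configuration for host #{Socket.gethostname}"
  end
  entry.fetch('log_file_location')
end
```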
I know this isn't a great workaround, thus in the forthcoming (see the v3 branch at Github) version of Capistrano there's a feature which looks like this:
host1 = SSHKit::Host.new 'user@example.com'
host2 = SSHKit::Host.new 'user@example.org'

host1.properties = {log_file_location: "/foo/bar"}
host2.properties.log_file_location = "/bar/baz"

on hosts do |host|
  target = "/var/www/sites/"
  if host.hostname =~ /org/
    target += "dotorg"
  else
    target += "dotcom"
  end
  execute! :head, '-n 20', host.properties.log_file_location
  execute! :git, :clone, "git@git.#{host.hostname}", target
end
(SSHKit Examples) - SSHKit is the new backend driver for Capistrano.
The v3 branch probably isn't ready for prime time yet; we're having a lot of success internally, but the documentation is pretty, ahem, non-existent. However, the code is quite literally an order of magnitude less imposing, and I think you'll find it quite readable.
You need this: https://github.com/capistrano/capistrano/wiki/2.x-Multistage-Extension
It means that you can isolate stage-specific code in separate files named after the stage. If you want to test for the stage name in the shared deploy.rb, you can do that too, like this:
Put this in your deploy.rb
task :show_stage do
  puts(stage)
end
Test from command line
$ cap staging show_stage
staging
Actually, I was able to pull out the log_location variable, but ended up with a solution that had one restriction:
I am using log location for one environment only. This is no problem in my current project, since I run the capistrano task against one role at a time.
For testing this setup, I made this task:
namespace :support do
  desc "Test if the log location variable is correctly fetched from configuration"
  task :test_log_location do
    find_servers_for_task(current_task).each do |server|
      # puts server.host
      # puts server.port
      # puts server.user
      # puts server.options
      result = "LOG LOCATION: #{server.options[:log_location]}"
      # puts result
      logger.info result
    end
  end
end
Then, for my tasks in the :log namespace, I defined the variable with set :log_location and also define the :current_role variable:
namespace :log do
  def set_log_location
    # set_log_location
    # puts fetch(:log_location)
    log_location = nil
    options = nil
    find_servers_for_task(current_task).each do |server|
      # puts server.host
      # puts server.port
      # puts server.user
      # puts server.options
      options = server.options
      log_location = server.options[:log_location]
      # log_location = server.options[:current_role]
    end
    msg1 = "FATAL: you need to specify 'ROLES=frontend,backend,mq' (or one of them) from command line"
    msg2 = "FATAL: Could not get log_location from environment/server options. I can only see these options: #{options}"
    raise msg1 if ENV['ROLES'].nil?
    raise msg2 if log_location.nil?
    set :log_location, log_location
    set :current_role, ENV['ROLES'].split(',').first
    logger.info %(CURRENT_ROLE #{fetch(:current_role)})
    logger.info %(THE LOG LOCATION IS: #{fetch(:log_location)})
  end
end
Finally, I used a separate method to fully qualify the log path (needed for my setup -- also in the :log namespace):
def log_location
  log_names = {
    :frontend => "*/play.log",
    :backend => "*Weblogic*/*.{log,out}"
  }
  loc = "#{fetch(:log_location)}/#{log_names[fetch(:current_role).to_sym]}"
  logger.info "using the log location of '#{loc}'"
  loc
end
Now, each task can use the specific log location like this:
desc "list all log files"
task :list do
  set_log_location
  run %(ls -l #{log_location})
end
I am sure this can be done more elegantly, but it works for me.

Erlang: How to access CLI flags (arguments) as application environment variables?

How does one access command line flags (arguments) as application environment variables in Erlang (as flags, not ARGV)? For example, the RabbitMQ CLI looks something like:
erl \
    ...
    -sasl errlog_type error \
    -sasl sasl_error_logger '{file,"'${RABBITMQ_SASL_LOGS}'"}' \
    ... # more stuff here
If one looks at sasl.erl you see the line:
get_sasl_error_logger() ->
    case application:get_env(sasl, sasl_error_logger) of
        % ... etc
By some unknown magic the sasl_error_logger variable becomes an Erlang tuple! I've tried replicating this in my own Erlang application, but I seem to be only able to access these values via init:get_argument, which returns the value as a string.
How does one pass in values via the command line and access them easily as Erlang terms?
UPDATE: Also, for anyone looking, to use environment variables in the 'regular' way, use os:getenv("THE_VAR").
Make sure you set up an application configuration file
{application, fred,
 [{description, "Your application"},
  {vsn, "1.0"},
  {modules, []},
  {registered, []},
  {applications, [kernel, stdlib]},
  {env, [
    {param, 'fred'}
  ]
...
and then you can set your command line up like this:
-fred param 'billy'
I think you need to have the parameter in your application configuration to do this - I've never done it any other way...
Some more info (easier than putting it in a comment)
Given this
{emxconfig, {ets, [{keypos, 2}]}},
I can certainly do this:
{ok, {StorageType, Config}} = application:get_env(emxconfig),
but (and this may be important) my application is started at this time (it may actually only need to be loaded, not actually started, judging from the application_controller code).