Calling a Capistrano task with argument from other task - capistrano

I have a Capistrano 2 task that updates a file:
task :update_file, roles: :app do
  ...
end
Now I need to write a task that performs the same operation on all the files within a folder, so from within update_folder I'd like to call update_file, passing it the name of the file to update, but I'm having a hard time doing so.
How can I set up a Capistrano task to accept an argument, and call it from inside another task?
Thanks

You can do it like this:
$gkey = ""
$gvalue = ""

desc "generate config files"
task :gen_conf_files do
  $servers.each do |key, value|
    $MYSQL["mysql"]["passwd"] = "#{key}++"
    $gkey = key.to_s
    $gvalue = value.to_s
    $NODE_NAME = key.to_s
    $NODE_NUM = key.to_s[9, 10]
    gen_mfs_conf
    gen_cfs_conf
    gen_client_conf
    gen_config_shell
    gen_cdn_reacheyes_net
    gen_click_reacheyes_net
    gen_log_reacheyes_net
    gen_fluent_conf
    gen_nagios_conf
  end
end

desc "generate fluent config file"
task :gen_fluent_conf do
  file = "#{generate_conf_dir}/#{$gvalue}/fluent.conf"
  filename = "#{config_file_path}/fluent.conf.sample"
  erb = ERB.new(File.read(filename))
  erb.filename = filename
  out = File.new(file, "w")  # don't assign to File -- that would clobber the File class
  out.puts erb.result
  out.close
end
First define a global variable:
$gvalue = ""
Then you can use this variable across different tasks.
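As an alternative to globals: since the Capfile is ordinary Ruby, an ordinary method can carry the argument between tasks. A minimal plain-Ruby sketch of that pattern (the method bodies and file names are hypothetical; in a real Capfile update_file would call run against the servers):

```ruby
# Stand-in for the per-file task body; in a Capfile this would be
# something like: run "some_command #{file_name}"
def update_file(file_name)
  "updated #{file_name}"
end

# In a Capfile this loop would live inside:
#   task :update_folder, roles: :app do ... end
def update_folder(file_names)
  file_names.map { |f| update_file(f) }
end

puts update_folder(%w[app.conf db.conf]).inspect
# => ["updated app.conf", "updated db.conf"]
```

Because Capistrano 2 tasks are just blocks evaluated against the configuration, a helper method defined at the top level of the Capfile is visible inside every task.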

Spring Batch: handle failed files again, skip successfully handled files

How do I correctly organize file handling?
I have a folder for new files (NEW), a folder for old files (OLD), and a folder for failed files (FAIL). A new file is put in NEW; if handling succeeds, the file goes to OLD; if handling fails, it goes to FAIL. Then we take the failed file again, correct it, and put it back in NEW; if all is OK the file goes to OLD, if it fails again it goes back to FAIL. And so on, again and again.
I have a job with the constant name "fileHandlingJob". The job has several steps: "extract", "handling", and "utilize", and two job parameters: "filePath" and "fileName".
Thanks!
If the uniqueness criterion for a file is its name, then you are on the right track.
If a job ended in state FAILED (the file is in the FAIL folder), you can retrigger it with the same set of parameters. If the job COMPLETED, you can't run it again; Spring Batch will complain.
You can ensure this behaviour by using the unique file name as an identifying job parameter, so no other job instance can be started with the same file name; Spring Batch will simply prevent it.
The second parameter, filePath, can be an additional non-identifying parameter.
JobParametersBuilder jobParametersBuilder = new JobParametersBuilder()
    .addString("fileName", "myfile.xml", true)            // identifying
    .addString("filePath", "C:\\new\\myfile.xml", false); // non-identifying
The true/false flag here indicates whether the parameter is identifying (part of the job instance's uniqueness) or not.

pass environment variables to a resource guard in powershell_script in an LWRP

I have an LWRP with a provider action that looks like this. I want to pass environment variables to a resource guard:
action :create do
  powershell_script 'create file' do
    environment({ 'fileName' => new_resource.fileName })
    code <<-EOH
      New-Item $env:fileName
    EOH
    guard_interpreter :powershell_script
    not_if '(Test-Path $env:fileName)'
  end
end
In the example above, what I am trying to do is create a new file if one doesn't already exist. When I execute this, the new file is created every time. I expect that the second time around the guard would fire and the resource would not be re-run. I think what is happening is that I am not able to use the environment variables in the guard the way I can in the code block.
Please note that my real-life problem is substantially more complex than this, and I'm not just looking for a way to create a file if it doesn't exist. I need to know how I can use a property specified in the lightweight resource inside the not_if block.
It's buried, but it is in the documentation here. Just do this:
action :create do
  my_environment = { 'fileName' => new_resource.fileName }
  powershell_script 'create file' do
    environment my_environment
    code <<-EOH
      New-Item $env:fileName
    EOH
    guard_interpreter :powershell_script
    not_if '(Test-Path $env:fileName)', :environment => my_environment
  end
end

remote_file capistrano 3 method

In capistrano 3 docs (http://capistranorb.com/documentation/advanced-features/remote-file/) an example is provided to show how this task (remote_file) works
namespace :deploy do
  namespace :check do
    task :linked_files => 'config/newrelic.yml'
  end
end

remote_file 'config/newrelic.yml' => '/tmp/newrelic.yml', roles: :app

file '/tmp/newrelic.yml' do |t|
  sh "curl -o #{t.name} https://rpm.newrelic.com/accounts/xx/newrelic.yml"
end
The docs say it allows the presence of a remote file to be set as a prerequisite.
However, I still can't work out how this works, since remote_file is called outside any task body. What does it actually do? Can someone explain?
What happens if config/newrelic.yml is absent, and how is the remote_file call connected to the :linked_files task?
Calling remote_file only defines a task named 'config/newrelic.yml'; it does not execute that task. The line
remote_file 'config/newrelic.yml' => '/tmp/newrelic.yml', roles: :app
says "create a task named 'config/newrelic.yml' that has a prerequisite task named '/tmp/newrelic.yml'". The file '/tmp/newrelic.yml' block defines that prerequisite task. Finally, task :linked_files => 'config/newrelic.yml' specifies that the task named 'config/newrelic.yml', which we defined using the remote_file method, should be executed as a prerequisite.
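Under the hood this is ordinary Rake prerequisite chaining. A self-contained sketch (simplified task names, no SSH or Capistrano involved) showing the same invocation order:

```ruby
require 'rake'

# Record the order tasks actually run in.
order = []

# The "local file" task (what the file '...' block defines above).
Rake::Task.define_task('tmp/newrelic.yml') { order << :build_local }

# The "remote file" task, with the local file as its prerequisite
# (what remote_file defines above).
Rake::Task.define_task('config/newrelic.yml' => 'tmp/newrelic.yml') { order << :upload }

# The check task, with the remote file as its prerequisite.
Rake::Task.define_task('check' => 'config/newrelic.yml') { order << :check }

# Invoking the outermost task runs each prerequisite first, exactly once.
Rake::Task['check'].invoke
puts order.inspect
# => [:build_local, :upload, :check]
```

So when the check task runs and the remote file is absent, the prerequisite chain builds the local file and uploads it before the check proceeds.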

Capistrano how to access a serverDefinition option in the code

I am defining my server setup like this:
task :test do
  role(:frontend) { [server1, server2, server3, { :user => "frontend-user", :options => { :log_location => "HOW DO I READ THIS??" } }] }
  role(:backend) { ... }
  role(:db) { ... }
  role(:mq) { ... }
end

task :staging do
  role(:frontend) { [server1, server2, server3, { :user => "frontend-user", :options => { :log_location => "HOW DO I READ THIS??" } }] }
  role(:backend) { ... }
  role(:db) { ... }
  role(:mq) { ... }
end

task :prod do
  role(:frontend) { [server1, server2, server3, { :user => "frontend-user", :options => { :log_location => "HOW DO I READ THIS??" } }] }
  role(:backend) { ... }
  role(:db) { ... }
  role(:mq) { ... }
end
This is to embrace all the complexity of a legacy enterprisey system.
Now, from a task, I want to read the log_location.
Task example:
namespace :log do
  desc "list all log files"
  task :list do
    run %(ls -1 #{log_location}/*/*.log)
  end
end
The problem is that the variable log_location is undefined.
/.rvm/gems/ruby-2.0.0-p0/gems/capistrano-2.14.2/lib/capistrano/configuration/namespaces.rb:193:in
`method_missing': undefined local variable or method `log_location' for #<...> (NameError)
How do I access that variable?
Is there a smarter/simpler way of setting this custom variable?
I'm sorry to say you can't read that. The blocks passed to task() aren't executed in a server context, so the block in effect doesn't know which server it's operating on.
The classical workaround for this over the years has been to upload a config file which looks something like this:
---
hostname1:
  log_file_location: "/var/log/hostname1/foo/bar"
hostname2:
  log_file_location: "/var/log/hostname2/foo/bar"
(or similar) and use the machine's hostname when loading the configuration.
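A minimal sketch of that lookup, assuming the YAML above has been uploaded to each machine (the config content here is the hypothetical example from the answer; in practice you would read it from a file with YAML.load_file):

```ruby
require 'yaml'
require 'socket'

# The per-host config file, keyed by hostname (inline here for the sketch).
config_yaml = <<~YAML
  hostname1:
    log_file_location: "/var/log/hostname1/foo/bar"
  hostname2:
    log_file_location: "/var/log/hostname2/foo/bar"
YAML

config = YAML.safe_load(config_yaml)

hostname = Socket.gethostname            # e.g. "hostname1" on that machine
settings = config.fetch(hostname, {})    # empty hash for unknown hosts
log_location = settings['log_file_location']
```

Each machine then resolves its own log_file_location at runtime, which sidesteps the problem that the role blocks don't know which server they run on.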
I know this isn't a great workaround, thus in the forthcoming (see the v3 branch at Github) version of Capistrano there's a feature which looks like this:
host1 = SSHKit::Host.new 'user@example.com'
host2 = SSHKit::Host.new 'user@example.org'

host1.properties = { log_file_location: "/foo/bar" }
host2.properties.log_file_location = "/bar/baz"

on [host1, host2] do |host|
  target = "/var/www/sites/"
  if host.hostname =~ /org/
    target += "dotorg"
  else
    target += "dotcom"
  end
  execute! :head, '-n 20', host.properties.log_file_location
  execute! :git, :clone, "git@git.#{host.hostname}", target
end
(SSHKit Examples) - SSHKit is the new backend driver for Capistrano.
The v3 branch probably isn't ready for prime time yet; we're having a lot of success internally, but the documentation is pretty, ahem, non-existent. However, the code is quite literally an order of magnitude less imposing, and I think you'll find it quite readable.
You need this: https://github.com/capistrano/capistrano/wiki/2.x-Multistage-Extension
It means that you can isolate stage-specific code in separate files named after the stage. If you want to test for the stage name in the shared deploy.rb, you can do that too, like this:
Put this in your deploy.rb
task :show_stage do
  puts(stage)
end
Test from command line
$ cap staging show_stage
staging
Actually, I was able to pull out the log_location variable, but ended up with a solution that has one restriction:
I am using the log location for one environment only. This is no problem in my current project, since I run the Capistrano task against one role at a time.
For testing this setup, I made this task:
namespace :support do
  desc "Test if the log location variable is correctly fetched from configuration"
  task :test_log_location do
    find_servers_for_task(current_task).each do |server|
      # server.host, server.port, server.user and server.options are also available here
      result = "LOG LOCATION: #{server.options[:log_location]}"
      logger.info result
    end
  end
end
Then, for my tasks in the :log namespace, I defined the variable with set :log_location and also defined the :current_role variable:
namespace :log do
  def set_log_location
    log_location = nil
    options = nil
    find_servers_for_task(current_task).each do |server|
      options = server.options
      log_location = server.options[:log_location]
    end
    msg1 = "FATAL: you need to specify 'ROLES=frontend,backend,mq' (or one of them) on the command line"
    msg2 = "FATAL: could not get log_location from environment/server options. I can only see these options: #{options}"
    raise msg1 if ENV['ROLES'].nil?
    raise msg2 if log_location.nil?
    set :log_location, log_location
    set :current_role, ENV['ROLES'].split(',').first
    logger.info %(CURRENT_ROLE #{fetch(:current_role)})
    logger.info %(THE LOG LOCATION IS: #{fetch(:log_location)})
  end
end
Finally, I used a separate method to fully qualify the log path (needed for my setup -- also in the :log namespace):
def log_location
  log_names = {
    :frontend => "*/play.log",
    :backend  => "*Weblogic*/*.{log,out}"
  }
  loc = "#{fetch(:log_location)}/#{log_names[fetch(:current_role).to_sym]}"
  logger.info "using the log location of '#{loc}'"
  loc
end
Now, each task can use the specific log location like this:
desc "list all log files"
task :list do
  set_log_location
  run %(ls -l #{log_location})
end
I am sure this can be done more elegantly, but it works for me.

Adding actions to Capistrano tasks

How can I add something to Capistrano's deploy task? I need to create a symlink in "public" directory.
Create a new task to create the symlink, then use a hook to add your task into the Capistrano deploy workflow where appropriate.
e.g.
namespace :deploy do
  desc "symlink my file"
  task :symlink_file, :roles => :app do
    run "ln -s file public/file"
  end
end

after 'deploy:update_code', 'deploy:symlink_file'