Using __END__ and DATA in Chef recipes (to run legacy shell scripts) - chef-recipe

I'm migrating some shell scripts to Chef recipes. Some of these scripts are fairly involved, so just to make life easier in the short term and to avoid introducing bugs in rewriting everything in Chef/Ruby, I'd like to just run some of them as-is. They're all well-written and idempotent, so honestly there's no rush, but of course, the eventual goal is to rewrite them.
One cool feature of Ruby is its __END__ keyword: lines below __END__ are not executed, but they are available via the special filehandle DATA.
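For illustration, in a standalone script it works like this (a minimal sketch; note that Ruby defines DATA only for the file run as the main program):

# data_demo.rb -- run with: ruby data_demo.rb
puts DATA.read   # prints everything below __END__
__END__
Hello, world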
It would be cool to ship the shell scripts as-is inside the recipe after __END__, maybe something like the following, which I placed in chef-repo/cookbooks/ruby-data-test/recipes/default.rb:
file = Tempfile.new(File.basename(__FILE__))
file << DATA.read
bash file.path
file.unlink
__END__
echo "Hello, world"
However, when I run this (with chef-solo -c solo.rb --override-runlist 'recipe[ruby-data-test]'), I get the following error:
[2014-10-03T17:14:56+00:00] ERROR: uninitialized constant Chef::Recipe::DATA
I'm pretty new to Chef, but I'm guessing the above has something to do with Chef wrapping my recipe in a class, and there's something simple preventing me from accessing DATA. Since it's "global" (?), I tried putting a dollar sign in front of it ($DATA), but that failed with:
NoMethodError
-------------
undefined method `read' for nil:NilClass
So the question is: How do I access DATA in my Chef recipe? Thanks!

It appears you don't have access to DATA, but you can fake it by reading in the current file yourself and splitting on __END__, like Sinatra does.
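In plain Ruby, the trick looks roughly like this (a minimal sketch of the idea, not Sinatra's exact code):

# DATA is unavailable, so read this file and split on __END__ ourselves
source = File.read(__FILE__)
code, data = source.split(/^__END__$/, 2)
puts data.lstrip
__END__
echo "Hello, world"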
I ended up making a Chef LWRP for reuse. I don't know if I'll actually end up using this, but I wanted to figure it out. Like I said, I'm a Chef/Ruby noob, so any better ideas or suggestions welcome!
ruby_data_test/recipes/default.rb:
ruby_data_test_execute_ruby_data __FILE__
__END__
#!/bin/bash
set -o errexit
date
echo "Hello, world"
ruby_data_test/resources/execute_ruby_data.rb:
actions :execute_ruby_data
default_action :execute_ruby_data
attribute :source, :name_attribute => true, :required => true
attribute :args, :kind_of => Array
attribute :ignore_errors, :kind_of => [TrueClass, FalseClass], :default => false
ruby_data_test/providers/execute_ruby_data.rb:
require 'tempfile'
require 'open3'

def whyrun_supported?
  true
end

use_inline_resources

action :execute_ruby_data do
  converge_by("Executing #{@new_resource}") do
    Chef::Log.info("Executing #{@new_resource}")
    file_who_called_me = @new_resource.source
    io = ::IO.respond_to?(:binread) ? ::IO.binread(file_who_called_me) : ::IO.read(file_who_called_me)
    app, data = io.gsub("\r\n", "\n").split(/^__END__$/, 2)
    data.lstrip!
    file = Tempfile.new('execute_ruby_data')
    file << data
    file.chmod(0755)
    file.close
    exit_status = ::Open3.popen2e(file.path, *@new_resource.args) do |stdin, stdout_and_stderr, wait_thr|
      stdout_and_stderr.each { |line| puts line }
      wait_thr.value # exit status
    end
    if exit_status != 0 && !@new_resource.ignore_errors
      raise "#{@new_resource} failed with exit status #{exit_status.exitstatus}"
    end
  end
end
Here's the output:
$ chef-solo -c solo.rb --override-runlist 'recipe[ruby_data_test]'
Starting Chef Client, version 11.12.4
[2014-10-03T21:50:29+00:00] WARN: Run List override has been provided.
[2014-10-03T21:50:29+00:00] WARN: Original Run List: []
[2014-10-03T21:50:29+00:00] WARN: Overridden Run List: [recipe[ruby_data_test]]
Compiling Cookbooks...
Converging 1 resources
Recipe: ruby_data_test::default
* ruby_data_test_execute_ruby_data[/root/chef/chef-repo/cookbooks/ruby_data_test/recipes/default.rb] action execute_ruby_dataFri Oct 3 21:50:29 UTC 2014
Hello, world
- Executing ruby_data_test_execute_ruby_data[/root/chef/chef-repo/cookbooks/ruby_data_test/recipes/default.rb]
Running handlers:
Running handlers complete
Chef Client finished, 1/1 resources updated in 1.387608 seconds

SCP command not working in karate project - it throws command error:cannot run program scp.exe: CreateProcess error=2 [duplicate]

I'm trying to execute a bash script using karate. I'm able to execute the script from karate-config.js and also from a .feature file. I'm also able to pass arguments to the script.
The problem is that if the script fails (exits with something other than 0), the test execution continues and finishes as successful.
I found out that when the script echoes something, I can access that as the result of the script, so I could possibly echo the exit value and assert on it (in some re-usable feature), but this seems like a workaround rather than a valid, clean solution. Is there some clean way of accessing the exit code without echoing it? Am I missing something?
script
#!/bin/bash
#possible solution
#echo 3
exit 3;
karate-config.js
var result = karate.exec('script.sh arg1')
feature file
def result = karate.exec('script.sh arg1')
Great timing. We very recently did some work for CLI testing which I am sure you can use effectively. Here is a thread on Twitter: https://twitter.com/maxandersen/status/1276431309276151814
And we have just released version 0.9.6.RC4, and now there is a new karate.fork() option that returns an instance of Command on which you can call exitCode.
Here's an example:
* def proc = karate.fork('script.sh arg1')
* proc.waitSync()
* match proc.exitCode == 0
You can get more ideas here: https://github.com/intuit/karate/issues/1191#issuecomment-650087023
Note that the argument to karate.fork() can take multiple forms (the JSON form is sketched after this list). If you are using karate.exec() (which will block until the process completes) the same arguments work.
string - full command line as seen above
string array - e.g. ['script.sh', 'arg1']
json where the keys can be:
  line - string (OR)
  args - string array
  env - optional environment properties (as JSON)
  redirectErrorStream - boolean, true by default, which means Sys.err appears in Sys.out
  workingDir - working directory
  useShell - default false, auto-prepend cmd /c or sh -c depending on OS
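For instance, the JSON form might be written like this (a sketch based on the keys above, reusing the script from the question):

* def proc = karate.fork({ args: ['script.sh', 'arg1'], useShell: true })
* proc.waitSync()
* match proc.exitCode == 0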
And since karate.fork() is async, you need to call waitSync() (as in the example above) if you want to block until the process completes.
Do provide feedback and we can tweak further if needed.
EDIT: here's a very advanced example that shows how to listen to the process output / log, collect the log, and conditionally exit: fork-listener.feature
Another answer which can be a useful reference: Conditional match based on OS
And here's how to use cURL for advanced HTTP tests: https://stackoverflow.com/a/73230200/143475
In case you need to do a lot of local file manipulation, you can use the karate.toJavaFile() utility so you can convert a relative path or a "prefixed" path to an absolute path.
* def file = karate.toJavaFile('classpath:some/file.txt')
* def path = file.getPath()

rake executes file task when file already exists

I have a rakefile that executes some (but not all) of its file tasks even if the files of interest have already been built. The frustrating thing is that paring down my rakefile to a MWE resolves the problem, even though I haven't altered anything wrt the file task definition, how the files are being selected, the dependencies, or anything else. It seems that simply removing other (file) tasks from the rakefile remedies the problem.
I realize this is a really awful question, but does anyone have ideas about what might be going on here? I'd post sample code, but my MWE works as expected and I don't have any sense for what is causing the problem in the full rakefile. All I can think to do is demonstrate that my MWE is literally an excerpt from the full Rakefile, unaltered...
➜ solutionmaps cat mwe/Rakefile|sed '/^$/d'|tee a
require 'rake'
require 'rake/clean'
require 'pathname'
HOME = ENV['HOME']
SHARED_ATLAS = "#{HOME}/MRI/Manchester/data/CommonBrains/MNI_EPI_funcRes.nii"
TXT = Rake::FileList["txt/nodestrength/??.mni"]
AFNI_RAW = TXT.pathmap("afni/nodestrength/%n_raw+tlrc.HEAD")
AFNI_RAW.zip(TXT).each do |target,source|
  file target => [source] do
    sh("3dUndump -master #{SHARED_ATLAS} -xyz -datum float -prefix #{target.sub("+tlrc.HEAD","")} #{source}")
  end
  CLOBBER.push(target)
  CLOBBER.push(target.sub(".HEAD",".BRIK"))
  CLOBBER.push(target.sub(".HEAD",".BRIK.gz"))
end
➜ solutionmaps perl -ne 'print if ($seen{$_} .= @ARGV) =~ /10$/' Rakefile mwe/Rakefile|sed '/^$/d'|tee b
require 'rake'
require 'rake/clean'
require 'pathname'
HOME = ENV['HOME']
SHARED_ATLAS = "#{HOME}/MRI/Manchester/data/CommonBrains/MNI_EPI_funcRes.nii"
TXT = Rake::FileList["txt/nodestrength/??.mni"]
AFNI_RAW = TXT.pathmap("afni/nodestrength/%n_raw+tlrc.HEAD")
AFNI_RAW.zip(TXT).each do |target,source|
  file target => [source] do
    sh("3dUndump -master #{SHARED_ATLAS} -xyz -datum float -prefix #{target.sub("+tlrc.HEAD","")} #{source}")
  end
  CLOBBER.push(target)
  CLOBBER.push(target.sub(".HEAD",".BRIK"))
  CLOBBER.push(target.sub(".HEAD",".BRIK.gz"))
end
➜ solutionmaps diff a b
➜ solutionmaps
And here the MWE works as expected (that is, it does not execute the file task):
➜ mwe rake --trace --dry-run afni/nodestrength/02_raw+tlrc.HEAD
** Invoke afni/nodestrength/02_raw+tlrc.HEAD (first_time, not_needed)
** Invoke txt/nodestrength/02.mni (first_time, not_needed)
But the full rakefile does not.
rake --trace --dry-run afni/nodestrength/02_raw+tlrc.HEAD
** Invoke afni/nodestrength/02_raw+tlrc.HEAD (first_time)
** Invoke txt/nodestrength/02.mni (first_time, not_needed)
** Execute (dry run) afni/nodestrength/02_raw+tlrc.HEAD
➜ solutionmaps ls afni/nodestrength/02_raw+tlrc.HEAD
afni/nodestrength/02_raw+tlrc.HEAD
Finally happened across a possible answer:
Rake determines that a file task needs to be run if the file doesn’t exist or if any of the prerequisite file tasks are newer.
Quoted from: http://madewithenvy.com/ecosystem/articles/2013/rake-file-tasks/
Since my Rakefiles are under heavy development and my file tasks are all pretty interrelated, this is probably why my Rakefile always wanted to rebuild everything.
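The rule is easy to see in a two-line example (the file names here are hypothetical):

# out.txt is rebuilt only if it is missing or in.txt has a newer mtime;
# one touched file anywhere in a chain of file tasks cascades downstream.
file 'out.txt' => ['in.txt'] do |t|
  cp t.prerequisites.first, t.name
end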

Puppet onlyif and unless conditional test from boolean data in Hiera and CLI script output

I am running Puppet v3.0 on RHEL 6 and am doing package management via the exec resource.
I would like to add a number of control gates into my manifest via onlyif and unless.
First, I would like to use booleans as defined in Hiera (via the automatic parameter lookup).
Secondly, I would like to use booleans from a bash script running diff <() <().
I'm using the following Hiera data:
---
my-class::package::patch_now:
0
my-class::package::package_list:
acl-2.2.49-6.el6-x86_64
acpid-1.0.10-2.1.el6-x86_64
...etc
and my manifest is as follows:
# less package.pp
class my-class::package(
  $package_list,
  $patch_now,
){
  exec {'patch_packages':
    provider  => shell,
    path      => [ "/bin/", "/usr/bin/" ],
    logoutput => true,
    timeout   => 100,
    command   => "yum update -e0 -d0 -y $package_list",
    unless    => "/path/to/my-diff.script 2>&1 > /dev/null",
    onlyif    => "test 0 -eq $patch_now",
  }
}
How would I test the booleans (0|1) from Hiera and a CLI diff.script with unless and onlyif in the context above?
I'm assuming that you mean to install all listed packages in one sweep if $patch_now is set.
You should not test for that using onlyif. That is meant to verify some state on the agent system. If the master is aware of your data, you should use conditionals in the manifest structure.
if $patch_now {
  exec { ... }
}
But do use true and false instead of 1 and 0 as the value for the flag - both 1 and 0 are equal to true in boolean context!
Your YAML looks funny, anyway.
To define a single value:
my-class::package::patch_now: false
To define an array:
my-class::package::package_list:
- acl-2.2.49-6.el6-x86_64
- acpid-1.0.10-2.1.el6-x86_64
- ...
When you use the array in your class, you cannot just put it in a string such as "yum update -e0 -d0 -y $package_list", for that will expand to "yum update -e0 -d0 -y acl-2.2.49-6.el6-x86_64acpid-1.0.10-2.1.el6-x86_64...", without spaces between the elements.
To concatenate the elements with spaces, use the join function from the stdlib module.
$packages = join($package_list, ' ')
...
"yum update -e0 -d0 -y $packages"
I honestly don't get how your diff <() <() is supposed to work. The whole approach looks a little convoluted. I suspect that with a little tweaking, your diff script could probably perform the updates on its own (so that the exec just runs this script with different parameters).
EDIT after receiving more info in your comment.
To make this work cleanly, I recommend the following.
have Puppet transfer your Hiera data to the agent
file { '/opt/wanted-packages': content => inline_template('<%= @package_list * "\n" %>') }
The diff will then work like you suggested, only simpler.
diff /opt/wanted-packages <(facter ...)
Just make sure that the exec requires the file and you should be fine.
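A sketch of that wiring (the diff invocation abbreviated exactly as above):

file { '/opt/wanted-packages':
  content => inline_template('<%= @package_list * "\n" %>'),
}

exec { 'patch_packages':
  provider => shell,
  command  => "yum update -e0 -d0 -y ${packages}",
  unless   => 'diff /opt/wanted-packages <(facter ...) > /dev/null',
  require  => File['/opt/wanted-packages'],
}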

Capistrano how to access a serverDefinition option in the code

I am defining my server setup like this:
task :test do
role(:frontend) {[server1,server2,server3, {:user=> "frontend-user", :options => {:log_location=>"HOW DO I READ THIS??"}}]}
role(:backend) {...}
role(:db) {...}
role(:mq) {...}
end
task :staging do
role(:frontend) {[server1,server2,server3, {:user=> "frontend-user", :options => {:log_location=>"HOW DO I READ THIS??"}}]}
role(:backend) {...}
role(:db) {...}
role(:mq) {...}
end
task :prod do
role(:frontend) {[server1,server2,server3, {:user=> "frontend-user", :options => {:log_location=>"HOW DO I READ THIS??"}}]}
role(:backend) {...}
role(:db) {...}
role(:mq) {...}
end
This is to embrace all the complexity of a legacy enterprisey system.
Now, from a task, I want to read the log_location.
Task example:
namespace :log do
desc "list all log files"
task :list do
run %(ls -1 #{log_location}/*/*.log)
end
end
The problem is that the variable log_location is undefined.
/.rvm/gems/ruby-2.0.0-p0/gems/capistrano-2.14.2/lib/capistrano/configuration/namespaces.rb:193:in `method_missing': undefined local variable or method `log_location' for #<...> (NameError)
How do I access that variable?
Is there a smarter/simpler way of setting this custom variable?
I'm sorry to say you can't read that. The blocks passed to task() aren't executed in a server context, thus the block in effect doesn't know what server it's operating on.
The classical workaround for this over the years has been to upload a config file which looks something like this:
---
hostname1:
log_file_location: "/var/log/hostname1/foo/bar"
hostname2:
log_file_location: "/var/log/hostname2/foo/bar"
(or similar) and use the machine's hostname when loading the configuration.
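Consuming such a file from a task could look roughly like this (a sketch; the config path is an assumption, and capture runs the command on the remote end):

require 'yaml'

namespace :log do
  desc "list all log files"
  task :list do
    hostname = capture('hostname').strip           # remote machine's hostname
    settings = YAML.load_file('config/hosts.yml')  # the uploaded config file
    run %(ls -1 #{settings[hostname]['log_file_location']}/*/*.log)
  end
end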
I know this isn't a great workaround, thus in the forthcoming version of Capistrano (see the v3 branch on GitHub) there's a feature which looks like this:
host1 = SSHKit::Host.new 'user@example.com'
host2 = SSHKit::Host.new 'user@example.org'
host1.properties = {log_file_location: "/foo/bar"}
host2.properties.log_file_location = "/bar/baz"
on hosts do |host|
  target = "/var/www/sites/"
  if host.hostname =~ /org/
    target += "dotorg"
  else
    target += "dotcom"
  end
  execute! :head, '-n 20', host.properties.log_file_location
  execute! :git, :clone, "git@git.#{host.hostname}", target
end
(SSHKit Examples) - SSHKit is the new backend driver for Capistrano.
The v3 branch probably isn't ready for prime time yet; we're having a lot of success internally, but the documentation is pretty, ahem, non-existent. However, the code is quite literally an order of magnitude less imposing, and I think you'll find it quite readable.
You need this: https://github.com/capistrano/capistrano/wiki/2.x-Multistage-Extension
It means that you can isolate stage specific code in separate files named after the stage. If you want to test for the stage name in the shared deploy.rb you can do that too, like this:
Put this in your deploy.rb
task :show_stage do
  puts(stage)
end
Test from command line
$ cap staging show_stage
staging
Actually, I was able to pull out the log_location variable, but ended up with a solution that has one restriction:
I am using the log location for one environment only. This is no problem in my current project, since I run the Capistrano task against one role at a time.
For testing this setup, I made this task:
namespace :support do
  desc "Test if the log location variable is correctly fetched from configuration"
  task :test_log_location do
    find_servers_for_task(current_task).each do |server|
      # puts server.host
      # puts server.port
      # puts server.user
      # puts server.options
      result = "LOG LOCATION: #{server.options[:log_location]}"
      # puts result
      logger.info result
    end
  end
end
Then, for my tasks in the :log namespace, I defined the variable with set :log_location and also defined the :current_role variable:
namespace :log do
  def set_log_location
    # set_log_location
    # puts fetch(:log_location)
    log_location = nil
    options = nil
    find_servers_for_task(current_task).each do |server|
      # puts server.host
      # puts server.port
      # puts server.user
      # puts server.options
      options = server.options
      log_location = server.options[:log_location]
      # log_location = server.options[:current_role]
    end
    msg1 = "FATAL: you need to specify 'ROLES=frontend,backend,mq' (or one of them) from the command line"
    msg2 = "FATAL: Could not get log_location from environment/server options. I can only see these options: #{options}"
    raise msg1 if ENV['ROLES'].nil?
    raise msg2 if log_location.nil?
    set :log_location, log_location
    set :current_role, ENV['ROLES'].split(',').first
    logger.info %(CURRENT_ROLE #{fetch(:current_role)})
    logger.info %(THE LOG LOCATION IS: #{fetch(:log_location)})
  end
end
Finally, I used a separate method to fully qualify the log path (needed for my setup; also in the :log namespace):
def log_location
  log_names = {
    :frontend => "*/play.log",
    :backend  => "*Weblogic*/*.{log,out}"
  }
  loc = "#{fetch(:log_location)}/#{log_names[fetch(:current_role).to_sym]}"
  logger.info "using the log location of '#{loc}'"
  loc
end
Now, each task can use the specific log location like this:
desc "list all log files"
task :list do
set_log_location
run %(ls -l #{log_location})
end
I am sure this can be done more elegantly, but it works for me.

DBD::Informix connection issues

I'm having a somewhat weird problem with DBD::Informix. If I run a simple script like this:
use DBI;
my $dbh = DBI->connect_cached('dbi:Informix:database', '', '');
my $sth = $dbh->prepare('select foo from bar');
...
It works all right. But if I try to do exactly the same from a test script it fails with the following message:
SQL: -25588: The appl process cannot connect to the database server cms_ol.
ISAM: 22: Invalid argument
The only difference I see is that test script is quite heavy on module usage; it is based on Test::More and loads a lot of submodules that are to be tested.
Turning on DBI trace does not provide anything useful (for me, at least). Simple script runs along just fine:
DBI 1.616-nothread default trace level set to 0x0/1 (pid 9685 pi 0) at test_ifx.pl line 6
Note: perl is running without the recommended perl -w option
-> DBI->connect(dbi:Informix:cms@cms_ol, , ****, HASH(0x13fad0))
-> DBI->install_driver(Informix) for solaris perl=5.008009 pid=9685 ruid=106 euid=106
install_driver: DBD::Informix version 2011.0612 loaded from /cms/webdash/lib/arch/DBD/Informix.pm
<- install_driver= DBI::dr=HASH(0x1c8070)
!! warn: 0 CLEARED by call to connect method
-->> DBD::Informix::dbd_ix_db_connect()
CONNECT TO 'cms@cms_ol' - no user info
-->> DBD::Informix::dbd_ix_db_check_for_autocommit()
... and the only difference in trace of the problematic script I see is that it just fails:
DBI 1.616-nothread default trace level set to 0x0/1 (pid 9687 pi 0) at 22_report.t line 5 via 22_report.t line 6
Note: perl is running without the recommended perl -w option
-> DBI->connect_cached(dbi:Informix:cms, , ****)
-> DBI->install_driver(Informix) for solaris perl=5.008009 pid=9687 ruid=106 euid=106
install_driver: DBD::Informix version 2011.0612 loaded from /cms/webdash/lib/arch/DBD/Informix.pm
<- install_driver= DBI::dr=HASH(0xb619bc)
!! warn: 0 CLEARED by call to connect_cached method
-->> DBD::Informix::dbd_ix_db_connect()
CONNECT TO 'cms' - no user info
***ERROR***
SQL: -25588: The appl process cannot connect to the database server cms_ol.
ISAM: 22: Invalid argument
<<-- dbd_ix_db_connect (**ERROR-1**)
<<-- DBD::Informix::dbd_ix_db_connect()
I'm running custom Perl 5.8.9 build in Solaris 9, with latest DBI and DBD::Informix versions, against Informix IDS 9.40UC.
Update: If I try to be a smartass and put a block like this at the top of the heavy test script:
use DBI;
BEGIN { my $dbh = DBI->connect_cached( ... ); print "Connected!\n" if $dbh; }
... it prints like this:
Connected!
Out of memory!
Callback called exit.
END failed--call queue aborted at t/22_report.t line 20.
Callback called exit at t/22_report.t line 20.
BEGIN failed--compilation aborted at t/22_report.t line 24.
My guess is that DBD::Informix conflicts with some of the modules loaded after the connection is made. But which one? That's the question...
Another update: It appears that the above trick does something unwieldy. I tried to load all the modules explicitly by replacing 'use Module' with 'require Module; Module->import'. Pure Perl modules are OK, but whenever an XS module using XSLoader appears, Perl goes boom with a friendly 'Out of memory' message. And if I move the Informix connection below the module initialization, it works all right, except that DBD::Informix fails with the same -25588 error. Bummer. I'm at a loss. :(
Another another update: I tried to run the same script with the standard Perl 5.6.1 shipped with Solaris 9, using DBI 1.601 (the latest that would compile with Perl 5.6) and DBD::Informix 2011.0612. Same thing, so it's not the custom Perl build giving me trouble.
I can also add that the test module in question was prototyped using DBD::SQLite and fully works. It is the final test with DBD::Informix that is failing... As usual. :/
Workaround: following e-mail discussion with Jonathan, a workaround was found: adding a streams-based 'onipcstr' connection to the Informix server allowed DBD::Informix to connect. Apparently, some XS modules interfere with the default shmem-based connection method, although the culprit is unknown at the moment.
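For reference, a stream-pipe server entry in sqlhosts follows the usual four-field layout (the names below are placeholders, not taken from the affected system):

# dbservername  nettype   hostname  servicename
cms_str         onipcstr  myhost    cms_str_svc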
Further discussion
Custom-built Perl is, in my experience, easier than the system Perl. I never modify the system's Perl installation (I don't want to break it) so I always build my own.
You appear to have:
Solaris 9 (SPARC?)
Perl 5.8.9
DBI 1.616
DBD::Informix 2011.0612
ESQL/C (CSDK) 2.81
Informix Dynamic Server 9.40
We don't have the detailed sub-version of ESQL/C and IDS (2.81.UC2, 9.40.UC5, or whatever). There's a hint that you are using a 32-bit version of IDS, so probably everything is 32-bit. You are probably aware that 9.40 is no longer supported by IBM (and, indeed, its successor version 10.00 is also out of support). However, superficially, none of that should matter very much. The failing t91lvarchar.t is not a big issue.
Can you run the connect in working and non-working modes with DBI_TRACE=9 set in the environment?
If the trace for the connect operation is too voluminous to go into an update to the question, we'd better take this off-line to the DBD::Informix support channels (that's me, but by email).
The 'ISAM' error of 22 (Invalid argument) is puzzling. I'm curious about what is in your sqlhosts file for this server - the entry for cms_ol specifically. I'm not sure it will reveal anything, not least because you say the sample ESQL/C below (in the 'First hypothesis' section) works OK, and sometimes the Perl connects and sometimes it does not.
I wonder if there is a name conflict somewhere between functions in the shared libraries? That sort of thing will be hell to track.
First hypothesis
Further information received shows that this was not the crucial distinction.
The difference appears to be:
Works: CONNECT TO 'cms@cms_ol' - no user info
Fails: CONNECT TO 'cms' - no user info
The tricky part to explain is why the second fails, especially as the error goes on to mention cms_ol.
The workaround is to specify the server name in the connect string:
DBI->connect(dbi:Informix:cms@cms_ol, , ****, HASH(0x13fad0))
DBI->connect_cached(dbi:Informix:cms, , ****)
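In DBI terms that just means spelling out database@server in the DSN (a sketch using the names from the question):

use DBI;

# an explicit server name avoids the bare-database connect that fails
my $dbh = DBI->connect_cached('dbi:Informix:cms@cms_ol', '', '',
                              { RaiseError => 1 });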
The underlying problem is more likely at the ESQL/C level than anything to do with other Perl modules. That is, if you compiled and executed this ESQL/C program, it would fail on cms and work on cms@cms_ol:
int main(int argc, char **argv)
{
    $ char *dbs = "cms";
    if (argc > 1)
        dbs = argv[1];
    $ whenever error stop;
    $ connect to :dbs;
    return 0;
}
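If you want to try it, the CSDK esql driver script compiles it in one step (the file name here is assumed):

esql -o conn conn.ec
./conn cms@cms_ol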
You could run it without an explicit database name (or with an explicit 'cms'), and I would expect it to fail. You could run it with 'cms@cms_ol' and I would expect it to pass. The program will say nothing if it passes; it will be obvious when it fails (though the messages will not be beautiful).
There is an outside chance it is something to do with connect_cached; that is a service provided by the DBI module and not by the DBD::Informix module. On the whole though, it is more likely something happening at the ESQL/C level.