Getting different hash value with puppet exec [closed] - jboss

When I execute the following command directly, I get exactly the hash value I expect.
/opt/Jboss/dc/bin/add-user.sh --silent --user testuser --password testuser*1 --realm ManagementRealm
Hash Logic = md5(testuser:ManagementRealm:testuser*1)
Expected hash value = e72bfb358dd2116ad0033c01e357c1b2
But when I try the same thing with a Puppet exec, I get a different hash value. I don't know how to debug or fix it. Any help is much appreciated.
My Puppet code:
define jboss::useradd(
  $home,
  $username,
  $password,
) {
  $jbossuserfix = '2>&1 | awk \'BEGIN{a=0}{if (/Error/){a=1};print}END{if (a==1) exit 1}\''
  $realm        = "ManagementRealm"
  $filepath     = "${home}/domain/configuration/mgmt-users.properties"
  $encrypasswd  = md5("${username}:ManagementRealm:${password}")

  notify { " ${title} Encry ${encrypasswd} ": }

  exec { "${title}::user::add":
    environment => ["JBOSS_HOME=${home}", "__PASSWD=${password}"],
    command     => "${home}/bin/add-user.sh --silent --user '${username}' --password \"\$__PASSWD\" --realm '{realm}' ${jbossuserfix}",
    unless      => "/bin/egrep -e '^${username}=${encrypasswd}' ${filepath}",
    require     => File["${home}/domain/configuration/domain.xml"],
    logoutput   => true,
  }
}
The following is the result I am getting with the above code.
Result hash value : fb8ed958ba3d535fb8314d4da4b96d42

The command attribute in your Puppet code doesn't match the example line you give.
First, you're missing the $ on ${realm}, so the realm that goes into the hash is the literal string {realm} rather than ManagementRealm, which changes the digest.
Second, you've added quotes around the parameters in the Puppet code. Not knowing anything about the script you are calling, that may or may not be important.
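For illustration, the command line with the interpolation fixed might look like this (a sketch only; the rest of the resource is unchanged):
command     => "${home}/bin/add-user.sh --silent --user '${username}' --password \"\$__PASSWD\" --realm '${realm}' ${jbossuserfix}",
You can also check the expected digest from a shell, assuming GNU md5sum is available, using the hash logic quoted in the question (the output shown is the value the question expects):
printf '%s' 'testuser:ManagementRealm:testuser*1' | md5sum
e72bfb358dd2116ad0033c01e357c1b2  -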

Related

Passing parameters to puppet manifest via command line

I have been searching for an answer to this question with no luck: is there a way to pass parameters into Puppet manifests when running the 'apply' command, similar to the way you pass parameters when running a UNIX script on the command line?
The suggestions I see mention either keeping variables at the top of the manifest for use later, or storing them in a Hiera file. But neither really answers the question I am posing.
Any guidance on how to do this would be greatly appreciated.
Edit:
An example of what I have been doing is:
$doc_root = "/var/www/example"

exec { 'apt-get update':
  command => '/usr/bin/apt-get update'
}

package { 'apache2':
  ensure  => "installed",
  require => Exec['apt-get update']
}

file { $doc_root:
  ensure => "directory",
  owner  => "www-data",
  group  => "www-data",
  mode   => 644
}

file { "$doc_root/index.html":
  ensure  => "present",
  source  => "puppet:///modules/main/index.html",
  require => File[$doc_root]
}
As you can see, the variable is hardcoded at the top. I am trying to use the variable in the same way, but I need to be able to pass the value in when running the apply command.
Using lookup functions in conjunction with hiera.yaml files doesn't fulfil my requirements for the same reason.
The only workaround I can think of is to create a UNIX script that accepts parameters, saves those values in a YAML file, and then executes the .pp file.
But I'm hoping that puppet has a way to do this directly.
The common procedure for passing variables into a classless manifest for use with the puppet apply subcommand is to assign the value to a Facter fact from the CLI and then resolve its value inside the manifest. You would begin by removing the hardcoded variable doc_root from the head of the manifest. Then you would change each use of the variable into a fact lookup like:
file { $facts['doc_root']:
  ...

file { "${facts['doc_root']}/index.html":
  ...
  # interpolation required due to the Puppet DSL's inability to resolve a hash value as a first-class expression
  require => File["${facts['doc_root']}"]
You would then pass the Facter value from the puppet apply subcommand like:
FACTER_doc_root=/var/www/example puppet apply manifest.pp
Note this also causes FACTER_doc_root to be temporarily set as an environment variable as a side effect.

jboss-cli : How do I read one specific system property using jboss-cli?

I'm new to jboss-cli and working through the 'jboss-cli recipes'.
Question
How do I read one specific property using jboss-cli? E.g.
jboss.home.dir (e.g. "-Djboss.home.dir=/path/to/my/jboss")
Xmx ("-Xmx=4G")
Context
The "CLI Recipes" documentation has this helpful example to get all system properties. However its 'too much infomration'. I want to script reading one specific property.
https://docs.jboss.org/author/display/WFLY10/CLI+Recipes#CLIRecipes-
Overview of all system properties in JBoss AS7+ including OS system
properties and properties specified on command line using -D, -P or
--properties arguments.
Standalone
[standalone#IP_ADDRESS:9999 /] /core-service=platform-mbean/type=runtime:read-attribute(name=system-properties)
Thanks in advance
You could do:
:resolve-expression(expression=${jboss.home.dir})
You can use the CLI like this:
$JBOSS_HOME/bin/jboss-cli.sh -c --command=/system-property=MY_PROPERTY:read-resource
You get output like this:
$JBOSS_HOME/bin/jboss-cli.sh -c --command=/system-property=MY_PROPERTY:read-resource
{
"outcome" => "success",
"result" => {"value" => "4.0"}
}
which you can extract by piping into something like this:
<cli command> | grep "{\"value\"" | sed "s/.*value\" => \"\([^\"]*\)\".*/\1/"
It's a bit ugly, and there are some nasty edge cases if the value were to be something like "value" => "value =" or something equally hideous.
In general this works OK.
Change the sed command to be a bit more specific to fix that, for example:
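A sketch of a slightly more specific variant, still assuming the single-line "result" => {"value" => "..."} output shown above, anchors the pattern on the whole result line and prints only lines that match it:
<cli command> | sed -n 's/.*"result" => {"value" => "\(.*\)"}.*/\1/p'
It is still not a real parser, but stray quotes elsewhere in the output are less likely to be picked up.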
This link pointed me to the answer: I can use a Groovy script to get the values. From what I can see, the jboss-cli command line does not offer this flexibility.
https://developer.jboss.org/wiki/AdvancedCLIScriptingWithGroovyRhinoJythonEtc
Solution
Here's a solution for jboss home.
(For memory, you can get results from "/core-service=platform-mbean/type=memory/:read-attribute(name=heap-memory-usage)".)
bash
#!/bin/sh
# Note: must set jbbin to 'jboss home /bin'
groovy -cp $jbbin/client/jboss-cli-client.jar readJbossHome.groovy
Groovy
Note: this is 'quick and dirty'.
import org.jboss.as.cli.scriptsupport.*
cli = CLI.newInstance()
cli.connect()
// Define properties
myParentProp="system-properties"
myProp="jboss.home.dir"
// Retrieve and pluck values
result = cli.cmd("/core-service=platform-mbean/type=runtime:read-resource(recursive=true,include-runtime=false)")
myResult = result.getResponse().get("result")
myParentVal = myResult.get(myParentProp)
myVal = myParentVal.get(myProp)
// Print out results
println "Property detail ${myProp} is ${myVal}"
cli.disconnect()
You can also do it via a WildFly management REST call:
http://localhost:9990/management
POST
Headers = Content-Type:application/json
Body =
{
  "operation": "resolve-expression",
  "expression": "${jboss.home.dir}"
}
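With curl, for example, this might look roughly like the following (a sketch only; admin/secret are placeholder credentials, and it assumes the management interface is protected by HTTP Digest authentication with a management user):
curl --digest -u admin:secret \
     -H "Content-Type: application/json" \
     -d '{"operation":"resolve-expression","expression":"${jboss.home.dir}"}' \
     http://localhost:9990/management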
In the newer Teiid docs I found some useful information that I thought might be helpful to share with people coming across a similar use case:
https://access.redhat.com/documentation/en-us/jboss_enterprise_application_platform/6.3/html/administration_and_configuration_guide/configure_system_properties_using_the_management_cli
It covers adding, removing and reading system properties with jboss-cli.
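For instance, the basic system property operations in jboss-cli look like this (my.property and somevalue are placeholder names):
/system-property=my.property:add(value=somevalue)
/system-property=my.property:read-attribute(name=value)
/system-property=my.property:remove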
jboss-cli
If you have a CLI command like the one ehsavoie suggested, :resolve-expression(expression=${jboss.home.dir}), and you want to use the content of the "result" property within jboss-cli, you can save it in a variable. You can use backticks (`) to evaluate expressions.
Simple expression
[standalone#localhost:9990 /] :resolve-expression(expression=${jboss.home.dir})
{
"outcome" => "success",
"result" => "/home/user/wildfly"
}
Use in a variable
[standalone#localhost:9990 /] set wildflydirectory=`:resolve-expression(expression=${jboss.home.dir})`
[standalone#localhost:9990 /] echo $wildflydirectory
/home/user/wildfly
PowerShell
If you happen to use PowerShell, you can use a one-liner to extract even deeply nested results with the help of the CLI's --output-json option and PowerShell's ConvertFrom-Json cmdlet. This way, the parsing problems from James Roberts's approach with grep and sed are gone.
$value=(Invoke-Expression "./jboss-cli.ps1 -c --command=':resolve-expression(expression=`${jboss.home.dir})' --output-json" | ConvertFrom-Json).result
It is a bit tricky to quote the command and escape the correct PowerShell special characters.

Using __END__ and DATA in Chef recipes (to run legacy shell scripts)

I'm migrating some shell scripts to Chef recipes. Some of these scripts are fairly involved, so just to make life easier in the short term and to avoid introducing bugs in rewriting everything in Chef/Ruby, I'd like to just run some of them as-is. They're all well-written and idempotent, so honestly there's no rush, but of course, the eventual goal is to rewrite them.
One cool feature of Ruby is its __END__ keyword/method: Lines below __END__ will not be executed. Those lines will be available via the special filehandle DATA.
It would be cool to ship the shell scripts as-is inside the recipe after __END__, maybe something like the following, which I placed in chef-repo/cookbooks/ruby-data-test/recipes/default.rb:
file = Tempfile.new(File.basename(__FILE__))
file << DATA.read
bash file.path
file.unlink
__END__
echo "Hello, world"
However when I run this (with chef-solo -c solo.rb --override-runlist 'recipe[ruby-data-test]'), I get the following error:
[2014-10-03T17:14:56+00:00] ERROR: uninitialized constant Chef::Recipe::DATA
I'm pretty new to Chef, but I'm guessing the above is something about Chef wrapping my recipe in a class, and there's something simple preventing me from accessing DATA. Since it's "global" (?) I tried putting a dollar sign ($DATA) in front of it but that failed with:
NoMethodError
-------------
undefined method `read' for nil:NilClass
So the question is: How do I access DATA in my Chef recipe? Thanks!
It appears you don't have access to DATA, but you can fake it by reading in the current file yourself and splitting on __END__, like Sinatra does.
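A minimal sketch of that idea (untested; the bash resource name run-embedded-script is just a placeholder) could look like this inside a recipe:
# Read this recipe file, split it on __END__, and hand the trailing
# shell script to a bash resource.
_recipe_source, script = ::File.read(__FILE__).split(/^__END__$/, 2)
bash 'run-embedded-script' do
  code script
end
__END__
echo "Hello, world"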
I ended up making a Chef LWRP for reuse. I don't know if I'll actually end up using this, but I wanted to figure it out. Like I said, I'm a Chef/Ruby noob, so any better ideas or suggestions welcome!
ruby_data_test/recipes/default.rb:
ruby_data_test_execute_ruby_data __FILE__
__END__
#!/bin/bash
set -o errexit
date
echo "Hello, world"
ruby_data_test/resources/execute_ruby_data.rb:
actions :execute_ruby_data
default_action :execute_ruby_data
attribute :source, :name_attribute => true, :required => true
attribute :args, :kind_of => Array
attribute :ignore_errors, :kind_of => [TrueClass, FalseClass], :default => false
ruby_data_test/providers/execute_ruby_data.rb:
def whyrun_supported?
  true
end

use_inline_resources

action :execute_ruby_data do
  converge_by("Executing #{@new_resource}") do
    Chef::Log.info("Executing #{@new_resource}")
    file_who_called_me = @new_resource.source
    io = ::IO.respond_to?(:binread) ? ::IO.binread(file_who_called_me) : ::IO.read(file_who_called_me)
    app, data = io.gsub("\r\n", "\n").split(/^__END__$/, 2)
    data.lstrip!
    file = Tempfile.new('execute_ruby_data')
    file << data
    file.chmod(0755)
    file.close
    exit_status = ::Open3.popen2e(file.path, *@new_resource.args) do |stdin, stdout_and_stderr, wait_thr|
      stdout_and_stderr.each { |line| puts line }
      wait_thr.value # exit status
    end
    if exit_status != 0 && !@new_resource.ignore_errors
      raise RuntimeError
    end
  end
end
Here's the output:
$ chef-solo -c solo.rb --override-runlist 'recipe[ruby_data_test]'
Starting Chef Client, version 11.12.4
[2014-10-03T21:50:29+00:00] WARN: Run List override has been provided.
[2014-10-03T21:50:29+00:00] WARN: Original Run List: []
[2014-10-03T21:50:29+00:00] WARN: Overridden Run List: [recipe[ruby_data_test]]
Compiling Cookbooks...
Converging 1 resources
Recipe: ruby_data_test::default
* ruby_data_test_execute_ruby_data[/root/chef/chef-repo/cookbooks/ruby_data_test/recipes/default.rb] action execute_ruby_dataFri Oct 3 21:50:29 UTC 2014
Hello, world
- Executing ruby_data_test_execute_ruby_data[/root/chef/chef-repo/cookbooks/ruby_data_test/recipes/default.rb]
Running handlers:
Running handlers complete
Chef Client finished, 1/1 resources updated in 1.387608 seconds

Reading SNMP values from a device with multiple view-based ACM contexts, using Net::SNMP

I am trying to use Perl and Net::SNMP to query a device that has multiple configured views/contexts (a Cisco ACE 4710, for example). The equivalent snmpwalk command is:
snmpwalk -c public#CONTEXT_NAME -v 2c 1.2.3.4 '.1.3.6.1.4.1.9.9.480.1.1.2'
I can enumerate the various contexts/views with SNMP-VIEW-BASED-ACM-MIB, e.g.:
my $vacmContextName = '.1.3.6.1.6.3.16.1.1.1.1';
my ($session, $error) = Net::SNMP->session(-hostname => '1.2.3.4', -community => 'public');
my %contexts = %{ $snmp->get_entries(-columns => [ $vacmContextName ]) };
...but am having trouble now reading any further OIDs from each specific context, e.g.:
my $ciscoL4L7ResourceLimitTable = '.1.3.6.1.4.1.9.9.480.1.1.2';
my ($session, $error) = Net::SNMP->session(-hostname => '1.2.3.4', -community => 'public#CONTEXT_NAME');
my %stats = %{ $snmp->get_entries(-columns => [ $ciscoL4L7ResourceLimitTable ]) }
If I run with -debug => DEBUG_ALL, I see the data being returned in the packet inspection (with recognisable data from that context), but then I get a lot of the following errors:
error: [131] Net::SNMP::Security::Community::process_incoming_msg(): The community name "public#CONTEXT_NAME" was expected, but "public" was found
error: [218] Net::SNMP::MessageProcessing::prepare_data_elements(): The community name "public#CONTEXT_NAME" was expected, but "public" was found
error: [398] Net::SNMP::Dispatcher::_transport_response_received(): The community name "public#CONTEXT_NAME" was expected, but "public" was found
error: [1234] Net::SNMP::__ANON__(): The community name "public#CONTEXT_NAME" was expected, but "public" was found
error: [2363] Net::SNMP::__ANON__(): The community name "public#CONTEXT_NAME" was expected, but "public" was found
...and the resulting contents of %stats is undef.
If I try with -community => 'public', it works, but I only get values from the default context (which doesn't contain everything I need).
If I try with -contextname => 'CONTEXT_NAME', I just get:
error: [2423] Net::SNMP::_context_name(): The contextName argument is only supported in SNMPv3
Is it not possible to do what I need with Net::SNMP?
This is perl, v5.10.1 (*) built for i386-linux-thread-multi
CPAN_FILE D/DT/DTOWN/Net-SNMP-v6.0.1.tar.gz
INST_VERSION v6.0.1
From an email conversation I just had with the package author, Word-of-$DEITY is:
Cisco's "context implementation" is vendor specific and is not standards based. The use of "contexts" is officially defined in the SNMPv3 RFCs (specifications). The Net::SNMP modules tries to follow the RFCs in its implementation. I would have to go back to the RFCs determine if it is legal to respond to an SNMP v1/2c message with a "community" string that is not the same as the one with which the request was made. This is the problem that you are seeing.
You are most likely not going to get Cisco to change their implementation even if it is violating an RFC, so your only recourse is to comment out the code in the Net/SNMP/Security/Community.pm module that is returning an error on the send/receive of differing community strings. I typically do not add modification to the module to work around vendor specific problems.
Rather than comment out the relevant section of the Community.pm (which is used by far more on my server than just this script), I have instead elected to shim/monkey-patch the method in question in my own code, for example:
no warnings 'redefine';
use Net::SNMP::Security::Community;
*Net::SNMP::Security::Community::process_incoming_msg = sub { return 1; };
This way, the failing method is bypassed in favour of just accepting any returned community string.
Filthy dirty on all counts, but acceptable within my controlled environment. Anyone coming across this in the future should make sure that a version other than 6.0.1 hasn't changed the way this module works, or what the existing subroutine does, before attempting the same.

Perl SCP ERROR (Asking to Continue?)

Here is what I am doing:
my $username = "user";
my $password= "pass";
my $host="xxx.xxx.xxx.xxx";
my $scpe = Net::SCP::Expect->new(user      => $username,
                                 password  => $password,
                                 preserve  => 1,
                                 recursive => 1,
                                 verbose   => 1,
                                 auto_yes  => 1);
$scpe->scp("$file", "$host:./drop/drop.txt");
When I run this code there is no error. I am using a Unix box, $file is in my directory with full permissions, and I have also changed the directory to temp on the Unix box. But when somebody else runs this code they get:
Problem performing scp: Are you sure
you want to continue connecting
(yes/no)? at scp.pl line 242
I am very confused about why this is happening, as I do not get this error myself.
Short answer:
Raise the timeout_auto value:
my $scpe = Net::SCP::Expect->new(user         => $username,
                                 password     => $password,
                                 preserve     => 1,
                                 recursive    => 1,
                                 verbose      => 1,
                                 timeout_auto => 10, # for example - 5 should probably be plenty
                                 auto_yes     => 1);
Long answer.
The
problem performing scp
is what Net::SCP::Expect prepends to the literal error message it gets from SCP itself, so in this case
Are you sure you want to continue
connecting (yes/no)?
This usually happens because the host SCP is connecting to is not yet known to ssh.
You should set auto_yes to 1 if you want to avoid this error, as the CPAN documentation for Net::SCP::Expect explains, but I see you're already doing that.
If that doesn't help, consider raising the timeout_auto value. It defaults to 1 second, but if it takes longer for SCP to pose the 'are you sure' question (because, for example, the DNS lookup of the host takes longer), that might not be enough.