How does one access command line flags (arguments) as environment variables in Erlang (as flags, not ARGV)? For example:
The RabbitMQ CLI looks something like:
erl \
...
-sasl errlog_type error \
-sasl sasl_error_logger '{file,"'${RABBITMQ_SASL_LOGS}'"}' \
... # more stuff here
If one looks at sasl.erl you see the lines:
get_sasl_error_logger() ->
case application:get_env(sasl, sasl_error_logger) of
% ... etc
By some unknown magic the sasl_error_logger variable becomes an Erlang tuple! I've tried replicating this in my own Erlang application, but I seem to be able to access these values only via init:get_argument, which returns the value as a string.
How does one pass in values via the command line and access them easily as Erlang terms?
UPDATE: Also, for anyone looking, to use environment variables in the 'regular' way, use os:getenv("THE_VAR").
Make sure you set up an application configuration file
{application, fred,
[{description, "Your application"},
{vsn, "1.0"},
{modules, []},
{registered,[]},
{applications, [kernel,stdlib]},
{env, [
{param, 'fred'}
]}
...
and then you can set your command line up like this:
-fred param 'billy'
I think you need to have the parameter in your application configuration to do this - I've never done it any other way...
Some more info (easier than putting it in a comment)
Given this
{emxconfig, {ets, [{keypos, 2}]}},
I can certainly do this:
{ok, {StorageType, Config}} = application:get_env(emxconfig),
but (and this may be important) my application is started at this time (it may actually only need to be loaded rather than started, judging from the application_controller code).
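To tie the pieces together, here is a minimal command line sketch (assuming the fred.app file above is placed in ./ebin or otherwise on the code path; /tmp/fred.log is just a placeholder value): a plain -fred param Value flag is parsed as an Erlang term and becomes visible through application:get_env/2 once the application is loaded.
# sketch: pass an Erlang term on the command line and read it back
erl -noshell -pa ebin -fred param '{file,"/tmp/fred.log"}' \
    -eval 'application:load(fred), io:format("~p~n", [application:get_env(fred, param)]), init:stop().'
# prints: {ok,{file,"/tmp/fred.log"}}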
Related
I have been searching for an answer to this question with no luck, but is there a way to pass parameters into Puppet manifests when running the 'apply' command, similar to the way you pass parameters when running a UNIX script on the command line?
The suggestions I see mention either keeping variables at the top of the manifest for later use, or storing them in a Hiera file. But neither really answers the question I am posing.
Any guidance on how to do this would be greatly appreciated.
Edit:
An example of what I have been doing is:
$doc_root = "/var/www/example"
exec { 'apt-get update':
command => '/usr/bin/apt-get update'
}
package { 'apache2':
ensure => "installed",
require => Exec['apt-get update']
}
file { $doc_root:
ensure => "directory",
owner => "www-data",
group => "www-data",
mode => 644
}
file { "$doc_root/index.html":
ensure => "present",
source => "puppet:///modules/main/index.html",
require => File[$doc_root]
}
As you can see, the variable is hardcoded at the top. While I am trying to use the variable in the same way, I need to be able to pass the value in when running the apply command.
Using lookup functions in conjunction with hiera.yaml files doesn't fulfil my requirements for the same reason.
The only workaround I can think of is to create a UNIX script that accepts parameters, saves those values to a YAML file, and then executes the .pp file.
But I'm hoping that puppet has a way to do this directly.
The common procedure for passing variables into a classless manifest for use with the puppet apply subcommand is to assign the value to a Facter fact from the CLI and then resolve its value inside the manifest. You would begin by removing the hardcoded variable doc_root from the head of the manifest. Then you would replace the variable with a fact reference like:
file { $facts['doc_root']:
...
file { "${facts['doc_root']}/index.html":
...
require => File["${facts['doc_root']}"] # interpolation needed here because the Puppet DSL cannot resolve a hash value as a first-class expression
You would then pass the Facter value from the puppet apply subcommand like:
FACTER_doc_root=/var/www/example puppet apply manifest.pp
Note this also causes FACTER_doc_root to be temporarily set as an environment variable as a side effect.
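As a quick sanity check, the same FACTER_ prefix is also visible to the facter CLI, and a --noop run lets you confirm the catalog compiles with the fact set (doc_root and manifest.pp are just the example names from above):
# the FACTER_ prefix turns an environment variable into a fact for this process
FACTER_doc_root=/var/www/example facter doc_root      # prints /var/www/example
# dry-run the manifest with the same fact set
FACTER_doc_root=/var/www/example puppet apply --noop manifest.pp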
In a device driver source in the Linux tree, I saw dev_dbg(...) and dev_err(...). Where do I find the logged messages?
One reference suggests adding #define DEBUG. The other reference involves dynamic debug and debugfs, and I got lost.
dev_dbg() expands to dynamic_dev_dbg(), dev_printk(), or a no-op, depending on the compilation flags.
#if defined(CONFIG_DYNAMIC_DEBUG)
#define dev_dbg(dev, format, ...) \
do { \
dynamic_dev_dbg(dev, format, ##__VA_ARGS__); \
} while (0)
#elif defined(DEBUG)
#define dev_dbg(dev, format, arg...) \
dev_printk(KERN_DEBUG, dev, format, ##arg)
#else
#define dev_dbg(dev, format, arg...) \
({ \
if (0) \
dev_printk(KERN_DEBUG, dev, format, ##arg); \
})
#endif
dynamic_dev_dbg() and dev_printk() call dev_printk_emit() which calls vprintk_emit().
This very same function is called in the normal case when you just do a printk(). Note that the rest of the functions, such as dev_err(), end up in the same function as well.
Thus, obviously, the buffer is all the same, i.e. the kernel's internal buffer.
The logged message is in the end printed to:
1. The current console, if the kernel loglevel value (which can be changed via the kernel command line or via procfs) is high enough for the message in question, here KERN_DEBUG.
2. The internal buffer, which can be read by running the dmesg command.
Note that the data in 2. is kept only as long as there is still room in the buffer. Since the buffer is limited and circular, newer data overwrites older data.
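For instance, to check and raise the console loglevel so that KERN_DEBUG (level 7) messages reach the console, and to read the internal buffer:
# current loglevels: console, default message, minimum console, boot-time default
cat /proc/sys/kernel/printk
# raise the console loglevel to 8 so KERN_DEBUG messages are printed to the console
sudo dmesg -n 8        # equivalently: echo 8 | sudo tee /proc/sys/kernel/printk
# read the internal (circular) buffer regardless of the console loglevel
dmesg | tail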
Additional information on how to enable Dynamic Debug.
First of all, be sure you have CONFIG_DYNAMIC_DEBUG=y in the kernel configuration.
Assume we would like to enable all debug prints in the built-in module named 8250. To achieve that we simply add the following to the kernel command line: 8250.dyndbg=+p.
If the same driver is compiled as a loadable module, we may either add options 8250 dyndbg to the modprobe configuration or pass it on the shell command line when loading the module manually, like modprobe 8250 dyndbg.
More details are described in the Dynamic Debug documentation.
The "How certain debug prints are automatically enabled in linux kernel?" raises the question why some debug prints are automatically enabled and how DEBUG affects that when CONFIG_DYNAMIC_DEBUG=y. The answer is lying in the dynamic_debug.h and since it's used during compilation the _DPRINTK_FLAGS_DEFAULT defines the certain message appearence.
#if defined DEBUG
#define _DPRINTK_FLAGS_DEFAULT _DPRINTK_FLAGS_PRINT
#else
#define _DPRINTK_FLAGS_DEFAULT 0
#endif
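So defining DEBUG at compile time flips that default. For an out-of-tree module, one way to do it is sketched below (CFLAGS_MODULE is a standard kbuild variable; adding ccflags-y to the module's Makefile is the other common approach):
# rebuild an out-of-tree module with DEBUG defined
make -C /lib/modules/"$(uname -r)"/build M="$PWD" CFLAGS_MODULE=-DDEBUG modules
# alternative: add the following line to the module's Makefile instead
#   ccflags-y += -DDEBUG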
You can find dev_err(...) output in the kernel messages. As the name implies, dev_err(...) messages are error messages, so they will definitely be printed if execution reaches that point. dev_dbg(...) messages are debug messages, which are used more generously in kernel driver code and are not printed by default. So everything you have read about dynamic debugging comes into play with dev_dbg(...).
There are several preconditions for dynamic debugging to work. Items 1 and 2 below are general preconditions for dynamic debugging; item 3 and onwards are specific to your particular driver/module/subsystem.
1. Dynamic debugging support has to be in your kernel config: CONFIG_DYNAMIC_DEBUG=y. You may check whether this is the case with zgrep DYNAMIC_DEBUG /proc/config.gz.
2. debugfs has to be mounted. You can check with sudo mount | grep debugfs, and if it is not there, you can mount it with sudo mount -t debugfs none /sys/kernel/debug.
3. Refer to the dynamic debug documentation and enable the particular file/function/line you are interested in, as in the sketch below.
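For example, a sketch of enabling and inspecting debug prints at run time through the debugfs control file (the standard dynamic debug interface; 8250 is just the module name used above, and 8250_port.c a plausible source file in that driver):
# enable all dev_dbg() prints in the 8250 module at run time
echo 'module 8250 +p' | sudo tee /sys/kernel/debug/dynamic_debug/control
# or enable only a single source file
echo 'file 8250_port.c +p' | sudo tee /sys/kernel/debug/dynamic_debug/control
# list call sites that are currently enabled
sudo grep =p /sys/kernel/debug/dynamic_debug/control | head
# the messages themselves end up in the kernel log
dmesg | tail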
I'm new to jboss-cli and working through the 'jboss-cli recipes'.
Question
How do I read one specific property using jboss-cli? E.g.
jboss.home.dir (e.g. "-Djboss.home.dir=/path/to/my/jboss")
Xmx ("-Xmx=4G")
Context
The "CLI Recipes" documentation has this helpful example to get all system properties. However its 'too much infomration'. I want to script reading one specific property.
https://docs.jboss.org/author/display/WFLY10/CLI+Recipes#CLIRecipes-
Overview of all system properties in JBoss AS7+ including OS system
properties and properties specified on command line using -D, -P or
--properties arguments.
Standalone
[standalone#IP_ADDRESS:9999 /] /core-service=platform-mbean/type=runtime:read-attribute(name=system-properties)
Thanks in advance
You could do a:
:resolve-expression(expression=${jboss.home.dir})
You can use the CLI like this:
$JBOSS_HOME/bin/jboss-cli.sh -c --command=/system-property=MY_PROPERTY:read-resource
You get output like this:
$JBOSS_HOME/bin/jboss-cli.sh -c --command=/system-property=MY_PROPERTY:read-resource
{
"outcome" => "success",
"result" => {"value" => "4.0"}
}
which you can extract by piping into something like this:
<cli command> | grep "{\"value\"" | sed "s/.*value\" => \"\([^\"]*\)\".*/\1/"
It's a bit ugly, and there are some nasty edge cases if the value were to be something like "value" => "value =" or something hideous.
In general this works OK.
Change the sed command to be a bit more specific to fix that.
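If a JSON parser such as jq is available, a less fragile sketch is to combine it with the CLI's --output-json option (the same option used in the PowerShell answer below); MY_PROPERTY is the placeholder from above:
# emit JSON and extract the value with jq instead of grep/sed
$JBOSS_HOME/bin/jboss-cli.sh -c --output-json \
    --command=/system-property=MY_PROPERTY:read-resource | jq -r '.result.value'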
This link pointed me to the answer: I can use a Groovy script to get the values. From what I can see, the jboss-cli command line does not offer this flexibility.
https://developer.jboss.org/wiki/AdvancedCLIScriptingWithGroovyRhinoJythonEtc
Solution
Here's a solution for jboss.home.dir.
(For memory you can get results from "/core-service=platform-mbean/type=memory/:read-attribute(name=heap-memory-usage)".)
bash
#!/bin/sh
# Note: must set jbbin to 'jboss home /bin'
groovy -cp $jbbin/client/jboss-cli-client.jar readJbossHome.groovy
Groovy
Note: this is 'quick and dirty'.
import org.jboss.as.cli.scriptsupport.*
cli = CLI.newInstance()
cli.connect()
// Define properties
myParentProp="system-properties"
myProp="jboss.home.dir"
// Retrieve and pluck values
result = cli.cmd("/core-service=platform-mbean/type=runtime:read-resource(recursive=true,include-runtime=false)")
myResult = result.getResponse().get("result")
myParentVal = myResult.get(myParentProp)
myVal = myParentVal.get(myProp)
// Print out results
println "Property detail ${myProp} is ${myVal}"
cli.disconnect()
You can also do it via a WildFly management REST call.
http://localhost:9990/management
POST
Headers = Content-Type:application/json
Body =
{
"operation":"resolve-expression",
"expression":"${jboss.home.dir}"
}
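For example, a sketch of the same call with curl (the HTTP management interface normally requires a Digest login; admin/secret is a placeholder management user):
# resolve jboss.home.dir over the HTTP management API
curl --digest -u admin:secret \
    -H 'Content-Type: application/json' \
    -d '{"operation":"resolve-expression","expression":"${jboss.home.dir}"}' \
    http://localhost:9990/management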
With newer Teiid docs I have found some useful information; I thought this might be helpful to share with people coming across a similar use case.
https://access.redhat.com/documentation/en-us/jboss_enterprise_application_platform/6.3/html/administration_and_configuration_guide/configure_system_properties_using_the_management_cli
It covers adding, removing and reading system properties with jboss-cli.
jboss-cli
If you have a CLI command like ehsavoie suggested, :resolve-expression(expression=${jboss.home.dir}), and want to use the content of the "result" property within jboss-cli, you can save it in a variable. You can use backticks (`) to evaluate expressions.
simple expression
[standalone#localhost:9990 /] :resolve-expression(expression=${jboss.home.dir})
{
"outcome" => "success",
"result" => "/home/user/wildfly"
}
use in variable
[standalone#localhost:9990 /] set wildflydirectory=`:resolve-expression(expression=${jboss.home.dir})`
[standalone#localhost:9990 /] echo $wildflydirectory
/home/user/wildfly
PowerShell
If you happen to use PowerShell, you can use a one-liner to extract even deeply nested results with the help of the CLI's --output-json option and PowerShell's ConvertFrom-Json cmdlet. This way the parsing problems from James Roberts's approach with grep and sed are gone.
$value=(Invoke-Expression "./jboss-cli.ps1 -c --command=':resolve-expression(expression=`${jboss.home.dir})' --output-json" | ConvertFrom-Json).result
It is a bit tricky to quote the command and escape the correct PowerShell special characters.
I'm learning Puppet, and the biggest frustration I have with the entire paradigm is the try/run/fix development process I'm using to build functional Puppet code. My background is in Java, and I'm naturally used to debugging my code to find errors instead of just running the program to see where it bombs, which makes development much faster. But I can't seem to find a way to do this using Puppet and Eclipse. I know writing a debugger for Puppet would require some creativity given its nature, but I think this is something the community could really benefit from.
I've written debuggers and know the Eclipse SDK, but unfortunately it does not map cleanly to the Puppet architecture, which is a bit awkward in the sense that its runtime stack and execution flow do not happen in natural order, and the runtime requires a target machine to apply changes on.
I'm curious whether the community has done any development work on trying to create some kind of debugger where code can be stepped through. To write this, I think it would make sense to extend Eclipse with a new Puppet debug configuration type where you specify a target sandbox host to test your code against as well as a Puppet project in your workspace you want to debug (leveraging existing Geppetto tooling). Then, when you start a new Puppet debugging session, Eclipse could connect to the remote host, execute puppet apply with some additional debug arguments, and somehow provide feedback from the runtime about what line of code is currently being executed.
This still might be awkward, but it would allow Puppet developers to quickly see things like "oh duh, I can't create this directory because the parent path does not exist", "wait, why is this if statement not going where I planned", "oh, I see that Puppet is not very clear on single versus double quotes", or "now I see why this fails, because this class was not executed first", etc.
Instead, all we get is a big ugly output on the agent console that, yes, can give us insight into errors, but does not cleanly map exceptions to our code, which in my view shows an underlying pain and weakness of Puppet. Can you at least give me a stack trace and line number so I know where to look? Nope, sorry.
Don't get me wrong, I love how Puppet can make me look very productive throughout the work week when all I'm doing is running puppet apply on new machines (which my manager has not yet figured out), but I think for Puppet to really be useful this lack of debugging support is something that needs to be addressed.
Does anyone else feel this pain? - Duncan Krebs
It would be impossible to "step through" Puppet code, unless you want to debug against the Ruby codebase itself. It's not just that the order of "execution" is unclear; it's that the manifests themselves are never executed at a single point in time. They are actually evaluated in multiple phases throughout execution.
There are ways to simplify finding problems though. The biggest one is writing unit tests using rspec-puppet. It lets you essentially test the compilation phase of puppet, helping you catch errors like circular dependencies, incorrect conditional logic, etc.
There is a new tool called puppet-debugger which allows you to set breakpoints in your Puppet code in order to step through it. So this is no longer "impossible"; it has been available for about 8 months.
You will first need to install the puppet-debugger gem:
https://github.com/nwops/puppet-debugger
Then install the debug module, include it in your fixtures or just ensure it is in your module path.
https://forge.puppet.com/nwops/debug
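A sketch of those two installation steps from the command line (names taken from the links above; exact invocations may differ in your environment):
# install the debugger gem and the debug helper module
gem install puppet-debugger
puppet module install nwops-debug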
Then just set a breakpoint in your code using the debug::break() function.
Ruby Version: 2.0.0
Puppet Version: 4.9.4
Puppet Debugger Version: 0.6.0
Created by: NWOps
Type "exit", "functions", "vars", "krt", "whereami", "facts", "resources", "classes",
"play", "classification", "types", "datatypes", "reset", or "help" for more information.
>> $vars = ['one', 'two', 'three']
=> [
[0] "one",
[1] "two",
[2] "three"
]
>> $vars.each | String $var | {
debug::break()
notify{$var:}
}
From file: puppet_debugger_input20170417-97123-qjwbaj.pp
1: $vars.each | String $var | {
=> 2: debug::break()
3: notify{$var:}
4: }
1:>> $var
=> "one"
2:>> exit
From file: puppet_debugger_input20170417-97123-qjwbaj.pp
1: $vars.each | String $var | {
=> 2: debug::break()
3: notify{$var:}
4: }
1:>> $var
=> "two"
2:>> exit
From file: puppet_debugger_input20170417-97123-qjwbaj.pp
1: $vars.each | String $var | {
=> 2: debug::break()
3: notify{$var:}
4: }
1:>> $var
=> "three"
I am new to the hosting world (cloudControl), and I have some problems with application credentials, like database administration (MongoHQ) or Google authentication.
So, should I put those variables with some kind of syntax (something like $variable) in the code, and then build a command line with key-value pairs as variable-value?
If you are using Tornado, it is even simpler. Use tornado.options and pass environment variables while running the code.
Use the following in your Tornado code:
define("mysql_host", default="127.0.0.1:3306", help="Main user DB")
define("google_oauth_key", help="Client key for Google Oauth")
Then you can access these values in the rest of your code as:
options.mysql_host
options.google_oauth_key
When you are running your Tornado script, pass the environment variables:
python main.py --mysql_host=$MYSQL_HOST --google_oauth_key=$OAUTH_KEY
assuming both $MYSQL_HOST and $OAUTH_KEY are environment variables. Let me know if you need a full working example or any further help.
Example:
First set an environment variable:
$ export mongo_uri_env=mongodb://alien:12345@kahana.mongohq.com:10067/essog
and make changes in your Tornado code:
define("mongo_uri", default="127.0.0.1:28017", help="MongoDB URI")
...
...
uri = options.mongo_uri
and you would run your code as
python main.py --mongo_uri=$mongo_uri_env
If you don't want to pass it while running, then you have to read that environment variable directly in your script. For that
import os
...
...
uri = os.environ['mongo_uri_env']