Modulefile - Module loading when module unload is called - environment-modules

I am trying to have modulefiles dynamically load/unload other modules when one is loaded. I want to do this in a way that lets me unload conflicting modules (instead of just using the conflict directive, which simply throws an error).
However, when I call module load on one modulefile that tries to unload another, the second module ends up loaded instead of unloaded. Example below:
test1 Modulefile:
#%Module1.0
##
## Testing
##
if { [module-info mode load] && [is-loaded test2] } {
    puts stderr "Unloading test2 (From test1)"
    module unload test2
}
test2 Modulefile:
#%Module1.0
##
## Test2
##
if { [module-info mode load] } {
    puts stderr "Loading test2\n"
}
if { [module-info mode remove] } {
    puts stderr "Unloading test2\n"
}
Output when I try to load test1 while test2 is loaded:
root@host:/usr/local/modules# module load test1
Unloading test2 (From test1)
Loading test2
module --version output:
VERSION=3.2.10
DATE=2012-12-21
AUTOLOADPATH=undef
BASEPREFIX="/usr"
BEGINENV=99
CACHE_AVAIL=undef
DEF_COLLATE_BY_NUMBER=undef
DOT_EXT=""
EVAL_ALIAS=1
HAS_BOURNE_FUNCS=1
HAS_BOURNE_ALIAS=1
HAS_TCLXLIBS=undef
HAS_X11LIBS=1
LMSPLIT_SIZE=1000
MODULEPATH="/usr/share/modules/modulefiles"
MODULES_INIT_DIR="/usr/Modules/3.2.10/init"
PREFIX="/usr/Modules/3.2.10"
TCL_VERSION="8.6"
TCL_PATCH_LEVEL="8.6.1"
TMP_DIR="/tmp"
USE_FREE=undef
VERSION_MAGIC=1
VERSIONPATH="/usr/share/modules/versions"
WANTS_VERSIONING=1
WITH_DEBUG_INFO=undef
Does anyone know why this happens or how I can fix it? Is there another command I can put in the modulefile that unloads another module? Or is there a better alternative to Modules?
Thanks in advance for reading and helping!

The issue you describe comes from a bug affecting Modules versions <= 3.2.10. Newer Modules versions (> 3.2.10), or an up-to-date "environment-modules" package on RedHat-like Linux distributions, have this issue fixed.
For instance, on a recent Fedora system, the expected behavior is obtained:
$ module -V
Modules Release 4.1.3 (2018-06-18)
$ module load test2
Loading test2
$ module list
Currently Loaded Modulefiles:
1) test2
$ module load test1
Unloading test2 (From test1)
Unloading test2
$ module list
Currently Loaded Modulefiles:
1) test1

Related

Add bash script as an entrypoint to Python package with Poetry

Is it possible to add a bash script as an entrypoint (console script) to a Python package via Poetry? It looks like it only accepts Python files (see code here).
I want entry.sh to be the entry script:
#!/usr/bin/env bash
set -e
echo "Running entrypoint"
via setup.py
entry_points={
    "console_scripts": [
        "entry=entry.sh",
    ],
},
On the other hand, setuptools seems to support shell scripts (see code here).
Is it possible to include a shell script in a package and have it added to the entrypoints at install time when working with Poetry?
UPD: setuptools does not support that either (it generates the code below):
def importlib_load_entry_point(spec, group, name):
    dist_name, _, _ = spec.partition('==')
    matches = (
        entry_point
        for entry_point in distribution(dist_name).entry_points
        if entry_point.group == group and entry_point.name == name
    )
    return next(matches).load()

globals().setdefault('load_entry_point', importlib_load_entry_point)
Is it a design decision? It seems to me that packaging should provide such a feature to deliver complex applications as a single bundle.
So I ended up using this workaround: keep my script in place, add it to the bundle via package_data, and call it from within Python code that I made the entrypoint.
import subprocess

def _run(bash_script):
    return subprocess.call(bash_script, shell=True)

def entrypoint():
    return _run("./scripts/my_entrypoint.sh")

def another_entrypoint_if_needed():
    return _run("./scripts/some_other_script.sh")
and pyproject.toml
[tool.poetry.scripts]
entrypoint = 'bash_runner:entrypoint'
another = 'bash_runner:another_entrypoint_if_needed'
The same works for console_scripts in a setup.py file.
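One caveat with the workaround above: the "./scripts/..." paths resolve against the current working directory, not the installed package, so the console script only works when invoked from the project root. A minimal sketch of a more robust bash_runner module, assuming the scripts are shipped inside the package under a scripts/ subdirectory (that layout is an assumption, not something from the question):

import subprocess
from pathlib import Path

# Assumption: the .sh files are packaged under <package>/scripts/
# (e.g. via Poetry's include or setuptools' package_data).
SCRIPTS_DIR = Path(__file__).resolve().parent / "scripts"

def _run(script_name):
    # Build an absolute path so the entrypoint works from any cwd.
    return subprocess.call([str(SCRIPTS_DIR / script_name)])

def entrypoint():
    return _run("my_entrypoint.sh")

The scripts need a shebang and the executable bit for this direct invocation; otherwise invoke them as ["bash", str(SCRIPTS_DIR / script_name)].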

Specify injection order of user-defined monitor files in apama_project

Can an apama_project specify the injection order of user-defined types?
It appears the engine_deploy does not automatically resolve the user-defined dependency graph.
Using the apama_project tool, I have set up a project with two *.mon files; 1.mon depends on an event definition in 2.mon.
TestProject
|-.dependencies
...
|-events
|-monitors
| |-1.mon // depends on 2.mon
| |-2.mon
|-.project
The intent was to see if the engine_deploy tool could identify the dependency tree of user-defined types. Unfortunately, it does not appear to:
engine_deploy -d ../Deployment .
INFO: copying the project file from /home/twanas/base_project to output directory ../Deployment
WARN: Overwriting output deployment directory ../Deployment
ERROR: Failed to generate initialization list as the project has below error(s):
/home/twanas/base_project/monitors/1.mon: 1: the name rt in the com namespace does not exist
/home/twanas/base_project/monitors/1.mon: 5: "A" does not exist
Full source:
// 1.mon
using com.rt.sub_a;

monitor B {
    action onload() {
        on all A() as a {
            log a.toString();
        }
    }
}

// 2.mon
package com.rt.sub_a;

event A {
    string mystring;
}
Assuming the user is developing on Linux and so does not use the 'SoftwareAG Designer', how can this be achieved?
On a separate note, apama_project and engine_deploy are great additions to the tool set.
The issue was actually caused by invalid EPL: using com.rt.sub_a; is not valid, because a using statement must name an event type rather than a package.
The tools did indeed resolve the user-defined dependencies, which is excellent.
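For reference, a corrected 1.mon imports the event type itself rather than the package (a minimal sketch derived from the sources above):

// 1.mon (corrected): the using statement names the type A, not the package
using com.rt.sub_a.A;

monitor B {
    action onload() {
        on all A() as a {
            log a.toString();
        }
    }
}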

Postgres PL/JAVA: java.lang.ClassNotFoundException error after loading JAR file in database

I am getting a java.lang.ClassNotFoundException error inside Postgres when running a function that calls a JAR file I have loaded. I have installed and configured PL/Java (including the delivered examples) in my database and can run the examples successfully. I am now attempting to load/install my first JAR, but I am doing something wrong.
My host controls the OS version: CentOS 6.8. Postgres is version 8.4.
I am attempting to install my own very simple java class, which is a derivative of the delivered example Parameters.addOne class. All my code is in /tmp. Here are the steps I've followed:
Doug.java:
package com.msmetric;

import java.math.BigDecimal;
import java.sql.Date;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Time;
import java.sql.Timestamp;
import java.text.DateFormat;
import java.text.SimpleDateFormat;
import java.util.TimeZone;
import java.util.logging.Logger;

public class Doug {
    public static int addOne(int value) {
        return value + 1;
    }
}
Compile Doug.java using 'javac Doug.java' succeeds.
Create the JAR file with the Doug.class file in it using 'jar -cvf Doug.jar Doug.class'. This works fine.
Now I load the JAR file into Postgres (public schema), change the classpath, create the function that calls the JAR, then attempt to run at psql prompt.
Run sqlj.install_jar from psql:
select sqlj.install_jar('file:/tmp/Doug.jar','Doug',false);
Set the classpath inside Postgres (from psql prompt postgres=#):
select sqlj.set_classpath('public','Doug');
Create the function that calls the JAR. This create function code is taken directly from the examples.ddr file that came with PL/JAVA. I simply changed org.postgres to com.msmetric.
create or replace function addone(int) returns int as 'com.msmetric.Doug.addOne(java.lang.Integer)' language java;
Now with the JAR loaded and function created, I attempt to run it. This function should simply add 1 to the number provided.
select addone(3);
Results:
ERROR: java.lang.ClassNotFoundException: com.msmetric.Doug
Thoughts?
I'm very sorry I didn't see your question sooner. Underneath all the exotic details (PostgreSQL, PL/Java, schemas, classpaths...), there's just a bit of basic Java going on here: if a jar file contains a class Doug.class in package com.msmetric, its path within the jar has to reflect that: it has to be com/msmetric/Doug.class. Otherwise, it won't be found.
You can set up that whole structure step by step:
javac Doug.java
mkdir com
mkdir com/msmetric
mv Doug.class com/msmetric/
jar -cvf Doug.jar com/msmetric/Doug.class
Or, you can let javac do more of the work for you:
mkdir classes
javac -d classes Doug.java
jar -cvf Doug.jar -C classes .
When you give javac a -d directory option, instead of just writing class files next to their .java sources, it will put them all in their proper places under the directory you named, and then you can just tell jar to change into that directory and slurp them all up (don't overlook the . at the end of that jar command).
Once you fix that, if you retry your original steps, you'll see that you now get a different error:
ERROR: Unable to find static method com.msmetric.Doug.addOne with signature (Ljava/lang/Integer;)I
That happens because you declared the function in Doug.java with int addOne(int value) (that is, taking a primitive int argument), but you declared it in SQL with returns int as 'com.msmetric.Doug.addOne(java.lang.Integer)' taking an Integer object.
Once you correct that:
create or replace function addone(int) returns int as 'com.msmetric.Doug.addOne(int)' language java;
you'll be able to see:
# select addone(3);
 addone
--------
      4
(1 row)
If you happen to see this belated answer, may I ask what version of PL/Java you are using? That's one detail you didn't mention. If it is older than 1.5.0, there are newer features that can help you out. For one, you can just annotate that function:
// at the top of Doug.java: import org.postgresql.pljava.annotation.Function;
@Function
public static int addOne(int value) {
    return value + 1;
}
and have javac spit out not only the Doug.class file but also a pljava.ddr file with your SQL function declaration already written correctly (no mixing up argument types!). There is a way to include that .ddr file into the jar you create so that you can just call sqlj.install_jar with the last parameter true so it runs the commands in the .ddr and your functions are ready to use. There's a Hello, world example in the docs that shows more of how it's done.
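For example, the install step from the question would then become (same sqlj.install_jar call, with the final deploy argument set to true so the commands in the .ddr are executed; sqlj.replace_jar takes the same arguments if the jar is already installed):

select sqlj.install_jar('file:/tmp/Doug.jar','Doug',true);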
Cheers,
-Chap

Using __END__ and DATA in Chef recipes (to run legacy shell scripts)

I'm migrating some shell scripts to Chef recipes. Some of these scripts are fairly involved, so just to make life easier in the short term and to avoid introducing bugs in rewriting everything in Chef/Ruby, I'd like to just run some of them as-is. They're all well-written and idempotent, so honestly there's no rush, but of course, the eventual goal is to rewrite them.
One cool feature of Ruby is its __END__ keyword/method: Lines below __END__ will not be executed. Those lines will be available via the special filehandle DATA.
It would be cool to ship the shell scripts as-is inside the recipe after __END__, maybe something like the following, which I placed in chef-repo/cookbooks/ruby-data-test/recipes/default.rb:
file = Tempfile.new(File.basename(__FILE__))
file << DATA.read
bash file.path
file.unlink
__END__
echo "Hello, world"
However when I run this (with chef-solo -c solo.rb --override-runlist 'recipe[ruby-data-test]'), I get the following error:
[2014-10-03T17:14:56+00:00] ERROR: uninitialized constant Chef::Recipe::DATA
I'm pretty new to Chef, but I'm guessing the above is something about Chef wrapping my recipe in a class, and there's something simple preventing me from accessing DATA. Since it's "global" (?) I tried putting a dollar sign ($DATA) in front of it but that failed with:
NoMethodError
-------------
undefined method `read' for nil:NilClass
So the question is: How do I access DATA in my Chef recipe? Thanks!
It appears you don't have access to DATA, but you can fake it by reading in the current file yourself and splitting on __END__, like Sinatra does.
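For illustration, a minimal inline version of that idea in a recipe, without the LWRP below (just a sketch: it assumes the recipe file itself ends with an __END__ marker, as in the question):

# read this very recipe file and keep everything after __END__
script_body = ::File.read(__FILE__).split(/^__END__$/, 2).last.lstrip

bash 'legacy-script' do
  code script_body
end

__END__
echo "Hello, world"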
I ended up making a Chef LWRP for reuse. I don't know if I'll actually end up using this, but I wanted to figure it out. Like I said, I'm a Chef/Ruby noob, so any better ideas or suggestions welcome!
ruby_data_test/recipes/default.rb:
ruby_data_test_execute_ruby_data __FILE__
__END__
#!/bin/bash
set -o errexit
date
echo "Hello, world"
ruby_data_test/resources/execute_ruby_data.rb:
actions :execute_ruby_data
default_action :execute_ruby_data
attribute :source, :name_attribute => true, :required => true
attribute :args, :kind_of => Array
attribute :ignore_errors, :kind_of => [TrueClass, FalseClass], :default => false
ruby_data_test/providers/execute_ruby_data.rb:
require 'open3'
require 'tempfile'

def whyrun_supported?
  true
end

use_inline_resources

action :execute_ruby_data do
  converge_by("Executing #{@new_resource}") do
    Chef::Log.info("Executing #{@new_resource}")
    file_who_called_me = @new_resource.source
    io = ::IO.respond_to?(:binread) ? ::IO.binread(file_who_called_me) : ::IO.read(file_who_called_me)
    app, data = io.gsub("\r\n", "\n").split(/^__END__$/, 2)
    data.lstrip!
    file = Tempfile.new('execute_ruby_data')
    file << data
    file.chmod(0755)
    file.close
    exit_status = ::Open3.popen2e(file.path, *@new_resource.args) do |stdin, stdout_and_stderr, wait_thr|
      stdout_and_stderr.each { |line| puts line }
      wait_thr.value # exit status
    end
    if exit_status != 0 && !@new_resource.ignore_errors
      raise RuntimeError # raise, not throw: Ruby's throw/catch is separate control flow
    end
  end
end
Here's the output:
$ chef-solo -c solo.rb --override-runlist 'recipe[ruby_data_test]'
Starting Chef Client, version 11.12.4
[2014-10-03T21:50:29+00:00] WARN: Run List override has been provided.
[2014-10-03T21:50:29+00:00] WARN: Original Run List: []
[2014-10-03T21:50:29+00:00] WARN: Overridden Run List: [recipe[ruby_data_test]]
Compiling Cookbooks...
Converging 1 resources
Recipe: ruby_data_test::default
* ruby_data_test_execute_ruby_data[/root/chef/chef-repo/cookbooks/ruby_data_test/recipes/default.rb] action execute_ruby_dataFri Oct 3 21:50:29 UTC 2014
Hello, world
- Executing ruby_data_test_execute_ruby_data[/root/chef/chef-repo/cookbooks/ruby_data_test/recipes/default.rb]
Running handlers:
Running handlers complete
Chef Client finished, 1/1 resources updated in 1.387608 seconds

Can't get require to work with a file in a Puppet module

I am trying to get the following code to run:
class common
{
  ...
  # common packages
  package
  {
    ["lsb-release", "figlet"]: ensure => installed,
  }
  # Print some information if someone logs in:
  file { "/etc/motd":
    #require => [ Package["figlet"], File["/usr/bin/figlet"] ],
    require => Package["figlet"],
    content => generate('/usr/bin/env', '/usr/bin/figlet','-w', '186', '-p', '-f', 'banner', "$hostname"),
  }
  ....
}
Shouldn't this work?
I get the following error:
err: Could not retrieve catalog from remote server: Error 400 on SERVER: Failed to execute generator /usr/bin/env: Execution of '/usr/bin/env /usr/bin/figlet -w 186 -p -f banner hostname' returned 127: /usr/bin/env: /usr/bin/figlet: No such file or directory
at /etc/puppet/modules/common/manifests/init.pp:37 on node puppetmaster.local
warning: Not using cache on failed catalog
err: Could not retrieve catalog; skipping run
At first I had no require (line 12) and no package (lines 5-8) in the code. To fix the errors, I thought I could simply add line 12 (require the figlet package), but it does not work; so I added the figlet package as well, yet the error does not go away.
How do I add this dependency? And shouldn't Puppet run through the rest of the code instead of skipping the run entirely?
generate() runs on the server, not the client. (It's a parser function so it has to run on the server)
The class as you've written it will ensure that clients get figlet installed on them, but then tries to run figlet on the puppetmaster. Just install figlet on your puppetmasters and you won't need the package resources.
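A minimal sketch of that fix, assuming the master also manages itself with an agent (the node name puppetmaster.local is taken from the error above; otherwise a one-off apt-get/yum install figlet on the master achieves the same):

node 'puppetmaster.local' {
  # figlet must exist where generate() runs, i.e. during
  # catalog compilation on the master
  package { 'figlet':
    ensure => installed,
  }
}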
Also use smslant font, not banner :)