How to deploy a module/provider/SPI via scripting? - WildFly

Is there a way to deploy modules to WildFly via scripting (as in, without manually modifying XML files)? I know about the jboss-cli.sh module add command, but is there a way to either directly modify my standalone.xml/domain.xml or do some equivalent thing that will tell WildFly to load the module?
Said another way...
I've discovered two ways to deploy modules:
1) Hot deploy a jar directly by copying it into $KEYCLOAK_HOME/standalone/deployments
(Per the README in that directory, this method is not recommended for production deployments but it works without any manual work afterward.)
2) Run jboss-cli.sh --command="module add --name=com.example.MySpi", then manually edit standalone.xml (or domain.xml) to include your module in the "providers" list, like so:
<subsystem xmlns="urn:jboss:domain:keycloak-server:1.1">
<web-context>auth</web-context>
<providers>
...
<provider>module:com.example.MySpi</provider>
</providers>
...
</subsystem>
... and finally restart the server.
I'd like to use the recommended way, but without manually editing an XML file. Is there a recommended path for this?

You can do something like
jboss-cli.sh --command="/subsystem=keycloak-server:list-add(name=providers, value=module:com.example.MySpi)"
Basically you can script everything that is in standalone.xml with jboss-cli. To see how your configuration looks internally, you can run /subsystem=keycloak-server:read-resource(recursive=true) within jboss-cli.

Sorry, cannot add comments yet, so I'm adding this here.
I had to add the --connect option to the command above; otherwise it complained about having no connection to the controller.
The whole command then would be:
jboss-cli.sh --connect --command="/subsystem=keycloak-server:list-add(name=providers, value=module:com.example.MySpi)"
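If you want to avoid the interactive steps entirely, the whole sequence can be scripted. Below is a minimal sketch of a shell script that creates the module offline and then registers it on the running server; the jar name my-spi.jar and the dependency list are assumptions (not from the question), so adjust them to your SPI:
#!/bin/sh
# Sketch: create the module (offline CLI command), then register it as a provider and reload.
CLI=$KEYCLOAK_HOME/bin/jboss-cli.sh
$CLI --command="module add --name=com.example.MySpi --resources=./my-spi.jar --dependencies=org.keycloak.keycloak-core,org.keycloak.keycloak-server-spi"
$CLI --connect --command="/subsystem=keycloak-server:list-add(name=providers, value=module:com.example.MySpi)"
$CLI --connect --command=":reload"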

Related

WildFly: jboss-cli's add module creates a wrong folder

I am taking my very first steps with the WildFly application server. I want to install a database driver.
I had a look at https://www.adam-bien.com/roller/abien/entry/installing_oracle_jdbc_driver_on for how to do it manually. Now I want to do it with jboss-cli.sh. I read about these commands, e.g., here and here.
So I am typing...
wildfly-26.0.0.Final/bin$ ./jboss-cli.sh -c
[standalone#localhost:9990 /] module add --name=com.oracle --resources=/home/user/Downloads/ojdbc8.jar --dependencies=javax.api,javax.transaction.api
The command executes without error.
I would expect it to
create the module-subfolders (step 2 in the linked tutorial by Adam Bien)
copy the JAR file to the newly created folder (step 3)
create the module.xml file (step 4)
maybe even add the necessary <driver /> tag to standalone.xml (I do not know whether that should be part of the module add command) (step 5)
Basically it does a lot of that, but differently than I expect.
It creates the subfolder in a wrong(?) location. It is not created in [WILDFLY_HOME]/modules/system/layers/base/com/oracle/main as described by Adam Bien, but in [WILDFLY_HOME]/modules/com/oracle/main. The JAR file is correctly copied and the module.xml file is created, but the folder seems to be wrong. And standalone.xml is not altered at all.
If I start the web management console, I do not see the driver next to the default H2 one.
So my question is: what am I doing wrong with the command, so that the folder is created in the correct location? Or does this work as designed, the location is not that relevant, and I am making other mistakes, which is why it shows up neither in the management console nor in standalone.xml?
By the way, I also tried changing the command to module add --name=system.layers.base.com.oracle .... Then the folder was correct, but the module name in module.xml was also system.layers.base.com.oracle.
I tested with WildFly 26.0.0 and WildFly-preview 26.0.0 under Ubuntu.
It should not be created in modules/system/layers/base. That location is for components provided by the container. Having the module directly under the root $JBOSS_HOME/modules directory is correct.
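As for the missing <driver /> registration (step 5): module add only creates the module; it does not register a JDBC driver, so the driver will not appear in the management console until you add it to the datasources subsystem yourself. Here is a sketch of the follow-up commands, run in the same jboss-cli session, assuming the module name com.oracle from the question; the driver class is the standard Oracle one, and the connection URL and credentials are placeholders:
/subsystem=datasources/jdbc-driver=oracle:add(driver-name=oracle, driver-module-name=com.oracle, driver-class-name=oracle.jdbc.OracleDriver)
data-source add --name=OracleDS --jndi-name=java:jboss/datasources/OracleDS --driver-name=oracle --connection-url=jdbc:oracle:thin:@localhost:1521:XE --user-name=scott --password=tiger
reload
After that, the driver and datasource should show up next to the default H2 one.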

Websphere Application Server importing settings

I want to silently import server configuration (such as Application servers -> Process Definition -> Java Virtual Machine -> Generic JVM arguments, etc.).
I've tried the wsadmin tool, but it requires WAS to be running, and that is bad for me because I need to write a script that copies these settings without any interaction.
wsadmin -lang jython -c "AdminTask.importWasprofile('[-archive d:\profil2.car]')"
Another way was the "Import server configuration from server..." option in the Eclipse context menu (Servers tab), but it still needs interaction from the user.
Is there any way to copy those settings? Should I copy some files or something?
I'm installing Rational Application Developer 7.0.0.7. I have also generated a .car file with the exported settings.
Ok, I've managed to import all those settings silently.
First, you have to export the profile, e.g. using a wsadmin script. The command
wsadmin -lang jython -c "AdminTask.exportWasprofile(['-archive', 'd:\sampleProfileName.car'])"
will export the default profile to a .car file (which is, in fact, a .zip file with a different extension). It is worth noting that my version of WAS would not export SIB settings.
Importing those settings is as easy as exporting; you just have to run the command
wsadmin -lang jython -conntype none -c "AdminTask.importWasprofile('[-archive d:\sampleProfileName.car]')"
Note the use of the -conntype option, as @bkail mentioned.
Sadly, WAS 6.x and earlier does not support exporting/importing SIB settings (as mentioned HERE). In order to copy them, you have to manually add a buses directory to the .car file (as mentioned HERE). The problem is that adding the buses via the admin console did not create this directory for me.
I had to use another wsadmin script that creates the SIB - I found it HERE. It simply uses the AdminTask object to create the bus manually, and thanks to that, it created the buses directory.
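For reference, here is a minimal sketch of such a script (not the exact one linked above; the bus name MyBus is a placeholder):
# createBus.py - run with: wsadmin -lang jython -conntype none -f createBus.py
AdminTask.createSIBus('[-bus MyBus -busSecurity false]')
AdminConfig.save()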
Hope this helps somebody who has the same problem as I had - it will save them MANY hours.
Greetings.

How do I specify a config file with play 2.4 and activator

I am building a Scala Play 2.4 application which uses the typesafe activator.
I would like to run my tests 2 times with a different configuration file for each run.
How can I specify alternative config files, or override the config settings?
I currently run tests with the command "./activator test"
You can create different configuration files for different environments/purposes. For example, I have three configuration files for local testing, alpha deployment, and production deployment as in this project https://github.com/luongbalinh/play-mongo
You can specify the configuration for running as follows:
activator run -Dconfig.resource=application.conf
where application.conf is the configuration you want to use.
You can create different configuration files for different environments. To specify which configuration to use with activator run, use the following command:
activator "run -Dconfig.resource=application.conf"
where application.conf is the desired configuration. Without the quotes it did not work for me. This uses the same configuration parameters as when going into production mode, as described here:
https://www.playframework.com/documentation/2.5.x/ProductionConfiguration#Specifying-an-alternate-configuration-file
It is also important to know that config.resource looks for the configuration within the conf/ folder, so there is no need to include that folder in the path. For full file-system paths that are not on the classpath, use config.file. Further reading is in the link above.
The quotes are needed because you do not want to pass the -D to activator itself, but to the run command. With the quotes, the activator JVM gets no -D argument; instead it interprets "run -Dconfig.file=application.conf" and sets the config.file property accordingly, within the activator JVM.
This was already discussed here: Activator : Play Framework 2.3.x : run vs. start
Since all the above are partially incorrect, here is my hard-won knowledge from the last weekend.
Use include "application.conf" not include "application" (which Akka does)
Config files must end in .conf or Play will silently discard them
You probably want -Dconfig.file=<file>.conf so you're not classpath dependent
Make sure you provide the full file path (e.g. /opt/configs/prod.conf)
Example
Here is an example of this we run:
#prod.conf
include "application"
akka.remote.hostname = "prod.blah.com"
# Example of passing in S3 keys
s3.awsAccessKeyId="YOUR_KEY"
s3.awsSecretAccessKey="YOUR_SECRET_KEY"
And just pass it in like so:
activator -Dconfig.file=/var/lib/jenkins/jenkins.conf test
or if you fancy SBT:
sbt -Dconfig.file=/var/lib/jenkins/jenkins.conf test
Dev Environment
Also note that it's easy to make a developer.conf file as well, to keep all your passwords/local ports, and then add it to .gitignore so devs don't accidentally check them in.
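For example, here is a sketch of such a developer.conf, mirroring the prod.conf above (all keys and values are placeholders):
# developer.conf
include "application"
akka.remote.hostname = "localhost"
s3.awsAccessKeyId="DEV_KEY"
s3.awsSecretAccessKey="DEV_SECRET_KEY"
Run it with activator -Dconfig.file=/path/to/developer.conf test, and add developer.conf to your .gitignore.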
The below command works with Play 2.5
$ activator -Dconfig.resource=jenkins.conf run
https://www.playframework.com/documentation/2.5.x/ProductionConfiguration

How to configure Capistrano to deploy to same directory?

I understand that Capistrano (v2.15.5) deploys each release to a different directory and symlinks it in deploy:create_symlink. However, we have a proprietary module on our web server that breaks on every deploy, as it's licensed to a specific directory. I understand the advantages of the symlink, being able to roll back, etc., but we need to deploy to the same directory. I can't find any documentation that supports this; is it possible without editing the source?
Provided I understood you correctly, this should help:
set :deploy_to, "<proprietary path>"
This will put the releases dir and the current symlink into <proprietary path>.
For more control over all relevant directories, have a look at deploy.rb from the 2.x branch here:
https://github.com/capistrano/capistrano/blob/legacy-v2/lib/capistrano/recipes/deploy.rb
In particular, see lines 50-66. You can override any of those _cset defaults with set, just like in the example above.
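For example, a sketch of the relevant deploy.rb settings, assuming the licensed path is /var/www/licensed_app (a placeholder):
# deploy.rb
set :deploy_to,   "/var/www/licensed_app"
# the directory-name defaults from the 2.x recipe can be overridden the same way if needed:
set :version_dir, "releases"
set :current_dir, "current"
set :shared_dir,  "shared"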

Logging for application in different environment Log4j

I am developing a logging framework using Log4j. I am not able to figure out how to maintain separate log files for different environments, i.e., development, testing, staging, and production.
First, you'll need a separate copy of your log4j.xml for each environment.
Let's call them log4j-dev.xml, log4j-test.xml, log4j-stage.xml, and log4j-prod.xml, each with its own settings such as the log file name and log levels.
You then pass the corresponding file at server startup as a system property, like below:
-Dlog4j.configuration=log4j-dev.xml
This URL shows an example of how to pass this for Tomcat. The concept is the same for whichever server you are deploying on.
On Windows, I used "set CATALINA_OPTS=-Dlog4j.configurationFile=log4j2-dev.xml" instead of log4j.configuration (log4j.configurationFile is the equivalent property for Log4j 2).
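For completeness, here is a minimal sketch of what one of the per-environment files could look like (Log4j 1.x XML; the file path, pattern, and levels are placeholders):
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE log4j:configuration SYSTEM "log4j.dtd">
<!-- log4j-dev.xml: rolling file appender with a verbose level for development -->
<log4j:configuration xmlns:log4j="http://jakarta.apache.org/log4j/">
  <appender name="FILE" class="org.apache.log4j.RollingFileAppender">
    <param name="File" value="logs/app-dev.log"/>
    <param name="MaxFileSize" value="10MB"/>
    <param name="MaxBackupIndex" value="5"/>
    <layout class="org.apache.log4j.PatternLayout">
      <param name="ConversionPattern" value="%d{ISO8601} %-5p [%c] %m%n"/>
    </layout>
  </appender>
  <root>
    <priority value="DEBUG"/>
    <appender-ref ref="FILE"/>
  </root>
</log4j:configuration>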