Fastest way to get OpenAM attribute names for ssoadm

I am trying to script an OpenAM deployment using ssoadm, and want to know the fastest and most foolproof way to get the attribute names for ssoadm.
Right now I log in to the console and "view HTML source" for the attribute I am interested in, then use that name with ssoadm. This approach is time consuming, and with OpenAM 13 the attribute names are no longer available in the source.

Are you interested in any configuration or service in particular?
For most configurations and services (such as datastores, auth modules, server properties, etc.) there is an ssoadm command that will give you the current values, from which you can grab the property names and use them in your script.
For example, if you have a datastore called OpenDJ in your top-level realm, you can get the current configuration values using the following command:
ssoadm show-datastore -u amadmin -f /tmp/amadmin.pwd -e / -m OpenDJ
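The property names in that output can then be fed straight back in your script. As a hedged illustration (the sun-idrepo-ldapv3-config-ldap-server property and its value below are placeholders taken from a typical LDAPv3 datastore configuration), the matching update command would look like this:
ssoadm update-datastore -u amadmin -f /tmp/amadmin.pwd -e / -m OpenDJ -a "sun-idrepo-ldapv3-config-ldap-server=opendj.example.com:1389"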
Typically it's just a matter of finding the right ssoadm command. Another option is to look at the service definitions. All of these definitions are kept in XML format in your configuration store inside ou=Services.
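If you want to pull a definition straight out of the configuration store, an ldapsearch along these lines can work; this is only a sketch, and the host, port, bind DN, password, and base DN are all placeholders for your own deployment:
ldapsearch -h localhost -p 1389 -D "cn=Directory Manager" -w password \
  -b "ou=services,dc=openam,dc=forgerock,dc=org" "(objectClass=*)" dn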
Hope this helps.

I think the easiest approach is probably to look at the service XML files. At configuration time the service XML files are all copied to the ~/<OPENAM_HOME>/config/xml folder, so you can usually just grep them for certain strings (like "dynamic"), but even then that may not work well.
If you know which service you are dealing with, things get a bit easier. Are you trying to change an authentication configuration? Then it must be defined in amAuth.xml. The service name to be used with the ssoadm command is defined by the "name" attribute of the <Service> element, and the service attribute names are defined by the "name" attributes of the <AttributeSchema> elements.
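For illustration, here is an abridged, hypothetical fragment of such a service definition (the element and attribute names follow the pattern above, but the details are not copied from a real file):
<Service name="iPlanetAMAuthService" version="1.0">
  <Schema ...>
    <Organization>
      <AttributeSchema name="iplanet-am-auth-login-success-url" type="list" syntax="string">
        ...
      </AttributeSchema>
    </Organization>
  </Schema>
</Service>
Here iPlanetAMAuthService is the service name you would pass to ssoadm (for example to set-attr-defs), and iplanet-am-auth-login-success-url is an attribute name you could set with it.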
Yet another alternative is to just read the documentation, as most of the property names are already documented:
http://openam.forgerock.org/doc/bootstrap/admin-guide/index.html#auth-core-realm-attributes

Related

How to include a deployment.properties file of environment variables in WSO2 Identity Server?

I want to include a properties file of environment variables to better integrate between environments in an AWS deployment of WSO2 Identity Server. I could put all the environment variables inline in wso2server.sh, but it would be better to inject a properties file that holds all the variables I need.
I am trying to include:
-Ddeployment.conf="$CARBON_HOME/repository/conf/etc/dev-env.properties" \
in wso2server.sh, where my dev-env.properties has variables that I want to include in the XML configurations. An example is the user-mgt.xml connection string:
<Property name="ConnectionURL">${user.mgt.connection.url}</Property>
I could do -Duser.mgt.connection.url="connection-string" \ but I have about 20 properties that I currently want to set this way, and I would prefer to keep them all in one file instead of inline environment variables. I found a Medium article describing something like what I am looking for, but I'm not sure it's exactly what I want, and it was unclear how to implement it.
Do I need to write a Java util class to read these environment variables from the properties file, or is there a simpler way to do this? And if I need a utils class, what would that look like?
As far as I know, this feature is only supported in WSO2 products based on Carbon Kernel 5 and onwards. At the moment most WSO2 products are based on kernel version 4.0, so I don't think you can get this done with existing WSO2 products.
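That said, if the goal is only to turn the entries of dev-env.properties into -D system properties (rather than have Carbon itself resolve the ${...} placeholders), one possible workaround is a small shell loop in wso2server.sh. This is an untested sketch that assumes the file contains simple key=value lines; whether a given XML file then resolves ${...} from system properties still depends on the product version, as noted above:
# Build -D flags from a properties file before the JVM is started
PROPS_FILE="$CARBON_HOME/repository/conf/etc/dev-env.properties"
while IFS='=' read -r key value; do
    case "$key" in ''|\#*) continue ;; esac   # skip blank lines and comments
    JAVA_OPTS="$JAVA_OPTS -D$key=$value"
done < "$PROPS_FILE"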

Use Log4j to log messages in the Liberty console

Our log server consumes our log messages from the Kubernetes pods' stdout, formatted as JSON, and indexes the JSON fields.
We need to include some predefined fields in the messages so that we can track transactions across pods.
For one of our pods we use the Liberty profile, and we are having trouble configuring its logging for these needs.
One idea was to use Log4j to send customized JSON messages to the console. But all messages get mangled by the Liberty logging system, which intercepts and modifies everything written to the console. I failed to configure the Liberty logging parameters (copySystemStreams = false, console log level = NO) for my needs; Liberty always modifies my output and interleaves non-JSON messages.
To work around all that I used the Liberty consoleFormat="json" logging parameter, but this introduced unnecessary fields and still does not let me add my custom fields.
Is it possible to control Liberty logging and the console?
What is the best way to implement my use case with Liberty (and, if possible, Log4j)?
As you mentioned, Liberty can log to the console in JSON format [1]. The two problems you mentioned with that, for your use case, are 1) unnecessary fields, and 2) no way to add your custom fields.
Regarding unnecessary fields: Liberty has a fixed set of fields in its JSON schema, which you cannot customize. If you find you don't want some of the fields, I can think of a few options:
Use Logstash.
Some log handling tools, like Logstash, allow you to remove [2] or mutate [3] fields. If you are sending your logs to Logstash, you could adjust the JSON to your needs that way.
Change the JSON format Liberty sends to stdout using jq.
The default CMD (from the websphere-liberty:kernel Dockerfile) is:
CMD ["/opt/ibm/wlp/bin/server", "run", "defaultServer"]
You can add your own CMD to your Dockerfile to override that, as follows (adjust the jq command as needed):
CMD /opt/ibm/wlp/bin/server run defaultServer | grep --line-buffered "}" | jq -c '{ibm_datetime, message}'
If your use case also requires sending Log4j output to stdout, I would suggest changing the Dockerfile CMD to run a script that you add to the image. In that script you would need to tail your Log4j log file, as follows (this can be combined with the advice above on changing the CMD to use jq as well):
tail -F myLog.json &
/opt/ibm/wlp/bin/server run defaultServer
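Putting that together, a minimal (untested) sketch of such a start script, assuming a hypothetical /logs/myLog.json path for the Log4j output:
#!/bin/sh
# Stream the Log4j JSON file to stdout alongside Liberty's own console output
tail -F /logs/myLog.json &
exec /opt/ibm/wlp/bin/server run defaultServer
and in the Dockerfile:
COPY start.sh /start.sh
RUN chmod +x /start.sh
CMD ["/start.sh"]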
[1] https://www.ibm.com/support/knowledgecenter/en/SSEQTP_liberty/com.ibm.websphere.wlp.doc/ae/rwlp_logging.html
[2] https://www.elastic.co/guide/en/logstash/current/plugins-filters-prune.html
[3] https://www.elastic.co/guide/en/logstash/current/plugins-filters-mutate.html
Just in case it helps, I ran into the same issue and the best solution I found was:
Convert the app to use java.util.logging (JUL).
In server.xml add <logging consoleSource="message,trace" consoleFormat="json" traceSpecification="{package}={level}"/> (swap package and level as required).
Add a bootstrap.properties that contains com.ibm.ws.logging.console.format=json.
This will give you consistent server and application logging in JSON. A couple of lines at server boot are not JSON, but in my case that was just one empty line and a "Launching defaultServer..." line.
I too wanted the JSON structure to be consistent with other containers using Log4j2, so I followed the advice from dbourne above and added jq to the CMD in my Dockerfile to reformat the JSON:
CMD /opt/ol/wlp/bin/server run defaultServer | stdbuf -o0 -i0 -e0 jq -crR '. as $line | try (fromjson | {level: .loglevel, message: .message, loggerName: .module, thread: .ext_thread}) catch $line'
The stdbuf -o0 -i0 -e0 stops the pipe ("|") from buffering its output.
This strips out the Liberty-specific JSON attributes, which is either good or bad depending on your perspective. I don't need those extra values, so I don't have a good recommendation for keeping them.
Although the JUL API is not quite as nice as Log4j2 or SLF4J, it takes very little code to wrap the JUL API in something closer to Log4j2, e.g. to get varargs rather than an Object[].
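For example, a minimal sketch of such a wrapper (the class and method names here are my own, not from the original post):
import java.util.logging.Level;
import java.util.logging.Logger;

// Varargs-friendly facade over JUL
public final class Log {
    private final Logger logger;

    private Log(Logger logger) {
        this.logger = logger;
    }

    public static Log get(Class<?> cls) {
        return new Log(Logger.getLogger(cls.getName()));
    }

    // Varargs instead of JUL's Object[] parameter array; JUL uses {0}, {1}
    // placeholders in the message
    public void info(String message, Object... params) {
        logger.log(Level.INFO, message, params);
    }

    public void error(String message, Throwable thrown) {
        logger.log(Level.SEVERE, message, thrown);
    }
}
Usage would then look like Log.get(OrderService.class).info("order {0} created for {1}", orderId, customerId);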
OpenLiberty will also dynamically change logging if you edit server.xml, so it pretty much has all the necessary bits, IMHO.

How do I use Puppet's ralsh with resource types provided by modules?

I have installed the postgresql module from Puppet Forge.
How can I query PostgreSQL resources using ralsh?
None of the following works:
# ralsh postgresql::db
# ralsh puppetlabs/postgresql::db
# ralsh puppetlabs-postgresql::db
I was hoping to use this to get a list of databases (including attributes such as character sets) and user names/passwords from the current system, in a form that I can paste into a Puppet manifest to recreate that setup on a different machine.
In principle, any Puppet client gets the current state of your system from another program called Facter. You could create a custom fact (a module of Facter) and then include it in your Puppet client. Afterwards, I think you could call this custom fact from ralsh.
More information about creating a custom fact can be found here.
In creating your own fact, you would execute your SQL query and then save the result in a particular variable.
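A minimal sketch of such a fact, assuming a local psql client is available; the fact name postgres_databases, the file path, and the connection details are all placeholders:
# lib/facter/postgres_databases.rb in your module (hypothetical)
Facter.add(:postgres_databases) do
  setcode do
    # One database name per line from the local cluster
    Facter::Core::Execution.execute(
      "psql -At -U postgres -c 'SELECT datname FROM pg_database'"
    ).to_s.split("\n")
  end
end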

Change log file name at runtime - Enterprise Library

I have a WCF service that will serve multiple clients. I'm using Enterprise Library (Ent Lib) for logging.
I'd like to have a different log file for each client. Is there a way to change the file name back and forth?
I found a few threads, but they all talk about editing the config file at runtime.
Also found this: Enterprise Library Logging, but it talks about environment variables. I will set the log name according to the client id.
Thanks
Avi
You can have distinct categories linked to individually configured FlatFile or RollingFile trace listeners for each client.
If the filenames are unknown until runtime, consider using the fluent API for configuration, like so:
http://msdn.microsoft.com/en-us/library/ff664363(PandP.50).aspx#fluent_api_logging
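A rough sketch of that approach, based on the linked Ent Lib 5.0 fluent API docs; the category, listener, and file names are placeholders derived from a hypothetical clientId:
using Microsoft.Practices.EnterpriseLibrary.Common.Configuration;
using Microsoft.Practices.EnterpriseLibrary.Logging;

public static class ClientLogging
{
    // Build a per-client logging configuration at runtime
    public static void Configure(string clientId)
    {
        var builder = new ConfigurationSourceBuilder();
        builder.ConfigureLogging()
               .LogToCategoryNamed("Client-" + clientId)
                   .SendTo.FlatFile("Client-" + clientId + " listener")
                       .ToFile(@"logs\client-" + clientId + ".log");

        var configSource = new DictionaryConfigurationSource();
        builder.UpdateConfigurationWithReplace(configSource);
        EnterpriseLibraryContainer.Current =
            EnterpriseLibraryContainer.CreateDefaultContainer(configSource);
    }
}
You would then log against the client-specific category, e.g. EnterpriseLibraryContainer.Current.GetInstance<LogWriter>().Write("client connected", "Client-42");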

CherryPy: Accessing global config

I'm working on a CherryPy application based on what I found in that BitBucket repository.
As in this example, there are two config files, server.cfg (aka "global") and app.cfg.
Both config files are loaded in the serve.py file:
# Update the global settings for the HTTP server and engine
cherrypy.config.update(os.path.join(self.conf_path, "server.cfg"))
# ...
# Our application
from webapp.app import Twiseless
webapp = Twiseless()
# Let's mount the application so that CherryPy can serve it
app = cherrypy.tree.mount(webapp, '/', os.path.join(self.conf_path, "app.cfg"))
Now I'd like to add the database configuration.
My first thought was to add it to server.cfg (is this the best place, or should it go in app.cfg?).
But if I add the database configuration to server.cfg, I don't know how to access it.
Using:
cherrypy.request.app.config['Database']
works only if the [Database] section is in app.cfg.
I tried to print cherrypy.request.app.config, and it shows only the values defined in app.cfg, nothing from server.cfg.
So I have two related questions:
Is it better to put the database connection in the server.cfg or the app.cfg file?
How do I access the server.cfg (aka global) configuration in my code?
Thanks for your help! :)
Put it in the app config. A good question to help you decide where to put such things is, "if I mounted an unrelated blog app at /blogs on the same server, would I want it to share that config?" If so, put it in server config. If not, put it in app config.
Note also that the global config isn't sectioned, so you can't stick a [Database] section in there anyway. Only the app config allows sections. If you wanted to stick database settings in the global config anyway, you'd have to consider config entry names like "database_port" instead. You would then access it directly by that name: cherrypy.config.get("database_port").
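A minimal sketch of the app-config approach, with placeholder settings (note that CherryPy config files parse values as Python literals, so strings need quotes):
In app.cfg:
[Database]
host = "localhost"
port = 5432
In your handler code:
import cherrypy

class Twiseless(object):
    @cherrypy.expose
    def index(self):
        # Per-app config is available once the app is mounted with app.cfg
        db = cherrypy.request.app.config['Database']
        return "database at %s:%d" % (db['host'], db['port'])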