Keycloak UserStorage SPI: how to use an external application.properties?

So I have a UserStorage SPI that connects to a DB. At the moment the configuration (DB URL, username, password, etc.) is hard-coded into the class, and I would like to use an application.properties file next to the jar.
So the Keycloak folder structure should look something like:
keycloak/
  bin/
  conf/
  data/
  lib/
  providers/
    my-userstorage-spi.jar
    application.properties
So when Keycloak starts with bin/kc.bat start-dev, my-userstorage-spi.jar should read the values from application.properties.
I am open to any solution: autoconfiguration, file handling, etc.
For example, if I try to use it like this, it says the file is not found:
Properties mainProperties = new Properties();
String path = "./application.properties";
FileInputStream file = new FileInputStream(path);
mainProperties.load(file);
System.out.println(mainProperties.get("my.value"));
file.close();
The exception:
java.io.FileNotFoundException: .\application.properties (The system cannot find the file specified)
Thanks in advance.
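The FileNotFoundException happens because the relative path "./application.properties" is resolved against the JVM's working directory (wherever kc.bat was launched from), not against the providers directory. Below is a minimal sketch of one way around this, assuming you pass the Keycloak home directory to the JVM yourself (for example via the JAVA_OPTS_APPEND environment variable or a -D option, depending on your distribution); the property name my.spi.config.dir is purely illustrative:
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Properties;

public final class SpiConfig {

    // Loads providers/application.properties relative to a base directory supplied
    // via -Dmy.spi.config.dir=<keycloak-home> (illustrative name), falling back to
    // the current working directory.
    public static Properties load() throws IOException {
        String baseDir = System.getProperty("my.spi.config.dir", System.getProperty("user.dir"));
        Path path = Paths.get(baseDir, "providers", "application.properties");
        Properties props = new Properties();
        try (InputStream in = Files.newInputStream(path)) {
            props.load(in);
        }
        return props;
    }
}
Keycloak's own Config.Scope, passed to the provider factory's init() method, is another place to pick up per-SPI settings from conf/keycloak.conf and avoids a separate file altogether, but the sketch above keeps the application.properties layout from the question.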

Related

loginUserFromKeytab for UserGroupInformation accepts path for keytab file, works local but not when bundled as JAR

I have all 4 files needed to read from/write to HDFS in my resources folder, and the method to create the HDFS object is below.
public static FileSystem getHdfsOnPrem(String coreSiteXml, String hdfsSiteXml, String krb5confLoc, String keyTabLoc) {
    // Setup the configuration object.
    try {
        Configuration config = new Configuration();
        config.addResource(new org.apache.hadoop.fs.Path(coreSiteXml));
        config.addResource(new org.apache.hadoop.fs.Path(hdfsSiteXml));
        config.set("hadoop.security.authentication", "Kerberos");
        config.addResource(krb5confLoc);
        config.set("fs.hdfs.impl", org.apache.hadoop.hdfs.DistributedFileSystem.class.getName());
        config.set("fs.file.impl", org.apache.hadoop.fs.LocalFileSystem.class.getName());
        System.setProperty("java.security.krb5.conf", krb5confLoc);
        org.apache.hadoop.security.HadoopKerberosName.setConfiguration(config);
        UserGroupInformation.setConfiguration(config);
        UserGroupInformation.loginUserFromKeytab("my_username", keyTabLoc);
        return org.apache.hadoop.fs.FileSystem.get(config);
    }
    catch (Exception ex) {
        ex.printStackTrace();
        return null;
    }
}
It works when I run it locally and pass the paths below:
C:\Users\my_username\IdeaProjects\my_project_name\target\scala-2.12\classes\core-site.xml
C:\Users\my_username\IdeaProjects\my_project_name\target\scala-2.12\classes\hdfs-site.xml
C:\Users\my_username\IdeaProjects\my_project_name\target\scala-2.12\classes\krb5.conf
C:\Users\my_username\IdeaProjects\my_project_name\target\scala-2.12\classes\my_username.user.keytab
It runs fine locally, but when I bundle it as a JAR and run it in an environment like Kubernetes it throws the error below. (Since bundling it as a JAR, I can read the contents of the resource files as a stream, but I need to pass a path to the loginUserFromKeytab method.)
org.apache.hadoop.security.KerberosAuthException: failure to login: for principal: my_username from keytab file:/opt/spark-3.0.0/jars/foo-project-name!/my_username.user.keytab javax.security.auth.login.LoginException: Unable to obtain password from user
Any suggestions/pointers are appreciated.
I suggest you use a JAAS config file instead of writing this code. This helps remove the security plumbing from your code and externalizes it. "Unable to obtain password" would occur if the user running your app doesn't have permission to access the keytab file.
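If a JAAS file is not an option, one common workaround for the path problem itself is to copy the bundled keytab resource out of the JAR to a temporary file and hand that path to loginUserFromKeytab. A minimal sketch, assuming the keytab is on the classpath under the name used in the question:
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public final class KeytabExtractor {

    // Copies a classpath resource to a temp file and returns its absolute path,
    // so APIs that insist on a real file (like loginUserFromKeytab) can use it.
    public static String extractToTempFile(String resourceName) throws Exception {
        try (InputStream in = KeytabExtractor.class.getClassLoader()
                .getResourceAsStream(resourceName)) {
            if (in == null) {
                throw new IllegalArgumentException("Resource not found: " + resourceName);
            }
            Path tmp = Files.createTempFile("keytab-", ".keytab");
            tmp.toFile().deleteOnExit();
            Files.copy(in, tmp, StandardCopyOption.REPLACE_EXISTING);
            return tmp.toAbsolutePath().toString();
        }
    }
}
The returned path can then be passed as keyTabLoc, e.g. UserGroupInformation.loginUserFromKeytab("my_username", KeytabExtractor.extractToTempFile("my_username.user.keytab")). Keep in mind the keytab then exists on the pod's local filesystem, so file permissions matter there as well.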

Record Level Data truncation on Mainframe server while doing SFTP from spark server

Please read this fully.
I am working on sending a CSV file via SFTP from a Spark application developed in Scala to a mainframe server. I am using the jsch (Java Secure Channel) package, version 0.1.53, to establish the SFTP connection from the Spark server to the mainframe server. The issue I am facing is that on the mainframe server, the CSV file gets truncated to 1024 bytes per record line.
After research, I found that on the mainframe there are options like "lrecl" and "recfm" to control the length of each record in the file and the format of that record. But I am unable to integrate these options in Scala. I found an answer on Stack Overflow that was meant for implementation in Java. When I use the same logic in Scala, I get the error below:
EDC5129I No such file or directory., file: /+recfm=fb,lrecl=3000 at
at com.jcraft.jsch.ChannelSftp.throwStatusError(ChannelSftp.java:2846)
at com.jcraft.jsch.ChannelSftp._stat(ChannelSftp.java:2198)
at com.jcraft.jsch.ChannelSftp._stat(ChannelSftp.java:2215)
at com.jcraft.jsch.ChannelSftp.ls(ChannelSftp.java:1565)
at com.jcraft.jsch.ChannelSftp.ls(ChannelSftp.java:1526)
The Scala code block that uses the jsch library to establish the SFTP connection and transfer the file is below:
session = jsch.getSession(username, host, port)
session.setConfig("PreferredAuthentication","publickey")
session.setConfig("MaxAuthTries",2)
System.out.println("Created SFTP Session")
val sftpSessionConfig: Properties = new Properties()
sftpSessionConfig.put("StrictHostKeyChecking","no")
session.setConfig(sftpSessionConfig)
session.connect() //Connect to session
System.out.println("Connected to SFTP Session")
val channel = session.openChannel("sftp")
channel.connect()
val sftpChannel = channel.asInstanceOf[ChannelSftp]
sftpChannel.ls("/+recfm=fb,lrecl=3000") //set lrecl and recfm ---> THROWING ERROR HERE
sftpChannel.put(sourceFile, destinationPath,ChannelSftp.APPEND) //Push file from local to mainframe
Is there any way to set these options as configuration in my Scala code using the jsch library? I also tried springml's spark-sftp package, but that package has the same data-truncation problem on the mainframe server.
Please help, as this issue has become a critical blocker for my project.
EDIT: Updated the question with the Scala code block.
From this presentation, Dovetail SFTP Webinar, slide 21:
ls /+recfm=fb,lrecl=80
it seems to me there is one '/' too many in your code.
From the error message, I think the SFTP server's current path is in the UNIX file system. You do not set the data set high-level qualifier (HLQ) for the data set, do you? I can't see it in the code. Again, from the above presentation, do a cd before the ls:
cd //your-hlq-of-choice
This will do two things:
Change the current working directory to the MVS data set side.
Set the HLQ to be used.
Sorry, I cannot test this myself; I do not know Scala.
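A rough sketch of that sequence with the jsch API (in Java; the HLQ is hypothetical, and the ls option syntax is the one from the Dovetail slide, so this only applies if the Dovetail SFTP server is in use):
// After the ChannelSftp from the question's code is connected:
channel.cd("//MYHLQ");                            // hypothetical HLQ; switches the working directory to the MVS data set side
channel.ls("/+recfm=fb,lrecl=3000");              // record format options per the slide; note the remark above about the number of leading '/'
channel.put(sourceFile, destinationPath, ChannelSftp.APPEND);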
First, what SFTP server is running on z/OS? If it is the one provided with z/OS (not Dovetail), the command you are executing isn't supported and you will receive a message like Can't ls: "/+recfm=fb,lrecl=80" not found. That would be valid, because it is not a valid file: everything to the right of the / is considered part of the filename.
I converted your code to Java, as I'm not familiar with Scala and didn't have time to learn it. Here is the code sample I used.
import com.jcraft.jsch.JSch;
import java.util.Properties;
import java.util.Vector;

class sftptest {

    static public void main(String[] args) {
        String username = "ibmuser";
        String host = "localhost";
        int port = 10022; // Note, my z/OS is running in a docker container so I map 10022 to 22
        JSch jsch = new JSch();
        String sourceFile = "/";
        String destinationPath = "/";
        String privateKey = "myPrivateKey";
        try {
            jsch.addIdentity(privateKey); // add private key path and file
            com.jcraft.jsch.Session session = jsch.getSession(username, host, port);
            session.setConfig("PreferredAuthentication", "password");
            session.setConfig("MaxAuthTries", "2");
            System.out.println("Created SFTP Session");
            Properties sftpSessionConfig = new Properties();
            sftpSessionConfig.put("StrictHostKeyChecking", "no");
            session.setConfig(sftpSessionConfig);
            session.connect(); // Connect to session
            System.out.println("Connected to SFTP Session");
            com.jcraft.jsch.ChannelSftp channel = (com.jcraft.jsch.ChannelSftp) session.openChannel("sftp");
            channel.connect();
            // com.jcraft.jsch.Channel sftpChannel = (ChannelSftp) channel;
            // channel.ls("/+recfm=fb,lrecl=3000"); // set lrecl and recfm ---> THROWING ERROR HERE
            // channel.ls("/");
            Vector filelist = channel.ls("/");
            for (int i = 0; i < filelist.size(); i++) {
                System.out.println(filelist.get(i).toString());
            }
            // channel.put(sourceFile, destinationPath, com.jcraft.jsch.ChannelSftp.APPEND); // Push file from local to mainframe
        } catch (Exception e) {
            System.out.println("Exception " + e.getMessage());
        }
    }
}
In my case I used an SSH key and not a password. The output with your original ls call is:
Created SFTP Session
Connected to SFTP Session
Exception No such file
Dropping the + and everything to the right of it, you get:
Created SFTP Session
Connected to SFTP Session
drwxr-xr-x 2 OMVSKERN SYS1 8192 May 13 01:18 .
drwxr-xr-x 7 OMVSKERN SYS1 8192 May 13 01:18 ..
-rw-r--r-- 1 OMVSKERN SYS1 0 May 13 01:18 file 1
-rw-r--r-- 1 OMVSKERN SYS1 0 May 13 01:18 file 2
The main issue is that z/OS appears not to support the syntax you are using, which is provided by a specific SFTP implementation from Dovetail.
If you do not have Dovetail, then since you are sending CSV files, which are generally variable in length, I recommend sending them as a USS file so the lines are properly translated and end up variable-length. Transfer them to USS (regular Unix on z/OS) and then copy them to an MVS file that has a RECFM of VB. Assuming the file is already allocated, you could do a cp myuploadedFile.csv "//'MY.MVS.FILE'"

OpenSource: Encryption of JDBC Password in configuration properties file

As I noticed a plugin is available for the enterprise version (https://download.rundeck.com/plugins/encrypted-datasource-plugin.html); is there an option for users of Rundeck open source to perform the same kind of encryption of the datasource password in the configuration file?
As I noticed many people mention writing their own Java programs and leveraging the Jasypt utilities, I tried this. I have two jar files (one for encrypt and one for decrypt). I created a directory (since I'm using an rpm-based Rundeck 3.3 installation) called /var/lib/rundeck/lib. I added this directory to the JVM classpath in /etc/sysconfig/rundeckd via: export RDECK_JVM_SETTINGS="-Djava.class.path=/var/lib/rundeck/lib/*". I converted my /etc/rundeck/rundeck-config.properties file to Groovy format and updated /etc/sysconfig/rundeck with: export RDECK_CONFIG_FILE="/etc/rundeck/rundeck-config.groovy". However, when I change the /etc/rundeck/rundeck-config.groovy entry for datasource.password to:
datasource.password=MyDecrypt("MyTest123Password");
I get an error in the Rundeck logs after restarting:
[2020-09-08T18:01:03,168] WARN context.AnnotationConfigServletWebServerApplicationContext - Exception encountered during context initialization - cancelling refresh attempt: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'application': Initialization of bean failed; nested exception is groovy.lang.MissingMethodException: No signature of method: groovy.util.ConfigSlurper$_parse_closure5.MyDecrypt() is applicable for argument types: (String) values: [MyTest123Password]
Any suggestions?
That encryption is only for Rundeck Enterprise; perhaps the best approach on Rundeck Community is to secure the rundeck-config.properties file through UNIX file permissions.
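For reference, the home-grown Jasypt approach the question mentions usually boils down to something like the sketch below (purely illustrative: the environment variable name is an assumption, and a master password still has to be protected somewhere):
import org.jasypt.encryption.pbe.StandardPBEStringEncryptor;

public final class JdbcPasswordTool {

    // Encrypts or decrypts a value with a master password taken from the
    // environment (the variable name RDECK_MASTER_PASSWORD is illustrative).
    public static void main(String[] args) {
        StandardPBEStringEncryptor encryptor = new StandardPBEStringEncryptor();
        encryptor.setPassword(System.getenv("RDECK_MASTER_PASSWORD"));

        if (args.length == 2 && "encrypt".equals(args[0])) {
            System.out.println(encryptor.encrypt(args[1]));
        } else if (args.length == 2 && "decrypt".equals(args[0])) {
            System.out.println(encryptor.decrypt(args[1]));
        } else {
            System.err.println("Usage: encrypt|decrypt <value>");
        }
    }
}
Note that the MissingMethodException in the question occurs because ConfigSlurper has no MyDecrypt method in scope; any decryption helper would have to be on Rundeck's classpath and referenced by its fully qualified name, or the decrypted value produced outside the Groovy config entirely.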

Diacritic Letters are mistreated by Rest Client

I'm using rest:0.8 to connect my main Grails project to another Grails project that serves as a report generator, using this code:
Map<String, String> adminConfigService = [
webURL: "http://192.168.20.21:8080/oracle-report-service/generate",
...
]
Map params = [
...
name: "Iñigo",
...
]
withHttp(uri: adminConfigService.webURL) {
html = get(query: params)
}
And then the receiving REST client will process that data. Running the two projects on my local machine works fine. However, when I deploy the WAR file of the report generator to our Tomcat server, it converts the letter "ñ" to "├â┬æ", so the name "Iñigo" is treated as "I├â┬æigo".
Since the report generator project works fine when run on my local machine, does that mean I need to change some conf files on my Tomcat server? Which settings file do I need to change?
It seems like an encoding issue.
Check Config.groovy:
grails.converters.encoding = "UTF-8"
Check the file encoding of the controllers and services where you use rest:0.8.
Check URIEncoding in Tomcat's server.xml (it must be UTF-8).
Also try setting useBodyEncodingForURI="true" (on the Connector, like the URIEncoding attribute); a sample Connector is sketched after this list.
Do you save this data to the database? If so, check your DataSource.groovy url parameter:
url = "jdbc:mysql://127.0.0.1:3306/dbname?characterEncoding=utf8"
Also check the encoding and collation of your tables and fields in the database.
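For reference, the Connector settings mentioned above would look roughly like this in Tomcat's server.xml (the port, protocol, and redirectPort values here are just common defaults, not taken from the question):
<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           URIEncoding="UTF-8"
           useBodyEncodingForURI="true"
           redirectPort="8443" />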

EJB Lookup on EAP 6.2 not using jboss-ejb-client.properties

We're trying to let two JBoss EAP 6.2 servers communicate via JNDI.
One server is using its own LoginModule.
The first server will receive requests via web services and delegate them to the second server. For that reason, the first server needs to log in to look up the beans it needs for delegation.
I've figured out that we need the following information to connect to the main (second) server:
remote.connections=default
endpoint.name=client-endpoint
remote.connection.default.port=4447
remote.connection.default.host=localhost
remote.connectionprovider.create.options.org.xnio.Options.SSL_ENABLED=false
remote.connection.default.connect.options.org.xnio.Options.SASL_POLICY_NOANONYMOUS=false
remote.connection.default.connect.options.org.xnio.Options.SASL_POLICY_NOPLAINTEXT=false
remote.connection.default.connect.options.org.xnio.Options.SASL_DISALLOWED_MECHANISMS=JBOSS-LOCAL-USER
remote.connection.default.callback.handler.class=our.own.callbackhandler.class
java.naming.factory.initial=org.jboss.naming.remote.client.InitialContextFactory
java.naming.factory.url.pkgs=org.jboss.ejb.client.naming
We put this information into the jboss-ejb-client.properties file, but our server didn't react to it.
The tutorial (https://docs.jboss.org/author/display/AS71/EJB+invocations+from+a+remote+server+instance) says that we need to use jboss-ejb-client.xml.
Because of our LoginModule (and CallbackHandler), we don't need a user and/or password to connect to the server!
My first solution for the problem is putting all the information into the InitialContext using the Properties class.
final Properties props = new Properties();
props.put("remote.connections", "default");
props.put("remote.connection.default.host", "localhost");
props.put("remote.connection.default.port", "4447");
props.put("remote.connection.default.connect.options.org.xnio.Options.SASL_POLICY_NOANONYMOUS", "false");
props.put("remote.connection.default.connect.options.org.xnio.Options.SASL_POLICY_NOPLAINTEXT", "false");
props.put("remote.connection.default.connect.options.org.xnio.Options.SASL_DISALLOWED_MECHANISMS", "JBOSS-LOCAL-USER");
props.put("remote.connection.default.callback.handler.class",
"our.own.callbackhandler.class");
props.put("org.jboss.ejb.client.scoped.context", "true");
props.put("remote.connectionprovider.create.options.org.xnio.Options.SSL_ENABLED", "false");
props.put(Context.URL_PKG_PREFIXES, "org.jboss.ejb.client.naming");
Context ic = new InitialContext(props);
It works that way.
My question now is: is there any workaround to put that information into standalone.xml or jboss-ejb-client.properties / jboss-ejb-client.xml?
I did not find any place to put the class name of our CallbackHandler.
Thank you in advance.
Try turning up the logging for the category org.jboss.ejb.client to DEBUG or TRACE, and you should see where the server is trying to read jboss-ejb-client.properties from.
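For reference, one way to raise that category is in the logging subsystem of standalone.xml (a sketch; the same change can be made through the management CLI or console):
<logger category="org.jboss.ejb.client">
    <level name="TRACE"/>
</logger>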