OpenSource: Encryption of JDBC Password in configuration properties file - rundeck

I noticed a plugin is available for the Enterprise version (https://download.rundeck.com/plugins/encrypted-datasource-plugin.html); is there an option for users of Rundeck open source to perform the same kind of encryption of the datasource password in the configuration file?
Since many people mention writing their own Java programs and leveraging the Jasypt utilities, I tried this. I have two jar files (one for encrypt and one for decrypt). Since I'm using an RPM-based Rundeck 3.3 installation, I created a directory called /var/lib/rundeck/lib and added it to the JVM classpath in /etc/sysconfig/rundeckd via:
export RDECK_JVM_SETTINGS="-Djava.class.path=/var/lib/rundeck/lib/*"
I converted my /etc/rundeck/rundeck-config.properties file to Groovy format and updated /etc/sysconfig/rundeck with:
export RDECK_CONFIG_FILE="/etc/rundeck/rundeck-config.groovy"
However, when I change the /etc/rundeck/rundeck-config.groovy entry for datasource.password to:
datasource.password=MyDecrypt("MyTest123Password")
I get an error in the Rundeck logs after restarting:
[2020-09-08T18:01:03,168] WARN context.AnnotationConfigServletWebServerApplicationContext - Exception encountered during context initialization - cancelling refresh attempt: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'application': Initialization of bean failed; nested exception is groovy.lang.MissingMethodException: No signature of method: groovy.util.ConfigSlurper$_parse_closure5.MyDecrypt() is applicable for argument types: (String) values: [MyTest123Password]
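For reference, the decrypt jar wraps Jasypt roughly like this (a minimal sketch, not my exact code; the MyDecrypt class name and the environment-variable master key are illustrative):

import org.jasypt.encryption.pbe.StandardPBEStringEncryptor;

public class MyDecrypt {
    // Decrypts a value produced by the matching encrypt jar.
    public static String decrypt(String encrypted) {
        StandardPBEStringEncryptor encryptor = new StandardPBEStringEncryptor();
        encryptor.setAlgorithm("PBEWithMD5AndDES");          // Jasypt's default PBE algorithm
        encryptor.setPassword(System.getenv("MASTER_KEY"));  // master key; illustrative source
        return encryptor.decrypt(encrypted);
    }
}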
Any suggestions?

That encryption feature is only available in Rundeck Enterprise. Perhaps the best approach on Rundeck Community is to secure the rundeck-config.properties file through UNIX file permissions, e.g. owned by the rundeck user and chmod 600.

Related

Getting Database Authentication to work on Apache Guacamole

I got Apache Guacamole and Tomcat working between two laptops and a PC on a LAN. However, I was always updating user and connection details through user-mapping.xml, so I decided to set up database authentication to make changing users and connections easier. I have had SQL Server, MySQL, and now PostgreSQL set up and running (not concurrently; I tried them one by one and then uninstalled each), but the Guacamole login details remain the same and seem to be unaffected by the changes in guacamole.properties.
Here is my latest guacamole.properties file for reference (PostgreSQL version at the moment):
guacd-hostname:localhost
guacd-port: 4822
user-mapping:/etc/guacamole/user-mapping.xml
auth-provider: net.sourceforge.guacamole.net.basic.BasicFileAuthenticationProvider
# MySQL properties
#mysql-hostname: localhost
#mysql-port: 3306
#mysql-database: guacamole_db
#mysql-username: SHRDC
#mysql-password: Shrdc_1234
#mysql-user-required: true
# PostgreSQL properties
postgresql-hostname: localhost
postgresql-port: 5432
postgresql-database: guacamole_db
postgresql-username: SHRDC
postgresql-password: Shrdc_1234
postgresql-user-required: true
I feel it's some connector or driver issue, hence it not being recognised.
Something to change in /lib or /extensions?
For reference, the auth driver and auth connector I am using are currently:
in /extensions:
guacamole-auth-jdbc-postgresql.jar (previously guacamole-auth-jdbc-postgresql-1.2.0.jar, before I renamed it trying something out)
in /lib:
postgresql-42.2.14.jar
All steps were followed as per:
https://guacamole.apache.org/doc/gug/jdbc-auth.html
I would love some feedback; I've been stuck trying to get DB authentication to work for over a week now!
Sincerely
I've encountered the exact same problem; however, my setup uses Docker.
In my case there were discrepancies between the actual code and the documentation.
Since the situation is similar, I will explain how to find the root cause.
Enable Logback debug
Since you are installing manually (not using a Docker container), chances are you know exactly where GUACAMOLE_HOME is. Just to remind you: by default it is /etc/guacamole, but if /home/$USER/.guacamole exists, it will be used instead.
Add a logback.xml to your GUACAMOLE_HOME directory, as described here: https://guacamole.apache.org/doc/gug/configuring-guacamole.html
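If you just want everything at DEBUG level on the console, a minimal logback.xml looks like this (plain Logback configuration, nothing Guacamole-specific):

<configuration>
    <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
    </appender>
    <root level="debug">
        <appender-ref ref="CONSOLE"/>
    </root>
</configuration>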
See your catalina output
The new debug settings will output all debug messages. If there are no [DEBUG] messages, you put logback.xml in the wrong location.
Once you have the DEBUG output stream, look for the important output while Catalina starts up, such as the GUACAMOLE_HOME currently in use and the AuthenticationProvider bindings being registered.
For example, this is an excerpt of my log:
19:23:08.933 [localhost-startStop-1] DEBUG o.a.g.extension.ExtensionModule - Loading extension: "guacamole-auth-jdbc-postgresql-1.2.0.jar"
19:23:08.973 [localhost-startStop-1] DEBUG o.a.g.extension.ExtensionModule - [0] Binding AuthenticationProvider "org.apache.guacamole.auth.postgresql.PostgreSQLAuthenticationProvider".
19:23:08.980 [localhost-startStop-1] INFO o.a.g.environment.LocalEnvironment - GUACAMOLE_HOME is "/root/.guacamole".
19:23:10.150 [localhost-startStop-1] DEBUG o.a.g.extension.ExtensionModule - [1] Binding AuthenticationProvider "org.apache.guacamole.auth.postgresql.PostgreSQLSharedAuthenticationProvider".
19:23:10.207 [localhost-startStop-1] DEBUG o.a.g.e.LanguageResourceService - Merged strings with existing language: "es"
19:23:10.213 [localhost-startStop-1] DEBUG o.a.g.e.LanguageResourceService - Merged strings with existing language: "ru"
19:23:10.216 [localhost-startStop-1] DEBUG o.a.g.e.LanguageResourceService - Merged strings with existing language: "de"
19:23:10.222 [localhost-startStop-1] DEBUG o.a.g.e.LanguageResourceService - Merged strings with existing language: "fr"
19:23:10.227 [localhost-startStop-1] DEBUG o.a.g.e.LanguageResourceService - Merged strings with existing language: "ja"
19:23:10.233 [localhost-startStop-1] DEBUG o.a.g.e.LanguageResourceService - Merged strings with existing language: "en"
19:23:10.234 [localhost-startStop-1] INFO o.a.g.extension.ExtensionModule - Extension "PostgreSQL Authentication" loaded.
Notice that the PostgreSQL auth binding is loaded first.
If there is no output like that, Tomcat didn't even find your settings.
If it found the settings but failed to load the bindings, Tomcat couldn't locate your binding jar.
Here is an example log when that happens (Catalina startup runs fine, but logging in via the Guacamole dashboard spews this error):
19:44:44.511 [http-nio-8080-exec-12] DEBUG o.a.g.rest.RESTExceptionMapper - Unexpected error in REST endpoint.
org.apache.ibatis.exceptions.PersistenceException:
### Error querying database. Cause: java.sql.SQLException: Error setting driver on UnpooledDataSource. Cause: java.lang.ClassNotFoundException: org.postgresql.Driver
### The error may exist in org/apache/guacamole/auth/jdbc/user/UserMapper.xml
### The error may involve org.apache.guacamole.auth.jdbc.user.UserMapper.selectOne
### The error occurred while executing a query
### Cause: java.sql.SQLException: Error setting driver on UnpooledDataSource. Cause: java.lang.ClassNotFoundException: org.postgresql.Driver
Lastly, if it found your settings but didn't find your guacamole-auth-jdbc-postgresql binding, it will spew this log:
19:47:49.654 [http-nio-8080-exec-15] DEBUG o.a.g.extension.ExtensionModule - [0] Binding AuthenticationProvider "org.apache.guacamole.auth.file.FileAuthenticationProvider".
Notice that now the FileAuthenticationProvider binding is loaded first (it didn't find your PostgreSQL JDBC binding).
Based on the log information, systematically try to find the root cause
It can be as simple as a wrong GUACAMOLE_HOME. For example, you edited /etc/guacamole/guacamole.properties but Tomcat actually loaded /home/$USER/.guacamole/guacamole.properties. Or maybe your directory structure is incorrect.
This is my directory tree, if you want to compare:
root@guacamole-7988d57c8d-nwfk7:~/.guacamole# tree .
.
├── extensions
│ ├── guacamole-auth-jdbc-postgresql-1.2.0.jar -> /opt/guacamole/postgresql/guacamole-auth-jdbc-postgresql-1.2.0.jar
│ └── lost+found
├── guacamole.properties
├── lib
│ └── postgresql-9.4-1201.jdbc41.jar -> /opt/guacamole/postgresql/postgresql-9.4-1201.jdbc41.jar
└── logback.xml
3 directories, 4 files
Check if you can actually access the database
From within the machine that Guacamole (Tomcat) runs on, check that you can access your database with the given credentials. If you are using Postgres, try to access it via psql, just to make sure you have the proper permissions to access the database.
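For example, with the credentials from the properties file above, something like:
psql -h localhost -p 5432 -U SHRDC guacamole_db
If that fails, fix the database access first.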
Make sure the JDBC driver you are using matches your Java version.
This has probably been stressed enough by the docs, but maybe you can check it again.

Error upgrading Jasperreports server from 7.2 to 7.5 (keystore problem)

The upgrade procedure is pretty simple and well documented. I have been upgrading JasperReports Server since version 4, always using the same procedure (buildomatic).
Now, in version 7.5, I get:
java.lang.RuntimeException: KeystoreManager was never initialized or
there are errors while instantiating the instance.
Failed to instantiate
[com.jaspersoft.jasperserver.crypto.KeystoreManager]: Please make sure
that create-keystore was executed;
Error creating bean with name 'keystoreManager': Invocation of init
method failed;
Error creating bean with name 'passwordEncoder': Unsatisfied
dependency expressed through field 'keystoreManager';
The keystore is in the /root folder, as it should be.
Have you tried the process mentioned in this link: https://community.jaspersoft.com/wiki/encryption-jasperreports-server-75 ?
"If the JasperReports Server cannot find the keystore files - maybe because of permissions as noted above, you will get an exception on server start like:
Failed to instantiate [com.jaspersoft.jasperserver.crypto.KeystoreManager]: Please make sure that create-keystore was executed; nested exception is java.lang.RuntimeException: KeystoreManager was never initialized or there are errors while instantiating the instance.
To fix this, you need to move the keystore files into a directory that is accessible by the user running the web app process. See Updating keystore files below."
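In practice that means copying the keystore files generated by create-keystore (by default .jrsks and .jrsksp in the home directory of the user who ran buildomatic) into the home directory of the user running the application server, and making them readable by that user. A sketch, assuming Tomcat runs as a tomcat user (paths and user name are illustrative):
cp /root/.jrsks /root/.jrsksp /home/tomcat/
chown tomcat:tomcat /home/tomcat/.jrsks /home/tomcat/.jrsksp
chmod 600 /home/tomcat/.jrsks /home/tomcat/.jrsksp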

Creating PostgreSQL DataSource via pax-jdbc config file on karaf 4

On my Karaf 4.0.8 I've installed the feature pax-jdbc-postgresql. The DataSourceFactory for PostgreSQL is installed:
[org.osgi.service.jdbc.DataSourceFactory]
osgi.jdbc.driver.class org.postgresql.Driver
osgi.jdbc.driver.name PostgreSQL JDBC Driver
osgi.jdbc.driver.version PostgreSQL 9.4 JDBC4.1 (build 1203)
service.bundleid 204
service.scope singleton
Using Bundles com.eclipsesource.jaxrs.publisher (184)
I've created the file etc/org.ops4j.datasource-psql-sandbox.cfg:
osgi.jdbc.driver.class=org.postgresql.Driver
osgi.jdbc.driver.name=PostgreSQL
url=jdbc:postgresql://localhost:5432/sandbox
dataSourceName=psql-sandbox
user=sandbox
password=sandbox
After that, I see the confirmation in karaf.log that the file was processed:
2017-02-10 14:54:17,468 | INFO | 41-88b277ae0921) |
DataSourceRegistration | 154 - org.ops4j.pax.jdbc.config -
0.9.0 | Detected config for DataSource psql-sandbox. Tracking DSF with filter
(&(objectClass=org.osgi.service.jdbc.DataSourceFactory)(osgi.jdbc.driver.class=org.postgresql.Driver)(osgi.jdbc.driver.name=PostgreSQL))
However, I see no new DataSource in the services list in the console. What went wrong? I see no exceptions in the log...
The log message tells you that the config was processed and that it is now searching for a suitable DataSourceFactory OSGi service.
The problem in your case is that it does not find such a service. To debug this, list all DataSourceFactory services and check their properties:
service:list DataSourceFactory
In my case it shows this:
[org.osgi.service.jdbc.DataSourceFactory]
-----------------------------------------
osgi.jdbc.driver.class = org.postgresql.Driver
osgi.jdbc.driver.name = PostgreSQL JDBC Driver
...
As you can see, it does not match the filter from the log: the service advertises osgi.jdbc.driver.name = PostgreSQL JDBC Driver, while the filter requires osgi.jdbc.driver.name=PostgreSQL. Generally, you should provide either osgi.jdbc.driver.class or osgi.jdbc.driver.name, not both. If you remove the osgi.jdbc.driver.name line, the config will work.
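With that change, etc/org.ops4j.datasource-psql-sandbox.cfg becomes:
osgi.jdbc.driver.class=org.postgresql.Driver
url=jdbc:postgresql://localhost:5432/sandbox
dataSourceName=psql-sandbox
user=sandbox
password=sandbox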
There is no error message because the system cannot know whether the error is transient or not. Basically, as soon as a matching OSGi service is installed, the DataSource will be created.

Spring Data Cassandra - Environment must not be null Error

I am following the basic tutorial at the Spring Data Cassandra reference (http://docs.spring.io/spring-data/cassandra/docs/1.1.0.RC1/reference/html/) and I am running into the following exception:
java.lang.IllegalArgumentException: Environment must not be null!
at org.springframework.util.Assert.notNull(Assert.java:112)
at org.springframework.data.repository.config.RepositoryConfigurationSourceSupport.<init>(RepositoryConfigurationSourceSupport.java:50)
at org.springframework.data.repository.config.AnnotationRepositoryConfigurationSource.<init>(AnnotationRepositoryConfigurationSource.java:74)
at org.springframework.data.repository.config.RepositoryBeanDefinitionRegistrarSupport.registerBeanDefinitions(RepositoryBeanDefinitionRegistrarSupport.java:74)
at org.springframework.context.annotation.ConfigurationClassParser.processImport(ConfigurationClassParser.java:394)
at org.springframework.context.annotation.ConfigurationClassParser.doProcessConfigurationClass(ConfigurationClassParser.java:204)
at org.springframework.context.annotation.ConfigurationClassParser.processConfigurationClass(ConfigurationClassParser.java:163)
at org.springframework.context.annotation.ConfigurationClassParser.parse(ConfigurationClassParser.java:138)
at org.springframework.context.annotation.ConfigurationClassPostProcessor.processConfigBeanDefinitions(ConfigurationClassPostProcessor.java:284)
at org.springframework.context.annotation.ConfigurationClassPostProcessor.postProcessBeanDefinitionRegistry(ConfigurationClassPostProcessor.java:225)
at org.springframework.context.support.AbstractApplicationContext.invokeBeanFactoryPostProcessors(AbstractApplicationContext.java:630)
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:461)
at org.springframework.context.annotation.AnnotationConfigApplicationContext.<init>(AnnotationConfigApplicationContext.java:73)
at com.strides.platform.domain.UserRepositoryDaoTest.<init>(UserRepositoryDaoTest.java:28)
I have completed the steps mentioned in the document:
1) Use Cassandra properties
2) Create the Java configuration
3) Create the domain and repository classes
I have autowired the Environment variable in the test classes. I checked a couple of sample projects and am not sure what more needs to be done.
I've encountered this error message and found that the problem only occurs when using Spring Framework version 3.2.8.RELEASE.
My solution was to upgrade to version 3.2.9.RELEASE.
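If you manage dependencies with Maven, that is just a version bump on the Spring artifacts you already import, e.g. (spring-context shown as an illustration):

<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-context</artifactId>
    <version>3.2.9.RELEASE</version>
</dependency>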
See also java.lang.IllegalArgumentException: Environment must not be null

Error running hadoop application in Eclipse on Windows

I'm trying to set up an Eclipse environment for developing and debugging Hadoop. I'm following Tom White's Hadoop: The Definitive Guide, 3rd ed. What I would like to do is get the MaxTemperature app working locally on Windows within Eclipse before moving it to my Hortonworks sandbox VM. The comment on page 158 about using the local job runner seems to be what I want. I don't want to set up a full Hadoop installation on Windows; I'm hoping that with the right config params I can convince it to run as a Java application inside Eclipse.
Windows: 7
Eclipse: Luna
Hadoop: 2.4.0
JDK: 7
When I set the Run configuration for MaxTemperatureDriver (Source code on page 157) to
inputfile outputdir foo (deliberate bogus 3rd parameter)
I get the usage message so I know I'm running my program with those params.
If I remove the bogus third param I get
Exception in thread "main" java.io.IOException: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.
at org.apache.hadoop.mapreduce.Cluster.initialize(Cluster.java:120)
at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:82)
at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:75)
at org.apache.hadoop.mapreduce.Job$9.run(Job.java:1255)
at org.apache.hadoop.mapreduce.Job$9.run(Job.java:1251)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
at org.apache.hadoop.mapreduce.Job.connect(Job.java:1250)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1279)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1303)
at mark.MaxTemperatureDriver.run(MaxTemperatureDriver.java:52)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
at mark.MaxTemperatureDriver.main(MaxTemperatureDriver.java:56)
I've tried inserting -conf, but it seems to be ignored: there is no error message if I specify a nonexistent path.
I've tried inserting -fs file:/// -jt local, but it makes no difference.
I've tried inserting -D mapreduce.framework.name=local.
I've tried specifying the input and output with the file: scheme.
Note: I'm not asking how to configure Eclipse to connect to a remote Hadoop installation. I want the application to run within Eclipse.
Is this possible? Any ideas?
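For completeness, the programmatic equivalent of the flags I tried would be something like this in the driver (a sketch, assuming MaxTemperatureDriver extends Configured and implements Tool as in the book):

import org.apache.hadoop.conf.Configuration;

// inside MaxTemperatureDriver.run(), before the Job is created:
Configuration conf = getConf();                 // provided by Configured
conf.set("mapreduce.framework.name", "local");  // ask for the local job runner
conf.set("fs.defaultFS", "file:///");           // use the local filesystem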
Additional info:
I turned on debugging. I saw:
582 [main] DEBUG org.apache.hadoop.mapreduce.Cluster - Trying ClientProtocolProvider : org.apache.hadoop.mapred.YarnClientProtocolProvider
583 [main] DEBUG org.apache.hadoop.mapreduce.Cluster - Cannot pick org.apache.hadoop.mapred.YarnClientProtocolProvider as the ClientProtocolProvider - returned null protocol
I'm wondering not why YarnClientProtocolProvider failed, but why it didn't try LocalClientProtocolProvider.
New info:
It seems that this is an issue with Hadoop 2.4.0. I recreated my environment with Hadoop 1.2.1, followed the instructions in
http://gerrymcnicol.com/index.php/2014/01/02/hadoop-and-cassandra-part-4-writing-your-first-mapreduce-job/
added the Windows hack from
http://bigdatanerd.wordpress.com/2013/11/14/mapreduce-running-mapreduce-in-windows-file-system-debug-mapreduce-in-eclipse
and it all started working.
The following blog will be useful:
Running mapreduce in Windows filesystem