Getting Database Authentication to work on Apache Guacamole - PostgreSQL

I have Apache Guacamole and Tomcat working between two laptops and a PC on a LAN.
However, I was always updating user and connection details through
user-mapping.xml, so I decided to set up Database Authentication to make those
changes easier. I have set up SQL Server, MySQL, and now PostgreSQL (not
concurrently; I tried them one by one and then uninstalled each), all active and
running, yet the Guacamole login details remain the same and seem to be
unaffected by the changes in guacamole.properties.
Here is my latest guacamole.properties file for reference (PostgreSQL version at
the moment):
guacd-hostname:localhost
guacd-port: 4822
user-mapping:/etc/guacamole/user-mapping.xml
auth-provider: net.sourceforge.guacamole.net.basic.BasicFileAuthenticationProvider
# MySQL properties
#mysql-hostname: localhost
#mysql-port: 3306
#mysql-database: guacamole_db
#mysql-username: SHRDC
#mysql-password: Shrdc_1234
#mysql-user-required: true
# PostgreSQL properties
postgresql-hostname: localhost
postgresql-port: 5432
postgresql-database: guacamole_db
postgresql-username: SHRDC
postgresql-password: Shrdc_1234
postgresql-user-required: true
I feel it's some connector or driver issue, hence it not being recognised.
Something to change in /lib or /extensions?
For reference, the auth extension and JDBC driver I am currently using are:
in /extensions:
guacamole-auth-jdbc-postgresql.jar (previously guacamole-auth-jdbc-postgresql-1.2.0.jar,
before I renamed it while trying something out)
in /lib:
postgresql-42.2.14.jar
All steps were followed as per:
https://guacamole.apache.org/doc/gug/jdbc-auth.html
Would love some feedback; I've been stuck trying to get DB authentication to work
for a week plus now!
Sincerely

I've encountered the exact same problem; however, my setup uses Docker.
In my case, there were discrepancies between the actual code and the documentation.
I will explain how to find the root cause, since the situation is similar.
Enable Logback debug
Since you are installing manually (not using a Docker container), chances are you know exactly where GUACAMOLE_HOME is. Just to remind you, by default it is /etc/guacamole, but if /home/$USER/.guacamole exists it will be used instead.
Add a logback.xml as described here: https://guacamole.apache.org/doc/gug/configuring-guacamole.html to your GUACAMOLE_HOME directory.
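For reference, a minimal logback.xml along the lines of the one in that documentation (a console appender at DEBUG level; treat this as a sketch and adjust the pattern to taste):
<configuration>
    <appender name="GUAC-DEBUG" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
    </appender>
    <root level="debug">
        <appender-ref ref="GUAC-DEBUG"/>
    </root>
</configuration>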
See your catalina output
The new debug settings will output all debug messages. If there are no DEBUG messages at all, then you put logback.xml in the wrong location.
Once you have the DEBUG output stream, look for the important output while Catalina is starting up, such as the GUACAMOLE_HOME currently in use, the AuthenticationProvider bindings being registered, etc.
For example, this is the excerpt of my log:
19:23:08.933 [localhost-startStop-1] DEBUG o.a.g.extension.ExtensionModule - Loading extension: "guacamole-auth-jdbc-postgresql-1.2.0.jar"
19:23:08.973 [localhost-startStop-1] DEBUG o.a.g.extension.ExtensionModule - [0] Binding AuthenticationProvider "org.apache.guacamole.auth.postgresql.PostgreSQLAuthenticationProvider".
19:23:08.980 [localhost-startStop-1] INFO o.a.g.environment.LocalEnvironment - GUACAMOLE_HOME is "/root/.guacamole".
19:23:10.150 [localhost-startStop-1] DEBUG o.a.g.extension.ExtensionModule - [1] Binding AuthenticationProvider "org.apache.guacamole.auth.postgresql.PostgreSQLSharedAuthenticationProvider".
19:23:10.207 [localhost-startStop-1] DEBUG o.a.g.e.LanguageResourceService - Merged strings with existing language: "es"
19:23:10.213 [localhost-startStop-1] DEBUG o.a.g.e.LanguageResourceService - Merged strings with existing language: "ru"
19:23:10.216 [localhost-startStop-1] DEBUG o.a.g.e.LanguageResourceService - Merged strings with existing language: "de"
19:23:10.222 [localhost-startStop-1] DEBUG o.a.g.e.LanguageResourceService - Merged strings with existing language: "fr"
19:23:10.227 [localhost-startStop-1] DEBUG o.a.g.e.LanguageResourceService - Merged strings with existing language: "ja"
19:23:10.233 [localhost-startStop-1] DEBUG o.a.g.e.LanguageResourceService - Merged strings with existing language: "en"
19:23:10.234 [localhost-startStop-1] INFO o.a.g.extension.ExtensionModule - Extension "PostgreSQL Authentication" loaded.
Notice that the PostgreSQL auth binding is loaded first.
If there is no output like that, then Tomcat hasn't even found your settings.
If it found the settings and loaded the extension, but couldn't locate the JDBC driver itself, Catalina startup will still run fine, but logging in via the Guacamole dashboard will spew an error like this:
19:44:44.511 [http-nio-8080-exec-12] DEBUG o.a.g.rest.RESTExceptionMapper - Unexpected error in REST endpoint.
org.apache.ibatis.exceptions.PersistenceException:
### Error querying database. Cause: java.sql.SQLException: Error setting driver on UnpooledDataSource. Cause: java.lang.ClassNotFoundException: org.postgresql.Driver
### The error may exist in org/apache/guacamole/auth/jdbc/user/UserMapper.xml
### The error may involve org.apache.guacamole.auth.jdbc.user.UserMapper.selectOne
### The error occurred while executing a query
### Cause: java.sql.SQLException: Error setting driver on UnpooledDataSource. Cause: java.lang.ClassNotFoundException: org.postgresql.Driver
Lastly, if it found your settings but didn't find your guacamole-auth-jdbc-postgresql extension, it will spew this log:
19:47:49.654 [http-nio-8080-exec-15] DEBUG o.a.g.extension.ExtensionModule - [0] Binding AuthenticationProvider "org.apache.guacamole.auth.file.FileAuthenticationProvider".
Notice that now the FileAuthenticationProvider binding is loaded first (it didn't find your PostgreSQL JDBC auth binding).
Based on the log information, systematically try to find the root cause
It can be as simple as a wrong GUACAMOLE_HOME. For example, you edited /etc/guacamole/guacamole.properties, but Tomcat actually loaded /home/$USER/.guacamole/guacamole.properties. Or maybe your directory structure is incorrect.
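A quick way to confirm which GUACAMOLE_HOME Tomcat actually used is to grep the startup log once the debug output above is enabled (the log path is an assumption; point it at wherever your Tomcat writes catalina.out):
grep GUACAMOLE_HOME /var/log/tomcat*/catalina.out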
This is my directory tree, if you want to compare:
root@guacamole-7988d57c8d-nwfk7:~/.guacamole# tree .
.
├── extensions
│   ├── guacamole-auth-jdbc-postgresql-1.2.0.jar -> /opt/guacamole/postgresql/guacamole-auth-jdbc-postgresql-1.2.0.jar
│   └── lost+found
├── guacamole.properties
├── lib
│   └── postgresql-9.4-1201.jdbc41.jar -> /opt/guacamole/postgresql/postgresql-9.4-1201.jdbc41.jar
└── logback.xml
3 directories, 4 files
Check if you can actually access the database
From within the machine that Guacamole (the Tomcat) runs on, check that you can actually reach your database with the given credentials. If you are using Postgres, try to access it via psql, just to make sure you have the proper permissions to access the database.
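For example, with the credentials from your guacamole.properties (values taken from your post; substitute your own):
psql -h localhost -p 5432 -U SHRDC -d guacamole_db
Once connected, \dt should list the Guacamole schema tables (guacamole_user, guacamole_connection, and so on) if the schema scripts were applied.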
Make sure the JDBC driver you are using is for the correct Java version.
You have probably been stressed enough by the docs, but maybe you can check again.
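As a quick sanity check (run it as the same user/environment Tomcat uses, since that is the JVM that matters):
java -version
Then compare against the requirements on the pgJDBC download page; as far as I recall, the postgresql-9.4-1201.jdbc41.jar in my tree above is the JDBC 4.1 / Java 7 build, while the plain 42.2.x jars target Java 8+, with separate .jre6/.jre7 builds for older JVMs (verify this on the download page).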

Related

Import realm in Keycloak 18.x

I cannot import any realms into Keycloak 18.0.0. That's the Quarkus distribution now, not the WildFly one anymore. The documentation says it should be pretty simple, and by mounting my exported realm.json file into /opt/keycloak/data/import/...json it actually TRIES to import it, but it ends with:
"[org.keycloak.quarkus.runtime.cli.ExecutionExceptionHandler] (main) ERROR: Script upload is disabled".
This is known to have been removed, and the old -Dkeycloak.profile.feature.upload_scripts=enabled won't work anymore. OK.
But then what's the way to import any realms on startup? That would be used to distribute a ready-made local stack without any handcrafting needed to launch it. I could do it by running SQL commands, but that's way too hacky for my taste.
Compose file:
cp-keycloak:
  image: quay.io/keycloak/keycloak:18.0.0
  environment:
    KC_DB: mysql
    KC_DB_URL: jdbc:mysql://cp-keycloak-database:3306/keycloak
    KC_DB_USERNAME: root
    KC_DB_PASSWORD: root
    KC_HOSTNAME: localhost
    KEYCLOAK_ADMIN: admin
    KEYCLOAK_ADMIN_PASSWORD: admin
  ports:
    - 8082:8080
  volumes:
    - ./data/local_stack/init.keycloak.json:/opt/keycloak/data/import/main-realm.json:ro
  entrypoint: "/opt/keycloak/bin/kc.sh start-dev --import-realm"
The output:
cp-keycloak_1 | 2022-05-05 14:07:26,801 ERROR [org.keycloak.quarkus.runtime.cli.ExecutionExceptionHandler] (main) ERROR: Failed to start server in (development) mode
cp-keycloak_1 | 2022-05-05 14:07:26,802 ERROR [org.keycloak.quarkus.runtime.cli.ExecutionExceptionHandler] (main) ERROR: Failed to import realm: Main-Realm
cp-keycloak_1 | 2022-05-05 14:07:26,803 ERROR [org.keycloak.quarkus.runtime.cli.ExecutionExceptionHandler] (main) ERROR: Script upload is disabled
Thanks
This might be caused by references inside your realm .json to configuration that uses the deprecated upload-script feature.
Try to remove it, export the JSON, and then try to import it again (this time without the upload-script feature).
From the comments (credit to jfrantzius):
See here for what you either need to remove or replace in your realm-export.json: https://github.com/keycloak/keycloak/issues/11664#issuecomment-1111062102. We had to replace the entries; see also https://github.com/keycloak/keycloak/discussions/12041#discussioncomment-2768768
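If you want to locate the offending entries quickly before re-exporting, something along these lines works (just a sketch; the exact keys to remove or replace are the ones listed in the linked issue and discussion):
grep -n -i "script" ./data/local_stack/init.keycloak.json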

OpenSource: Encryption of JDBC Password in configuration properties file

I noticed there is a plugin available for the enterprise version (https://download.rundeck.com/plugins/encrypted-datasource-plugin.html); is there an option for users of Rundeck open source to perform the same kind of encryption of the datasource password in the configuration file?
I noticed many people mention writing their own Java programs leveraging the Jasypt utilities, so I tried this. I have two jar files (one for encrypt and one for decrypt). I created a directory (since I'm using the RPM-based Rundeck 3.3 installation) called /var/lib/rundeck/lib and added it to the JVM classpath in /etc/sysconfig/rundeckd via: export RDECK_JVM_SETTINGS="-Djava.class.path=/var/lib/rundeck/lib/*". I converted my /etc/rundeck/rundeck-config.properties file to Groovy format and updated /etc/sysconfig/rundeck with: export RDECK_CONFIG_FILE="/etc/rundeck/rundeck-config.groovy". However, when I change the /etc/rundeck/rundeck-config.groovy entry for datasource.password to:
datasource.password=MyDecrypt("MyTest123Password");
I get an error in the Rundeck logs after restarting:
[2020-09-08T18:01:03,168] WARN context.AnnotationConfigServletWebServerApplicationContext - Exception encountered during context initialization - cancelling refresh attempt: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'application': Initialization of bean failed; nested exception is groovy.lang.MissingMethodException: No signature of method: groovy.util.ConfigSlurper$_parse_closure5.MyDecrypt() is applicable for argument types: (String) values: [MyTest123Password]
Any suggestions?
That encryption feature is only for Rundeck Enterprise; perhaps the best approach on Rundeck Community is to secure the rundeck-config.properties file through UNIX file permissions.
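For example (a sketch, assuming an RPM install where the rundeck service account owns the configuration; adjust the user/group and path to your setup):
chown rundeck:rundeck /etc/rundeck/rundeck-config.properties
chmod 600 /etc/rundeck/rundeck-config.properties
This doesn't encrypt anything, but it ensures only the service account (and root) can read the plaintext password.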

Creating PostgreSQL DataSource via pax-jdbc config file on karaf 4

On my Karaf 4.0.8 I've installed the feature pax-jdbc-postgresql. The DataSourceFactory for PostgreSQL is installed:
[org.osgi.service.jdbc.DataSourceFactory]
osgi.jdbc.driver.class org.postgresql.Driver
osgi.jdbc.driver.name PostgreSQL JDBC Driver
osgi.jdbc.driver.version PostgreSQL 9.4 JDBC4.1 (build 1203)
service.bundleid 204
service.scope singleton
Using Bundles com.eclipsesource.jaxrs.publisher (184)
I've created the file etc/org.ops4j.datasource-psql-sandbox.cfg:
osgi.jdbc.driver.class=org.postgresql.Driver
osgi.jdbc.driver.name=PostgreSQL
url=jdbc:postgresql://localhost:5432/sandbox
dataSourceName=psql-sandbox
user=sandbox
password=sandbox
After that, I see the confirmation in karaf.log that the file was processed:
2017-02-10 14:54:17,468 | INFO | 41-88b277ae0921) | DataSourceRegistration | 154 - org.ops4j.pax.jdbc.config - 0.9.0 | Detected config for DataSource psql-sandbox. Tracking DSF with filter (&(objectClass=org.osgi.service.jdbc.DataSourceFactory)(osgi.jdbc.driver.class=org.postgresql.Driver)(osgi.jdbc.driver.name=PostgreSQL))
However, I see no new DataSource in the services list in the console. What went wrong? I see no exceptions in the log ...
The log message tells you that the config was processed and that pax-jdbc-config is now searching for a suitable DataSourceFactory OSGi service.
The problem in your case is that it does not find such a service. To debug this, list all DataSourceFactory services and check their properties:
service:list DataSourceFactory
In my case it shows this:
[org.osgi.service.jdbc.DataSourceFactory]
-----------------------------------------
osgi.jdbc.driver.class = org.postgresql.Driver
osgi.jdbc.driver.name = PostgreSQL JDBC Driver
...
As you can see, it does not match the filter from the log. Generally you should provide either osgi.jdbc.driver.class or osgi.jdbc.driver.name, not both. If you remove the osgi.jdbc.driver.name line, the config will work; see the corrected file below.
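For reference, the corrected etc/org.ops4j.datasource-psql-sandbox.cfg would then be (same values as in your file, with only the driver.name line dropped):
osgi.jdbc.driver.class=org.postgresql.Driver
url=jdbc:postgresql://localhost:5432/sandbox
dataSourceName=psql-sandbox
user=sandbox
password=sandbox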
There is no error message, as the system cannot know whether the error is transient or not. Basically, as soon as you install a matching OSGi service, the DataSource will be created.

Error running hadoop application in Eclipse on Windows

I'm trying to set up an Eclipse environment for developing and debugging Hadoop. I'm following Tom White's Hadoop: The Definitive Guide, 3rd edition. What I would like to do is get the MaxTemperature app working locally on my Windows machine within Eclipse before moving it to my Hortonworks sandbox VM. The comment on page 158 about using the local job runner seems to be what I want. I don't want to set up a full Hadoop installation on Windows; I'm hoping that with the right config params I can convince it to run as a Java application inside Eclipse.
Windows: 7
Eclipse: Luna
Hadoop: 2.4.0
JDK: 7
When I set the Run configuration for MaxTemperatureDriver (Source code on page 157) to
inputfile outputdir foo (deliberate bogus 3rd parameter)
I get the usage message so I know I'm running my program with those params.
If I remove the bogus third param I get
Exception in thread "main" java.io.IOException: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.
at org.apache.hadoop.mapreduce.Cluster.initialize(Cluster.java:120)
at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:82)
at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:75)
at org.apache.hadoop.mapreduce.Job$9.run(Job.java:1255)
at org.apache.hadoop.mapreduce.Job$9.run(Job.java:1251)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
at org.apache.hadoop.mapreduce.Job.connect(Job.java:1250)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1279)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1303)
at mark.MaxTemperatureDriver.run(MaxTemperatureDriver.java:52)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
at mark.MaxTemperatureDriver.main(MaxTemperatureDriver.java:56)
I've tried inserting -conf, but it seems to be ignored. There is no error message if I specify a nonexistent path.
I've tried inserting -fs file:/// -jt local, but it makes no difference.
I've tried inserting -D mapreduce.framework.name=local.
I've tried specifying the input and output with the file: format.
Note: I'm not asking how to configure Eclipse to connect to a remote Hadoop installation. I want the application to run within Eclipse.
Is this possible? Any ideas?
Additional info:
I turned on debugging. I saw:
582 [main] DEBUG org.apache.hadoop.mapreduce.Cluster - Trying ClientProtocolProvider : org.apache.hadoop.mapred.YarnClientProtocolProvider
583 [main] DEBUG org.apache.hadoop.mapreduce.Cluster - Cannot pick org.apache.hadoop.mapred.YarnClientProtocolProvider as the ClientProtocolProvider - returned null protocol
I'm wondering not why YarnClientProtocolProvider failed, but why it didn't try LocalClientProtocolProvider.
New info:
It seems that this is an issue with Hadoop 2.4.0. I recreated my environment with Hadoop 1.2.1, followed the instructions in
http://gerrymcnicol.com/index.php/2014/01/02/hadoop-and-cassandra-part-4-writing-your-first-mapreduce-job/
added the Windows hack from
http://bigdatanerd.wordpress.com/2013/11/14/mapreduce-running-mapreduce-in-windows-file-system-debug-mapreduce-in-eclipse
and it all started working.
The following blog will be useful:
Running mapreduce in Windows filesystem
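For anyone else chasing the local job runner route on Hadoop 2.x: the -fs/-jt flags from the question are equivalent to passing a small configuration file via -conf, along these lines (a sketch, assuming the driver goes through ToolRunner/GenericOptionsParser as in the book):
<?xml version="1.0"?>
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>file:///</value>
  </property>
  <property>
    <name>mapreduce.framework.name</name>
    <value>local</value>
  </property>
</configuration>
If the debug output only shows YarnClientProtocolProvider being tried, as above, it is also worth checking that hadoop-mapreduce-client-common (the artifact containing LocalClientProtocolProvider) is on the Eclipse classpath, since the providers are discovered via ServiceLoader.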

JBoss shows error with datasource on startup

On starting JBoss I am getting the following error:
--- MBEANS THAT ARE THE ROOT CAUSE OF THE PROBLEM ---
ObjectName: jboss.jca:service=DataSourceBinding,name=DefaultDS
State: NOTYETINSTALLED
Depends On Me:
jboss.ejb:service=EJBTimerService,persistencePolicy=database
jboss:service=KeyGeneratorFactory,type=HiLo
jboss.mq:service=StateManager
jboss.mq:service=PersistenceManager
And for all database connections in the servlet I get the following exception:
org.postgresql.util.PSQLException: FATAL: password authentication failed for user "poll"
It was working fine and all of a sudden I started getting these errors. My password is correct. I even tried changing the password and trying again; it showed the same exception. What is happening here?
The DefaultDS datasource is what the name suggests: the default datasource. It ships with JBoss and is configured to use the Hypersonic (i.e. in-memory) database. JBoss uses the DefaultDS datasource to read/write internal queues, timed events, etc.
Check the file ../conf/standardjbosscmp-jdbc.xml to see what you've got configured for the DefaultDS datasource; it sounds like you've edited that file unintentionally. Unless you need to persist internal queues etc. across boots, just leave it as shipped, using Hypersonic.
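For comparison, the shipped defaults in that file look roughly like this (an excerpt from memory, not a verbatim copy; verify against your own installation):
<jbosscmp-jdbc>
  <defaults>
    <datasource>java:/DefaultDS</datasource>
    <datasource-mapping>Hypersonic SQL</datasource-mapping>
  </defaults>
</jbosscmp-jdbc>
If it has been changed to point DefaultDS at your own PostgreSQL datasource, restoring these defaults is the quickest fix.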
See the JBoss doc for more.