Keycloak Environment Variables Not Working

I would like to specify the database for Keycloak to use.
I've found this, which specifies the parameters and environment variables that can be used. KC_DB should be available for setting the database vendor, e.g. mysql.
I created a Dockerfile to build my image:
FROM bitnami/keycloak:18
EXPOSE 80
I then run the image using this:
docker run --name keycloak -e KC_DB=mysql -p 8080:80 keycloak
Logs
PS D:\Projects\keycloak> docker run --name keycloak -e KC_DB=mysql -p 8080:80 keycloak
keycloak 23:48:07.88
keycloak 23:48:07.88 Welcome to the Bitnami keycloak container
keycloak 23:48:07.88 Subscribe to project updates by watching https://github.com/bitnami/containers
keycloak 23:48:07.89 Submit issues and feature requests at https://github.com/bitnami/containers/issues
keycloak 23:48:07.89
keycloak 23:48:07.89 INFO ==> ** Starting keycloak setup **
keycloak 23:48:07.90 INFO ==> Validating settings in KEYCLOAK_* env vars...
keycloak 23:48:07.91 INFO ==> Trying to connect to PostgreSQL server postgresql...
cannot resolve host "postgresql": lookup postgresql on 192.168.65.5:53: read udp 172.17.0.2:51615->192.168.65.5:53: i/o timeout
(the same "cannot resolve host" lookup timeout repeats several more times)
I've tried using KC_DB_HOST and multiple other environment variables, but they don't seem to be picked up. I've tried this with Keycloak 19 too.
What is causing this?

You should raise this with the creator of the Docker image you're using; here is the GitHub reference link. Maybe they are able to publish another Docker image that uses MySQL. So far, per the documentation in the GitHub repository, the image requires you to use a PostgreSQL database.
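That said, the "Validating settings in KEYCLOAK_* env vars..." line in your startup log suggests the Bitnami image reads its own KEYCLOAK_DATABASE_* variables rather than Keycloak's native KC_* ones (as far as I know, the KC_* variables are consumed by the official quay.io/keycloak/keycloak image instead). If you stay on this image with PostgreSQL, a minimal sketch of pointing it at a database could look like this; the hostname, credentials, and database name are placeholders:

docker run --name keycloak \
  -e KEYCLOAK_DATABASE_HOST=my-postgres \
  -e KEYCLOAK_DATABASE_PORT_NUMBER=5432 \
  -e KEYCLOAK_DATABASE_NAME=keycloak_db \
  -e KEYCLOAK_DATABASE_USER=keycloak_user \
  -e KEYCLOAK_DATABASE_PASSWORD=secret \
  -p 8080:8080 keycloak

Note that the Bitnami container listens on 8080 by default, so mapping -p 8080:8080 is probably what you want rather than exposing port 80.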

Looks like this image has a built-in default value for the database host. If you want, you can map 'postgresql' in your hosts file to some other IP. You can do it the following way:
First, check which host entries you already have:
type C:\Windows\System32\Drivers\etc\hosts
It will print something like:
127.0.0.1 localhost
Then, using any editor, add a new line containing:
<DESIRED-IP> postgresql
In case your Postgres is running locally, just put 127.0.0.1 as the IP.
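Keep in mind the container resolves names through its own /etc/hosts, not the Windows one, so if editing the Windows hosts file doesn't take effect, Docker's --add-host flag can inject the entry into the container directly. A sketch, assuming Docker 20.10+ where the special value host-gateway resolves to the host machine:

# add a "postgresql -> host machine" entry to the container's /etc/hosts
docker run --name keycloak --add-host postgresql:host-gateway -p 8080:80 keycloak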

Related

How do I change the port number for connecting to PostgreSQL via micro-orm?

While creating a migration via the CLI command npx mikro-orm migration:create, it throws the error:
MikroORM failed to connect to database test on postgresql://postgres@127.0.0.1:5432
As you can see, it tries to connect on port 5432, the default port set while installing PostgreSQL, but I had changed that port to 5000 while setting up PostgreSQL on my system.
How do I make MikroORM connect to port 5000?
Refer to the docs. You are looking for the port option.
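For illustration, a minimal sketch of a mikro-orm.config.ts that the CLI would pick up, assuming MikroORM v5+; everything except the port option is a placeholder:

// mikro-orm.config.ts
import { defineConfig } from '@mikro-orm/postgresql';

export default defineConfig({
  host: '127.0.0.1',
  port: 5000, // the port you chose while setting up PostgreSQL
  dbName: 'test',
  user: 'postgres',
  password: 'secret', // placeholder
  entities: ['./dist/entities'], // placeholder
});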

How to Connect using Port Forwarding to a PostgreSQL Database on OpenShift 3

I have a problem connecting through port forwarding to a database on OpenShift.
The PostgreSQL pods are running.
I tried connecting to the container running the database to check the process and the psql command, and that works.
Next, I tried port forwarding so that I could connect from outside the OpenShift cluster.
But when I try to connect from outside the cluster, PostgreSQL returns the error: Connection refused.
Using an IP address or a hostname/FQDN doesn't work either, and the error persists.
When I checked the firewall, port 5432/TCP has been opened.
Can anyone help me with this problem?
Thanks
Note: I have already looked at the documentation below, but it did not resolve the problem.
Source Documentation:
https://www.openshift.com/blog/openshift-connecting-database-using-port-forwarding
"psql: could not connect to server: Connection refused" Error when connecting to remote database
The oc port-forward command forwards only from your loopback interfaces.
If you are running your client on the same machine where the cluster is running, then use localhost as your "Host".
If you are running your client on a different machine, then you need more network redirection to get this to work. Please see this post for more information as well as workarounds for your problem: Access OpenShift forwarded ports from remote host
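For illustration, a sketch of both cases; the pod and database names are placeholders:

# forward a local port to the pod; by default this binds only to loopback
oc port-forward postgresql-1-abcde 15432:5432

# so the client has to target localhost on the same machine
psql -h 127.0.0.1 -p 15432 -U postgres mydb

# newer oc/kubectl clients can bind other interfaces instead, which
# lets remote machines reach the forward at <workstation-ip>:15432
oc port-forward --address 0.0.0.0 postgresql-1-abcde 15432:5432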

Not able to connect on Mongodb Atlas port

I'm using an M0 GCP instance.
I can connect to the cluster using this string:
'mongodb+srv://my_user:my_pass@my_cluster-mbsnz.gcp.mongodb.net/db?retryWrites=true&w=majority'
I'm trying to use another client where I need to pass host and port, but I can't connect.
I tried telnet to port 27017, but for some reason I'm not able to connect directly to the port.
curl http://my_cluster-mbsnz.gcp.mongodb.net:27017
curl: (7) Failed to connect to my_cluster-mbsnz.gcp.mongodb.net port 27017: Connection timed out
or
telnet my_cluster-mbsnz.gcp.mongodb.net 27017
Trying 185.82.212.199...
^C -> After a long time waiting
What might be wrong?
+srv URLs use a DNS seedlist. On Atlas, you can click into the cluster and you should be able to see the URLs for your primary and your secondaries, and use those URLs to connect. You should also be able to use nslookup to get that info using part of the connection string (see the sketch below), but it's probably simpler to just look up the URLs through the UI.
https://docs.mongodb.com/manual/reference/connection-string/
In order to leverage the DNS seedlist, use a connection string prefix of mongodb+srv: rather than the standard mongodb:. The +srv indicates to the client that the hostname that follows corresponds to a DNS SRV record. The driver or mongo shell will then query the DNS for the record to determine which hosts are running the mongod instances.
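For illustration, a sketch of the nslookup route; the cluster name is the one from the question, and the per-node hostname in the comment is only an example of the shape of the result:

# the +srv scheme resolves an SRV record with the _mongodb._tcp prefix
nslookup -type=SRV _mongodb._tcp.my_cluster-mbsnz.gcp.mongodb.net

# the answer lists per-node host:port pairs you can hand to a client
# that needs explicit hosts, e.g.
#   my_cluster-shard-00-00-mbsnz.gcp.mongodb.net:27017

Also note that Atlas requires TLS, so a standard mongodb: connection string built from those hosts typically needs ssl=true, the replica set name, and authSource=admin as query parameters.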

How to pull data from AWS RDS PostgreSQL using Kafka locally?

I have a PostgreSQL database server on AWS. I have set up a one-node Kafka cluster on my local machine and want to pull data from the PostgreSQL server. I have been using the JDBC source connector; here is the configuration (actual values changed):
name=test-source-postgresql-jdbc-01
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
tasks.max=1
key.converter=io.confluent.connect.avro.AvroConverter
key.converter.schema.registry.url=http://localhost:8081
connection.url=jdbc:postgresql://hostname:5432/dbname?user=abc&password=pwd
connection.user=abc
connection.password=pwd
table.whitelist=abc1
mode=timestamp
timestamp.column.name=timestamp
topic.prefix=test-postgresql-
and I get the following error while running it:
ERROR Failed to create job for etc/kafka-connect-jdbc/quickstart-postgresql.properties (org.apache.kafka.connect.cli.ConnectStandalone:102)
ERROR Stopping after connector error (org.apache.kafka.connect.cli.ConnectStandalone:113)
java.util.concurrent.ExecutionException: org.apache.kafka.connect.runtime.rest.errors.BadRequestException: Connector
configuration is invalid and contains the following 2 error(s):
Invalid value org.postgresql.util.PSQLException: Connection to hostname:5432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections. for configuration Couldn't open connection to jdbc:postgresql://abc:5432/dbname?user=abc&password=pwd
curl localhost:8083/connector-plugins
ls -l share/java/kafka-connect-jdbc
Any help would be appreciated!
As you mentioned, you're connecting to RDS via an SSH tunnel. I don't think you can configure the Kafka JDBC connector to tunnel through SSH automatically, but you can create the SSH tunnel manually and then configure the Kafka connector to connect to RDS through this tunnel; there is a detailed description here.
Following your configuration, you can create the SSH tunnel with the command:
ssh -N -L 5432:rds.hostname:5432 username@ec2instance.com -i ~/.ssh/your_key
You can test connection to the DB using:
psql -h localhost -p 5432
And your connector config would be
connection.url=jdbc:postgresql://localhost:5432/dbname?user=abc&password=pwd
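For completeness, a sketch of re-running the standalone worker once the tunnel is up; the worker properties path here follows the Confluent layout implied by the error message and may differ in your install:

# with the SSH tunnel running in another shell:
./bin/connect-standalone etc/schema-registry/connect-avro-standalone.properties \
    etc/kafka-connect-jdbc/quickstart-postgresql.properties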

Import PostgreSQL with Sqoop in Docker

I have a PostgreSQL DB sitting on my local machine (Windows) and I would like to import it into my Hortonworks Sandbox using Apache Sqoop. While something like this sounds great, the complicating factor is that my Sandbox is sitting in a Docker container, so statements such as sqoop list-tables --connect jdbc:postgresql://127.0.0.1/ambari --username ambari -P seem to run into authentication errors. I believe the issue comes from trying to connect to the local host from inside the docker container.
I looked at this post on connecting to a MySQL DB from within a container and this one to try to use PostgreSQL instead, but have so far been unsuccessful. I have tried connecting to '127.0.0.1' and '172.17.0.1' (the host's IP) in order to connect to my local host from within Docker. I have also adjusted PostgreSQL's configuration file to listen for connections on all IP addresses. However, I still get the following error messages when I run sqoop list-tables --connect jdbc:postgresql://<ip>:5432/<db_name> --username postgres -P (where <ip> is either 127.0.0.1 or 172.17.0.1, and <db_name> is the name of my database):
For connecting with 127.0.0.1:
ERROR sqoop.Sqoop: Got exception running Sqoop: java.lang.RuntimeException: org.postgresql.util.PSQLException: FATAL: Ident authentication failed for user "postgres"
For connecting with 172.17.0.1:
Connection refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
Any suggestions would be very helpful!
If this is just for local testing and not for production-level code, you can trust all connections to your database by updating the pg_hba.conf file:
Locate your pg_hba.conf file inside your postgres data directory
Edit the file (e.g. with vim) and update it with the following lines:
# TYPE  DATABASE  USER  ADDRESS    METHOD
local   all       all              trust
host    all       all   0.0.0.0/0  trust
host    all       all   ::1/128    trust
Restart your postgres service
If you do this, your first use case (using 127.0.0.1) should work
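As a sketch of applying the change (pg_hba.conf edits only need a configuration reload, not a full restart; the Windows service name below is a placeholder that varies by version):

# locate the active pg_hba.conf
psql -U postgres -c "SHOW hba_file;"

# after editing, reload the configuration
psql -U postgres -c "SELECT pg_reload_conf();"

# or restart the Windows service, e.g.:
# net stop postgresql-x64-12 && net start postgresql-x64-12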