I am trying to connect the Keycloak Docker image quay.io/keycloak/keycloak:19.0.2 to an Azure managed Postgres database instance.
Reason: our main application DB is on Azure, so it's preferable for us to point Keycloak to the same database.
We already tried and tested the Keycloak functionality using a standalone Linux installation (./bin/kc.sh), which connected to the Azure Postgres database without issue, and all our realm, user and client configuration is already stored in that database.
Command: docker run --env-file endpoint.txt -p 8080:8080 quay.io/keycloak/keycloak:19.0.2 start-dev
Contents in endpoint.txt file:
DB_VENDOR=postgres
DB_ADDR=Azure_postgres_url
DB_PORT=5432
DB_DATABASE=keycloak_database
DB_USER=user
DB_PASSWORD=password
Behaviour:
http://public_ip:8080/
The Keycloak landing page shows, but it's not connected to the existing Azure DB; we have to set up the admin account, realms and user configuration from scratch.
Most likely Keycloak is using its internal database (H2).
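For reference, 19.x is the Quarkus-based distribution, which reads KC_-prefixed variables; the DB_* names above belong to the legacy WildFly-based images and are not recognized, which is consistent with start-dev falling back to the embedded H2 store. A rough sketch of what an equivalent endpoint.txt might look like (values carried over from above, not verified against Azure, which may additionally require SSL parameters on the JDBC URL):
# hypothetical replacement for the DB_* values above, using Keycloak 19 (Quarkus) variable names
KC_DB=postgres
KC_DB_URL=jdbc:postgresql://Azure_postgres_url:5432/keycloak_database
KC_DB_USERNAME=user
KC_DB_PASSWORD=password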
Your help is appreciated. Thanks for your time.
To summarize, I am trying to run a Chainlink node via Docker on an Azure VM. I also created an Azure PostgreSQL DB and verified that the VM is able to connect via the psql CLI.
Steps I took to get the node running (following this link):
Create Azure VM
Install docker
mkdir ~/.chainlink-rinkeby
Created .env file
Set ETH_URL via an External Provider
Create Postgres SQL Database following this link
Set Remote Database_Url config using sslmode=disable
Start the node with:
cd ~/.chainlink-rinkeby && docker run -p 6688:6688 -v ~/.chainlink-rinkeby:/chainlink -it --env-file=.env smartcontract/chainlink local n
My .env file:
"ROOT=/chainlink LOG_LEVEL=debug ETH_CHAIN_ID=4 MIN_OUTGOING_CONFIRMATIONS=2 LINK_CONTRACT_ADDRESS=0x01BE23585060835E02B77ef475b0Cc51aA1e0709 CHAINLINK_TLS_PORT=0 SECURE_COOKIES=false GAS_UPDATER_ENABLED=true ALLOW_ORIGINS=*"
"ETH_URL=wss://cl-rinkeby.fiews.io/v1/MY_API_KEY"
"DATABASE_URL=postgresql://MY_USER_NAME:MY_PASSWORD#MY_DATABASE_nAME.postgres.database.azure.com:5432/postgres?sslmode=disable"
Error:
[ERROR] unable to lock ORM: dial tcp 127.0.0.1:5432: connect: connection refused logger/default.go:139 stacktrace=github.com/smartcontractkit/chainlink/core/logger.Error
I've also tried specifying version 0.10.8 in the Chainlink startup command, but the error I get for that is:
[ERROR] failed to initialize database, got error failed to connect to `host=/tmp user=root database=`: dial error
You are trying to connect your Chainlink node to a remote PostgreSQL database that is managed, hosted and administered by Azure's cloud service. The issue with the connection is that the Chainlink node is trying to establish a connection to 127.0.0.1, i.e. it assumes Postgres is running locally inside your Chainlink Docker container.
A Docker container is its own environment and represents its own host, so 127.0.0.1 loops back into the container itself. I recommend having a look at the official Docker networking documentation: https://docs.docker.com/config/containers/container-networking/
With version 0.10.8 you did establish a connection. The remaining issue is related to the USER and the DATABASE: please ensure that you create a dedicated database and a user of its own for it, rather than using the admin (superuser) credentials such as root.
You can connect to Postgres via the Azure CLI (or psql) and run the following:
CREATE DATABASE <yourdbname>;
CREATE USER <youruser> WITH ENCRYPTED PASSWORD '<yourpass>';
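Depending on your setup you will usually also need to grant the new role access to that database, e.g. (same placeholder names as above):
GRANT ALL PRIVILEGES ON DATABASE <yourdbname> TO <youruser>;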
In addition, you can have a look at this post related to connecting to your Postgres database:
https://stackoverflow.com/a/67992586/15859805
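Once the database and user exist, the DATABASE_URL in the .env would typically look something like the sketch below (placeholders only, not verified against your setup; note sslmode=require rather than disable, since Azure usually enforces SSL, and on the Azure single-server offering the user part is normally <youruser>@<servername>, with the @ percent-encoded as %40 inside a URI):
DATABASE_URL=postgresql://<youruser>:<yourpass>@MY_DATABASE_nAME.postgres.database.azure.com:5432/<yourdbname>?sslmode=require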
I am currently preparing to put my web application into production. I'm using an Amazon EC2 instance running Red Hat and have a Postgres AWS RDS in the same VPC. I can't seem to connect to the database, or to a local one running on the same EC2 instance. I have had no issues while running and testing my app on my local Mac and on another physical server in our office that runs Red Hat 7.
I'm running the 'sails lift' command with my models set to migrate: 'alter' so that it can properly set up the database with my models and configuration; however, it hangs on a 'Waiting...' message in the middle of the orm hook, as in the image.
Problematic Output
default: {
adapter: 'sails-postgresql',
host: 'localhost',
port: 5432,
user: 'exampleuser',
password: 'thepassword',
database: 'newdb',
},
I can connect to my RDS database and the local one using psql without any issues, and with pgAdmin. This exact datastore setup works on my Mac and on our physical Red Hat server, using their own respective local Postgres instances. On the Amazon EC2 instance, however, I'm running into the problem from the image.
I have also tried running a completely fresh Sails app made with 'sails new newBaseApp' and set up its datastore to connect to the local Postgres database on the EC2 server. It has the exact same problem.
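For the RDS case, the same datastore would normally point at the instance endpoint rather than localhost; a sketch with placeholder values (the endpoint below is hypothetical, and ssl: true may or may not be needed depending on the RDS configuration):
default: {
  adapter: 'sails-postgresql',
  host: 'mydb.xxxxxxxxxxxx.us-east-1.rds.amazonaws.com', // hypothetical RDS endpoint
  port: 5432,
  user: 'exampleuser',
  password: 'thepassword',
  database: 'newdb',
  // ssl: true, // sometimes required for RDS, depending on its settings
},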
I'm a bit new to SQL and Docker. I've recently created a container for PostgreSQL on my Linux server that can be accessed by SSH. I am trying to manage it using the Entity Framework on .NET Core 2.2.
I'm trying to follow Npgsql's official documentation, but there isn't any provision for connecting via SSH. The example they've provided for the connection string is:
optionsBuilder.UseNpgsql("Host=my_host;Database=my_db;Username=my_user;Password=my_pw")
Where:
my_host is set to the docker container's IP address.
my_db is the database name
my_user is the username on PostgreSQL
my_pw is the database password
I am also using this First EF Core Console Application as a tutorial. When I attempt the following on the dotnet CLI:
dotnet ef database update
It keeps timing out, obviously because it can't connect to the server via SSH.
I've done my fair share of Googling with no luck. Can any of you please advise?
Edit FYI:
I am using a Windows 10 computer as a client
I am using Ubuntu Linux and connecting via OpenSSH
The Linux server has a Docker Container w/ PostgreSQL
I have successfully connected from my Windows 10 client using DBeaver
In principle, connecting to PostgreSQL isn't done over SSH - it's done directly via port 5432. You typically need to configure your container to publish that port (check the Docker networking docs).
It is possible to use SSH tunneling to connect to PG (or any other service), but that's a pretty specialized mechanism for bypassing firewalls and the like. You likely just need to publish port 5432 from your container.
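A rough sketch of both options, with placeholder names and addresses (my_postgres, my_db, <linux-server-ip> and <user> are hypothetical):
# Option 1: publish the container's Postgres port on the Linux server
# (shown with a fresh container, since a port cannot be published on an already-running one)
docker run -d --name my_postgres -e POSTGRES_PASSWORD=my_pw -p 5432:5432 postgres
# connection string from Windows: Host=<linux-server-ip>;Port=5432;Database=my_db;Username=my_user;Password=my_pw

# Option 2: SSH local port forwarding from the Windows client (OpenSSH)
ssh -L 15432:localhost:5432 <user>@<linux-server>
# connection string: Host=localhost;Port=15432;Database=my_db;Username=my_user;Password=my_pw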
I'm trying to use the Heroku PostgreSQL add-on in my Spring Boot application (locally first). In my application properties, I removed the local server information and replaced it with the Heroku credentials. I also added the postgresql dependency in my Gradle file.
My code:
spring.datasource.url=jdbc:postgres://qfxqpoceljdtfo:f50b14498b7be95f0a1f4cf466b09b54ed8bbefaac9aa28a97b14719d0625e56#ec2-23-21-201-255.compute-1.amazonaws.com:5432/d6eqamh4egp1g0
spring.datasource.username=qfxqpoceljdtfo
spring.datasource.password=***Password***
spring.datasource.driver-class-name=org.postgresql.Driver
Heroku Credentials:
Host: ec2-23-21-201-255.compute-1.amazonaws.com
Database: d6eqamh4egp1g0
User: qfxqpoceljdtfo
Port: 5432
Password: ***Password***
URI postgres://qfxqpoceljdtfo:f50b14498b7be95f0a1f4cf466b09b54ed8bbefaac9aa28a97b14719d0625e56#ec2-23-21-201-255.compute-1.amazonaws.com:5432/d6eqamh4egp1g0
Heroku CLI: heroku pg:psql postgresql-metric-82455 --app battlesh1p
The resulting error is:
java.sql.SQLException: Driver:org.postgresql.Driver#20999517 returned null for URL
I've also tried using:
jdbc:postgresql://ec2-23-21-201-255.compute-1.amazonaws.com:5432/d6eqamh4egp1g0?sslmode=require
Which results in this error
org.postgresql.util.PSQLException: FATAL: no pg_hba.conf entry for host "94.248.76.191", user "qfxqpoceljdtfo", database "d6eqamh4egp1g0", SSL off
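For reference, the org.postgresql JDBC driver expects the jdbc:postgresql:// scheme and takes the credentials from the separate username/password properties rather than from the URL; a sketch of properties corresponding to the credentials above (sslmode=require as a URL parameter needs a reasonably recent driver version; on older ones the usual workaround is ssl=true with a non-validating SSL factory):
spring.datasource.url=jdbc:postgresql://ec2-23-21-201-255.compute-1.amazonaws.com:5432/d6eqamh4egp1g0?sslmode=require
spring.datasource.username=qfxqpoceljdtfo
spring.datasource.password=***Password***
spring.datasource.driver-class-name=org.postgresql.Driver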
In case anyone comes across this: the Heroku documentation worked fine for configuring my application to use the Postgres database add-on in production, but I could never get my Spring app to run locally and access the hosted DB.
I have created a Citus DB cluster using the Cloud Formation template here:
Multi-Machine AWS Citus Cloud Formation
I can log in to the DB using the CLI once I connect to the host in PuTTY. This does not require a username/password, and the following runs successfully.
/usr/pgsql-9.6/bin/psql -h localhost -d postgres
select * from master_get_active_worker_nodes();
I set the inbound rules for port 5432 to 0.0.0.0/0 just to allow my remote connection to the DB.
Yet now, when I try to connect using a JDBC URL from a remote host, I don't know what username/password to enter in the PostgreSQL JDBC URL. Is there a default user/password to use?
The default username is ec2-user and there is no default password. As you can see from the pg_hba.conf file, the authentication method for localhost is defined as "trust". You may check the details of the authentication methods here. So, you need to allow remote hosts to access the DB by changing the pg_hba.conf file.
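What that change usually involves, sketched with illustrative values (the CIDR range and auth method are examples; the role you connect with also needs a password, listen_addresses must allow remote connections in postgresql.conf, and the server has to be reloaded afterwards):
# pg_hba.conf: allow password-authenticated connections from remote hosts
# TYPE  DATABASE  USER  ADDRESS     METHOD
host    all       all   0.0.0.0/0   md5

# postgresql.conf
listen_addresses = '*'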