Keycloak restarts continuously after DB backup restore

I am using PostgreSQL as the database for my Keycloak (version 14.0.0) distroless Docker image.
After restoring a SQL dump of my Keycloak database, the container restarts continuously with the following error message:
User with username 'su_0317912' already added to '/opt/jboss/keycloak/standalone/configuration/keycloak-add-user.json'
I have no idea why this message appears, because the user 'su_0317912' doesn't exist in my database. There is another su user that Keycloak created before the backup, so why is Keycloak even trying to create one?
Since the Keycloak container keeps restarting, I can't connect to a bash shell once it's in the restart loop.
Any help and tips would be very welcome.

I got it solved. In my case the KEYCLOAK_ADMIN environment variable was set to su_0317912, but after the restore another su user already existed in the database. The error message is somewhat confusing, because it is not su_0317912 that was already added, just another user.
Is only one su_* user allowed in the database?
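If you hit the same restart loop, a hedged sketch of the workaround: once the restored database already contains an admin user, start the container without the admin bootstrap variable (the image name and DB variables below are assumptions, adjust to your setup):

docker rm -f keycloak
docker run -d --name keycloak \
  -e DB_VENDOR=postgres -e DB_ADDR=db -e DB_DATABASE=keycloak \
  -e DB_USER=keycloak -e DB_PASSWORD=secret \
  jboss/keycloak:14.0.0
# KEYCLOAK_ADMIN (and its password variable) are deliberately omitted here;
# alternatively, remove the generated keycloak-add-user.json before starting.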

Related

Why do I see different data between the postgres DB running in Docker and Postico on the main host?

I am using a Docker container for a Postgres database that backs a UI. I deleted an entry from Postgres via Postico (a Postgres client for Mac); now the database looks completely empty in Postico, but when I refresh the UI, I still see the data!
Please correct me if I'm wrong: Postico is just a UI to view what is in the Postgres instance running in the container, so I expected the UI to be empty as well, but that is not the case.
Can someone tell me why my expectation is wrong, and let me know if any more info is needed?
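One way to check which instance each client is actually talking to (the container name, credentials and table below are assumptions):

docker ps --format '{{.Names}}  {{.Ports}}'    # is the container's 5432 published to the host at all?
docker exec -it mydb psql -U postgres -c 'SELECT count(*) FROM mytable;'    # what the containerized UI sees
psql -h localhost -p 5432 -U postgres -c 'SELECT count(*) FROM mytable;'    # what Postico on the host sees

If the two counts differ, Postico is most likely connected to a separate Postgres installation on the host rather than to the one inside the container.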

Database backup not working using database manager

I'd like to back up a database today through localhost:8069/web/database/manager, but the button does nothing when clicked.
The terminal shows no error.
Duplicate, delete, create database, restore database and set master password are also broken; only the links to existing databases work.
See the screenshot (not included).
The problem emerged after I deployed Odoo 12.
I can back up directly through pgAdmin 4, but I still want to know what happened.
My PostgreSQL version is 9.6, used for Odoo 10 and Odoo 11.
It worked smoothly before I deployed Odoo 12.
I found it was a silly mistake on my part.
The reason is that I was connecting to the wrong db_host: my config sets db_host to 127.0.0.1, but I was connecting to localhost.
I'm keeping this topic up for anyone who makes the same mistake.
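A quick way to spot this kind of mismatch (the config path and credentials below are assumptions):

grep -E 'db_host|db_port' /etc/odoo/odoo.conf    # which host/port the Odoo server actually uses
psql -h 127.0.0.1 -p 5432 -U odoo -l             # confirm that host/port really accepts connections

The database manager only works when the db_host in the Odoo config points at the Postgres instance you think you are connecting to.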

PostgreSQL in Openshift won't execute the entrypoint and can not start the database

We have a read-only Postgresql database that should run in Openshift cluster.
We are using RHEL as the underlying operating system.
Our Dockerfile installs the Postgres software, creates the database instance, loads the data into it, then shuts the database down and saves the image.
We are using only bash and sql scripts and deploy the database using flyway.
When starting the container, the entrypoint script simply starts up the database instance using the "pg_ctl" command and then performs an endless loop to keep the container running.
The Dockerfile has USER 26 as its last command, where 26 is the id of the postgres user. The entrypoint script can be started as the postgres user or by a sudo user.
Everything is working well in Docker.
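For reference, a minimal sketch of the kind of entrypoint described above (the data directory path and options are assumptions, not our actual script):

#!/bin/bash
# start the already-initialized instance, then keep the container alive
pg_ctl -D /var/lib/pgsql/data -w start
while true; do sleep 3600; done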
In OpenShift the container is started by a different user belonging to the root group, but it is neither the root user nor user 26. OpenShift effectively ignores the USER 26 clause in the Dockerfile.
The user starting the container (we'll call it containeruser) has no rights to start the Postgres instance, so when running the entrypoint it gets permission denied on the PostgreSQL data folder.
I have tried different options, such as adding containeruser to the wheel group and modifying the sudoers file to allow it to use sudo and start the entrypoint as the postgres user, but with no success.
So I have my database image ready but cannot start it in OpenShift.
On the OpenShift configuration side we are not allowed to make changes like allowing sudo usage or starting the container as the root or postgres user.
Any ideas or help with this problem?
I am not an Openshift expert.
Thank you!
Best regards,
rimetnac
You have two choices.
The preferred choice is to fix your image so that it can run as any user. For this, do not use the existing postgres user. Create a new user, and give that user group root. Then ensure that all directories and files that PostgreSQL needs to write to are owned by that user, but also have group root and are group-writable. When the container is then started up, it will run as an assigned user ID that is not in /etc/passwd, and so will fall back to using group root. Because the directories and files are writable by group root, everything will still work. For more information see:
https://docs.openshift.org/latest/creating_images/guidelines.html
Specifically, section 'Support Arbitrary User IDs'.
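A hedged Dockerfile sketch of that layout (the UID, user name and data path are assumptions):

RUN useradd -u 1001 -g 0 pguser && \
    mkdir -p /var/lib/pgsql/data && \
    chown -R 1001:0 /var/lib/pgsql/data && \
    chmod -R g=u /var/lib/pgsql/data    # group root gets the same rights as the owner
USER 1001

With this layout, whatever arbitrary user ID OpenShift assigns is still in group root and can write to the data directory.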
The second option, if you have admin control of the cluster and your security team does not object to overriding the default security model, is to allow your image to run as the user ID it wants.
First create a new service account:
oc create serviceaccount runasnonroot
Next grant that service account the ability to run as non root user ID of its choosing.
oc adm policy add-scc-to-user nonroot -z runasnonroot --as system:admin
Then patch the deployment config to use that service account.
oc patch dc/mydatabase --patch '{"spec":{"template":{"spec":{"serviceAccountName": "runasnonroot"}}}}'
Note that this still requires you to use USER in the image with an integer user ID and not postgres. Otherwise OpenShift can't verify it will run as a non-root user; if you used a user name instead of a user ID, that name could be maliciously mapped to root.
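In Dockerfile terms, a small illustrative sketch (not taken from the image in question):

USER 26          # numeric UID: the nonroot SCC can verify this is not root
# USER postgres  # a user name would be rejected, since a name could map to UID 0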
I spent days figuring this out and found one good solution.
OpenShift Origin runs an image as a user created by it, as explained in this OpenShift blog post. This prevents programs from being able to access needed files and directories. To successfully run a program on OpenShift Origin, the blog post provides two solutions; however, the first will not work for PostgreSQL and the second has two variants, each with a disadvantage (explained in the notes):
1. Grant group write access to the directories used by the main program.
   This will not solve the problem because, although the PostgreSQL files will be accessible by any program, they must be owned by the owner of the PostgreSQL process.
2. Ensure that when operating system libraries are used to look up a system user, one is returned for the ID of the user OpenShift Origin runs the image as. There are two ways to do this:
   - Use a package called nss_wrapper, "which intercepts any calls which look up details of a user and returns a valid entry."
   - Make the UNIX password database file (/etc/passwd) globally writable in the image build so that the OpenShift user can be added to it in the S2I run script.
   Each variant has a disadvantage: the first installs an extra package and the second makes user accounts insecure.
The best solution is to build the Docker image to run as the user OpenShift Origin will run the image as; I built an instructional image using this approach.
One additional problem to note is that, as the owner of the PostgreSQL process must be the owner of the files and directories accessed by PostgreSQL, PostgreSQL must be set up (i.e. initdb, roles, databases, etc.) during the image build. This is because file ownership can only be changed during the image build and the ownership of the files must be changed after PostgreSQL has been set up for the reason explained in #2 below.
Here are the complete steps, with notes, for setting up PostgreSQL during the image build (a hedged Dockerfile sketch follows the list):
1. Manually create the PostgreSQL data directory and change its ownership to a non-root user that will be used to initialize PostgreSQL and set up the components (e.g. roles and databases) required to run the server on OpenShift Origin.
   Note: this is required because the "initdb" executable must be executed by a user other than root and needs access to the data directory. Additionally, this user cannot be the user OpenShift Origin will run the image as, because that user does not exist in the system.
2. Switch to the non-root user.
   Note: this is required because the initdb executable must be executed "as the user that will own the server process, because the server needs to have access to the files and directories that initdb creates" (PostgreSQL documentation), and because the PostgreSQL server will be started to set up components (e.g. roles and databases) required to run the server on OpenShift Origin.
3. Run the "initdb" executable.
4. Start the PostgreSQL server, set up the required components (roles, databases, etc.) and stop the PostgreSQL server.
5. Switch back to the root user.
6. Change the ownership of the PostgreSQL files and directories to the user OpenShift Origin will run the image as.
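A hedged Dockerfile sketch of these steps (user names, UIDs, paths and the example SQL are assumptions):

RUN useradd builduser && mkdir -p /var/lib/pgsql/data && chown builduser /var/lib/pgsql/data
USER builduser
RUN initdb -D /var/lib/pgsql/data && \
    pg_ctl -D /var/lib/pgsql/data -o "-k /tmp" -w start && \
    psql -h /tmp -d postgres -c "CREATE ROLE app LOGIN;" && \
    psql -h /tmp -d postgres -c "CREATE DATABASE appdb OWNER app;" && \
    pg_ctl -D /var/lib/pgsql/data -m fast stop
USER root
RUN chown -R 1001:0 /var/lib/pgsql/data && chmod -R g=u /var/lib/pgsql/data
# 1001 stands in for the user OpenShift Origin will run the image as
USER 1001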
Edit (06/20/18): I have found that there is a solution to set up PostgreSQL after the image is built. The user OpenShift Origin will run the image as can be added to the system at the start of the build. This will allow PostgreSQL to be set up and the ownership of its files and directories to be changed after the image build.
After gathering the comments from all contributors, I can answer my question as follows:
Option 1
When you create the Postgres database during the image build, you must configure OpenShift policies to allow starting your container as the user that created the database during the image build. Use this option when the database must be filled with data and that operation takes so much time that it is inappropriate at container start; the entrypoint will then only start the already prepared database.
Option 2
Create your database when starting the container, using the entrypoint script. Use this option when database creation is fast enough to be done at container start.
Option 3
See the last comment from Adrian, which seems to address all the problems; however, I haven't had the time to test it yet.
Thank you all for your contributions.

ERROR: cannot execute CREATE TABLE in a read-only transaction

I'm trying to setup the pgexercises data in my local machine. When I run: psql -U <username> -f clubdata.sql -d postgres -x I get the error: psql:clubdata.sql:6: ERROR: cannot execute CREATE SCHEMA in a read-only transaction.
Why did it create a read-only database on my local machine? Can I change this?
Normally, the most plausible reasons for this kind of error are:
trying CREATE statements on a read-only replica (the entire instance is read-only);
<username> has default_transaction_read_only set to ON;
the database has default_transaction_read_only set to ON.
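Quick checks for each of these causes (run in psql; the queries below are generic, not specific to the pgexercises script):

SELECT pg_is_in_recovery();            -- true means you are on a read-only replica
SHOW default_transaction_read_only;    -- effective default for the current session
SELECT coalesce(r.rolname, 'ALL') AS role, coalesce(d.datname, 'ALL') AS database, s.setconfig
  FROM pg_db_role_setting s
  LEFT JOIN pg_roles r ON r.oid = s.setrole
  LEFT JOIN pg_database d ON d.oid = s.setdatabase;    -- per-role / per-database overrides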
The script mentioned has in its first lines:
CREATE DATABASE exercises;
\c exercises
CREATE SCHEMA cd;
and you report that the error happens with CREATE SCHEMA at line 6, not before.
That means that the CREATE DATABASE does work, when run by <username>.
And it wouldn't work if any of the reasons above was directly applicable.
One possibility that would technically explain this would be that default_transaction_read_only would be ON in the postgresql.conf file, and set to OFF for the database postgres, the one that the invocation of psql connects to, through an ALTER DATABASE statement that supersedes the configuration file.
That would be why CREATE DATABASE works, but then as soon as it connects to a different database with \c, the default_transaction_read_only setting of the session would flip to ON.
But of course that would be a pretty weird and unusual configuration.
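Expressed as SQL, the hypothetical configuration described above would look like this (an illustration only):

-- in postgresql.conf:  default_transaction_read_only = on
ALTER DATABASE postgres SET default_transaction_read_only = off;
-- connections to postgres are then writable, but \c exercises flips the session back to read-only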
Reached out to pgexercises.com and they were able to help me.
I ran these commands (separately):
psql -U <username> -d postgres
begin;
set transaction read write;
alter database exercises set default_transaction_read_only = off;
commit;
\q
Then I dropped the database from the terminal with dropdb exercises and ran the script again: psql -U <username> -f clubdata.sql -d postgres -x -q
I was getting cannot execute CREATE TABLE in a read-only transaction, cannot execute DELETE TABLE in a read-only transaction, and others.
They all followed a cannot execute INSERT in a read-only transaction. It was as if the connection had switched itself over to read-only in the middle of my batch processing.
Turns out, I was running out of storage!
Write access was disabled when the database could no longer write anything. I am using Postgres on Azure; I don't know if the same thing would happen on a dedicated server.
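If you suspect the same cause, a quick way to see how much space each database is using (plain SQL, nothing Azure-specific):

SELECT datname, pg_size_pretty(pg_database_size(datname)) AS size
FROM pg_database
ORDER BY pg_database_size(datname) DESC;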
I had the same issue with a Postgres UPDATE statement:
SQL Error: 0, SQLState: 25006 ERROR: cannot execute UPDATE in a read-only transaction
Verify database access by running the query below; it returns either true or false:
SELECT pg_is_in_recovery()
true -> the database has read-only access
false -> the database has full access
If it returns true, check with the DBA team for full access, and also try a ping from the command prompt to ensure connectivity:
ping <database hostname or dns>
Also verify whether you have primary and standby nodes for the database.
In my case I had a master and replication nodes, and the master node became a replication node, which I believe switched it into hot_standby mode. So I was trying to write data into a node that was meant only for reading, hence the "read-only" problem.
You can query the node in question with SELECT pg_is_in_recovery(), and if it returns True then it is "read-only", and I suppose you should switch to using whatever master node you have now.
I got this information from: https://serverfault.com/questions/630753/how-to-change-postgresql-database-from-read-only-to-writable.
So full credit and my thanks goes to Craig Ringer!
DBeaver: in my case the connection's read-only option was enabled (the original answer showed this with a screenshot); turning it off resolved the error.
This doesn't quite answer the original question, but I received the same error and found this page, which ultimately led to a fix.
My issue was trying to run a function with temp tables being created and dropped. The function was created with SECURITY DEFINER privileges, and the user had access locally.
In a different environment, I received the cannot execute DROP TABLE in a read-only transaction error message. This environment was AWS Aurora, and by default, non-admin developers were given read-only privileges. Their server connections were thus set up to use the read-only node of Aurora (-ro- is in the connection url), which must put the connection in the read-only state. Running the same function with the same user against the write node worked.
Seems like a good use case for table variables like SQL Server has! Or, at least, AWS should modify their flow to allow temp tables to be created and dropped on read nodes.
This occurred when I was restoring a production database locally; the database was still doing online recovery from the WAL records.
A little unexpected, as I assumed pgBackRest was creating instantly recoverable restores; perhaps not.
91902 postgres 20 0 1445256 14804 13180 D 4.3 0.3 0:28.06 postgres: startup recovering 000000010000001E000000A5
If, like me, you are trying to create a DB on Heroku and are stuck because this message shows up on the Dataclips tab, I did this:
Choose Resources from the app tabs (Overview, Resources, Deploy, Metrics, Activity, Access, Settings).
Choose Settings out of (Overview, Durability, Settings, Dataclips).
Then, in Administration -> Database Credentials, choose View Credentials...
Then open a terminal, fill that info into the command below, and press enter:
psql --host=***************.amazonaws.com --port=5432 --username=*********pubxl --password --dbname=*******lol
It will then ask for the password; copy-paste it from the credentials page and you can run Postgres commands.
I suddenly started facing this error with Postgres installed on my Windows machine while running an ALTER query from DBeaver. All I did was delete the Postgres connection in DBeaver and create a new connection.
If you are using Azure Database for PostgreSQL your server gets into read-only mode when the storage used is near total capacity.
The error you get is exactly:
ERROR: cannot execute XXXXXXXXX in a read-only transaction
https://learn.microsoft.com/en-us/azure/postgresql/flexible-server/concepts-compute-storage
I just had this error. My cause was not granting permission on the SEQUENCE:
GRANT ALL ON SEQUENCE word_mash_word_cube_template_description_reference_seq TO ronshome_user;
If you are facing this issue with an RDS instance cluster, please check your endpoint and use the writer instance endpoint; then it should work.
The issue can be due to IntelliJ config:
Go to the Database view > click Data Source Properties (Shift+Enter) > select your data source >
Options tab > under Connection: uncheck Read-only.
For me it was Azure PostgreSQL failing over to the standby during maintenance in Azure and never failing back to the master when PostgreSQL was in HA mode. You can check this event in Service Health and also check which zone your current VM is running in. If it's 2 and not 1, then most likely that's the result of the events described above.

How to create a postgresql superuser without a superuser account?

OK... I just installed Mountain Lion. The first thing I noticed was that my Postgres user account was missing from the login screen. I then verified that my Postgres installation was no longer working. So I created a new postgres user account on my Mac and set all the proper permissions to get the Postgres server to start. It started, but then the server immediately shut down. I checked the log files and it said that role "postgres" does not exist.
I can't figure out how to create this user in the DB since the DB won't allow me to access it without using a superuser account. I tried "createuser postgres" but got the same message, "role 'postgres' does not exist". I don't know what to do at this point.
Found my problem. Somehow my postgres DB user did get erased during the upgrade to Mountain Lion. I was, however, able to log into the DB using the same name that I use to log into the system. Unfortunately, I found that my databases were also removed during the upgrade. I don't know why, but the upgrade did affect my PostgreSQL installation. After logging in, I found that the postgres DB and the template1 DB had no relations to be found. Now to piece my DB back together... luckily I'm still in development mode. Note: in the future, make DB backups prior to upgrading the system.
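For anyone hitting the same thing: once you can connect as the superuser role that matches your macOS user name, the missing role can be recreated (a hedged sketch; adjust the attributes to your needs):

createuser -s postgres

or, equivalently, from inside psql:

CREATE ROLE postgres SUPERUSER LOGIN;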