Migrations not synced with data dump - postgresql

I switched from a PostgreSQL server to a newer version by dumping all the data and restoring it. Everything seems to be in place, but Knex.js fails to recognize the migrations.
All migrations are listed correctly in the knex_migrations table and there is no lock present in knex_migrations_lock. Yet running knex migrate:list causes the following error:
error: create table "knex_migrations" ("id" serial primary key, "name" varchar(255), "batch" integer, "migration_time" timestamptz) - relation "knex_migrations" already exists
I've run the same command against both the old and new databases, and the debug logs show the following differences. The table and schema names are the defaults.
Old
Using environment: local
knex:client acquired connection from pool: __knexUid1 +0ms
knex:query select * from information_schema.tables where table_name = ? and table_schema = current_schema() undefined +0ms
knex:bindings [ 'knex_migrations' ] undefined +0ms
knex:client releasing connection to pool: __knexUid1 +14ms
knex:client acquired connection from pool: __knexUid1 +0ms
knex:query select * from information_schema.tables where table_name = ? and table_schema = current_schema() undefined +13ms
knex:bindings [ 'knex_migrations_lock' ] undefined +13ms
knex:client releasing connection to pool: __knexUid1 +4ms
knex:client acquired connection from pool: __knexUid1 +0ms
knex:query select * from "knex_migrations_lock" undefined +6ms
knex:bindings [] undefined +6ms
knex:client releasing connection to pool: __knexUid1 +3ms
knex:client acquired connection from pool: __knexUid1 +0ms
knex:query select "name" from "knex_migrations" order by "id" asc undefined +2ms
knex:bindings [] undefined +2ms
knex:client releasing connection to pool: __knexUid1 +2ms
Found 27 Completed Migration file/files.
-- list of migrations omitted
No Pending Migration files Found.
New
Using environment: local
knex:client acquired connection from pool: __knexUid1 +0ms
knex:query select * from information_schema.tables where table_name = ? and table_schema = current_schema() undefined +0ms
knex:bindings [ 'knex_migrations' ] undefined +0ms
knex:client releasing connection to pool: __knexUid1 +20ms
knex:client acquired connection from pool: __knexUid1 +0ms
knex:query create table "knex_migrations" ("id" serial primary key, "name" varchar(255), "batch" integer, "migration_time" timestamptz) undefined +20ms
knex:bindings [] undefined +20ms
knex:client releasing connection to pool: __knexUid1 +5ms
-- ... and the error posted above
The error seems to say that Knex cannot find the migrations table, so it tries to create it, but then it cannot create the table since it already exists. Something has clearly gone sideways.
How do I tell Knex to use the migrations already in the table and continue as is? All the migrations were executed on the old database before the dump, so there is nothing left to run.

It was a user rights issue. Below is what I think happened. It's more of a Postgres thing, but Knex also plays a role.
Originally the database was created empty. The user was created and given access to the database with the following commands:
CREATE DATABASE my_db;
CREATE USER my_user with PASSWORD 'foo';
GRANT ALL ON DATABASE my_db to my_user;
ALTER DATABASE my_db OWNER to my_user;
Although granting ALL on a database sounds powerful, I think it actually only gives rights to the database itself, not to the tables inside it.
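For reference, in PostgreSQL the ALL at the database level expands to the database-level privileges only, so the grant above is roughly equivalent to:
-- CONNECT, CREATE and TEMPORARY are the only privilege types that exist for a database
GRANT CONNECT, CREATE, TEMPORARY ON DATABASE my_db TO my_user;
-- none of these imply SELECT/INSERT/UPDATE/DELETE on tables inside the database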
After this, Knex migrations were used to create the tables, which means that the user running the migrations already had rights to the tables it created.
Restoring the dump didn't follow that flow, since the tables were not created by that user. The user needs to be granted access to the tables as well; for example, GRANT ALL ON ALL TABLES IN SCHEMA public TO my_user covers every table at once, and rights can also be granted per table if needed, as sketched below.
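A minimal sketch, assuming everything lives in the default public schema (the sequence grant is an assumption on top of that, since serial columns such as knex_migrations.id are backed by sequences the user also needs):
-- give the restored user full access to every existing table in the schema
GRANT ALL ON ALL TABLES IN SCHEMA public TO my_user;
-- assumed to be needed as well: the sequences behind the serial id columns
GRANT ALL ON ALL SEQUENCES IN SCHEMA public TO my_user;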

Related

Keycloak 18 won't create default tables on Docker when configuring a second DB via quarkus.properties

I am implementing a custom UserProvider SPI for Keycloak 18.0.2 and therefore have an MSSQL database in use alongside the default Keycloak PostgreSQL DB.
The customized Keycloak and the PostgreSQL run as Docker containers.
The problem occurs on my local MacBook M1 (and on an Intel CPU as well). When building and starting the custom Keycloak container, all volumes for both containers are removed, so there is always a fresh DB container.
(Side note: as the SPI was written for WildFly and is broken with 19.x.x, I stepped back to 18.0.2 to get the whole process working again. Afterwards I will update to 19 and adapt the SPI implementation.)
the problem ...
Keycloak will create all tables for the default Keycloak DB (PostgreSQL) in the public schema ONLY IF I configure the connection to the MSSQL via persistence.xml. This must not be the case in the production setup, as the connection should at least be configurable by the GitLab pipeline.
If I move the connection info from persistence.xml to quarkus.properties (as described here: https://github.com/keycloak/keycloak-quickstarts/tree/main/user-storage-jpa), the default DB tables can't be created anymore...
Logs in the Postgres container:
LOG: database system is ready to accept connections
ERROR: relation "migration_model" does not exist at character 25
STATEMENT: SELECT ID, VERSION FROM MIGRATION_MODEL ORDER BY UPDATE_TIME DESC
ERROR: syntax error at end of input at character 20
STATEMENT: call current_schema
ERROR: current transaction is aborted, commands ignored until end of transaction block
STATEMENT: SELECT COUNT(*) FROM DATABASECHANGELOG
ERROR: syntax error at end of input at character 20
STATEMENT: call current_schema
ERROR: current transaction is aborted, commands ignored until end of transaction block
STATEMENT: SELECT COUNT(*) FROM DATABASECHANGELOGLOCK
ERROR: syntax error at end of input at character 20
STATEMENT: call current_schema
ERROR: current transaction is aborted, commands ignored until end of transaction block
STATEMENT: CREATE TABLE DATABASECHANGELOGLOCK (ID INT NOT NULL, "LOCKED" BOOLEAN NOT NULL, LOCKGRANTED datetime, LOCKEDBY VARCHAR(255), CONSTRAINT PK_DATABASECHANGELOGLOCK PRIMARY KEY (ID))
ERROR: syntax error at end of input at character 20
Keycloak logs:
WARN [liquibase.database.DatabaseFactory] (main) Unknown database: PostgreSQL
WARN [org.keycloak.connections.jpa.updater.liquibase.lock.CustomLockService] (main) Failed to create lock table. Maybe other transaction created in the meantime. Retrying...
ERROR [org.keycloak.quarkus.runtime.cli.ExecutionExceptionHandler] (main) ERROR: Failed to start server in (development) mode
Does using quarkus.properties overwrite some Keycloak defaults, so that when it is used Keycloak acts differently than it does without a custom Quarkus file?

liquibase default schema ignored in sql changelog

Problem: Liquibase can't find a table unless the schema is set in the SQL script.
How do I tell Liquibase to use the default schema in a SQL changelog?
Before the SQL changelog that adds a check constraint, I create all tables without setting a schema. The schema is set in application.properties and all tables were created correctly in $RM_DB_SCHEMA.
RM_DB_SCHEMA: MANAGER
RM_DB_URL: "jdbc:h2:file:~/rmdb;MODE=PostgreSQL;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE;AUTO_SERVER=TRUE;INIT=CREATE SCHEMA IF NOT EXISTS ${RM_DB_SCHEMA}"
RM_DB_USER: sa
RM_DB_PASSWORD: admin
RM_LB_USER: ${RM_DB_USER}
RM_LB_PASSWORD: ${RM_DB_PASSWORD}
spring:
  datasource:
    hikari:
      schema: ${RM_DB_SCHEMA}
      username: ${RM_DB_USER}
      password: ${RM_DB_PASSWORD}
      jdbc-url: ${RM_DB_URL}
  liquibase:
    change-log: "classpath:db/manager-changelog.xml"
    default-schema: ${RM_DB_SCHEMA}
    user: ${RM_LB_USER}
    password: ${RM_LB_PASSWORD}
  jpa:
    database: postgresql
Caused by: liquibase.exception.LiquibaseException: liquibase.exception.MigrationFailedException: Migration failed for change set changelog.xml::d::d:
Reason: liquibase.exception.DatabaseException: Таблица "STATUS" не найдена (Table "STATUS" not found)
Table "STATUS" not found; SQL statement:
ALTER TABLE TEST ADD CONSTRAINT STATUS_ID CHECK (exists (SELECT 1 FROM STATUS s WHERE STATUS_ID = s.id)) [42102-200] [Failed SQL: (42102) ALTER TABLE TEST ADD CONSTRAINT STATUS_ID CHECK (exists (SELECT 1 FROM STATUS s WHERE STATUS_ID = s.id))]
I found another solution.
The problem was with local development on H2 (it always initializes with the PUBLIC schema). I just add SET SCHEMA after creating the schema.
In the test properties:
jdbc-url: 'jdbc:h2:file:~/rmdb;MODE=PostgreSQL;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE;AUTO_SERVER=TRUE;INIT=CREATE SCHEMA IF NOT EXISTS ${application.database.schema}\;SET SCHEMA ${application.database.schema}'
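In plain SQL terms, the INIT parameter now runs the equivalent of the following on every connection (assuming ${application.database.schema} resolves to MANAGER, as in the properties above):
-- create the application schema if it is missing, then make it the session default
CREATE SCHEMA IF NOT EXISTS MANAGER;
SET SCHEMA MANAGER;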

Postgres - Are multi-line statements atomic?

Here's an example:
create database users;
create table users (id int unique);
Then running these together:
insert into users values(1);
insert into users values(1);
I expected the first insert to succeed and the second to fail. However, what I am seeing is that they run atomically, and no row is inserted. Here are the logs:
2021-11-08 23:04:37.825 UTC [181] LOG: statement: insert into users values(1);
insert into users values(1);
2021-11-08 23:04:37.825 UTC [181] ERROR: duplicate key value violates unique constraint "users_id_key"
2021-11-08 23:04:37.825 UTC [181] DETAIL: Key (id)=(1) already exists.
2021-11-08 23:04:37.825 UTC [181] STATEMENT: insert into users values(1);
insert into users values(1);
What's confusing is that there is no BEGIN; or COMMIT;. Are statements with multiple commands run atomically?
-- EDIT --
I am using the Postico client to run these statements.
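A sketch of what I'm seeing versus what I expected, assuming the difference comes down to whether the two inserts reach the server together or one at a time:
-- sent together as one query string (my assumption about what Postico does): no row ends up inserted
insert into users values(1); insert into users values(1);
-- sent one statement at a time (what I expected, and what e.g. psql in autocommit mode does):
insert into users values(1);  -- succeeds and stays
insert into users values(1);  -- fails with the duplicate key error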

Redshift returns The server (version 8.0) does not support altering default privileges

I am trying to drop a user from redshift:
DROP USER xx;
I get:
[2021-03-01 14:00:39] [2BP01][500310] [Amazon](500310) Invalid operation: user "xx" cannot be dropped because some objects depend on it
[2021-03-01 14:00:39] Details:
[2021-03-01 14:00:39] owner of default privileges on new relations belonging to user xx;
I already removed it from the group:
ALTER GROUP a DROP USER xx;
I run:
select *
from pg_user
LEFT JOIN pg_group ON pg_user.usesysid = ANY(pg_group.grolist)
order by 1;
And it returns: xx,109,false,false,false,********,,,,,
Also run:
revoke create,usage on schema public from xx;
revoke all privileges on schema public from xx;
Then run this:
SELECT
distinct s.schemaname,
u.usename,
--'REVOKE ALL ON ALL TABLES IN SCHEMA '+s.schemaname+' FROM ronnylopez;',
has_schema_privilege(u.usename,s.schemaname,'create') AS user_has_select_permission,
has_schema_privilege(u.usename,s.schemaname,'usage') AS user_has_usage_permission
FROM
pg_user u
CROSS JOIN
(SELECT DISTINCT schemaname FROM pg_tables) s
WHERE
user_has_select_permission=True
and u.usename = 'xx';
And it returns only one row:
public,xx,true,true
If I query the default ACLs:
select * from pg_default_acl where defacluser= 109;
109,0,r,"{group admins=arwdRxt/xx,xx=arwdRxt/xx}"
To drop these I intended to use \ddp in psql, but I get:
The server (version 8.0) does not support altering default privileges.
So I'm stuck here and not able to drop the user.
You can use the view v_generate_user_grant_revoke_ddl provided on GitHub to generate all of the REVOKE statements needed to allow the DROP USER to complete.
The ddl column provides the generated SQL:
SELECT ddl
FROM v_generate_user_grant_revoke_ddl
WHERE grantee = 'useriwanttodrop';
Run the generated SQL and then drop the user. This may require superuser permission.
--Generated
SET SESSION AUTHORIZATION master;
REVOKE ALL ON DATABASE mydb FROM useriwanttodrop;
RESET SESSION AUTHORIZATION;
--Drop
DROP USER useriwanttodrop;

Postgresql 9.5 has_database_privilege always returning True

This behavior is occurring in PostgreSQL version 9.5.
I'm trying to use the has_database_privilege function to check whether a user is allowed to connect to the database, but it always returns true, even after running REVOKE ALL PRIVILEGES.
select * from has_database_privilege('tests', 'db_test', 'connect');
-- expected: true
-- return: true
-- Removing connection permission only.
revoke connect on database db_test from tests;
select * from has_database_privilege('tests', 'db_test', 'connect');
-- expected: false
-- return: true
-- Removing all permissions.
revoke all privileges on database db_test from tests;
select * from has_database_privilege('tests', 'db_test', 'connect');
-- expected: false
-- return: true
Am I doing something wrong, or is this a bug?
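For what it's worth, has_database_privilege also counts privileges the role can use via PUBLIC, and every new database grants CONNECT and TEMPORARY to PUBLIC by default, so the check can keep returning true even after the per-user revokes above. A sketch of a check that takes this into account (an assumption about the cause, not a confirmed diagnosis):
-- also revoke the default grant to PUBLIC, then re-check
revoke connect on database db_test from public;
select has_database_privilege('tests', 'db_test', 'connect');
-- should now return false, provided no other grant path exists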