Running Corda Enterprise with PostgreSQL in a Docker container. I have followed the instructions in the docs and set up the database schema. On database start I see the following errors. Can anyone tell me what is going on here?
2018-10-11 06:57:57.491 UTC [1506] ERROR: relation "node_checkpoints" does not exist at character 22
2018-10-11 06:57:57.491 UTC [1506] STATEMENT: select count(*) from node_checkpoints
2018-10-11 06:58:22.440 UTC [1506] ERROR: relation "corda-schema.databasechangeloglock" does not exist at character 22
2018-10-11 06:58:22.440 UTC [1506] STATEMENT: select count(*) from "corda-schema".databasechangeloglock
It seems the database user name and schema name don't have the same value. Ensure that the correct default schema is set for the user by running the following as database administrator:
ALTER ROLE "[USER]" SET search_path = "[SCHEMA]";
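For illustration, with a hypothetical role corda_user and the corda-schema schema from the logs above, that would be (the SHOW statement verifies the new default from a fresh session):

ALTER ROLE "corda_user" SET search_path = "corda-schema";
-- reconnect as corda_user, then confirm the default schema:
SHOW search_path;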
Another possible issue is mixing upper/lower case and other characters in the schema name; ensure that the schema name is all lower case (e.g. corda-schema, not CORDA-SCHEMA or Corda-Schema).
I am implementing a custom UserProvider SPI for Keycloak 18.0.2 and therefore have an MSSQL database in use alongside the default Keycloak PostgreSQL DB.
The customized Keycloak and the PostgreSQL instance run in Docker containers.
The problems occur on my local MacBook M1 (the behaviour is the same on an Intel CPU as well). When building and starting the custom Keycloak container, all volumes for both containers are removed, so there is always a fresh DB container.
(Side note: as the SPI was written for WildFly and is broken with 19.x.x, I stepped back to 18.0.2 to get the whole process working again. Afterwards I will update to 19 and adapt the SPI implementation.)
The problem ...
Keycloak creates all tables for the default Keycloak DB (PostgreSQL) in the public schema ONLY IF I configure the connection to the MSSQL via persistence.xml. This must not end up in the production setup, as the connection should at least be configurable by the GitLab pipeline.
If I move the connection info from persistence.xml to quarkus.properties (as described here: https://github.com/keycloak/keycloak-quickstarts/tree/main/user-storage-jpa), the default DB tables can't be created anymore ...
Logs in the Postgres container:
LOG: database system is ready to accept connections
ERROR: relation "migration_model" does not exist at character 25
STATEMENT: SELECT ID, VERSION FROM MIGRATION_MODEL ORDER BY UPDATE_TIME DESC
ERROR: syntax error at end of input at character 20
STATEMENT: call current_schema
ERROR: current transaction is aborted, commands ignored until end of transaction block
STATEMENT: SELECT COUNT(*) FROM DATABASECHANGELOG
ERROR: syntax error at end of input at character 20
STATEMENT: call current_schema
ERROR: current transaction is aborted, commands ignored until end of transaction block
STATEMENT: SELECT COUNT(*) FROM DATABASECHANGELOGLOCK
ERROR: syntax error at end of input at character 20
STATEMENT: call current_schema
ERROR: current transaction is aborted, commands ignored until end of transaction block
STATEMENT: CREATE TABLE DATABASECHANGELOGLOCK (ID INT NOT NULL, "LOCKED" BOOLEAN NOT NULL, LOCKGRANTED datetime, LOCKEDBY VARCHAR(255), CONSTRAINT PK_DATABASECHANGELOGLOCK PRIMARY KEY (ID))
ERROR: syntax error at end of input at character 20
Keycloak logs:
WARN [liquibase.database.DatabaseFactory] (main) Unknown database: PostgreSQL
WARN [org.keycloak.connections.jpa.updater.liquibase.lock.CustomLockService] (main) Failed to create lock table. Maybe other transaction created in the meantime. Retrying...
ERROR [org.keycloak.quarkus.runtime.cli.ExecutionExceptionHandler] (main) ERROR: Failed to start server in (development) mode
Does using quarkus.properties override some Keycloak defaults, so that Keycloak acts differently with it than it does without a custom Quarkus file?
I'm trying to create a schema with this query:
CREATE SCHEMA IF NOT EXISTS hdb_catalog
but the following error occurs:
2019-09-10 13:47:37.025 UTC [129] ERROR: duplicate key value violates unique constraint "pg_namespace_nspname_index"
2019-09-10 13:47:37.025 UTC [129] DETAIL: Key (nspname)=(hdb_catalog) already exists.
2019-09-10 13:47:37.025 UTC [129] STATEMENT:
CREATE SCHEMA IF NOT EXISTS hdb_catalog
How is this possible with IF NOT EXISTS?
That looks like you have catalog corruption: IF NOT EXISTS consults the system catalog, and with a corrupted pg_namespace_nspname_index the lookup can miss a row that the insert's uniqueness check still trips over.
With some luck, only the index is affected. You can try to repair it using
REINDEX TABLE pg_catalog.pg_namespace;
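A quick sanity check, on the assumption that the broken index let duplicate rows through, is to look for schema names that now appear more than once:

-- any hits here confirm the unique index failed to enforce uniqueness
SELECT nspname, count(*)
FROM pg_catalog.pg_namespace
GROUP BY nspname
HAVING count(*) > 1;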
As in all cases of corruption, it is advisable to create a new cluster with initdb and use pg_dump/pg_restore to copy the database there. There might be more problems.
Also, try to find out what caused the corruption. Often it is bad hardware.
I have a MSSQL 2012 linked server with the database collation set to Latin1_General_100_BIN2.
I am trying to query the linked server from a MSSQL 2005 database where the collation is set to SQL_Latin1_General_CP1_CI_AS.
When I execute the following query I receive: "An invalid tabular data stream (TDS) collation was encountered."
SELECT
reel_key COLLATE SQL_Latin1_General_CP1_CI_AS
FROM [SomeServer].[SomeDatabase].[dbo].[SomeTable]
The linked server reel_key field is a char(7).
From what I've read and researched this should work but it does not. Where am I going wrong?
Try setting the linked server options: set Use Remote Collation to true and set the collation name:
EXEC sp_serveroption 'SERVER', 'use remote collation', 'true'
EXEC sp_serveroption 'SERVER', 'collation name', 'SQL_Latin1_General_CP1_CI_AS'
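To verify the current configuration, both options can be read back from sys.servers (assuming the linked server is named SERVER as above):

-- uses_remote_collation and collation_name reflect the two options set above
SELECT name, uses_remote_collation, collation_name
FROM sys.servers
WHERE name = 'SERVER';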
Or force the sort collation on your side:
SELECT
reel_key
FROM [SomeServer].[SomeDatabase].[dbo].[SomeTable]
ORDER BY reel_key COLLATE SQL_Latin1_General_CP1_CI_AS
After trying to upgrade a shadow-copy of our PostgreSQL 9.6.6 RDS instance to 10.4, most operations on the database, including those done with a "root" user (the one created when setting up the database), result in an error like this:
SQL Error [42501]: ERROR: permission denied for schema public
Position: 15
Another example is a query like select * from example_table limit 100; which results in the error:
SQL Error [42P01]: ERROR: relation "example_table" does not exist
Position: 15
However, I am able to execute SELECT * FROM pg_catalog.pg_tables WHERE schemaname = 'public'; and it correctly lists all my tables.
The upgrade logs don't seem to show anything unusual. I've been unable to find any RDS-specific instructions on upgrading from 9.x to 10.x, so I assumed that the normal upgrade procedure in the interface (which I've used in the past and which seems to use a pg_upgrade operation) would "just work". Is there anything I'm missing?
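Since pg_catalog is still readable, two quick diagnostic queries can narrow down whether this is a search_path problem or a genuine loss of privileges on the public schema (a sketch; run them as the same "root" user that gets the errors):

-- does the session look in public by default?
SHOW search_path;
-- does the current role still hold USAGE on public?
SELECT has_schema_privilege(current_user, 'public', 'USAGE');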
I want to set my PostgreSQL server's time zone to 'Europe/Berlin' but I get an error:
SET time zone 'Europe/Berlin';
ERROR: invalid value for parameter "TimeZone": "Europe/Berlin"
But the real issue is with DbSchema: when I try to connect to my DB I get the error
FATAL: invalid value for parameter "TimeZone": "Europe/Berlin"
DbSchema works when I connect to my local db but not with my NAS (Synology) DB.
Any idea?
Found a way to solve the problem:
You have to start Java with the proper time zone.
In my case my server is on GMT, so I had to add the argument -Duser.timezone=GMT.
For DbSchema, edit the file DbSchema.bat or DbSchema.sh:
Find the declaration of SWING_JVM_ARGS
Add the argument -Duser.timezone=GMT at the end of the line
Start DbSchema with that script (DbSchema.bat or DbSchema.sh)
I think your solution is only a workaround for the actual problem, which concerns the zoneinfo on the Synology DiskStation.
I got exactly the same error when trying to connect to the Postgres database on my DiskStation. The query select * from pg_timezone_names; gives you all the time zone names PostgreSQL is aware of.
There are 87 entries all starting with "Timezone":
          name          | abbrev | utc_offset | is_dst
------------------------+--------+------------+--------
 Timezone/Kuwait        | AST    | 03:00:00   | f
 Timezone/Nairobi       | EAT    | 03:00:00   | f
 ...
The configured Postgres timezonesets contain many more entries, so there must be another source from which Postgres builds this view at startup. I discovered that there is a compile option --with-system-tzdata=DIRECTORY that tells Postgres to obtain its values from the system zoneinfo.
I looked in /usr/share/zoneinfo and found a single subdirectory called Timezone with exactly 87 entries, and there obviously was no subdirectory called Europe (with a time zone file called Berlin). I did not quickly find a way to update the tzdata on the DiskStation, either automatically or manually by unpacking tzdata2016a.tar.gz and running make (make not found ...). As a quick fix I copied the Berlin time zone file from another Linux system, and the problem was solved: I can now connect via Java/JDBC using the correct time zone "Europe/Berlin"!
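To confirm the fix on the server side, the copied zone should now show up in the catalog view and be accepted as a setting (assuming the file landed under /usr/share/zoneinfo/Europe/Berlin):

-- the zone should now appear by name ...
SELECT name, abbrev, utc_offset FROM pg_timezone_names WHERE name = 'Europe/Berlin';
-- ... and be accepted as a session setting
SET time zone 'Europe/Berlin';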