Can't create schema in postgres - postgresql

I'm trying to create a schema with the query:
CREATE SCHEMA IF NOT EXISTS hdb_catalog
but the following error occurred:
2019-09-10 13:47:37.025 UTC [129] ERROR: duplicate key value violates unique constraint "pg_namespace_nspname_index"
2019-09-10 13:47:37.025 UTC [129] DETAIL: Key (nspname)=(hdb_catalog) already exists.
2019-09-10 13:47:37.025 UTC [129] STATEMENT:
CREATE SCHEMA IF NOT EXISTS hdb_catalog
How is this possible with IF NOT EXISTS?

That looks like you have catalog corruption.
With some luck, only the index is affected. You can try to repair it using
REINDEX TABLE pg_catalog.pg_namespace;
As in all cases of corruption, it is advisable to create a new cluster with initdb and use pg_dump/pg_restore to copy the databases there. There might be more problems.
Also, try to find out what caused the corruption. Often it is bad hardware.
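For reference, a minimal sketch of that recovery path, using pg_dumpall to copy all databases and roles in one pass (the data directory path and the ports are placeholders; adjust to your setup):
initdb -D /var/lib/postgresql/new_cluster                      # create a fresh cluster
pg_ctl -D /var/lib/postgresql/new_cluster -o "-p 5433" start   # start it on a spare port
pg_dumpall -p 5432 -f all.sql                                  # dump everything from the damaged cluster
psql -p 5433 -d postgres -f all.sql                            # load it into the new cluster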

Related

postgres database logical replication: ERROR: duplicate key value violates unique constraint "users_unique_key"

I am trying to set up master/slave Postgres replication.
I have 2 servers. On the master, which already had populated data, I took a complete backup of the database I want to replicate and restored it on the slave; then I configured the replication with (copy_data=false). When I insert a new row in any table it works fine, but when I update an existing row I get the error:
2022-03-16 22:29:08.708 UTC [625449] ERROR: duplicate key value violates unique constraint "users_unique_key"
2022-03-16 22:29:08.708 UTC [625449] DETAIL: Key (userid, classid)=(4556, 4507) already exists.
2022-03-16 22:29:08.711 UTC [625285] LOG: background worker "logical replication worker" (PID 625449) exited with exit code 1
The table has a primary key and the columns are set to NOT NULL. My question is: how do I set up logical replication so that UPDATE statements work?
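For context, this is roughly the setup being described; the publication, subscription, connection string, and table names are placeholders:
-- On the master (publisher):
CREATE PUBLICATION my_pub FOR ALL TABLES;
-- On the slave (subscriber); copy_data = false because the data was restored from a backup:
CREATE SUBSCRIPTION my_sub
    CONNECTION 'host=master_host dbname=my_db user=repl_user'
    PUBLICATION my_pub
    WITH (copy_data = false);
-- UPDATE and DELETE need a replica identity on each table; a primary key provides one,
-- otherwise it can be set explicitly:
ALTER TABLE users REPLICA IDENTITY FULL;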

PostgreSQL on Corda enterprise node throws relation errors

I'm running Corda Enterprise with PostgreSQL in a Docker container. I have followed the instructions in the docs and have set the database schema. On database start I see the following errors. Can anyone help me figure out what is going on there?
2018-10-11 06:57:57.491 UTC [1506] ERROR: relation "node_checkpoints" does not exist at character 22
2018-10-11 06:57:57.491 UTC [1506] STATEMENT: select count(*) from node_checkpoints
2018-10-11 06:58:22.440 UTC [1506] ERROR: relation "corda-schema.databasechangeloglock" does not exist at character 22
2018-10-11 06:58:22.440 UTC [1506] STATEMENT: select count(*) from "corda-schema".databasechangeloglock
It seems the database user name and schema name don't have the same value. Ensure that the correct default schema is set for the user by running, as the database administrator:
ALTER ROLE "[USER]" SET search_path = "[SCHEMA]";
Another possible issue is mixing upper/lower case and other characters in the schema name; ensure that the schema name is all lower case (e.g. corda-schema, not CORDA-SCHEMA or Corda-Schema).
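For example, with the schema name from the logs above and a hypothetical node user corda-user, that would be:
ALTER ROLE "corda-user" SET search_path = "corda-schema";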

Why is liquibase deleting databasechangelog rows and trying to create a renamed database table?

I am using postgres 10.5 and liquibase 3.6.2 on a Mac.
I nuke & re-create my database, run liquibase update, and it works.
But a second liquibase update fails with an exception that the pkey already exists.
After the first liquibase update, the databasechangelog table contains 97 entries. After the second, it contains 10, and the time and deployment ids for those are different than they were after the first update!
Table foo was created in an early change.
Later it was renamed to bar, but the pkey is still foo_pkey.
Liquibase update should not be trying to re-create foo, but it does, and fails because foo_pkey already exists.
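To illustrate (placeholder column; this is standard PostgreSQL behavior, where renaming a table does not rename its constraints):
CREATE TABLE foo (id UUID PRIMARY KEY);  -- creates the constraint foo_pkey
ALTER TABLE foo RENAME TO bar;           -- the table is now bar, but the constraint is still named foo_pkey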
A) In general, how can I get liquibase to output more info about what it's doing? I tried both of the commands:
liquibase --logLevel=debug --logFile=`pwd`/foo.log update
liquibase --logLevel debug --logFile `pwd`/foo.log update
Both seem to behave the same: foo.log isn't created, and there's no additional output in the terminal.
B) How can I stop liquibase from trying to re-create this table and nuking my databasechangelog?
I tried to make a small example that fails, but this seems to work... Others here are using it with postgres 9.5.10 with no problem...
All I see in the terminal is:
Starting Liquibase at Wed, 14 Nov 2018 13:06:44 PST (version 3.6.2 built at 2018-07-03 11:28:09)
Unexpected error running Liquibase: ERROR: relation "cant_change_pkey" already exists [Failed SQL: CREATE TABLE nuss.cant_change (message_id UUID NOT NULL, origin VARCHAR(4), type VARCHAR(12) NOT NULL, CONSTRAINT CANT_CHANGE_PKEY PRIMARY KEY (message_id), UNIQUE (message_id))]
liquibase.exception.MigrationFailedException: Migration failed for change set db/changelog/changelog-new1.xml::first-one::rstrauss:
Reason: liquibase.exception.DatabaseException: ERROR: relation "cant_change_pkey" already exists [Failed SQL: CREATE TABLE nuss.cant_change (message_id UUID NOT NULL, origin VARCHAR(4), type VARCHAR(12) NOT NULL, CONSTRAINT CANT_CHANGE_PKEY PRIMARY KEY (message_id), UNIQUE (message_id))]
at liquibase.changelog.ChangeSet.execute(ChangeSet.java:637)
at liquibase.changelog.visitor.UpdateVisitor.visit(UpdateVisitor.java:53)
at liquibase.changelog.ChangeLogIterator.run(ChangeLogIterator.java:78)
at liquibase.Liquibase.update(Liquibase.java:202)
at liquibase.Liquibase.update(Liquibase.java:179)
at liquibase.integration.commandline.Main.doMigration(Main.java:1205)
at liquibase.integration.commandline.Main.run(Main.java:191)
at liquibase.integration.commandline.Main.main(Main.java:129)
Caused by: liquibase.exception.DatabaseException: ERROR: relation "cant_change_pkey" already exists [Failed SQL: CREATE TABLE nuss.cant_change (message_id UUID NOT NULL, origin VARCHAR(4), type VARCHAR(12) NOT NULL, CONSTRAINT CANT_CHANGE_PKEY PRIMARY KEY (message_id), UNIQUE (message_id))]
at liquibase.executor.jvm.JdbcExecutor$ExecuteStatementCallback.doInStatement(JdbcExecutor.java:356)
at liquibase.executor.jvm.JdbcExecutor.execute(JdbcExecutor.java:57)
at liquibase.executor.jvm.JdbcExecutor.execute(JdbcExecutor.java:125)
at liquibase.database.AbstractJdbcDatabase.execute(AbstractJdbcDatabase.java:1229)
at liquibase.database.AbstractJdbcDatabase.executeStatements(AbstractJdbcDatabase.java:1211)
at liquibase.changelog.ChangeSet.execute(ChangeSet.java:600)
... 7 common frames omitted
Caused by: org.postgresql.util.PSQLException: ERROR: relation "cant_change_pkey" already exists
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2476)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2189)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:300)
at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:428)
at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:354)
at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:301)
at org.postgresql.jdbc.PgStatement.executeCachedSql(PgStatement.java:287)
at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:264)
at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:260)
at liquibase.executor.jvm.JdbcExecutor$ExecuteStatementCallback.doInStatement(JdbcExecutor.java:352)
... 12 common frames omitted
For more information, please use the --logLevel flag

Applying oplog but found duplicate key error

Mongo version is 3.0.6. I have a process that applies the oplog from another database to a destination database via mongodump and mongorestore, using the --oplogReplay option.
But I see duplicate key error messages many times. The source and target databases have the same structure (indexes and fields), so it should be impossible to have a duplicate record on the target, because it would have been an error on the source db first.
And the error messages look like this:
2017-08-20T00:55:55.900+0000 Failed: restore error: error applying oplog: applyOps: exception: E11000 duplicate key error collection: <collection_name> index: <field> dup key: { : null }
And today I found a mysterious message like this:
2017-08-25T01:02:14.134+0000 Failed: restore error: error applying oplog: applyOps: not master
What does that mean? Also, my understanding is that mongorestore has a --stopOnError option, which means that by default, if there are any errors, the restore process will skip them and move on. But I got the above error and then the restore process was terminated anyway. :(
This doesn't directly answer your question, sorry about that, but...
If you need to apply oplog changes from database A to database B, it would be better to use the mongo-connector program than the mongodump/mongorestore pair.
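A rough sketch of what that could look like; the hosts are placeholders, and the doc-manager name and flags should be checked against the mongo-connector docs for your version:
mongo-connector -m source_host:27017 -t target_host:27017 -d mongo_doc_manager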

Restore Postgres database dump

I am unable to restore a Postgres db dump.
I have used the following commands:
sudo psql my_database_name < feb9.sql
SET
SET
SET
SET
SET
CREATE EXTENSION
COMMENT
SET
SET
SET
ERROR: relation "admin_tools_dashboard_preferences" already exists
ALTER TABLE
ERROR: relation "admin_tools_dashboard_preferences_id_seq" already exists
ALTER TABLE
ALTER SEQUENCE
ERROR: relation "admin_tools_menu_bookmark" already exists
ALTER TABLE
ERROR: relation "admin_tools_menu_bookmark_id_seq" already exists
ALTER TABLE
ALTER SEQUENCE
ERROR: relation "auth_group" already exists
ALTER TABLE
ERROR: relation "auth_group_id_seq" already exists
ERROR: duplicate key value violates unique constraint "product_rateclass_code_uniq"
DETAIL: Key (code)=(0) already exists.
CONTEXT: COPY product_rateclass, line 1: "40787 0 Tariff Rates"
setval
--------
40791
(1 row)
ERROR: relation "admin_tools_menu_bookmark_user_id" already exists
ERROR: relation "auth_group_name_like" already exists
ERROR: relation "auth_group_permissions_group_id" already exists
ERROR: relation "auth_group_permissions_permission_id" already exists
ERROR: relation "auth_permission_content_type_id" already exists
ERROR: relation "auth_user_groups_group_id" already exists
ERROR: relation "auth_user_groups_user_id" already exists
ERROR: relation "auth_user_user_permissions_permission_id" already exists
ERROR: relation "auth_user_user_permissions_user_id" already exists
ERROR: relation "auth_user_username_like" already exists
REVOKE
REVOKE
GRANT
GRANT
I received the above error log (such logs run into hundreds of lines; I have included several lines for reference).
After the command executed, the database still contained the old records instead of the new records.
I dropped the existing database and created the new one.
$ dropdb development_db_name
$ createdb development_db_name
Then I restored the db using:
sudo psql my_database_name < feb9.sql
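For comparison, a minimal sketch of a clean restore where the drop, create, and restore all target the same database (assuming feb9.sql is a plain-SQL dump of my_database_name):
dropdb my_database_name
createdb my_database_name
psql my_database_name < feb9.sql
Note that in the steps above, the database that was dropped and recreated (development_db_name) is not the one the dump was restored into (my_database_name); if the dump targets my_database_name, that would explain both the "already exists" errors and the old records still being present.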