SOLVED: gbak restoring DB raises violation of PRIMARY or UNIQUE KEY constraint - Firebird

I'm copying a database from production to a test environment.
Both servers are Firebird x64 v3.0.4.
I got the following error messages and stopped gbak -R:
gbak: ERROR:violation of PRIMARY or UNIQUE KEY constraint "RDB$INDEX_12" on table "RDB$RELATION_CONSTRAINTS"
gbak: ERROR: Problematic key value is ("RDB$CONSTRAINT_NAME" = 'RDB$INDEX_0')
How do I find the constraints and indexes the error refers to?

It was my mistake.
I was running gbak from a Firebird 2.5 client against a Firebird 3.0 database.
With the matching client I could import the backup files successfully.
Thanks.
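In other words, use the gbak that matches the server version you are restoring to. A minimal sketch, assuming a default Firebird 3.0 install on Linux; paths, file names, and credentials are placeholders:
# Restore with the Firebird 3.0 gbak so the client and server versions match
/opt/firebird/bin/gbak -R -v -user SYSDBA -password masterkey /backups/prod.fbk localhost:/data/test.fdb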

Related

Database Migration Service - Aurora PostgreSQL -> Cloud SQL fails with confusing error (Unable to drop database postgres)

Attempting to migrate from AWS Aurora PostgreSQL 13.4 to Google Cloud SQL PostgreSQL 13.
Migration job gives this error:
finished setup replication with errors: failed to drop database "postgres": generic::unknown: retry budget exhausted (10 attempts): pq: database "postgres" is being accessed by other users
The user that DMS is using only has SELECT permissions on the source database (Aurora).
I'm very confused as to why it is trying to drop the "postgres" database at all, and I'm not sure whether it is trying to drop it on the source or the destination. Not sure what I'm missing.
I've installed the necessary extensions in the destination DB (pg_cron). No difference.
The user in the source database has SELECT on all tables/schemas outlined in the docs (including the pglogical schema).
I've tried various PostgreSQL versions in the destination cluster (13.x, 14.x). No difference.
The "Test connection" tool when creating the migration job shows no errors. (There is a warning about a few tables not having primary keys, but that's it.)

How to restore a backup into the public schema in pgAdmin 4

I am trying to restore a PostgreSQL database backup file of power outage information sent by a work colleague and keep running into an error. They sent a file with a .backup extension and said "it should restore to the public schema of an empty postgresql database". I am using pgAdmin 4 and following the steps suggested here https://o7planning.org/11913/backup-and-restore-postgres-database-with-pgadmin to restore the database:
creating a new empty database
right-clicking on the new database and clicking Restore
pointing it at the file path of the backup and running it
However, each time I get the error:
pg_restore: error: could not execute query: ERROR: schema "outage_data" does not exist
Not sure how to solve it. Any help would be greatly appreciated!
(Not sure if it is important, but I am running on Windows 10 and pgAdmin 4 version 6.)
Do you get more errors after that? This might be a harmless error.
For example, if a table in "public" references a table in "outage_data", but only public was dumped, then you will get this error when it tries to recreate the foreign key constraint. That constraint of course will be missing, but no other harm is done. The public table and all of its data will still be there.
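To check whether anything substantive actually lives in that schema, you could list the archive's contents and, if needed, create the missing schema before restoring again. A sketch; the file name outage.backup and database name outage_db are placeholders:
rem List the archive's table of contents and look for entries that reference outage_data
pg_restore --list outage.backup
rem If real objects live there (not just a foreign key constraint), create the schema first
psql -d outage_db -c "CREATE SCHEMA IF NOT EXISTS outage_data;"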

ERROR: connection for foreign table "test_enames" cannot be established

I am getting the error below when selecting from a foreign table in Postgres; please help me fix the issue.
ERROR: connection for foreign table "test_enames" cannot be established
DETAIL: ORA-12154: TNS:could not resolve the connect identifier specified
SQL state: HV00N
Details:
1. I am using Postgres 13 on a Windows 10 64-bit machine.
2. I installed oracle_fdw-2.3.0-pg13-win64 successfully on the same machine.
3. Created the system variable TNS_ADMIN=C:\Oracle\product\12.2.0x64\client_1\network\admin
4. Ran the following statements successfully:
CREATE FOREIGN DATA WRAPPER oracle_fdw;
CREATE SERVER foreign_oracle
TYPE 'Oracle12'
VERSION '12'
FOREIGN DATA WRAPPER oracle_fdw
OPTIONS (dbserver '//vms1.abc.com:1524/ABC00D70');
CREATE USER MAPPING FOR postgres SERVER foreign_oracle
OPTIONS ("user" 'Test', password 'Test1');
CREATE FOREIGN TABLE test_enames(
eno numeric NULL,
ename character varying(100),
eloc character varying(100)
) SERVER foreign_oracle
OPTIONS (table 'TEST_ENAMES');
But I am still getting the error when selecting from the table; let me know in case I missed any steps.
Thanks
That seems like an Oracle configuration problem.
To debug this, try running the following as the operating system user that runs the PostgreSQL service:
sqlplus Test/Test1@//vms1.abc.com:1524/ABC00D70
If that doesn't work, start debugging that. That should be easier, since oracle_fdw is not part of the equation then.
If the sqlplus call above works as you intend it to, make sure that the PostgreSQL service has the TNS_ADMIN environment variable set (did you restart the service after setting it?). I am not sure how to check the environment for a running process on Windows, but "Process Explorer" might do the trick.
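For example, something like this from an elevated command prompt (the service name postgresql-x64-13 is just the installer default; yours may differ):
rem Set TNS_ADMIN machine-wide, then restart PostgreSQL so the server process picks it up
setx TNS_ADMIN "C:\Oracle\product\12.2.0x64\client_1\network\admin" /M
net stop postgresql-x64-13
net start postgresql-x64-13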
Besides, you shouldn't use CREATE FOREIGN DATA WRAPPER to create the FDW; instead, create the extension as specified in the documentation:
CREATE EXTENSION oracle_fdw;
That will create the foreign data wrapper for you.
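Roughly like this, reusing the connection details from your question; note that the TYPE and VERSION clauses are not needed by oracle_fdw:
-- Drop the hand-made objects; CASCADE also removes the dependent user mapping and foreign table
DROP SERVER IF EXISTS foreign_oracle CASCADE;
DROP FOREIGN DATA WRAPPER IF EXISTS oracle_fdw;
-- The extension creates a properly configured oracle_fdw wrapper
CREATE EXTENSION oracle_fdw;
CREATE SERVER foreign_oracle
  FOREIGN DATA WRAPPER oracle_fdw
  OPTIONS (dbserver '//vms1.abc.com:1524/ABC00D70');
CREATE USER MAPPING FOR postgres SERVER foreign_oracle
  OPTIONS ("user" 'Test', password 'Test1');
CREATE FOREIGN TABLE test_enames (
  eno   numeric,
  ename character varying(100),
  eloc  character varying(100)
) SERVER foreign_oracle
  OPTIONS (table 'TEST_ENAMES');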

SymmetricDS Postgres target gives "Failed to read table" for all sym_* tables

I'm trying to set up a simple replication from MySQL to Postgres. Identical schemas. After following the steps in the Demo Tutorial with a slight change (using MySQL and Postgres drivers) I am still unable to get the replication working.
A few changes were needed based on complaints after running bin/sym:
SET GLOBAL show_compatibility_56 = ON needed to be set in the MySQL DB
For Postgres I needed to use protocolVersion=3 instead of 2 which was set in the example.
The weird thing is that SymmetricDS is able to create the sym_* tables, but complains about not being able to read them. I have verified that the tables do not exist before bin/sym is run, but do exist after. Here is an excerpt from the log
// Successful creation of table
[store-001] - PostgreSqlSymmetricDialect - DDL applied: CREATE TABLE "sym_notification"(
"notification_id" VARCHAR(128) NOT NULL,
...
PRIMARY KEY ("notification_id")
)
...
// Unable to read from created table
[store-001] - PostgreSqlDdlReader - Failed to read table: sym_notification
[store-001] - PostgreSqlDdlReader - Failed to read table: sym_notification
[store-001] - AbstractDatabaseWriter - Did not find the sym_notification table in the target database
[store-001] - PostgreSqlDdlReader - Failed to read table: sym_monitor
[store-001] - PostgreSqlDdlReader - Failed to read table: sym_monitor
[store-001] - AbstractDatabaseWriter - Did not find the sym_monitor table in the target database
The same errors appear for all the sym_* tables.
The databases are running in Docker, but since SymmetricDS is not complaining about being unable to connect, and is able to create the tables, I assume it is not related to Docker.
The database in the Postgres DB is created by the same user as specified in engines/store-001.properties. Could this still have something to do with roles and access privileges?
If you upgrade to the latest PostgreSQL JDBC driver it will work.
Replace the existing Postgres driver in the lib directory with the latest from here: https://jdbc.postgresql.org/download.html
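Something along these lines; the install path and driver version below are examples only:
# From the SymmetricDS install directory
cd /opt/symmetric-ds
rm lib/postgresql-*.jar
# Copy in the driver downloaded from https://jdbc.postgresql.org/download.html
cp ~/Downloads/postgresql-42.2.5.jar lib/
# Restart SymmetricDS so it picks up the new driver
bin/sym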
Try to connect to the Postgres database with the same username/password used by SymmetricDS from some DB navigator, for example JetBrains' DataGrip, and then try inserting, updating, and selecting something from the sym_* tables. Assign access rights to the user if necessary.
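If one of those operations fails with a permission error, here is a sketch of the grants to try (sym_user stands in for whatever role engines/store-001.properties connects as):
-- Let the SymmetricDS role use the schema and read/write its tables
GRANT USAGE ON SCHEMA public TO sym_user;
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO sym_user;
GRANT USAGE, SELECT ON ALL SEQUENCES IN SCHEMA public TO sym_user;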
When using Postgres 9.6.1 (the latest release at the time of writing), the following error is logged on the server when running bin/sym:
ERROR: column am.amcanorder does not exist at character 427
The problem was resolved by using Postgres 9.5.5 instead, thanks to Nick Barnes pointing this out in a comment. (PostgreSQL 9.6 removed the amcanorder column from the pg_am catalog, which breaks older drivers and tools that still query it for index metadata.)

How can we clean old Postgres DB data from Heroku?

I have a PostgreSQL database on Heroku and I am doing Play Framework migrations.
I've reset this database using heroku pg:reset DATABASE and also dropped all tables and evolutions. However, when I delete all my files from /conf/evolutions/defaults and try to set everything up again, some old data persists, giving messages like:
ERROR: column "step_description" of relation "step" already exists [ERROR:0, SQLSTATE:42701]