SymmetricDS Postgres target gives "Failed to read table" for all sym_* tables - postgresql

I'm trying to set up a simple replication from MySQL to Postgres with identical schemas. After following the steps in the Demo Tutorial with a slight change (using the MySQL and Postgres drivers), I am still unable to get the replication working.
A few changes were needed based on complaints after running bin/sym:
SET GLOBAL show_compatibility_56 = ON needed to be set in the MySQL DB.
For Postgres I needed to use protocolVersion=3 instead of the 2 that was set in the example.
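For reference, the connection settings above end up in the engine properties file. A minimal sketch, assuming hypothetical host, database, and credential values (not from the original post):

```properties
# engines/store-001.properties (sketch; all values are placeholders)
db.driver=org.postgresql.Driver
db.url=jdbc:postgresql://localhost:5432/store?protocolVersion=3
db.user=symmetric
db.password=secret
```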
The weird thing is that SymmetricDS is able to create the sym_* tables but then complains about not being able to read them. I have verified that the tables do not exist before bin/sym is run, but do exist afterwards. Here is an excerpt from the log:
// Successful creation of table
[store-001] - PostgreSqlSymmetricDialect - DDL applied: CREATE TABLE "sym_notification"(
"notification_id" VARCHAR(128) NOT NULL,
...
PRIMARY KEY ("notification_id")
)
...
// Unable to read from created table
[store-001] - PostgreSqlDdlReader - Failed to read table: sym_notification
[store-001] - PostgreSqlDdlReader - Failed to read table: sym_notification
[store-001] - AbstractDatabaseWriter - Did not find the sym_notification table in the target database
[store-001] - PostgreSqlDdlReader - Failed to read table: sym_monitor
[store-001] - PostgreSqlDdlReader - Failed to read table: sym_monitor
[store-001] - AbstractDatabaseWriter - Did not find the sym_monitor table in the target database
The same error applies to all the sym_* tables.
The databases are running in Docker, but since SymmetricDS is not complaining about being unable to connect and is able to create the tables, I assume this is not related to Docker.
The Postgres database is created by the same user as specified in engines/store-001.properties. Could this still have something to do with roles and access privileges?

If you upgrade to the latest JDBC driver from Postgres, it will work.
Replace the existing Postgres driver in the lib directory with the latest one from here: https://jdbc.postgresql.org/download.html

Try connecting to the Postgres database with the same username/password used by SymmetricDS from a DB navigator, for example JetBrains DataGrip, and then try inserting, updating, and selecting something from the sym_* tables. Assign access rights to the user if necessary.
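If the permissions do turn out to be the problem, granting the SymmetricDS user rights on the schema is straightforward. A sketch, assuming hypothetical names ("symmetric" for the user, "public" for the schema) rather than anything from the original post:

```sql
-- Run as a superuser or the schema owner; names are placeholders.
GRANT USAGE ON SCHEMA public TO symmetric;
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO symmetric;
```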

When using Postgres 9.6.1 (the latest release at the time of writing) the following error is logged on the server when running bin/sym:
ERROR: column am.amcanorder does not exist at character 427
The problem was resolved by using Postgres 9.5.5 instead, thanks to Nick Barnes pointing this out in a comment.
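For context (not stated in the original answer): PostgreSQL 9.6 removed the boolean capability columns such as amcanorder from the pg_am catalog, which is why older JDBC drivers that still query them fail; upgrading the driver is the alternative to downgrading the server. On 9.6+, the same information is exposed through a function instead, which can be probed like this:

```sql
-- 9.6+ replacement for the old pg_am boolean columns.
SELECT pg_indexam_has_property(oid, 'can_order')
FROM pg_am
WHERE amname = 'btree';
```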

Related

Database Migration Service - Aurora PostgreSql -> CloudSQL fails with confusing error (Unable to drop table postgres)

Attempting to migrate from AWS Aurora PostgreSQL 13.4 to Google Cloud SQL PostgreSQL 13.
Migration job gives this error:
finished setup replication with errors: failed to drop database "postgres": generic::unknown: retry budget exhausted (10 attempts): pq: database "postgres" is being accessed by other users
The user the DMS is using only has SELECT permissions on the source database (Aurora).
I'm very confused as to why it is trying to drop the "postgres" database at all, and I'm not sure whether it is trying to drop the database on the source or the destination. Not sure what I'm missing.
I've installed the necessary extensions in the destination DB (pg_cron). No difference.
The user in the source database has SELECT on all tables/schemas outlined in the docs (including the pglogical schema).
I've tried various PostgreSQL versions in the destination cluster (13.x, 14.x). No difference.
The "Test connection" tool when creating the migration job shows no errors. (There is a warning about a few tables not having primary keys, but that's it.)

Apache Superset: Migrate from sqlite to Postgres

When I migrate from sqlite to postgres, I cannot make any writes to the new database.
The log shows the following error:
Unique key conflict id= 10, already exist in table ab_user
Two potential scenarios:
Scenario A: You are starting from scratch and want to use Postgres instead of sqlite.
Install postgres
Add postgres connection string to superset_config.py file
Run superset db upgrade; this will create all the tables on Postgres
Run superset init
Launch Superset
Scenario B: You have an already populated sqliteDB and want to migrate it to Postgres.
Install postgres
Add postgres connection string to superset_config.py file
Copy all tables from sqlite to postgres; there are many ways to do this. My preferred way is to use Ruby.
After you have copied your data, run superset db upgrade and superset init
After you have done this you will need to update the primary key sequences on Postgres (they are listed in information_schema.sequences), otherwise you will hit unique key conflict errors.
Launch Superset
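The sequence fix from the steps above can be sketched as follows: run one setval per table that has a serial primary key. Here ab_user and its id column are taken from the error message; repeat for your other tables:

```sql
-- Align the sequence with the current max id so new inserts
-- don't collide with rows copied from sqlite.
SELECT setval(pg_get_serial_sequence('ab_user', 'id'),
              COALESCE((SELECT MAX(id) FROM ab_user), 1));
```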
In both scenarios you should see the message below after you have run superset db upgrade; this means you have configured your superset_config.py properly:
INFO [alembic.runtime.migration] Context impl PostgresqlImpl.
Make sure superset is looking at the config file by setting the environment variables:
export PYTHONPATH=/home/local_settings/:$PYTHONPATH
OR
export SUPERSET_CONFIG_PATH=/home/local_settings/

Postgresql Monitoring using Grafana

I am very new to Grafana. We have set up Grafana and now we want to monitor our PostgreSQL DB using it.
We have created the PostgreSQL datasource and provided all the required details of our psql DB machine, and the connection shows OK in Grafana. We then imported the Grafana dashboard with ID 9948. The dashboard is imported but doesn't show any stats.
We are getting the below error:
Error pq: relation "collectd" does not exist
Error Templating init failed
pq: relation "identifiers" does not exist
We have also installed collectd using apt install collectd on the DB machine but didn't find any configurable option in the collectd.conf file.
Can you please help me?
Thanks.
You need to install TimescaleDB and create the PG extension:
\c database
CREATE EXTENSION IF NOT EXISTS timescaledb CASCADE;
Once done, follow the instructions from the following Git project:
https://github.com/petercelentanojr/timescaledb_collectd_resources/blob/master/collectd_timescaledb_schema/collectd_timescaledb_bootstrap.sql

Why is Nhibernate SchemaExport unable to create a PostgreSQL database?

I have the following code to generate the schema for a database in Nhibernate
new SchemaExport(configuration).Execute(true, true, false);
but when run against a PostgreSQL database, I end up getting the following error
[NpgsqlException (0x80004005): FATAL: 3D000: database "dbname" does not exist]
If I, however, create the database manually, the schema is exported without errors. And so the question: why is NHibernate SchemaExport unable to create a PostgreSQL database, when this works against other databases like SQLite, MsSqlCe, and MsSql Server?
I have searched for online literature but have been unable to locate anything highlighting this issue.
I am using Nhibernate 3.3.1 with PostgreSQL 9.2.
You must create the database before you can create the tables and other objects within the database.
Do this with a CREATE DATABASE statement on a PostgreSQL connection - either in your app, or via psql or PgAdmin-III.
PostgreSQL doesn't support creating databases on demand / first access. Perhaps that's what your tool is expecting?
If you think the DB does exist and you can see it in other tools, maybe you're not connecting to the same database server? Check the server address and port.
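A minimal sketch of creating the database up front, before running SchemaExport. The database name is taken from the error message; the owner name is a placeholder:

```sql
-- Run while connected to the default "postgres" maintenance database.
-- "myuser" should be the same role NHibernate connects as.
CREATE DATABASE dbname OWNER myuser;
```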

Updating Heroku Postgres from dev to basic

I am trying to update my database from Dev to Basic on Heroku. I followed all the steps mentioned here, but after heroku pg:promote HEROKU_POSTGRESQL_WHATEVER I wanted to check if my database had everything, so I just went and looked on the website, and for the basic version it says:
(Screenshot: the Basic database shows Data Size 0 B, Tables 0, PG Version ?, while it should show the migrated data.)
I am not sure what went wrong.
The table and database size is computed via an asynchronous process, so this can sometimes take a little while to show. If you've recently migrated, then you should try connecting with heroku pg:psql and running:
VACUUM ANALYZE;
This will ensure Postgres has proper statistics and then reports the tables correctly when Heroku asks about the table size. Additionally, you could manually explore your database once connected to ensure your data is there:
\dt                 -- to display tables
SELECT * FROM foo;  -- to ensure data is there in a specific table