PostgreSQL error relation already exists while creating a partition - postgresql

I created a partition (programmatically with Java, JPA/native query), then deleted it manually in pgAdmin with DROP TABLE my_partition. After that, when I try to re-create it programmatically, I get this error:
SQL Error: 0, SQLState: 42P07
ERROR: relation "partition_2020_12_08" already exists
CREATE TABLE "myschema.com".partition_2020_12_08 PARTITION OF "myschema.com".measurement FOR VALUES FROM (1607385600000) TO (1607471999999)
Interestingly, when I execute that SQL in pgAdmin, it works fine. It looks to me as if PostgreSQL caches some information when I use it through the JDBC/Java driver.
How can I debug this issue? I need to be able to re-create the same partitions when needed.
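As a hedged aside (not from the thread): one way to make the statement re-runnable is IF NOT EXISTS, which turns the duplicate-name failure into a notice; it is also worth verifying that the earlier DROP actually committed on the JDBC connection, since an open transaction with auto-commit off would keep the old relation visible:

-- Idempotent re-creation of the partition, same schema and bounds as above.
DROP TABLE IF EXISTS "myschema.com".partition_2020_12_08;
CREATE TABLE IF NOT EXISTS "myschema.com".partition_2020_12_08
PARTITION OF "myschema.com".measurement
FOR VALUES FROM (1607385600000) TO (1607471999999);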

Related

SQL Error [42P07]: ERROR: relation "table1" already exists

While running a script that creates tables, in DBeaver v22, the error below is returned for a random table every time I run the SQL script and it hits a query that creates a table.
The script is a few thousand lines long, with lots of DROP TABLE and CREATE TABLE statements, and the very same error happens at a random point in each execution.
At the time I created this thread, the script failed on the creation of table1, but it could have been any other table. It doesn't seem to be an error in the syntax/grammar of my SQL, but somehow in the engine of DBeaver 22.2, because the error comes from a different, random table on each run of the script.
SQL Error [42P07]: ERROR: relation "table1" already exists
Even though I added the following query to DROP TABLE right before the one to CREATE TABLE, the error still returns when the CREATE query gets executed.
DROP TABLE IF EXISTS sandbox.table1;
CREATE TABLE sandbox.table1 as ();
I wonder if dropping the table takes long enough that the CREATE command errors out. Could that be the cause? Do I need a timer to wait for the RDBMS to fully drop the table?
Looking at further logs, I identified the root cause: a permission error. Since the table couldn't be deleted, creating it afterwards caused the error:
org.jkiss.dbeaver.model
Error
Wed Dec 07 11:38:44 BRT 2022
SQL Error [42501]: ERROR: permission denied for relation table1
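A hedged sketch of the corresponding fix, assuming the script runs as a role without sufficient rights on the existing table (etl_user is a placeholder role name); note that GRANT covers reads and writes, but DROP TABLE itself requires ownership:

-- Run as the current table owner or a superuser; etl_user is hypothetical.
GRANT ALL PRIVILEGES ON TABLE sandbox.table1 TO etl_user;
-- DROP TABLE requires ownership, so transferring it is the operative fix:
ALTER TABLE sandbox.table1 OWNER TO etl_user;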

invalid input syntax for type json aws dms postgres

I'm running a task that migrates all data from a PostgreSQL 10.4 instance to an RDS PostgreSQL 10.4 instance.
I am not able to migrate tables which have a jsonb column.
After the error, the whole table gets suspended, even though the table contains only 449 rows.
I have set the following error policy, but the whole table is still suspended:
"DataErrorPolicy": "IGNORE_RECORD",
"DataTruncationErrorPolicy": "IGNORE_RECORD",
"DataErrorEscalationPolicy": "SUSPEND_TABLE",
"DataErrorEscalationCount": 1000,
My expectation is that the whole table should be transferred; DMS can ignore a record if any of its JSON is wrong.
I don't know why it is giving the error 'invalid input syntax for type json'; I have checked all the JSON values and they are valid.
After debugging further, this error was treated as a TABLE error, but why? That is why the table got suspended, since TableErrorPolicy is 'SUSPEND_TABLE'.
Why is this error considered a table error instead of a record error?
Is the JSONB column type not supported by DMS, and is that why we are getting the error below?
Logs:
2020-09-01T12:10:04 I: Next table to load 'public'.'TEMP_TABLE' ID = 1, order = 0 (tasktablesmanager.c:1817)
2020-09-01T12:10:04 I: Start loading table 'public'.'TEMP_TABLE' (Id = 1) by subtask 1. Start load timestamp 0005AE3F66381F0F (replicationtask_util.c:755)
2020-09-01T12:10:04 I: REPLICA IDENTITY information for table 'public'.'TEMP_TABLE': Query status='Success' Type='DEFAULT' Description='Old values of the Primary Key columns (if any) will be captured.' (postgres_endpoint_unload.c:191)
2020-09-01T12:10:04 I: Unload finished for table 'public'.'TEMP_TABLE' (Id = 1). 449 rows sent. (streamcomponent.c:3485)
2020-09-01T12:10:04 I: Table 'public'.'TEMP_TABLE' contains LOB columns, change working mode to default mode (odbc_endpoint_imp.c:4775)
2020-09-01T12:10:04 I: Table 'public'.'TEMP_TABLE' has Non-Optimized Full LOB Support (odbc_endpoint_imp.c:4788)
2020-09-01T12:10:04 I: Load finished for table 'public'.'TEMP_TABLE' (Id = 1). 449 rows received. 0 rows skipped. Volume transferred 190376. (streamcomponent.c:3770)
2020-09-01T12:10:04 E: RetCode: SQL_ERROR SqlState: 22P02 NativeError: 1 Message: ERROR: invalid input syntax for type json; Error while executing the query (ar_odbc_stmt.c:2648)
2020-09-01T12:10:04 W: Table 'public'.'TEMP_TABLE' (subtask 1 thread 1) is suspended (replicationtask.c:2471)
Edit: after debugging further, this error was treated as a TABLE error, but why?
The JSONB column must be nullable in the target DB.
Note: in my case, the error disappeared after making the JSONB column nullable.
As mentioned in the AWS documentation:
In this case, AWS DMS treats JSONB data as if it were a LOB column. During the full load phase of a migration, the target column must be nullable.
https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.PostgreSQL.html#CHAP_Source.PostgreSQL.Prerequisites
https://aws.amazon.com/premiumsupport/knowledge-center/dms-error-null-value-column/
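As a hedged illustration of that fix, assuming the target table is the 'public'.'TEMP_TABLE' from the logs and a hypothetical jsonb column named payload, the NOT NULL constraint would be dropped on the target before the full load:

-- Run against the target RDS database; payload is a placeholder column name.
ALTER TABLE public.temp_table ALTER COLUMN payload DROP NOT NULL;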
AWS DMS treats the JSON data type in PostgreSQL as a LOB data type column. This means that the LOB size limitation when you use limited LOB mode applies to JSON data. For example, suppose that limited LOB mode is set to 4,096 KB. In this case, any JSON data larger than 4,096 KB is truncated at the 4,096 KB limit and fails the validation test in PostgreSQL.
Reference: AWS DMS - JSON data types being truncated
Update: You can tweak the error-handling task settings to skip erroneous rows by setting DataErrorPolicy to IGNORE_RECORD, which determines the action AWS DMS takes when there is an error related to data processing at the record level.
Some examples of data-processing errors include conversion errors, errors in transformation, and bad data. The default is LOG_ERROR. With IGNORE_RECORD, the task continues and the data for that record is ignored.
Reference: AWS DMS - Error handling task settings
You mentioned that you're migrating from PostgreSQL to PostgreSQL. Is there a specific reason to use AWS DMS?
AWS Docs: https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.PostgreSQL.html#CHAP_Source.PostgreSQL.Homogeneous
When you migrate from a database engine other than PostgreSQL to a PostgreSQL database, AWS DMS is almost always the best migration tool to use. But when you are migrating from a PostgreSQL database to a PostgreSQL database, PostgreSQL tools can be more effective.
...
We recommend that you use PostgreSQL database migration tools such as pg_dump under the following conditions:
You have a homogeneous migration, where you are migrating from a source PostgreSQL database to a target PostgreSQL database.
You are migrating an entire database.
The native tools allow you to migrate your data with minimal downtime.

How to set table to LOGGED in postgresql 9.2?

I'm using PostgreSQL 9.2. I created a bunch of UNLOGGED tables, loaded data into them, and created primary keys. Now I'd like to set them back to LOGGED status.
I tried the command:
ALTER TABLE table_name SET LOGGED;
However, I get this error:
ERROR: syntax error at or near "LOGGED"
What is the proper syntax for this?
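For context, and not an answer from the thread: ALTER TABLE ... SET LOGGED only exists from PostgreSQL 9.5 onward, so the syntax error on 9.2 is expected. A hedged sketch of the usual 9.2-era workaround, with table_name standing in for one of your tables:

-- PostgreSQL 9.5+ accepts this directly; 9.2 does not:
-- ALTER TABLE table_name SET LOGGED;
-- On 9.2, rebuild the table as a logged one and swap it in:
BEGIN;
CREATE TABLE table_name_new (LIKE table_name INCLUDING ALL);
INSERT INTO table_name_new SELECT * FROM table_name;
-- Caveat: a serial sequence is still OWNED BY the old table's column and
-- would be dropped with it; reattach it first, e.g.:
-- ALTER SEQUENCE table_name_id_seq OWNED BY table_name_new.id;
DROP TABLE table_name;
ALTER TABLE table_name_new RENAME TO table_name;
COMMIT;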

DB2 SQL Error: SQLCODE=-911, SQLSTATE=40001, SQLERRMC=68

I got this error when I ran:
alter table tablename add column columnname varchar(1) default 'N';
DB2 SQL Error: SQLCODE=-911, SQLSTATE=40001, SQLERRMC=68
How to solve it?
The ALTER statement wants to get an X lock on this row in SYSIBM.SYSTABLES, but an open transaction holds that row/index value in an incompatible lock state. The lock that caused the timeout could even come from an open cursor that reads this row with an RS or RR isolation level.
Terminate any other SQL currently querying SYSTABLES, and any utilities that may be updating SYSTABLES, such as REORG and RUNSTATS, then try the ALTER again.
See the DB2 Info Center (I picked the one for DB2 10; most likely this error code is the same in other versions, but double-check!).
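A hedged way to see who holds that lock, assuming DB2 for Linux/UNIX/Windows where the SYSIBMADM administrative views are available (they are deprecated in newer releases in favor of MON_GET_LOCKS, and column names can vary by version):

-- List lock holders on the catalog table being altered (DB2 LUW).
SELECT agent_id, lock_mode, lock_status
FROM sysibmadm.locks_held
WHERE tabschema = 'SYSIBM' AND tabname = 'SYSTABLES';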
It seems there is a transaction open on your table that prevents your ALTER command from executing.
After you have altered a table you need to REORG it; read up on it here:
Run the runstats script, which is a DB2 script, at regular intervals, and set it to gather RUNSTATS WITH DISTRIBUTION AND DETAILED INDEXES ALL.
In addition to running the runstats script regularly, you can perform the following tasks to avoid the problem (see the sketch after this list):
Use REOPT ONCE or REOPT ALWAYS with the command-line interface (CLI) packages to change the query optimization behavior.
In the DB2 database, change the table to make it volatile. Volatile tables indicate to the DB2 optimizer that the table cardinality can change significantly at run time (from empty to large and vice versa), so DB2 uses an index to access the table rather than a table scan.
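A hedged sketch of those maintenance steps as executable statements, assuming DB2 LUW and a placeholder table myschema.tablename; REORG and RUNSTATS are CLP commands, so from SQL they are routed through ADMIN_CMD:

-- Reorganize the table after the ALTER.
CALL SYSPROC.ADMIN_CMD('REORG TABLE myschema.tablename');
-- Gather the statistics recommended above.
CALL SYSPROC.ADMIN_CMD('RUNSTATS ON TABLE myschema.tablename WITH DISTRIBUTION AND DETAILED INDEXES ALL');
-- Mark the table volatile so the optimizer favors index access.
ALTER TABLE myschema.tablename VOLATILE CARDINALITY;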

Table invisible in PostgreSQL - Undefined relation issue at different sessions

I executed the following create statement using SQLWorkbench against my target PostgreSQL database:
CREATE TABLE Config (
id serial PRIMARY KEY,
pub_ip_range_low varchar(100),
pub_ip_range_high varchar(100)
);
Right after creating the table, I request its content by typing 'select * from config;' and see that the table can be retrieved. Nevertheless, my Java program, which uses a JDBC type 4 driver, cannot access the table when it issues the same select statement: an exception saying "Undefined relation" for the config table is thrown when the program tries to access it.
My questions are:
Why does SQLWorkbench, where I had previously run the create statement, recognize the table, while my Java program cannot find it?
Where does the PostgreSQL DBMS put the tables I create? I see them neither in public nor in the information schema.
NOTE:
I checked the target Postgres database and cannot see the table Config anywhere, although SQL Workbench can query it. Then I opened another SQL Workbench instance and noticed that the table cannot be queried (i.e. not found) there either. So my conclusion is that PostgreSQL puts the table I created in the first running SQLWorkbench instance into some location that is bound to that session; another SQL Workbench instance, or my Java program, is not bound to that session, so it cannot query the previously created table config.
The only "bloody location" that is session-local in PostgreSQL is the schema pg_temp, in other words: temporary tables. But your CREATE command does not contain the keyword TEMP[ORARY]. And of course, as long as the transaction is not committed, nobody outside the transaction sees anything.
It's more likely you are seeing a switcheroo of hosts / databases / ports / or the schema search_path. A mix-up with the mixed-case table name is a hot candidate, too: if you don't double-quote "Config", the table name ends up all lower-case in the system, so: config. If you later double-quote the name, it won't match. The manual has the details.
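A small demonstration of that case-folding rule (standard PostgreSQL identifier behavior, not code from the thread):

CREATE TABLE Config (id serial PRIMARY KEY);  -- unquoted, so stored as config
SELECT * FROM config;                         -- works
SELECT * FROM "Config";                       -- ERROR: relation "Config" does not exist
SHOW search_path;                             -- also worth checking when a table "vanishes"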
Maybe the create failed on the extra trailing comma?
CREATE TABLE config (
id serial PRIMARY KEY,
pub_ip_range_low varchar(100),
pub_ip_range_high varchar(100) -- no trailing comma here
);