pglogical.replicate_ddl_command quote handling - postgresql

Provider is on OEL 7 with PostgreSQL 12.4, and Subscriber is on RDS 13.2.
pglogical.replicate_ddl_command works fine as long as there are no quotes inside the command. For example, the following works fine:
select pglogical.replicate_ddl_command('create table public.foo ( like public.orders including all)','{default}'::text[]);
Event triggers are set up to add this newly created table to the default replication set.
Next, we need to attach table foo as a partition of table orders, and that's where the quotes in FOR VALUES become a problem:
select pglogical.replicate_ddl_command('Alter table public.orders attach partition public.foo for values from ('2021-05-01') TO ('2021-06-01')','{default}'::text[]);
ERROR: syntax error at or near "2021"
LINE 1: ...ers attach partition public.foo for values from ('2021-05-01...
^
Could not find anything in the docs related to this. Please help.

While I was able to get past the error by doubling the single quotes (''), since the command argument of the function is text, the ATTACH PARTITION did not replicate. Although that is a separate issue from what I asked in this question, I'd appreciate any feedback and ideas.
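For reference, the embedded quotes can be escaped either by doubling them or by dollar-quoting the outer string; a sketch of both, using the same tables as above:
select pglogical.replicate_ddl_command('alter table public.orders attach partition public.foo for values from (''2021-05-01'') to (''2021-06-01'')','{default}'::text[]);
select pglogical.replicate_ddl_command($ddl$alter table public.orders attach partition public.foo for values from ('2021-05-01') to ('2021-06-01')$ddl$,'{default}'::text[]);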
Leaving this note for anyone else who runs into my situation: the requirement is that you add both the parent and all of its partitions to the same replication set. In my case I had all the partitions added but not the parent.
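A sketch of the step that was missing in my case, assuming the default replication set and the parent table from the example above:
select pglogical.replication_set_add_table('default', 'public.orders'::regclass);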

Related

Trying to select all rows in this column with a certain value, and failing [duplicate]

I'm writing a Java application to automatically build and run SQL queries. For many tables my code works fine, but on a certain table it gets stuck, throwing the following exception:
Exception in thread "main" org.postgresql.util.PSQLException: ERROR: column "continent" does not exist
Hint: Perhaps you meant to reference the column "countries.Continent".
Position: 8
The query that has been run is the following:
SELECT Continent
FROM network.countries
WHERE Continent IS NOT NULL
AND Continent <> ''
LIMIT 5
This essentially returns 5 non-empty values from the column.
I don't understand why I'm getting the "column does not exist" error when it clearly does exist in pgAdmin 4. I can see that there is a schema with the name Network which contains the table countries, and that table has a column called Continent, just as expected.
Since all column, schema, and table names are retrieved by the application itself, I don't think there has been a spelling or semantic error, so why does PostgreSQL cause problems regardless? Neither running the query in pgAdmin 4 nor using the suggested countries.Continent works.
My PostgreSQL version is the newest as of now:
$ psql --version
psql (PostgreSQL) 9.6.1
How can I successfully run the query?
Try putting it in double quotes, like "Continent", in the query. PostgreSQL folds unquoted identifiers to lowercase, so a column created with a quoted mixed-case name must always be quoted:
SELECT "Continent"
FROM network.countries
...
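A minimal illustration of the folding rule, using a hypothetical table t:
CREATE TABLE t ("Continent" text);
SELECT continent FROM t;     -- ERROR: column "continent" does not exist
SELECT "Continent" FROM t;   -- works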
While working in a SQLAlchemy environment, I got this error with SQL like this:
db.session.execute(
text('SELECT name,type,ST_Area(geom) FROM buildings WHERE type == "plaza" '))
ERROR: column "plaza" does not exist
Well, I changed == to =, but the error persisted; then I swapped the quotes, as follows, and it worked. Weird! (In SQL, double quotes denote identifiers and single quotes denote string literals, which is why "plaza" was being parsed as a column name.)
....
text("SELECT name,type,ST_Area(geom) FROM buildings WHERE type = 'plaza' "))
This problem can also occur in Postgres when the actual table name is not tablename but "tablename".
For example, if user is shown as the table name,
then the table name is really "user".
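For example (hypothetical), a table shown as user has to be quoted in queries, since user is also a reserved word:
SELECT * FROM user;      -- fails
SELECT * FROM "user";    -- works, when the table was created as "user"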
Such an error can also appear when you accidentally add a space to the name of a column (for example "users ").
QUICK FIX (TRICK)
If you recently added a field that you had already deleted before, and you are now trying to add the same field back, let me share a simple trick. I did this and the problem was gone!
Delete the app's migration folder entirely. Then, instead of re-adding that field, add a field with a name you have never used in this app before (for example, if you were trying to add a title field, create it as heading). Run the migrations for that app and start the server again. Now go to the admin page, find that model and delete all of its objects, then go back to the models, rename the field you just made to the name you originally wanted, and run the migrations again. The problem should be gone.
This happens when objects already exist in the database but you added a field that didn't exist when those earlier objects were created; by deleting those objects we can create fresh ones again.
I got the same error when doing a PIVOT in Redshift.
My code is similar to:
SELECT *
INTO output_table
FROM (
SELECT name, year_month, sales
FROM input_table
)
PIVOT
(
SUM(sales)
FOR year_month IN ('nov_2020', 'dec_2020', 'jan_2021', 'feb_2021', 'mar_2021', 'apr_2021', 'may_2021', 'jun_2021', 'jul_2021', 'aug_2021',
'sep_2021', 'oct_2021', 'nov_2021', 'dec_2021', 'jan_2022', 'feb_2022', 'mar_2022', 'apr_2022', 'may_2022', 'jun_2022',
'jul_2022', 'aug_2022', 'sep_2022', 'oct_2022', 'nov_2022')
)
I tried year_month without any quotes (got the error), with double quotes (got the error), and finally with single quotes (which worked). This may help if someone is in the same situation as in my example.

Mixed SRIDs in trigger stopping QGIS from committing changes

I have a trigger that takes a line, grabs its ST_StartPoint and ST_EndPoint and then grabs the nearest point to those endpoints, and assigns a node_id to a column. This Fiddle shows the trigger as well as some example data. When running this trigger, I am getting an error on QGIS stating the following:
Could not commit changes to layer pipes
Errors: ERROR: 1 feature(s) not added.
Provider errors:
PostGIS error while adding features: ERROR: Operation on mixed SRID geometries
CONTEXT: SQL statement "SELECT
j.node_id,
i.node_id
FROM ST_Dump(ST_SetSRID(NEW.geom,2346)) dump_line,
LATERAL (SELECT s.node_id,(ST_SetSRID(s.geom,2346))
FROM structures s
ORDER BY ST_EndPoint((dump_line).geom)<->s.geom
LIMIT 1) j (node_id,geom_closest_downstream),
LATERAL (SELECT s.node_id,(ST_SetSRID(s.geom,2346))
FROM structures s
ORDER BY ST_StartPoint((dump_line).geom)<->s.geom
LIMIT 1) i (node_id,geom_closest_upstream)"
PL/pgSQL function sewers."Up_Str"() line 3 at SQL statement
I have attempted to resolve the issue by editing the trigger to this but this has not fixed the problem. Any ideas would be greatly appreciated.
The line ORDER BY ST_EndPoint((dump_line).geom)<->s.geom (and the similar one for the start point) is likely the faulty one.
You could, again, declare the CRS of s.geom. Note that by doing this any spatial index on structures would not be used; it would have to be created on ST_SetSRID(geom,2346) instead.
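For illustration, a sketch of both pieces, reusing the SRID and tables from the question (the index name is made up):
-- same-SRID comparison inside the trigger query
ORDER BY ST_EndPoint((dump_line).geom) <-> ST_SetSRID(s.geom, 2346)
-- matching expression index, since a plain index on geom would no longer be used
CREATE INDEX structures_geom_2346_idx ON structures USING gist (ST_SetSRID(geom, 2346));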
The clean way would be to set the CRS at the column level on the structures table:
alter table structures alter column geom TYPE geometry(point,2346) using st_setSRID(geom,2346);

At what point are DB2 declared global temporary tables automatically deleted...?

When are DB2 declared global temporary tables 'cleaned up' and automatically deleted by the system...? This is for DB2 on AS400 v7r3m0, with DBeaver 5.2.5 as the dev client, and MS-Access 2007 for packaged apps for the end-users.
Today I started experimenting with a DGTT, thanks to this answer. So far I'm pleased with the functionality, although I did find out that our more recent system version has the WITH DATA option, which is an obvious advantage.
Everything is working, but at times I receive this error:
SQL Error [42710]: [SQL0601] NEW_PKG_SHEETS_DATA in QTEMP type *FILE already exists.
The meaning of the error is obvious, but the timing is not. When I started today, I could run the query multiple times, and the error didn't occur. It seemed as if the system was cleaning up and deleting it, which is just what I was looking for. But then the error started and now it's happening with more frequency.
If I make strategic use of DROP TABLE, this resolves the error, unless the table doesn't exist, in which case I get another error. I can also disconnect/reconnect to the server from my SQL dev client, as I would expect, since that would definitely drop the session.
This IBM article about DGTTs speaks much about sessions, but without many specifics. And this article contains possibly the longest command syntax I've yet encountered in the IBM documentation. I got through it, but it didn't answer the question of what decides when a DGTT is deleted.
So I would like to ask:
What are the boundaries of a session..?
I'm thinking this is probably defined by the environment in my SQL client..?
I guess the best/safest thing to do is use DROP TABLE as needed..?
Does any one have any tips, tricks, or pointers they could share..?
Below is the SQL that I'm developing. For brevity, I've excluded chunks of the WITH-AS and SELECT statements:
DROP TABLE SESSION.NEW_PKG_SHEETS ;
DECLARE GLOBAL TEMPORARY TABLE SESSION.NEW_PKG_SHEETS_DATA
AS ( WITH FIRSTDAY AS (SELECT (YEAR(CURDATE() - 4 MONTHS) * 10000) +
(MONTH(CURDATE() - 4 MONTHS) * 100) AS DATEISO
FROM SYSIBM.SYSDUMMY1
-- <VARIETY OF ADDITIONAL CTE CLAUSES>
-- <SELECT STATEMENT BELOW IS A BIT LONGER>
SELECT DAACCT AS DAACCT,
DAIDAT AS DAIDAT,
DAINV# AS DAINV,
CAST(DAITEM AS NUMERIC(6)) AS DAPACK,
CAST(0 AS NUMERIC(14)) AS UPCNUM,
DAQTY AS DAQTY
FROM DAILYTRANS
AND DAIDAT >= (SELECT DATEISO+000 FROM FIRSTDAY) -- 1ST DAY FOUR MONTHS AGO
AND DAIDAT <= (SELECT DATEISO+399 FROM FIRSTDAY) -- LAST DAY OF LAST MONTH
) WITH DATA ;
DROP TABLE SESSION.NEW_PKG_SHEETS ;
The DGTT will only get cleaned up automatically by Db2 when the connection ends successfully (connect reset or equivalent, depending on whatever interface to Db2 is being used).
For both Db2 for i and Db2-LUW, consider using the WITH REPLACE clause for the DECLARE GLOBAL TEMPORARY TABLE statement. That will ensure you don't need to explicitly drop the DGTT if the session remains open but the code needs the table to be replaced at next execution whether or not the DGTT already exists.
Using that WITH REPLACE clause means you do not need to worry about issuing a DROP statement for the DGTT, unless you really want to issue a drop.
Sometimes sessions may get re-used, or a close/disconnect might not happen or might not complete, or more likely a workstation performs a retry, and in those cases the WITH REPLACE can be essential for easily avoiding runtime errors.
Note that Db2 for z/OS (at v12) does not offer the WITH REPLACE clause for DGTTs, but instead has an optional ON COMMIT DROP TABLE syntax (which is not documented for Db2 for i and Db2-LUW).
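For example, a sketch of the declaration from the question with that clause added (column list trimmed; the exact clause ordering should be checked against the Db2 for i reference):
DECLARE GLOBAL TEMPORARY TABLE SESSION.NEW_PKG_SHEETS_DATA
AS ( SELECT DAACCT, DAIDAT, DAINV#, DAQTY FROM DAILYTRANS )
WITH DATA
WITH REPLACE ;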

Postgres table partition range by expression

I'm new to table partition in Postgres and I was trying something which didn't seem to work. Here's a simplified version of the problem:
CREATE table if not exists Location1
PARTITION OF "Location"
FOR VALUES FROM
(extract(epoch from now()::timestamp))
TO
(extract(epoch from now()::timestamp)) ;
insert into "Location" ("BuyerId", "Location")
values (416177285, point(-100, 100));
It fails with:
ERROR: syntax error at or near "extract"
LINE 2: FOR VALUES FROM (extract(epoch from now()::timestamp))
If I try to create the partition using literal values, it works fine.
I'm using Postgres 10 on linux.
After a lot of trial and error, I realized that Postgres 10 doesn't really have a full partitioning system out of the box. For a usable partitioned database, the following Postgres extensions need to be installed:
pg_partman https://github.com/pgpartman/pg_partman (literally a gift from God)
pg_jobmon https://github.com/omniti-labs/pg_jobmon (to monitor what pg_partman is doing)
The above advice is for those new to the scene, to help you avoid a lot of headaches.
I assumed that automatic creation of partitions done by postgres-core was a no-brainer, but evidently the postgres devs know something I don't?
If any relational database from the 80's is asked to insert a row, it succeeds after passing the constraint checks.
The latest version of Postgres 2018 is asked to insert a row: "Sorry, I don't know where to put it."
It's a good first effort, but hopefully Postgres 11 will have a proper partitioning system OOTB...
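If an extension is not an option, the bound values can at least be pre-computed so that only literals reach the DDL, for example from a DO block. A sketch, assuming the partition key is an epoch value stored as bigint and that a partition for the current month is wanted (names are illustrative):
DO $$
BEGIN
  EXECUTE format(
    'CREATE TABLE Location1 PARTITION OF "Location" FOR VALUES FROM (%s) TO (%s)',
    extract(epoch from date_trunc('month', now()))::bigint,
    extract(epoch from (date_trunc('month', now()) + interval '1 month'))::bigint
  );
END $$;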