Presto to Postgres: not able to run a query - postgresql

I have set up the connection from Presto to PostgreSQL. DESCRIBE on the table works, but a SELECT query does not.
It gives the error below:
select rul_name from fms.fmref_schema.ru_tbl;
Query 20200521_122033_00164_rmrsd, FAILED, 1 node
Splits: 17 total, 0 done (0.00%)
0:00 [0 rows, 0B] [0 rows/s, 0B/s]
Query 20200521_122033_00164_rmrsd failed: com.facebook.presto.plugin.jdbc.JdbcSplit cannot be cast to com.facebook.presto.plugin.jdbc.JdbcSplit
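For context, what "set up the connection" usually means here is a catalog properties file on the Presto coordinator and workers, named after the catalog used in the query (so etc/catalog/fms.properties for fms.fmref_schema.ru_tbl). The host, database, and credentials below are placeholders, not values from this setup:
connector.name=postgresql
connection-url=jdbc:postgresql://db-host:5432/mydb
connection-user=presto_user
connection-password=secret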

Related

PostgreSQL statement timeout error on a function when "statement_timeout = 0"

I'm using PGWatch2 monitoring for one master (write) and two replica (read) servers. The monitoring uses this function to get query information. There are no errors on the master server, but the replicas have this error:
pgwatch2#database ERROR: canceling statement due to statement timeout
pgwatch2#database STATEMENT:
with q_data as (
select
...
This query takes about 7s on the master, and both replication servers and the master have statement_timeout = 0:
statement_timeout
-------------------
0
(1 row)
I'm using PostgreSQL 12.9 on Ubuntu 20.04.1.
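One hedged suggestion, not from the original post: statement_timeout can be overridden per database or per role, so a value of 0 at the instance level does not guarantee 0 for the pgwatch2 session on the replicas. Something like this shows the session value and any overrides:
SHOW statement_timeout;  -- value in the current session
SELECT setdatabase, setrole::regrole, setconfig
FROM pg_db_role_setting;  -- per-database / per-role overrides set with ALTER DATABASE/ROLE ... SET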

SQLSTATE=23514: Facing an error on SET INTEGRITY

Can someone suggest what the problem is? Why am I not able to enable integrity, even though no constraint named CATEGORY is found?
create function FCOK_ACCOUNT_CATEGORY_C2(xmlrecord XML) returns integer
  language sql contains sql no external action deterministic
  return xmlcast(xmlquery('$d/row/c2' passing xmlrecord as "d") as integer);
reorg table FCOK_ACCOUNT_TEST inplace
db2pd -db admindb -reorg
Database Member 0 -- Database ADMINDB -- Active -- Up 10 days 03:14:17 -- Date 2021-07-26-19.28.05.648688
Table Reorg Information:
Address TbspaceID TableID PartID MasterTbs MasterTab TableName Type IndexID TempSpaceID
0x0A001F00174DF588 4 21566 n/a n/a n/a FCOK_ACCOUNT Offline 0 4
0x0A001F0027981508 3 26880 n/a n/a n/a FCOK_ACCOUNT_TEST Online 0 3
Table Reorg Stats:
Address TableName Start End PhaseStart MaxPhase Phase CurCount MaxCount Status Completion
0x0A001F00174DF588 FCOK_ACCOUNT 07/16/2021 18:38:46 07/16/2021 18:43:15 07/16/2021 18:39:40 3 IdxRecreat 0 0 Done 0
0x0A001F0027981508 FCOK_ACCOUNT_TEST 07/26/2021 18:13:16 07/26/2021 18:14:38 n/a n/a n/a 0 0 Done 0
bash-4.2$
set integrity for FCOK_ACCOUNT_TEST off;
ALTER TABLE FCOK_ACCOUNT_TEST ADD CATEGORY INTEGER generated always as (FCOK_ACCOUNT_CATEGORY_C2(XMLRECORD))
set integrity for FCOK_ACCOUNT_TEST immediate checked;
db2 "set integrity for DB2ADMIN.FCOK_ACCOUNT_TEST immediate checked"
DB21034E The command was processed as an SQL statement because it was not a
valid Command Line Processor command. During SQL processing it returned:
SQL3603N Integrity processing through the SET INTEGRITY statement has found
an integrity violation involving a constraint, a unique index, a generated
column, or an index over an XML column. The associated object is identified by
"DB2ADMIN.FCOK_ACCOUNT_TEST.CATEGORY". SQLSTATE=23514
bash-4.2$
bash-4.2$ db2 "select TYPE, ENFORCED from SYSCAT.TABCONST where CONSTNAME='CATEGORY'"
TYPE ENFORCED
0 record(s) selected.
bash-4.2$ db2 "select COLSEQ,COLNAME from SYSCAT.KEYCOLUSE where CONSTNAME='CATEGORY'"
COLSEQ COLNAME
0 record(s) selected.
bash-4.2$ db2 "reorg table db2admin.FCOK_ACCOUNT_TEST inplace"
SQL0668N Operation not allowed for reason code "1" on table
"DB2ADMIN.FCOK_ACCOUNT_TEST". SQLSTATE=57016

Postgres, I am getting ERROR: unexpected chunk number 0 (expected 1) for toast value 12599063 in pg_toast_16687

I am new to Postgres, and one of my reports that uses SELECT to extract JSON returns the following error.
ERROR: unexpected chunk number 0 (expected 1) for toast value 12599063 in pg_toast_16687
SQL state: XX000
I do not know how to proceed to fix my query. Any ideas?
Run this command:
select reltoastrelid::regclass from pg_class where relname = 'table_name';
where table_name is the table in which the error occurs. Then check whether the result matches the TOAST relation from the error message, e.g. pg_toast.pg_toast_XXXXX. Mine happened to be 16687.
Then run these commands to reindex:
REINDEX table table_name;
REINDEX table pg_toast.pg_toast_16687;
VACUUM analyze table_name;
That is data corruption:
restore from backup
upgrade to the latest PostgreSQL minor release
check the hardware
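If you want to locate the damaged rows before deciding between REINDEX and a restore, here is a minimal sketch (assuming the affected table is called table_name; the loop forces every row to be fully detoasted and reports the ones that fail):
DO $$
DECLARE
  r record;
BEGIN
  FOR r IN SELECT ctid FROM table_name LOOP
    BEGIN
      -- casting the whole row to text forces all TOASTed values to be read
      PERFORM t::text FROM table_name t WHERE t.ctid = r.ctid;
    EXCEPTION WHEN others THEN
      RAISE NOTICE 'damaged row at ctid %: %', r.ctid, SQLERRM;
    END;
  END LOOP;
END $$;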

PostgreSQL's table oid is not found in database

I have a question about PostgreSQL's table oid.
I created a table; its oid is 24622:
(-rw------- 1 postgres postgres 8192 Nov 29 17:45 24622)
and I also found files that were modified at the same time:
(-rw------- 1 postgres postgres 73728 Nov 29 17:45 12741)
(-rw------- 1 postgres postgres 32768 Nov 29 17:45 12744)
(-rw------- 1 postgres postgres 65536 Nov 29 17:45 12764)
(-rw------- 1 postgres postgres 57344 Nov 29 17:45 12767)
but those tables are not found in the same database; none of them can be found.
ksh2=# select oid,relname from pg_class where oid = '12741';
oid | relname
-----+---------
(0 rows)
How can I find those tables?
(I also changed schemas and tried to find them, but none were found.)
Thank you.
A filename doesn't necessarily correspond to the oid:
Note that while a table's filenode often matches its OID, this is not
necessarily the case; some operations, like TRUNCATE, REINDEX, CLUSTER
and some forms of ALTER TABLE, can change the filenode while
preserving the OID.
The filename is stored in the relfilenode column:
Name of the on-disk file of this relation; zero means this is a
"mapped" relation whose disk file name is determined by low-level
state
So try searching by the relfilenode:
select relname, relkind from pg_class where relfilenode = 12741;
The relkind column tells you what type of object the file represents:
r = ordinary table
i = index
S = sequence
v = view
c = composite type
t = TOAST table
f = foreign table
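As a side note that is not part of the answer above, PostgreSQL also ships a helper that maps a filenode straight back to a relation; the first argument is the tablespace oid, where 0 means the database's default tablespace:
select pg_filenode_relation(0, 12741);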

"Lost connection to MySQL server during query" in Google Cloud SQL

I am having a weird, recurring but not constant error where I get "2013, 'Lost connection to MySQL server during query'". These are the premises:
a Python app runs for around 15-20 minutes every hour and then stops (scheduled hourly by cron)
the app is on a GCE n1-highcpu-2 instance; the DB is on a D1 with a per-package pricing plan and the following MySQL flags:
max_allowed_packet 1073741824
slow_query_log on
log_output TABLE
log_queries_not_using_indexes on
the database is accessed by this app and this app only, so the usage pattern is constant: around 20 consecutive minutes per hour and then nothing at all for the other 40 minutes
the first query it runs is:
SELECT users.user_id, users.access_token, users.access_token_secret, users.screen_name, metadata.last_id
FROM users
LEFT OUTER JOIN metadata ON users.user_id = metadata.user_id
WHERE users.enabled = 1
the above query joins two tables that are each around 700 rows long and do not have indexes
after this query (which takes 0.2 seconds when it runs without problems) the app starts without any issues
Looking at the logs I see that each time this error presents itself the interval between the start of the query and the error is 15 minutes.
I've also enabled the slow query log, and those queries are logged like this:
start_time: 2014-10-27 13:19:04
query_time: 00:00:00
lock_time: 00:00:00
rows_sent: 760
rows_examined: 1514
db: foobar
last_insert_id: 0
insert_id: 0
server_id: 1234567
sql_text: ...
Any ideas?
If your connection is idle during the 15-minute gap, then you are probably seeing GCE disconnect your idle TCP connection, as described at https://cloud.google.com/compute/docs/troubleshooting#communicatewithinternet. Try the workaround that page suggests:
sudo /sbin/sysctl -w net.ipv4.tcp_keepalive_time=60 net.ipv4.tcp_keepalive_intvl=60 net.ipv4.tcp_keepalive_probes=5
(You may need to put this configuration into /etc/sysctl.conf to make it permanent)
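For reference, the persistent form of that workaround is the same three values in /etc/sysctl.conf, loaded afterwards with sudo sysctl -p:
net.ipv4.tcp_keepalive_time = 60
net.ipv4.tcp_keepalive_intvl = 60
net.ipv4.tcp_keepalive_probes = 5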