PostGIS-enabled Postgres DB JDBC Query Stuck in SQLException.setNextException - postgresql

I am currently running a PostGIS-enabled Postgres database with the following version string:
PostgreSQL 9.4.1 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.8.2 20140120 (Red Hat 4.8.2-16), 64-bit
The JDBC driver I am using to connect is
9.4-1201-jdbc41
I am running the following query:
SELECT * FROM foo;
The schema for 'foo' is as follows:
CREATE TABLE foo
(
  gid integer NOT NULL DEFAULT nextval('address_gid_seq'::regclass),
  objectid numeric(10,0),
  house_num integer,
  half_add character varying(4),
  pre_dir character varying(2),
  st_name character varying(50),
  st_type character varying(4),
  suf_dir character varying(2),
  unit_type character varying(4),
  unit_id character varying(6),
  city character varying(15),
  state character varying(2),
  zipcode numeric(10,0),
  angle numeric,
  parcel_num character varying(11),
  idnum numeric(10,0),
  status character varying(1),
  status_dat date,
  esnnum character varying(5),
  geom geometry(Point,3857),
  CONSTRAINT address_pkey PRIMARY KEY (gid)
)
I did not create this table, so I am not sure what may have gone wrong, but the row count (done as a shortcut using pgAdmin3) is ~250,000, so there is demonstrably data in there. Fetching some of the data with a LIMIT works, although it is incredibly slow.
When I pause my query in a debugger, it stops in the following stack:
PSQLWarning(SQLException).setNextException(SQLException) line: 294
PSQLWarning(SQLWarning).setNextWarning(SQLWarning) line: 213
Jdbc4ResultSet(AbstractJdbc2ResultSet).addWarning(SQLWarning) line: 2669
AbstractJdbc2ResultSet$CursorResultHandler.handleWarning(SQLWarning) line: 1841
QueryExecutorImpl$3.handleWarning(SQLWarning) line: 2179
QueryExecutorImpl.processResults(ResultHandler, int) line: 2023
QueryExecutorImpl.fetch(ResultCursor, ResultHandler, int) line: 2201
Jdbc4ResultSet(AbstractJdbc2ResultSet).next() line: 1924
I don't really have a ton of time to learn everything about how Postgres' JDBC driver is implemented, so I thought I'd shout out and see if anyone else has experienced this and whether there is something wrong with the data in the table. If I had access to the source data, I might be able to fix it on that end; but it seems strange that a query against an existing Postgres table would result in what seems to be an infinite loop.
I should add that ResultSet.next() never returns in the debugger; the code just stays in the setNextException() method indefinitely.
EDIT:
I am getting tons of messages like this in the "Messages" pane in pgAdmin:
NOTICE: [g_serialized.c:gserialized_get_type:50] entered
NOTICE: [lwgeom.c:lwgeom_set_srid:1455] entered with srid=3857
NOTICE: [lwgeom.c:lwgeom_is_empty:1233] lwgeom_is_empty: got type Point
NOTICE: [lwout_wkb.c:lwgeom_to_wkb:710] WKB output size: 25
NOTICE: [lwout_wkb.c:lwgeom_to_wkb:723] Hex WKB output size: 51
NOTICE: [lwgeom.c:lwgeom_is_empty:1233] lwgeom_is_empty: got type Point
NOTICE: [lwout_wkb.c:lwpoint_to_wkb_buf:393] Entering function, buf = 0x2acec3c3e770
NOTICE: [lwout_wkb.c:lwpoint_to_wkb_buf:395] Endian set, buf = 0x2acec3c3e772
NOTICE: [lwout_wkb.c:integer_to_wkb_buf:189] Writing value '536870913'
NOTICE: [lwout_wkb.c:lwpoint_to_wkb_buf:398] Type set, buf = 0x2acec3c3e77a
NOTICE: [lwout_wkb.c:integer_to_wkb_buf:189] Writing value '3857'
NOTICE: [lwout_wkb.c:lwpoint_to_wkb_buf:403] SRID set, buf = 0x2acec3c3e782
NOTICE: [lwout_wkb.c:ptarray_to_wkb_buf:360] Writing point #0
NOTICE: [lwout_wkb.c:ptarray_to_wkb_buf:364] Writing dimension #0 (buf = 0x2acec3c3e782)
NOTICE: [lwout_wkb.c:ptarray_to_wkb_buf:364] Writing dimension #1 (buf = 0x2acec3c3e792)
NOTICE: [lwout_wkb.c:ptarray_to_wkb_buf:369] Done (buf = 0x2acec3c3e7a2)
NOTICE: [lwout_wkb.c:lwpoint_to_wkb_buf:407] Pointarray set, buf = 0x2acec3c3e7a2
NOTICE: [lwout_wkb.c:lwgeom_to_wkb:759] buf (0x2acec3c3e7a3) - wkb_out (0x2acec3c3e770) = 51
NOTICE: [g_serialized.c:gserialized_get_type:50] entered
NOTICE: [g_serialized.c:lwgeom_from_gserialized:1137] Got type 1 (Point), srid=3857
NOTICE: [g_serialized.c:lwgeom_from_gserialized_buffer:1091] Got type 1 (Point), hasz=0 hasm=0 geodetic=0 hasbox=0
client_min_messages does not show any explicit setting.

The solution to this problem is the one mentioned in the comments:
Set client_min_messages to ERROR. This avoids shipping a dozen NOTICE messages per geometry record to the client over JDBC, which increased performance by at least an order of magnitude in my case.
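For reference, a minimal sketch of how that setting can be applied (the role and database names below are placeholders, not taken from the original setup):
-- per session, e.g. issued by the JDBC client right after connecting
SET client_min_messages TO ERROR;
-- or persisted for a role or a database
ALTER ROLE myapp SET client_min_messages = error;
ALTER DATABASE mydb SET client_min_messages = error;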

Related

psycopg2 execute_values fails with > 100 rows in supabase?

Honestly, I'm not sure whether this is a Supabase issue or a psycopg2 issue, and I would love some help debugging.
I have the following code (imports shown for context; cur, db_table, keys, update_keys and excluded_keys are set up elsewhere):
from psycopg2.extensions import AsIs
from psycopg2.extras import execute_values

args = [('HaurGlass','60000','2022-10-20T21:15:39.751Z','10130506261','ac76e8db-ace0-40df-b6fa-f470641805e9','ad43639e-f66e-49d5-8fe8-d1ce5cd26193','{}')]
statement = ('''
    INSERT INTO %s (%s) VALUES %s ON CONFLICT (company_id, crm_id)
    DO UPDATE SET (%s)=(%s) RETURNING crm_id, id''')
statement = cur.mogrify(statement,
                        (AsIs(db_table), AsIs(','.join(keys)),
                         AsIs("%s"), AsIs(','.join(update_keys)),
                         AsIs(','.join(excluded_keys))))
output = execute_values(cur, statement, args, fetch=True)
The weird thing is that if args is <=100 rows in length, this query works without any problems. As soon as I increase the length of args to 101 rows or more, my Postgres logs show:
INSERT INTO licenses (name,value,subscription_end,crm_id,company_id,csm_id,custom_data) VALUES ('HaurGlass','60000','2022-10-20T21:15:39.751Z','10130506261','ac76e8db-ace0-40df-b6fa-f470641805e9','ad43639e-f66e-49d5-8fe8-d1ce5cd26193','{}')...
which would be good, except that it's immediately followed by:
INSERT INTO licenses (name,value,subscription_end,crm_id,company_id,csm_id,custom_data) VALUES ('HaurGlass','60000','2022-10-20T21:15:39.751Z','10130506261','ac76e8db-ace0-40df-b6fa-f470641805e9',NULL,'{}'),...
I've also confirmed that the number of records in the second "NULLifying" query is exactly equal to len(args)-100.
Any idea what is going on?
OK, so it turns out I was missing the page_size parameter. execute_values splits the supplied rows into pages of page_size rows (100 by default) and runs the statement once per page, which is why the behavior changed at exactly 101 rows. All I had to do was:
output = execute_values(cur, statement, args, fetch=True, page_size=len(args))

Why does db2 timestampdiff return error SYSFUN:07?

Given a query like this one:
select timestampdiff(4, char(ORDER_DT - ORDER_DT)) as TEST
from mytable;
Using IBM Db2 for z/OS 12 with IDAA, you may get this error:
ROUTINE SYSFUN.TIMESTAMPDIFF (SPECIFIC NAME TIMESTAMPDIFF)
HAS RETURNED AN ERROR SQLSTATE WITH DIAGNOSTIC TEXT SYSFUN:07.
SQLCODE=-443, SQLSTATE=38552.
In some cases the char cast may return a leading space, so the timestampdiff argument ends up as something like ' 00000000000000.000000'. That argument triggers the SYSFUN:07 error in certain circumstances.
The fix is to cast to char(22):
select timestampdiff(4, cast(ORDER_DT - ORDER_DT as char(22))) as TEST
from mytable;

How to pass table name as a variable to execute() with the postgres crate?

I'm new to Rust and I was trying to play with the postgres crate. I was able to create a table by hardcoding the table name, but the code always panics when I try to pass the table name in from a variable.
rustc --version 1.36.0
cargo --version 1.36.0
postgres = "0.15"
fn main() {
    let conn = Connection::connect("postgresql://postgres:postgres@localhost/db1",
                                   TlsMode::None).unwrap();
    let tname = "message";
    conn.execute("CREATE TABLE IF NOT EXISTS $1 (
                      id SERIAL PRIMARY KEY,
                      title VARCHAR NOT NULL,
                      body VARCHAR,
                  )", &[&tname]).ok().expect("Table message creation failed");
}
thread 'main' panicked at 'Table message creation failed', src/libcore/option.rs:1036:5
stack backtrace:
0: std::sys::unix::backtrace::tracing::imp::unwind_backtrace
at src/libstd/sys/unix/backtrace/tracing/gcc_s.rs:39
1: std::sys_common::backtrace::_print
at src/libstd/sys_common/backtrace.rs:71
2: std::panicking::default_hook::{{closure}}
at src/libstd/sys_common/backtrace.rs:59
at src/libstd/panicking.rs:197
3: std::panicking::default_hook
at src/libstd/panicking.rs:211
4: std::panicking::rust_panic_with_hook
at src/libstd/panicking.rs:474
5: std::panicking::continue_panic_fmt
at src/libstd/panicking.rs:381
6: rust_begin_unwind
at src/libstd/panicking.rs:308
7: core::panicking::panic_fmt
at src/libcore/panicking.rs:85
8: core::option::expect_failed
at src/libcore/option.rs:1036
9: core::option::Option<T>::expect
at /rustc/a53f9df32fbb0b5f4382caaad8f1a46f36ea887c/src/libcore/option.rs:314
10: rustdb::main
at ./main.rs:27
11: std::rt::lang_start::{{closure}}
at /rustc/a53f9df32fbb0b5f4382caaad8f1a46f36ea887c/src/libstd/rt.rs:64
12: std::panicking::try::do_call
at src/libstd/rt.rs:49
at src/libstd/panicking.rs:293
13: __rust_maybe_catch_panic
at src/libpanic_unwind/lib.rs:85
14: std::rt::lang_start_internal
at src/libstd/panicking.rs:272
at src/libstd/panic.rs:394
at src/libstd/rt.rs:48
15: std::rt::lang_start
at /rustc/a53f9df32fbb0b5f4382caaad8f1a46f36ea887c/src/libstd/rt.rs:64
16: main
17: __libc_start_main
18: _start
You cannot use placeholders (e.g. $1) to substitute a table name into a query.
One of the functions of placeholders is to allow a query to be prepared once, then executed multiple times. This saves on the overhead of planning the query each time you want to use it, which can be substantial. However it would not be possible to plan the query if the database did not even know which table was being queried.
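As a rough plain-SQL illustration (reusing the message table from the question; the prepared-statement name is made up), a parameter stands in for a value inside an already-planned statement, never for an identifier:
PREPARE find_message (int) AS
    SELECT * FROM message WHERE id = $1;  -- the planner must already know the table
EXECUTE find_message(42);
EXECUTE find_message(43);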
If you need to dynamically insert the table name at runtime, you will need to do that in Rust before passing the SQL to the database:
let sql = format!("CREATE TABLE IF NOT EXISTS {} (
                       id SERIAL PRIMARY KEY,
                       title VARCHAR NOT NULL,
                       body VARCHAR
                   )", tname);
If the table name is being passed in from user input, don't forget to guard against SQL injection by validating it beforehand.
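One possible guard, sketched here in plain SQL rather than Rust (judge for yourself whether it fits your setup), is to let Postgres quote the identifier via quote_ident() or format() with %I before interpolating it:
SELECT quote_ident('message');              -- message
SELECT quote_ident('users; drop table x');  -- "users; drop table x" (now just an odd name)
SELECT format('CREATE TABLE IF NOT EXISTS %I (id serial PRIMARY KEY)', 'message');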
Also note that the panic is due to the use of .ok().expect(....).
ok() takes the result of executing the SQL and converts it into an Option. If the result was an error, it is discarded, so you never get to see the error message, which would probably have helped you diagnose the problem. Result implements expect directly, with the advantage that instead of discarding the error it displays it as part of the panic message. So you would be better off with:
conn.execute(&sql, &[]).expect("Failed creating table");
However, if there is a realistic chance that the SQL statement is going to fail you would be better off to check the result and handle it more gracefully than crashing the program.

Very large fields in AS400 iSeries database

I would like to save a large XML string (possibly longer than 32K or 64K) into an AS400 file field. Either DDS or SQL files would be OK. Example of SQL file below.
CREATE TABLE MYLIB/PRODUCT
(PRODCODE DEC (5 ) NOT NULL WITH DEFAULT,
PRODDESC CHAR (30 ) NOT NULL WITH DEFAULT,
LONGDESC CLOB (70K ) ALLOCATE(1000) NOT NULL WITH DEFAULT)
We would use RPGLE to read and write to fields.
The goal is to then pull out data via ODBC connection on a client side.
AS400 character fields seem to have a 32K limit, so this is not a great option.
What options do I have? I have been reading up on CLOBs, but there appear to be restrictions on writing large strings to CLOBs and on reading CLOB fields remotely. Note that the client is (still) on V5R4 of the AS400 OS.
Thanks!
Charles' answer below shows how to extract data. I would like to insert data. This code runs, but throws a '22501' SQL error.
D wLongDesc s 65531a varying
D longdesc s sqltype(CLOB:65531)
/free
//eval longdesc = *ALL'123';
eval Wlongdesc = '123';
exec SQL
INSERT INTO PRODUCT (PRODCODE, PRODDESC, LONGDESC)
VALUES (123, 'Product Description', :LongDesc );
if %subst(sqlstt:1:2) <> '00';
// an error occurred.
endif;
// get length explicitly, variables are setup by pre-processor
longdesc_len = %len(%trim(longdesc_data));
wLongDesc = %subst(longdesc_data:1:longdesc_len);
/end-free
C Eval *INLR = *on
C Return
Additional question: Is this technique suitable for storing data which I want to extract via an ODBC connection later? Does ODBC read the CLOB as a pointer, or can it pull out the text?
At v5r4, RPGLE actually supports 64K character variables.
However, the DB is limited to 32K for regular char/varchar fields.
You'd need to use a CLOB for anything bigger than 32K.
If you can live with 64K (or so):
CREATE TABLE MYLIB/PRODUCT
(PRODCODE DEC (5 ) NOT NULL WITH DEFAULT,
PRODDESC CHAR (30 ) NOT NULL WITH DEFAULT,
LONGDESC CLOB (65531) ALLOCATE(1000) NOT NULL WITH DEFAULT)
You can use RPGLE SQLTYPE support:
D code S 5s 0
d wLongDesc s 65531a varying
D longdesc s sqltype(CLOB:65531)
/free
exec SQL
select prodcode, longdesc
into :code, :longdesc
from mylib/product
where prodcode = :mykey;
wLongDesc = %subst(longdesc_data:1:longdesc_len);
DoSomthing(wLongDesc);
The pre-compiler will replace longdesc with a DS defined like so:
D longdesc ds
D longdesc_len 10u 0
D longdesc_data 65531a
You could simply use it directly, making sure to only use up to longdesc_len, or convert it to a VARYING field as I've done above.
If you absolutely must handle larger than 64K, you have two options:
1. Upgrade to a supported version of the OS (16MB variables are supported)
2. Access the CLOB contents via an IFS file using a file reference
Option 2 is one I've never seen used, and I can't find any examples; I just saw it mentioned in this old article:
http://www.ibmsystemsmag.com/ibmi/developer/general/BLOBs,-CLOBs-and-RPG/?page=2
This example shows how to write to a CLOB field in a Db2 database, with help from Charles and Mr Murphy's feedback.
* ----------------------------------------------------------------------
* Create table with CLOB:
* CREATE TABLE MYLIB/PRODUCT
* (MYDEC DEC (5 ) NOT NULL WITH DEFAULT,
* MYCHAR CHAR (30 ) NOT NULL WITH DEFAULT,
* MYCLOB CLOB (65531) ALLOCATE(1000) NOT NULL WITH DEFAULT)
* ----------------------------------------------------------------------
D PRODCODE S 5i 0
D PRODDESC S 30a
D i S 10i 0
D wLongDesc s 65531a varying
D longdesc s sqltype(CLOB:65531)
D* Note that the variables longdesc_data and longdesc_len
D* get created automatically by the SQL pre-processor.
/free
eval wLongdesc = '123';
longdesc_data = wLongDesc;
longdesc_len = %len(%trim(wLongDesc));
exec SQL set option commit = *none;
exec SQL
INSERT INTO PRODUCT (MYDEC, MYCHAR, MYCLOB)
VALUES (123, 'Product Description',:longDesc);
if %subst(sqlstt:1:2)<>'00' ;
// an error occurred.
endif;
Eval *INLR = *on;
Return;
/end-free

Convert a string representing a timestamp to an actual timestamp in PostgreSQL?

In PostgreSQL, I convert a string to a timestamp with to_timestamp():
select * from ms_secondaryhealthcarearea
where to_timestamp((COALESCE(update_datetime, '19900101010101'),'YYYYMMDDHH24MISS')
> to_timestamp('20121128191843','YYYYMMDDHH24MISS')
But I get this error:
ERROR: syntax error at end of input
LINE 1: ...H24MISS') >to_timestamp('20121128191843','YYYYMMDDHH24MISS')
^
********** Error **********
ERROR: syntax error at end of input
SQL state: 42601
Character: 176
Why? How do I convert a string to a timestamp?
One too many opening brackets. Try this:
select *
from ms_secondaryhealthcarearea
where to_timestamp(COALESCE(update_datetime, '19900101010101'),'YYYYMMDDHH24MISS') >to_timestamp('20121128191843','YYYYMMDDHH24MISS')
You had two opening brackets at to_timestamp:
where to_timestamp((COA.. -- <-- the second one is not needed!
@ppeterka has pointed out the syntax error.
The more pressing question is: Why store timestamp data as string to begin with? If your circumstances allow, consider converting the column to its proper type:
ALTER TABLE ms_secondaryhealthcarearea
ALTER COLUMN update_datetime TYPE timestamp
USING to_timestamp(update_datetime,'YYYYMMDDHH24MISS');
Or use timestamptz - depending on your requirements.
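For instance, the same conversion targeting timestamptz instead (a sketch; pick whichever type matches how you handle time zones):
ALTER TABLE ms_secondaryhealthcarearea
    ALTER COLUMN update_datetime TYPE timestamptz
    USING to_timestamp(update_datetime, 'YYYYMMDDHH24MISS');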
Another way to convert a string to a PostgreSQL timestamp type is the following:
SELECT to_timestamp('23-11-1986 06:30:00', 'DD-MM-YYYY hh24:mi:ss')::timestamp without time zone;
I had the same requirement, at least as I read the title: how to convert an epoch timestamp stored as text to a real timestamp. In my case I extracted one from a JSON object, so I ended up with the timestamp as text, with milliseconds:
'1528446110978' (GMT: Friday, June 8, 2018 8:21:50.978 AM)
This is what I tried. Only the last one (ts_ok_with_ms) is exactly right.
SELECT
data->>'expiration' AS expiration,
pg_typeof(data->>'expiration'),
-- to_timestamp(data->>'expiration'), < ERROR: function to_timestamp(text) does not exist
to_timestamp(
(data->>'expiration')::int8
) AS ts_wrong,
to_timestamp(
LEFT(
data->>'expiration',
10
)::int8
) AS ts_ok,
to_timestamp(
LEFT(
data->>'expiration',
10
)::int8
) + (
CASE
WHEN LENGTH(data->>'expiration') = 13
THEN RIGHT(data->>'expiration', 3) ELSE '0'
END||' ms')::interval AS ts_ok_with_ms
FROM (
SELECT '{"expiration": 1528446110978}'::json AS data
) dummy
This is the (transposed) record that is returned:
expiration 1528446110978
pg_typeof text
ts_wrong 50404-07-12 12:09:37.999872+00
ts_ok 2018-06-08 08:21:50+00
ts_ok_with_ms 2018-06-08 08:21:50.978+00
I'm sure I overlooked a simpler way to get from a timestamp string in a JSON object to a real timestamp with milliseconds (ts_ok_with_ms), but I hope this helps nonetheless.
Update: Here's a function for your convenience.
CREATE OR REPLACE FUNCTION data.timestamp_from_text(ts text)
RETURNS timestamptz
LANGUAGE SQL AS
$$
SELECT to_timestamp(LEFT(ts, 10)::int8) +
(
CASE
WHEN LENGTH(ts) = 13
THEN RIGHT(ts, 3) ELSE '0'
END||' ms'
)::interval
$$;
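A quick usage example (assuming the function was created in the data schema as above; output shown in UTC):
SELECT data.timestamp_from_text('1528446110978');
-- 2018-06-08 08:21:50.978+00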