PostgreSQL Array Insert Size Limitation Error

I get the following Postgres error:
ERROR: value too long for type character varying(1024)
The offending statement is:
INSERT INTO integer_array_sensor_data (sensor_id, "time", status, "value")
VALUES (113, 86651204, 0, '{7302225, 7302161, 7302593, 730211,
... <total of 500 values>... 7301799, 7301896}');
The table:
CREATE TABLE integer_array_sensor_data (
id SERIAL PRIMARY KEY,
sensor_id INTEGER NOT NULL,
"time" INTEGER NOT NULL,
status INTEGER NOT NULL,
"value" INTEGER[] NOT NULL,
server_time TIMESTAMP WITHOUT TIME ZONE NOT NULL DEFAULT NOW()
);
Searching the PostgreSQL documentation, I can't find anything about a limitation on array size.
Any idea how to fix this?

The problem doesn't come from the array itself, but from the varchar string literal that declares the array values in your INSERT. Some drivers type string literals as varchar(1024), which causes this issue.
Instead of
'{1,2,3}'
try using
ARRAY[1,2,3]
Otherwise, you can try declaring the type of your string as TEXT (unlimited):
'{1,2,3}'::text
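Applied to the INSERT from the question, the ARRAY constructor form would look roughly like this (the 500-value list abbreviated to a handful of elements):
INSERT INTO integer_array_sensor_data (sensor_id, "time", status, "value")
VALUES (113, 86651204, 0, ARRAY[7302225, 7302161, 7302593, 7301799, 7301896]);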

I am starting to understand my question, although I have not found a solution yet. The problem is that there is a limitation on the string somewhere at the library level. I am actually using pqxx, and you can't have strings longer than 1024 characters. I have accepted Guillaume F.'s answer because he figured this out, but the casting doesn't work. I will edit my reply once I find a solution so people know what to do.
I just tried prepared statements and they have the same limitation.
The workaround is to use COPY or its pqxx binding pqxx::tablewriter.
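For reference, the SQL side of that workaround would look something like this sketch (COPY in text format with tab-separated columns; rows abbreviated):
COPY integer_array_sensor_data (sensor_id, "time", status, "value") FROM STDIN;
113	86651204	0	{7302225,7302161,7302593}
\.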

Related

jOOQ fails with "on conflict ... where" for a partial index, but SQL works directly?

Using JOOQ 3.17.4, pgjdbc 42.5.0, postgres 14.3
I am aware of this answer here: https://stackoverflow.com/a/67292162/924597 - I've tried using the unqualified field name, but it makes no difference.
I'm trying to issue SQL that does "on conflict ... do update ... where" using a partial index, but I'm getting the error:
ERROR: there is no unique or exclusion constraint matching the ON CONFLICT specification
The strange thing is that Postgres gives the failure only when the SQL is issued through jOOQ.
If I copy the SQL out of my console (as printed by P6Spy) and paste it into IntelliJ IDEA's SQL editor, the exact same SQL executes properly.
Schema definition:
create table user_authz_request (
id bigint generated always as identity
(start with 30000000)
primary key not null,
status auth_request_status not null,
service_point_id bigint not null references service_point,
email varchar(256) not null,
client_id varchar(256) not null,
id_provider id_provider not null,
subject varchar(256) not null,
responding_user bigint references app_user null,
description varchar(1024) not null,
date_requested timestamp without time zone default transaction_timestamp(),
date_responded timestamp without time zone null
);
create unique index user_authz_request_once_active_key
on user_authz_request(service_point_id, client_id, subject)
where status = 'REQUESTED';
JOOQ Code:
db.insertInto(USER_AUTHZ_REQUEST).
set(USER_AUTHZ_REQUEST.SERVICE_POINT_ID, req.getServicePointId()).
set(USER_AUTHZ_REQUEST.STATUS, REQUESTED).
set(USER_AUTHZ_REQUEST.EMAIL, email).
set(USER_AUTHZ_REQUEST.CLIENT_ID, user.getClientId()).
set(USER_AUTHZ_REQUEST.ID_PROVIDER, idProvider).
set(USER_AUTHZ_REQUEST.SUBJECT, user.getSubject()).
set(USER_AUTHZ_REQUEST.DESCRIPTION, req.getComments()).
onConflict(
USER_AUTHZ_REQUEST.SERVICE_POINT_ID,
USER_AUTHZ_REQUEST.CLIENT_ID,
USER_AUTHZ_REQUEST.SUBJECT).
where(USER_AUTHZ_REQUEST.STATUS.eq(REQUESTED)).
doUpdate().
set(USER_AUTHZ_REQUEST.DESCRIPTION, req.getComments()).
set(USER_AUTHZ_REQUEST.DATE_REQUESTED, LocalDateTime.now()).
execute();
Generated SQL:
insert into api_svc.user_authz_request (service_point_id, status, email,
client_id, id_provider, subject,
description)
values (20000001, cast('REQUESTED' as api_svc.auth_request_status),
'email', 'client id',
cast('AAF' as api_svc.id_provider),
'subject', 'first')
on conflict (service_point_id, client_id, subject)
where status = cast('REQUESTED' as api_svc.auth_request_status) do
update
set description = 'first',
date_requested = cast('2022-09-20T05:35:35.927+0000' as timestamp(6))
The above is the SQL that fails when it runs on my server, but works fine when I execute it through IntelliJ.
What am I doing wrong?
Starting from jOOQ 3.18 and #12531, jOOQ will auto-inline all the bind variables in that WHERE clause, because it hardly ever makes sense to use bind values. It's a big exception in jOOQ's usual behaviour, because there are only very few cases where:
Bind values are syntactically correct
But at the same time, completely useless
In most other cases where bind values are auto-inlined, they are also not syntactically correct, so auto-inlining may be less controversial.
Until 3.18 and #12531 ship, you can simply inline your bind value manually instead:
where(USER_AUTHZ_REQUEST.STATUS.eq(inline(REQUESTED, USER_AUTHZ_REQUEST.STATUS))).
See also:
Inlining bind values
Auto inlining bind values for some columns
It's actually the same problem as the one you've linked: Upsert with "on conflict do update" with partial index in jOOQ, just a different manifestation.
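To see why the P6Spy output runs fine in IntelliJ while the real execution fails: the driver sends the predicate value as a bind parameter, and Postgres cannot prove that status = $1 implies the partial index's status = 'REQUESTED', so index inference finds no matching unique index. A rough psql reproduction of that situation, assuming the schema from the question (the prepared statement name is made up):
PREPARE authz_upsert (api_svc.auth_request_status) AS
insert into api_svc.user_authz_request (service_point_id, status, email,
                                        client_id, id_provider, subject, description)
values (20000001, 'REQUESTED', 'email', 'client id', 'AAF', 'subject', 'first')
on conflict (service_point_id, client_id, subject)
    where status = $1  -- bind parameter instead of an inlined literal
do update set description = 'first';
EXECUTE authz_upsert('REQUESTED');
-- expected to fail with the same "no unique or exclusion constraint" error as in the question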

Dynamic Frame writing extra columns

I have a Glue task that is reading data from S3, running a couple of SQL queries on the data, and outputting the data to Redshift. I am having an odd problem where, when writing the dynamic_frame to Redshift (using glueContext.write_dynamic_frame.from_options), new columns are being created. These are some of my existing columns with the type appended to the end. For example, if my frame schema is as follows:
id: string
value: short
value2: long
ts: timestamp
In Redshift I am seeing:
id: varchar(256)
value: smallint <---- The data here is always null
value2: bigint <---- The data here is always null
ts: timestamp
value_short: smallint
value2_long: bigint
The value_short and value2_long columns are being created at the time of execution (currently testing with credentials that have ALTER TABLE permissions).
When looking at the COPY command that was run, I can see the value_short and value2_long columns in the command. I do not see those columns in the dynamic frame before it is written with glueContext.write_dynamic_frame.from_options.
Casting the types explicitly as aloissiola suggested solved this problem for me. Specifically, I used the dynamicFrame.resolveChoice function:
# Explicitly resolve the ambiguous numeric columns before writing to Redshift
changetypes = select1.resolveChoice(
    specs=[
        ("value", "cast:int"),
        ("value2", "cast:int"),
    ]
)
It looks like you can cast to short and long types as well. https://docs.aws.amazon.com/glue/latest/dg/aws-glue-api-crawler-pyspark-extensions-types.html I went through and specified types for all my columns.
The trick is to cast the short value to integer. Long -> bigint seems to work for me.

SQL WHERE clause not functional with string

I am trying to run a query through PHP that has a WHERE clause matching a string from a column of type VARCHAR(50), yet for some reason it does not work in either PHP or MySQL Workbench. My database looks like:
Database Picture:
The table is 'paranoia', where the column 'codename' is VARCHAR(50) and 'target' is VARCHAR(50). The query I am trying to run, when searching for a codename entry clearly named '13Brownie' with no spaces, is as follows:
UPDATE paranoia SET target='sd' WHERE codename='13Brownie'
Yet for some reason passing a string as the codename argument is ineffective. The WHERE clause works when I do codename=7 or codename=3 and returns those respective integer codenames, and I can do codename=0 to get all the other lettered codenames. The string input works in neither MySQL Workbench nor the PHP script I will be using to update such selected rows, but again the integer input does.
It seems like the WHERE clause is only taking the integer value of my string input, or the column is actually made up of the integer values of each entry, but the column codename is clearly defined as VARCHAR(50). I have been searching for hours but to no avail.
It is likely that there are white-space characters in the data. Things to try:
SELECT * FROM paranoia WHERE codename like '13%'
SELECT * FROM paranoia WHERE codename = '13Brownie '
SELECT codename, LENGTH(codename) FROM paranoia
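If trailing whitespace does turn out to be the problem, a cleanup along these lines (table and column names as in the question) should make the original UPDATE match:
UPDATE paranoia SET codename = TRIM(codename);
UPDATE paranoia SET target='sd' WHERE codename='13Brownie';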
VARCHAR(10) is a valid type to accept a string of at most 10 characters. I think this can possibly happen because of a foreign key constraint enforced with another table. Check if you have this constraint using the "relation view" if you are on phpMyAdmin.
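If you'd rather check for such a constraint from SQL instead of phpMyAdmin's relation view, a query against information_schema along these lines should list any foreign key on that column (table and column names taken from the question):
SELECT constraint_name, referenced_table_name, referenced_column_name
FROM information_schema.key_column_usage
WHERE table_name = 'paranoia'
  AND column_name = 'codename'
  AND referenced_table_name IS NOT NULL;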

PostgreSQL insert query

I am trying to insert a single row into the log table, but it throws an error message.
The log table structure is like this:
no integer NOT NULL nextval('log_no_seq'::regclass)
ip character varying(50)
country character varying(10)
region character varying(10)
city character varying(50)
postalCode character varying(10)
taken numeric
date date
and my query:
INSERT INTO log (ip,country,region,city,postalCode,taken,date) VALUES
("24.24.24.24","US","NY","Binghamton","11111",1,"2011-11-09")
=> ERROR: column "postalcode" of relation "log" does not exist
second try query : (without postalcode)
INSERT INTO log (ip,country,region,city,taken,date) VALUES
("24.24.24.24","US","NY","11111",1,"2011-11-09")
=> ERROR: column "24.24.24.24" does not exist
I don't know what I did wrong...
And does PostgreSQL not have a datetime type? (2011-11-09 11:00:10)
Try single quotes (e.g. '2011-11-09')
PostgreSQL has a "datetime" type: timestamp. Read the manual here.
The double quotes "" are used for identifiers if you want them as-is. It's best if you never have to use them, as @wildplasser advised.
String literals are enclosed in single quotes ''.
Start by reading the chapter Lexical Structure. It is very informative. :)
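A minimal illustration of the two kinds of quoting, using names from the question, plus a timestamp literal for the datetime part:
-- Double quotes preserve the identifier's case; single quotes delimit string values.
SELECT "postalCode" FROM log WHERE ip = '24.24.24.24';
-- PostgreSQL's date-and-time type is timestamp:
SELECT '2011-11-09 11:00:10'::timestamp;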
Try rewriting it this way:
INSERT INTO log (ip,country,region,city,"postalCode",taken,date) VALUES
('24.24.24.24','US','NY','Binghamton','11111',1,'2011-11-09');
When you use mixed case in a column name, or reserved words (such as "column", "row", etc.), you have to use double quotes; for values, by contrast, you have to use single quotes, as you can see in the example.

Insert hex string value to sql server image field is appending extra 0

Have an image field and want to insert into this from a hex string:
insert into imageTable(imageField)
values(convert(image, 0x3C3F78...))
However, when I run a SELECT, the value is returned with an extra 0, as 0x03C3F78...
This extra 0 is causing a problem in another application; I don't want it.
How to stop the extra 0 being added?
The schema is:
CREATE TABLE [dbo].[templates](
[templateId] [int] IDENTITY(1,1) NOT NULL,
[templateName] [nvarchar](50) NOT NULL,
[templateBody] [image] NOT NULL,
[templateType] [int] NULL)
and the query is:
insert into templates(templateName, templateBody, templateType)
values('I love stackoverflow', convert(image, 0x3C3F786D6C2076657273696F6E3D.......), 2)
The actual hex string is too large to post here.
I have just had a similar problem, and I blame myself.
It is possible that you copied just part of the data you need. In my case, I added a '0' to the end of the blob.
The cause of this could be copying the value from SQL Server Management Studio to the clipboard.
insert into imageTable(imageField) values(0x3C3F78...A)
Select returned: 0x03C3F78...
insert into imageTable(imageField) values(0x3C3F78...A0)
Select returned: 0x3C3F78...
I hope this will help.
Best wishes.
This is expected for 0x0: each pair of hex digits makes one byte, so an odd-length literal like 0x0 is stored padded as 0x00.
When I run SELECT convert(varbinary(max), 0x55) I get 0x55 out on SQL Server 2008. SELECT convert(varbinary(max), 85) gives me 0x00000055, which is correct, as 85 is a 32-bit integer.
What datatype are you casting to varbinary?
Edit: I still can't reproduce this using image rather than varbinary.
Some questions though:
is this an upgraded database? What is the compatibility level?
why use image: use varbinary(max) instead
what happens when you change everything to varbinary?
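As a sketch of that last suggestion, the same table could store the body as varbinary(max) instead of the deprecated image type (structure copied from the schema in the question, hex value abbreviated):
CREATE TABLE [dbo].[templates](
[templateId] [int] IDENTITY(1,1) NOT NULL,
[templateName] [nvarchar](50) NOT NULL,
[templateBody] [varbinary](max) NOT NULL,
[templateType] [int] NULL);
-- No convert(image, ...) needed; the 0x... literal is already a binary value:
insert into templates(templateName, templateBody, templateType)
values('I love stackoverflow', 0x3C3F786D6C2076657273696F6E3D, 2);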