Postgres query error: syntax error at or near ","

I've seen and solved this error before, but I'm really lost here as to what's wrong with my query. It looks fine to me on the surface, so I'm wondering if anyone has any ideas.
Here's the query:
INSERT INTO processing_queue (
source,
description_index,
charge_index,
charge_code_index,
charge_code_label,
cpt_code_index,
hcpcs_code_index,
ms_drg_code_index,
svccd_code_index,
ndc_code_index,
no_of_drg_discharges_col_index,
revenue_code_col_index,
department_col_index,
skip_rows,
ignore_on,
sheet_index
) VALUES (
76,
NULL,
1,
0,
'Mnemonic',
NULL,
NULL,
NULL,
NULL,
NULL,
NULL,
NULL,
NULL,
'{0, 1}',
'Standard Price',
0
) RETURNING *;
I'm using the node-postgres library here and just entering the above as a raw query with the client.query method.

Confirmed that the query is not the problem; it works when I try it in psql. The error is somewhere in the thin wrapper code that I have written around node-postgres. I'll try to close this, as the answer is probably not useful to anyone except me.
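For anyone hitting the same thing, here is a minimal sketch of running the statement directly through node-postgres with no wrapper in between. This is TypeScript, and the connection setup via PG* environment variables is an assumption, not something from the original question; if this version succeeds, the problem really is in the wrapper.

import { Client } from "pg";

// The statement from the question, passed verbatim as a single query string.
const sql = `
INSERT INTO processing_queue (
source, description_index, charge_index, charge_code_index, charge_code_label,
cpt_code_index, hcpcs_code_index, ms_drg_code_index, svccd_code_index, ndc_code_index,
no_of_drg_discharges_col_index, revenue_code_col_index, department_col_index,
skip_rows, ignore_on, sheet_index
) VALUES (
76, NULL, 1, 0, 'Mnemonic',
NULL, NULL, NULL, NULL, NULL,
NULL, NULL, NULL,
'{0, 1}', 'Standard Price', 0
) RETURNING *;`;

async function main(): Promise<void> {
  const client = new Client(); // connection details come from PG* environment variables (assumed)
  await client.connect();
  try {
    const result = await client.query(sql);
    console.log(result.rows); // the inserted row, courtesy of RETURNING *
  } finally {
    await client.end();
  }
}

main().catch(console.error);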

Related

Getting "invalid byte sequence for encoding "UTF8": 0x00" error when executing aws_s3.table_import_from_s3 command in postgresql

I am getting the error below when executing:
SELECT aws_s3.table_import_from_s3( 'test',
'a,b,c,d,e',
'(format csv)',
'abc-ttt-dev',
'outer/inner/Inbound/sample.csv',
'us-east-1'
);
SQL Error [22021]: ERROR: invalid byte sequence for encoding "UTF8": 0x00
Where: COPY test, line 1
SQL statement "copy test (a,b,c,d,e) from '/rdsdbdata/extensions/aws_s3/amazon-s3-fifo-6826-20210708T140854Z-0' with (format csv)"
Just for information, the query below works perfectly fine.
SELECT aws_s3.table_import_from_s3(
'test',
'a,b,d,e',
'DELIMITER ''|''',
'abc-ttt-dev',
'outer/inner/Inbound/sample.txt',
'us-east-1'
);
The table script is:
CREATE TABLE test (
a text not NULL,
b text not NULL,
c text not NULL,
d text not NULL,
e text not NULL
);
I think there are NUL (0x00) bytes in your CSV file, and that is the reason you get this error.
You can either preprocess the file (for example with a Lambda function) or refer to "Postgres error on insert - ERROR: invalid byte sequence for encoding “UTF8”: 0x00" for other workarounds.
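If you take the preprocessing route, here is a minimal sketch in Node/TypeScript of stripping the 0x00 bytes before the import. The file names are illustrative, and the same replace could just as well run inside the Lambda mentioned above.

import { readFileSync, writeFileSync } from "fs";

// Read with latin1 so the raw bytes survive unchanged, drop every NUL (0x00),
// and write out a cleaned copy for aws_s3.table_import_from_s3 to load.
const raw = readFileSync("sample.csv", "latin1");
const cleaned = raw.split("\u0000").join("");
writeFileSync("sample_clean.csv", cleaned, "latin1");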

How to use transaction with PQexecPrepared libpq

I'm new to PostgreSQL and would like to ask how to do a transaction using BEGIN, COMMIT and PQexecPrepared. In my program, I have to update many tables before COMMIT. I do understand that we need to use:
1. PQexec(conn,"BEGIN");
2. execute some queries
3. PQexec(conn,"COMMIT");
I first tried using PQexecParams, and it worked:
PQexec(conn,"BEGIN");
PQexecParams(conn, "INSERT INTO Cars (Id,Name, Price) VALUES ($1,$2,$3)",
3, NULL, parValues, NULL , NULL, 0 );
PQexec(conn,"COMMIT");
However, when I tried using PQexecPrepared, my table Cars wasn't updated after COMMIT (of course it worked in autocommit mode without BEGIN and COMMIT):
PQexec(conn,"BEGIN");
PQprepare(conn,"teststmt", "INSERT INTO Cars (Id,Name, Price) VALUES ($1,$2,$3)", 3, NULL );
PQexecPrepared(conn, "teststmt", 3, parValues,NULL, NULL,0);
PQexec(conn,"COMMIT");
Do you have any advice in this case?

HSQLDB merge WHEN MATCHED AND fails

I have the following tables:
create table WorkPendingSummary
(
WorkPendingID int not null,
WorkPendingDate date not null,
Status varchar(20) not null,
EndDate date null
)
create table WorkPendingSummaryStage
(
WorkPendingID int not null,
WorkPendingDate date not null,
Status varchar(20) not null
)
I then have the following merge statement:
MERGE INTO WorkPendingSummary w USING WorkPendingSummaryStage
AS vals(WorkPendingID, WorkPendingDate, Status)
ON w.WorkPendingID = vals.WorkPendingID
WHEN MATCHED AND vals.status = 'CLOSED'
THEN UPDATE SET w.workpendingdate = vals.workpendingdate, w.status = vals.status, w.enddate = current_time
The documentation at http://hsqldb.org/doc/guide/dataaccess-chapt.html#dac_merge_statement states that the "WHEN MATCHED" clause can have an additional "AND" condition, as I have above; however, that fails with:
unexpected token: AND required: THEN : line: 4 [SQL State=42581, DB Errorcode=-5581]
Does this feature work or am I just missing something?
Using HSQLDB 2.3.1.
Thanks!
The documentation is for version 2.3.3 and the forthcoming 2.3.4. The AND clause is supported in these latest versions.

How can I combine these two statements?

I'm currently trying to insert data into a database from text boxes, with $enter / $enter2 holding the entered text.
The table consists of three columns: ID, name and nametwo.
ID is auto-incrementing and works fine.
Both statements work fine on their own, but because they are issued separately the first leaves nametwo blank and the second leaves name blank.
I've tried combining them but haven't had much luck; I hope someone can help.
$dbh->do("INSERT INTO $table(name) VALUES ('".$enter."')");
$dbh->do("INSERT INTO $table(nametwo) VALUES ('".$enter2."')");
To paraphrase what others have said:
my $sth = $dbh->prepare("INSERT INTO $table(name,nametwo) values (?,?)");
$sth->execute($enter, $enter2);
So you don't have to worry about quoting.
You should read the database manual.
The query should be:
$dbh->do("INSERT INTO $table(name,nametwo) VALUES ('".$enter."', '".$enter2."')");
The SQL syntax is
INSERT INTO MyTable (
name_one,
name_two
) VALUES (
'value_one',
'value_two'
)
Your way of generating SQL statements is very fragile. For example, it will fail if the table name is Values or the value is Jester's.
Solution 1:
$dbh->do("
INSERT INTO ".$dbh->quote_identifier($table_name)."
name_one,
name_two
) VALUES (
".$dbh->quote($value_one).",
".$dbh->quote($value_two)."
)
");
Solution 2: Placeholders
$dbh->do(
" INSERT INTO ".$dbh->quote_identifier($table_name)."
name_one,
name_two
) VALUES (
?, ?
)
",
undef,
$value_one,
$value_two,
);

Perl MySQL Cruddy! Assistance

I am fairly new to database programming and am trying to get a basic CRUD app going. Using Cruddy!, I have a very limited application that reads the data dictionary and creates forms based on each table.
As several tables have extensive foreign key entries, I want my app to perform the join operations necessary for each foreign key column to be displayed as the entries to which the key refers. Cruddy! claims to have this ability; it uses CGI::AutoForm for form creation. To get a form up and running, you have to provide entries on a column-by-column basis to the reference table ui_table_column.
Rather than writing SQL statements for all of my tables and their affiliated columns, I'm trying to get the process right for a single column.
From my DDL for this table:
CONSTRAINT `fk_Holder_Sample1`
FOREIGN KEY (`sample_id`)
REFERENCES `sample` (`sample_id`)
ON DELETE NO ACTION
ON UPDATE NO ACTION,
And my attempts at setting up the AutoForm SQL entries:
INSERT INTO ui_table_column (
table_name, field_name, appear_order, heading, searchable, updatable, insertable, input_control_type, multi_insert_delimiter,
search_control_type, search_mult_select, use_data_dict, datatype, default_value, required, input_size, input_maxlength, brief_heading,
alt_mask_field, mask_table_name, mask_field_name, id_field_name, no_cache, radio_checkbox_cols, field_group, element_attrs, help_summary)
VALUES (
'SAMPLE', 'SAMPLE_ID', 10, 'ID', 'Y', 'N', 'N', 'TEXT', NULL,
'SELECT', 4, 'Y', NULL, NULL, NULL, NULL, NULL, NULL,
NULL, 'sample', 'name', 'sample_id', 'Y', NULL, NULL, NULL, NULL);
INSERT INTO ui_table_column (
table_name, field_name, appear_order, heading, searchable, updatable, insertable, input_control_type, multi_insert_delimiter,
search_control_type, search_mult_select, use_data_dict, datatype, default_value, required, input_size, input_maxlength, brief_heading,
alt_mask_field, mask_table_name, mask_field_name, id_field_name, no_cache, radio_checkbox_cols, field_group, element_attrs, help_summary)
VALUES (
'SAMPLE', 'SAMPLE_NAME', 20, 'Name', 'Y', 'Y', 'Y', 'TEXT', NULL,
'MATCH TEXT', NULL, 'Y', NULL, NULL, NULL, NULL, NULL, 'Name',
NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL);
INSERT INTO ui_table_column (
table_name, field_name, appear_order, heading, searchable, updatable, insertable, input_control_type, multi_insert_delimiter,
search_control_type, search_mult_select, use_data_dict, datatype, default_value, required, input_size, input_maxlength, brief_heading,
alt_mask_field, mask_table_name, mask_field_name, id_field_name, no_cache, radio_checkbox_cols, field_group, element_attrs, help_summary)
VALUES (
'HOLDER', 'SAMPLE_ID', 30, 'sample', 'Y', 'Y', 'Y', 'SELECT', NULL,
'SELECT', 4, 'Y', NULL, NULL, NULL, NULL, NULL, 'Sample',
NULL, 'sample', 'NAME', 'SAMPLE_ID', 'Y', NULL, NULL, NULL, NULL);
When I refresh my app page (both just refreshing the browser and calling apachectl) there is no change; that is, I still see Sample ID as a field on the Holder page.
Has anyone had success with this or can advise me on what I'm doing wrong?
EDIT: I take the silence from SO as an indication that this particular framework has not seen widespread use. I would like to open my question up a little, then, and ask: what solutions have you used? I am actually experimenting with Catalyst::Plugin::AutoCRUD.
Answered after the asker had concluded with another framework, but for future reference: these field values must be in UPPER CASE.
For the example above, the first and third INSERT statements would have:
(alt_mask_field, mask_table_name, mask_field_name, id_field_name) = (NULL, 'SAMPLE', 'NAME', 'SAMPLE_ID').
I wound up using the module in my edit. I will flag this as closed tomorrow.