How to write array[] bytea using libpq PQexecParams? - postgresql

I need help writing a bytea[] array using libpq PQexecParams.
I've done a simple version where I write a single piece of binary data into a single bytea argument with PQexecParams, following this solution: Insert Binary Large Object (BLOB) in PostgreSQL using libpq from remote machine
But I have problems writing a bytea[] array, as in:
select * from func(array[3, 4], array[bytea, bytea])

Unless you want to read the PostgreSQL source to figure out the binary format for arrays, you will have to use the text format, e.g.
{"\\xDEADBEEF","\\x00010203"}
The binary format for arrays is defined in array_send in src/backend/utils/adt/arrayfuncs.c. Have a look at the comments and definitions in src/include/utils/array.h as well. Consider also that all integers are sent in “network byte order”.
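If you do decide to build the binary format yourself, the layout array_send produces for a one-dimensional array with no NULLs is: the number of dimensions, a has-nulls flag, the element type OID, then one (dimension size, lower bound) pair, then each element as a length-prefixed byte string. Here is a hedged C sketch of that layout (the bytea element OID 17 is hard-coded; verify the details against arrayfuncs.c before relying on them):

#include <arpa/inet.h>   /* htonl: all integers go in network byte order */
#include <stdint.h>
#include <string.h>

/* Append a big-endian int32 to the buffer, return the new write position. */
static char *put_int32(char *p, int32_t v)
{
    uint32_t n = htonl((uint32_t) v);
    memcpy(p, &n, 4);
    return p + 4;
}

/* Build the binary parameter for a bytea[] of `count` elements.
   elems/lens hold the raw bytes and their lengths.
   Returns the total number of bytes written (caller sizes buf). */
static int build_bytea_array(char *buf, const char **elems,
                             const int *lens, int count)
{
    char *p = buf;
    p = put_int32(p, 1);        /* ndim: one dimension */
    p = put_int32(p, 0);        /* flags: 0 = no NULL elements */
    p = put_int32(p, 17);       /* element type OID: 17 = bytea */
    p = put_int32(p, count);    /* dimension 1: number of elements */
    p = put_int32(p, 1);        /* dimension 1: lower bound */
    for (int i = 0; i < count; i++) {
        p = put_int32(p, lens[i]);      /* element length (-1 would mean NULL) */
        memcpy(p, elems[i], lens[i]);   /* raw element bytes, no terminator */
        p += lens[i];
    }
    return (int) (p - buf);
}

You would then pass the buffer to PQexecParams with paramFormats[0] = 1, paramLengths[0] set to the returned length, and paramTypes[0] = 1001 (the OID of bytea[]).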
One way to examine the binary output format, saving yourself a lot of trouble experimenting, is to use a binary COPY, e.g.
COPY (SELECT ARRAY['\xDEADBEEF'::bytea,'\x00010203'::bytea])
TO '/tmp/file' (FORMAT 'binary');
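For illustration, here is a minimal libpq sketch that passes the array in text format, as recommended above; the table t and its column b bytea[] are assumptions, not part of the question:

#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    PGconn *conn = PQconnectdb("");   /* connection string assumed */
    if (PQstatus(conn) != CONNECTION_OK) {
        fprintf(stderr, "%s", PQerrorMessage(conn));
        return 1;
    }

    /* The C literal below contains {"\\xDEADBEEF","\\x00010203"}:
       one doubled backslash survives for the array parser, which then
       hands \xDEADBEEF to the bytea input function. */
    const char *paramValues[1] = { "{\"\\\\xDEADBEEF\",\"\\\\x00010203\"}" };

    PGresult *res = PQexecParams(conn,
                                 "INSERT INTO t (b) VALUES ($1::bytea[])",
                                 1,     /* one parameter */
                                 NULL,  /* let the server deduce the type */
                                 paramValues,
                                 NULL,  /* lengths: ignored for text params */
                                 NULL,  /* NULL = all parameters in text format */
                                 0);    /* text result format */
    if (PQresultStatus(res) != PGRES_COMMAND_OK)
        fprintf(stderr, "%s", PQresultErrorMessage(res));

    PQclear(res);
    PQfinish(conn);
    return 0;
}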

Related

Dealing with parsing oids in Postgres

I'm currently improving a client library for PostgreSQL. The library already implements the wire protocol, including the DataRow and RowDescription messages.
The problem I'm facing right now is how to deal with values.
Returning a plain string for, say, an array of integers is kind of pointless.
From my research I found that other libraries (like those for Python) either return the value as an unmodified string or convert primitive types, including arrays.
What I mean by conversion is turning the raw Postgres DataRow data into native values: a Postgres integer is parsed as a Python number, a Postgres boolean as a Python boolean, etc.
Should I make a second query to get the column type information and use the matching converters, or should I leave the values as plain strings?
You could opt to get the array values in the internal format by setting the corresponding "result-column format code" in the Bind message to 1, but that is typically a bad choice, since the internal format varies from type to type and may even depend on the server's architecture.
So your best option is probably to parse the string representation of the array on the client side, including all the escape characters.
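As a hedged illustration of that client-side parsing (deliberately ignoring quoting, escapes, NULL elements and multiple dimensions, all of which a real client must handle), a minimal reader for a one-dimensional integer array literal could look like this:

#include <stdio.h>
#include <stdlib.h>

/* Parse a one-dimensional integer array literal such as "{1,2,3}" as
   returned in a text-format DataRow. Returns the number of elements
   stored into out[], or -1 on anything unexpected. */
static int parse_int_array(const char *text, long *out, int max)
{
    if (*text != '{')
        return -1;
    const char *p = text + 1;
    int n = 0;
    while (*p && *p != '}' && n < max) {
        char *end;
        out[n++] = strtol(p, &end, 10);
        if (end == p)            /* not a number (e.g. NULL): bail out */
            return -1;
        p = (*end == ',') ? end + 1 : end;
    }
    return n;
}

int main(void)
{
    long vals[8];
    int n = parse_int_array("{1,2,3}", vals, 8);
    for (int i = 0; i < n; i++)
        printf("%ld\n", vals[i]);
    return 0;
}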
When it comes to finding the base type for an array type, there is no other option than querying pg_type like
SELECT typelem::regtype FROM pg_type WHERE oid = 1007;
typelem
---------
integer
(1 row)
You could cache these values on the client side so that you don't have to query more than once per type and database session.
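A hedged sketch of such a cache (the fixed-size table and the helper name element_type_name are inventions for illustration):

#include <stdio.h>
#include <string.h>
#include <libpq-fe.h>

/* Hypothetical per-session cache: array type OID -> element type name. */
struct elem_entry { Oid array_oid; char elem_name[64]; };
static struct elem_entry cache[256];
static int cache_len = 0;

/* Return the element type name for an array type OID, hitting pg_type
   at most once per OID and database session. Returns NULL on failure. */
static const char *element_type_name(PGconn *conn, Oid array_oid)
{
    for (int i = 0; i < cache_len; i++)
        if (cache[i].array_oid == array_oid)
            return cache[i].elem_name;
    if (cache_len >= 256)
        return NULL;                     /* toy cache is full */

    char oidbuf[16];
    snprintf(oidbuf, sizeof oidbuf, "%u", array_oid);
    const char *params[1] = { oidbuf };

    PGresult *res = PQexecParams(conn,
        "SELECT typelem::regtype::text FROM pg_type WHERE oid = $1",
        1, NULL, params, NULL, NULL, 0);
    if (PQresultStatus(res) != PGRES_TUPLES_OK || PQntuples(res) == 0) {
        PQclear(res);
        return NULL;
    }

    struct elem_entry *e = &cache[cache_len++];
    e->array_oid = array_oid;
    snprintf(e->elem_name, sizeof e->elem_name, "%s",
             PQgetvalue(res, 0, 0));
    PQclear(res);
    return e->elem_name;
}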

Db2 for i: CPYF *NOCHK emulation

On the IBM i system there's a way to copy from a structured file to one without structure using CPYF with FMTOPT(*NOCHK).
How can it be done with SQL?
The answer may be "You can't", at least not if you are using DDL-defined tables. The problem is that *NOCHK just dumps data into the file as if it were a flat file. Files defined with CRTPF, whether they have DDS source or are program-defined, don't validate data until read time, so they can contain bad data. In fact, you can even read bad data out of a file if you use a program definition for that file.
But, an SQL Table (one defined using DDL) cannot contain bad data. No matter how you write it, the database validates the data at write time. Even the *NOCHK option of the CPYF command cannot coerce bad data into an SQL table.
There really isn't an easy way. The closest would be to build one big character string using CONCAT:
insert into flatfile
select mycharfld1
concat cast(myvchar as char(20))
concat digits(zonedFld3)
from mytable
That works for fixed-length character, varchar (if cast to char) and zoned decimal fields.
Packed decimal would be problematic.
I've seen user-defined functions that can return the binary character string that makes up a packed decimal... but it's very ugly.
I question why you think you need to do this.
You can use QSYS2.QCMDEXC stored procedure to execute OS commands.
Example:
call qsys2.qcmdexc ( 'CPYF FROMFILE(QTEMP/FILE1) TOFILE(QTEMP/FILE2) MBROPT(*replace) FMTOPT(*NOCHK)' )

PostgreSQL 9.4 attachments in JSONB

How can I insert attachments into JSONB using PostgreSQL?
Is there any special key, like "_attachments": {}? Where can I find anything in the manual about inserting files, binary data, or attachments?
This really has nothing to do with PostgreSQL itself; it comes down to the JSON object-serialization format, rather than PostgreSQL's implementation of it.
JSON is a text-based serialization, so you cannot embed binary data in it directly.
You must encode it into a form that is valid text with no null bytes, etc.
Typically you do this by encoding it as base64 or base85.
In PostgreSQL you'll want to use encode(some_bytea, 'base64') and the corresponding decode call. PostgreSQL doesn't have built-in base85 support.
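For example, here is a hedged libpq sketch (the attachments table with name text and payload bytea columns is an assumption; json_build_object exists from 9.4, while the jsonb_build_object variant only arrived in 9.5):

#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    PGconn *conn = PQconnectdb("");  /* connection string assumed */
    if (PQstatus(conn) != CONNECTION_OK) {
        fprintf(stderr, "%s", PQerrorMessage(conn));
        return 1;
    }

    /* encode() turns bytea into base64 text that is legal inside JSON;
       decode(..., 'base64') reverses it when reading back. */
    PGresult *res = PQexec(conn,
        "SELECT json_build_object('name', name,"
        "                         'data', encode(payload, 'base64'))::jsonb"
        " FROM attachments");
    if (PQresultStatus(res) == PGRES_TUPLES_OK)
        for (int i = 0; i < PQntuples(res); i++)
            printf("%s\n", PQgetvalue(res, i, 0));
    else
        fprintf(stderr, "%s", PQresultErrorMessage(res));

    PQclear(res);
    PQfinish(conn);
    return 0;
}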
See:
Binary Data in JSON String. Something better than Base64

import csv files on postgres numeric types

I need to import a file into my Postgres database and I get this error:
invalid input syntax for integer in fabrica, "1";
SQL state: 22P02
My command is:
copy trazabilidade(fabrica, --integer
idChapa, --integer
descricao, --varchar
espessura, --double precision
comprimento, --double precision
largura, --double precision
peso) from 'C:/temp_nexo/traz.csv' delimiter ';';
How can I import data from CSV files that contain numbers?
The PostgreSQL wiki page on COPY (http://wiki.postgresql.org/wiki/COPY) explains why:
Cannot extend Pg coercions
The data-loading mechanism relies on the data being a formal representation of a Pg data type, or coercible (i.e., castable) by Pg. However, there isn't currently a way to add custom coercions for the Pg types. You cannot, for instance, make '31,337'::int work by overriding the coercion to an int.
The page also suggests alternatives, notably pgloader.
pgloader is much better at loading error-prone data in a more flexible format than the built-in COPY is. The downsides are additional install complexity (Python+psycopg+configuration) and a sometimes significant speed loss compared with the built-in COPY.
As per Denis's reply about the COPY command, you can't add custom coercions to Postgres COPY commands. If pgloader is overkill, you can load your data into a temp table and from there examine it, then cast/trim/manipulate any data you think should be valid.
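A hedged sketch of that staging approach in libpq (column types are taken from the question's comments; peso's type is a guess, since the question doesn't annotate it):

#include <stdio.h>
#include <libpq-fe.h>

/* Tiny helper: run one statement and report any failure. */
static void run(PGconn *conn, const char *sql)
{
    PGresult *res = PQexec(conn, sql);
    if (PQresultStatus(res) != PGRES_COMMAND_OK)
        fprintf(stderr, "%s", PQresultErrorMessage(res));
    PQclear(res);
}

int main(void)
{
    PGconn *conn = PQconnectdb("");  /* connection string assumed */
    if (PQstatus(conn) != CONNECTION_OK) {
        fprintf(stderr, "%s", PQerrorMessage(conn));
        return 1;
    }

    /* 1. Stage everything as text so COPY cannot fail on bad numbers. */
    run(conn, "CREATE TEMP TABLE traz_stage (fabrica text, idChapa text,"
              " descricao text, espessura text, comprimento text,"
              " largura text, peso text)");

    /* 2. FORMAT csv makes COPY strip the quotes around values like "1". */
    run(conn, "COPY traz_stage FROM 'C:/temp_nexo/traz.csv'"
              " WITH (FORMAT csv, DELIMITER ';')");

    /* 3. Cast (and, if needed, trim or clean up) into the real table. */
    run(conn, "INSERT INTO trazabilidade"
              " SELECT fabrica::int, idChapa::int, descricao,"
              " espessura::float8, comprimento::float8,"
              " largura::float8, peso::float8"
              " FROM traz_stage");

    PQfinish(conn);
    return 0;
}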

Write query for inserting varbinary value in PostgreSQL

What is the syntax in PostgreSQL for inserting varbinary values?
I tried SQL Server's syntax using a constant like 0xFFFF, but it didn't work.
Given there's no "varbinary" data type in Postgres, I believe you mean "bytea". Take a look at the docs about the ways to specify "bytea" literals.
Depending on the language and the bindings you use there could be more sophisticated ways for transferring binary data - you could find a .Net/C#/Npgsql example here (under "Working with binary data and bytea datatype").
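For the libpq case, here is a hedged sketch of a parameterized insert in binary format (the table t with a bytea column b is an assumption); the pure-SQL counterpart of SQL Server's 0xFFFF constant is the hex literal '\xFFFF':

#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    PGconn *conn = PQconnectdb("");  /* connection string assumed */
    if (PQstatus(conn) != CONNECTION_OK) {
        fprintf(stderr, "%s", PQerrorMessage(conn));
        return 1;
    }

    /* Four raw bytes; no escaping needed because the parameter is binary. */
    static const char binval[] = "\xDE\xAD\xBE\xEF";
    const Oid  types[1]   = { 17 };  /* 17 = OID of bytea */
    const char *values[1] = { binval };
    const int  lengths[1] = { 4 };
    const int  formats[1] = { 1 };   /* 1 = binary parameter */

    PGresult *res = PQexecParams(conn,
                                 "INSERT INTO t (b) VALUES ($1)",
                                 1, types, values, lengths, formats, 0);
    if (PQresultStatus(res) != PGRES_COMMAND_OK)
        fprintf(stderr, "%s", PQresultErrorMessage(res));

    PQclear(res);
    PQfinish(conn);
    return 0;
}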