postgres store hash string: invalid message format - postgresql

In my Node.js application I need to be able to create new users.
To store their passwords securely, I use the sodium-native library (https://github.com/sodium-friends/sodium-native) to generate argon2 hashes. I then try to store a string representation of those hashes in my Postgres database.
The JavaScript query string looks like this:
INSERT INTO users (email, name, password) VALUES ('${email}', '${name}', '${pwHash}')
And the generated sql statement looks as follows:
INSERT INTO users (email, name, password)
VALUES ('test#test.org', 'test', '$argon2id$v=19$m=8,t=1,p=1$WAw+HmO/+RZTazVr3eOnPg$HYzaB0+Cre23XGR+A1cZawrUvkon2Cx3x7ua5I68xGo ')
Besides the hash, there is some further information stored about it to help verify passwords.
I don't know why it produces all those white-spaces, but I suspect it is due to the fixed length of the buffer used.
My problem is that Postgres, for some reason, sends me an error: invalid message format, code: '08P01'. That code means protocol violation, whatever that means.
The funny thing is that when I hard-code the hash exactly as it appears in my browser or console, it works:
INSERT INTO users (email, name, password)
VALUES ('${email}', '${name}', '$argon2id$v=19$m=8,t=1,p=1$WAw+HmO/+RZTazVr3eOnPg$HYzaB0+Cre23XGR+A1cZawrUvkon2Cx3x7ua5I68xGo ')
It doesn't seem to make a difference whether I remove the white-spaces or not.
Can anybody tell me what I am doing wrong?
Edit: I was asked whether those "blanks" really are white-spaces. I think so: they appear as spaces in the editor and the browser, and they copy as spaces as well. I tried removing them manually and it made no difference.
I also tried string concatenation instead of interpolation, but that didn't make any difference either.

Instead of converting the buffer to a string first, I now store the hash as raw binary data (data type bytea), exactly as sodium-native generates it. That also makes password verification trivial. Please do follow mu is too short's advice about SQL injection.

Related

Oracle ORA_HASH alternative in Postgres(MD5)

I am trying to find an alternative to the ORA_HASH Oracle function in Postgres (EDB). I know there are hashtext() and md5(). hashtext() would be ideal, but it is not documented, so for that reason I cannot use it. I'd like to know whether there is any way of using md5() to get the same value that ORA_HASH would return for the same input.
For example, this is how I use it and what I get in Oracle:
SELECT ORA_HASH('huyu') FROM dual;
----------------------------------
3284595515
I'd like to do something similar in postgres, so that if I pass the 'huyu' value, I'd get the exact same '3284595515' value.
Now, I know that ORA_HASH returns a number and md5() returns a hexadecimal value. I think I'd need a function that converts the 32-character hexadecimal value into a number, but I can't get the same value and I'm not sure whether it is possible at all.
Thank you in advance for any suggestions
If you rely on the exact implementation of ORA_HASH, which is a proprietary hash function in closed-source software, you are locked in to that vendor, sorry.
I don't see how using PostgreSQL's hashtext is worse than ORA_HASH: it is documented (in the source), its implementation is public, and it is not going to be removed, because it is required for hash partitioning.

Db2 for I: Cpyf *nochk emulation

On the IBM i system there is a way to copy from a structured file to one without structure using CPYF with FMTOPT(*NOCHK).
How can that be done with SQL?
The answer may be "you can't", at least not if you are using DDL-defined tables. The problem is that *NOCHK just dumps data into the file as if it were a flat file. Files defined with CRTPF, whether they have DDS source or are program described, don't care about bad data until read time, so they can contain bad data. In fact, you can even read bad data out of a file if you use a program-described definition for that file.
But an SQL table (one defined using DDL) cannot contain bad data. No matter how you write it, the database validates the data at write time. Even the *NOCHK option of the CPYF command cannot coerce bad data into an SQL table.
There really isn't an easy way
Closest would be to just build a big character string using CONCAT...
insert into flatfile
select mycharfld1
concat cast(myvchar as char(20))
concat digits(zonedFld3)
from mytable
That works for fixed-length character, varchar (if cast to char) and zoned decimal...
Packed decimal would be problematic..
I've seen user-defined functions that can return the binary character string that makes up a packed decimal... but it's very ugly.
I question why you think you need to do this.
You can use QSYS2.QCMDEXC stored procedure to execute OS commands.
Example:
call qsys2.qcmdexc ( 'CPYF FROMFILE(QTEMP/FILE1) TOFILE(QTEMP/FILE2) MBROPT(*replace) FMTOPT(*NOCHK)' )

How does one prevent MS Access from concatenating the schema and table names thereby taking them over the 64 character limit?

I have been trying to get around this for several days now with no luck. I loaded LibreOffice to see how it would handle it, and its native support for PostgreSQL works wonderfully; I can see the true data structure, which is how I found out I was dealing with more than one table. What I see in MS Access is the two names concatenated together. The concatenation takes them over the 64-character limit that appears to be built into the ODBC driver. I have seen many references to modifying namedatalen on the server side, but my problem is on the ODBC side. Most of the tables are under the 64-character limit even with the concatenation and work fine, so I know everything else is working. The specific error I am getting is:
'your_extra_long_schema_name_your_table_name_that_you_want_to_get_data_from'
is not a valid name. Make sure it does not include invalid characters
or punctuation and that it is not too long.
Object names in an Access database are limited to 64 characters (ref: here). When creating an ODBC linked table in the Access UI the default behaviour is to concatenate the schema name and table name with an underscore and use that as the linked table name so, for example, the remote table table1 in the schema public would produce a linked table in Access named public_table1. If such a name exceeds 64 characters then Access will throw an error.
However, we can use VBA to create the table link with a shorter name, like so:
Option Compare Database
Option Explicit

Sub so38999346()
    DoCmd.TransferDatabase acLink, "ODBC Database", "ODBC;DSN=PostgreSQL35W", acTable, _
        "public.hodor_hodor_hodor_hodor_hodor_hodor_hodor_hodor_hodor_hodor", _
        "hodor_linked_table"
End Sub
(Tested with Access 2010.)

Inserting string in FOR BIT DATA column in DB2

When I try inserting a string value into a DB2 column defined as CHAR(15) FOR BIT DATA, I notice that it gets stored in some other format, probably hexadecimal.
On retrieving the data, I get a byte array, and when I try to convert it back to ASCII using System.Text.Encoding.ASCII.GetString, I get back a string of junk characters.
Has anyone faced this issue? Any resolution?
Thanks in advance.
FOR BIT DATA prevents code-page conversion between the client and the server. It is normally used to store binary data rather than strings.
Please take a look at this forum thread, where many similar cases are discussed and solved: http://www.dbforums.com/db2/1663992-bit-data.html
Alternatively, you can cast the value to the database code page (the exact syntax depends on your platform):
CAST(c1 AS CHAR(n) FOR SBCS DATA)
CAST (<forbitdataexpression> AS [VARCHAR|CHAR][(<n>)] FOR [SBCS|DBCS] DATA)
References
http://bytes.com/topic/db2/answers/180874-problem-db2-field-type-char-n-bit-data
http://bytes.com/topic/db2/answers/182124-function-convert-bit-data-column-string
http://publib.boulder.ibm.com/infocenter/db2luw/v9r7/topic/com.ibm.db2.luw.sql.ref.doc/doc/r0023459.html

Lazarus + PostgreSQL: Why do blank textboxes get stored with single speech marks?

The language used is Lazarus Pascal and the DB is PostgreSQL.
I'm assigning values into parameters like this:
dbQuery_Supp.Params.ParamByName('pCity').AsString := txtCity.Text;
And this is written using an INSERT query to the DB.
Data gets stored correctly for fields with values. But for text boxes that have no data, I see single quotes ('') in the fields when browsing with pgAdmin.
My question:
I need to make sure that if no data is entered in a textbox, the field for that value is blank in the DB instead of single quotes. Traditionally (in VB) I'd check each textbox's value and only insert it if it had data. Is that the right approach in Lazarus too, or is there a way around it? Since I'm writing the values using parameters, building a query string while checking each field seems like extra work, so I'm looking for a more efficient and convenient way, if there is one.
Thanks!
It is pgAdmin that shows an empty string as '' in its data visualization widget.
Presumably that's to distinguish it from NULL, which is by default shown as an empty box (this can be changed in the preferences).
Compare with the output of psql if you want to be sure.