AES_Encrypt: Password getting reset automatically

I am using AES_Encrypt('123456',2) to store my password in the DB in encrypted format.
I have noticed that after some time the password stored in the DB automatically gets reset to NULL. I have cross-checked this by using AES_Decrypt.
I don't know why this is happening. I am not updating my password anywhere in the code, so I am wondering how and when the password is getting reset. Does it have anything to do with AES_Encrypt?

Assuming that you are referring to the MySQL AES_ENCRYPT() and AES_DECRYPT() routines, I will direct you to the reference, which states:
AES_ENCRYPT() encrypts a string and returns a binary string. AES_DECRYPT() decrypts the encrypted string and returns the original string. The input arguments may be any length. If either argument is NULL, the result of this function is also NULL.
If AES_DECRYPT() detects invalid data or incorrect padding, it returns NULL. However, it is possible for AES_DECRYPT() to return a non-NULL value (possibly garbage) if the input data or the key is invalid.
Is it possible that either of your functions could be getting supplied with invalid data or incorrect padding?
Additionally, AES_ENCRYPT pads your data out to a multiple of the 16-byte AES block size. Ensure you are not truncating the result to fit within your database column.
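To make that concrete, here is a minimal sketch, assuming MySQL and table/column names of my own choosing: the ciphertext has to land in a binary column wide enough for the padded output, otherwise the stored bytes get mangled or truncated and AES_DECRYPT later returns NULL.

CREATE TABLE users (
    id       INT PRIMARY KEY,
    password VARBINARY(64)   -- binary column, wide enough for the padded ciphertext
);

-- AES_ENCRYPT pads to 16-byte blocks, so even a 6-character password produces 16 bytes.
INSERT INTO users (id, password) VALUES (1, AES_ENCRYPT('123456', 'my_secret_key'));

SELECT CAST(AES_DECRYPT(password, 'my_secret_key') AS CHAR) AS plain
FROM users
WHERE id = 1;

If the column were CHAR/VARCHAR or too small, character-set conversion or truncation of the binary value is exactly the kind of "invalid data or incorrect padding" the manual warns about.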

Related

Reversing a password-hashing function knowing plaintext + output

I have a password that has been stored and I'd like to figure out how it's been 'transformed' to be stored in my database.
The plaintext password is:
k4oK203$
And the password as it is stored 'crypted' in my database is:
6xqmRr0QNUrc0uvwGchWqA==
How would I go about figuring out which transformation (Base64? SHA-1? MD5? etc.) was used to get from the plaintext password to the database value?
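One hedged way to start narrowing this down: the stored value is Base64 and decodes to exactly 16 bytes, which is the size of an MD5 digest and also of a single AES block, while a SHA-1 digest is 20 bytes and plain Base64 of the 8-character password would only be 12 characters long. In MySQL (5.6+ for TO_BASE64) a couple of candidates can be tested directly; the key 'some_key' below is purely a placeholder.

SELECT TO_BASE64(UNHEX(MD5('k4oK203$')))              AS md5_guess,
       TO_BASE64(AES_ENCRYPT('k4oK203$', 'some_key')) AS aes_guess;
-- Compare each guess against '6xqmRr0QNUrc0uvwGchWqA==';
-- whichever matches (if either does) reveals the transformation.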

How to validate whether a field's data type is integer or string in AWS Glue (Scala) and raise errors for the invalid data

I want to read data from S3, apply a mapping to it, and then write it to another S3 location.
I want to check, field by field, whether the data matches the data type declared in the mapping.
For example, in the mapping I declared username as a string.
Now, when I write the output to S3, I need to check whether the username field contains only strings or whether it has some odd values.
How can I achieve this?
Any help would be really appreciated.
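A hedged sketch of one way to approach this: after the ApplyMapping step, convert the DynamicFrame to a Spark DataFrame, register it as a temporary view, and select the rows whose values cannot be cast to the mapped type via the job's SparkSession (spark.sql). The view name staging and the column user_id below are placeholders, using an integer-typed field as the example.

-- Rows where a value is present but is not a valid integer fail the CAST;
-- they can be written to a separate error location instead of the main output.
SELECT *
FROM staging
WHERE user_id IS NOT NULL
  AND CAST(user_id AS INT) IS NULL;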

postgres store hash string: invalid message format

In my node.js application I should be able to create some new users.
And to store the passwords in a secure fashion, I use the sodium-native library to generate argon2 hashes (https://github.com/sodium-friends/sodium-native). Now I try to store a string representation of those hashes in my postgres database.
The JavaScript query string looks like this:
INSERT INTO users (email, name, password) VALUES ('${email}', '${name}', '${pwHash}')
And the generated sql statement looks as follows:
INSERT INTO users (email, name, password)
VALUES ('test#test.org', 'test', '$argon2id$v=19$m=8,t=1,p=1$WAw+HmO/+RZTazVr3eOnPg$HYzaB0+Cre23XGR+A1cZawrUvkon2Cx3x7ua5I68xGo ')
Besides the hash, there is some further information stored about it to help verify passwords.
I don't know why it produces all those white-spaces, but I think it is due to the fixed length of the buffer used.
My problem is that Postgres, for some reason, sends me an error: invalid message format, code: '08P01'. That code means protocol violation, whatever that means.
The funny thing is: when I just hard code the hash as it appears in my browser or console, then it works:
INSERT INTO users (email, name, password)
VALUES ('${email}', '${name}', '$argon2id$v=19$m=8,t=1,p=1$WAw+HmO/+RZTazVr3eOnPg$HYzaB0+Cre23XGR+A1cZawrUvkon2Cx3x7ua5I68xGo ')
It doesn't seem to make a difference, if I remove the white-spaces or not.
Can anybody tell me what I am doing wrong?
Edit: I was asked if those "blanks" really are white-spaces. At least I think so, because they appear as such in the editor and browser, and they copy as such as well. I tried to manually remove them and it didn't make any difference.
I also tried to use string concatenation instead of interpolation, but it also didn't make any difference.
Instead of converting the buffer to a string first, I now store the hash as raw binary data (data type bytea), exactly as it is generated by sodium-native. That also makes password verification trivial. Please do follow mu is too short's advice about SQL injection.
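For reference, a minimal sketch of what that looks like on the database side, assuming the same users table: the hash lives in a bytea column and the values are bound as parameters (for example, $1 to $3 placeholders with node-postgres) rather than interpolated into the SQL string.

CREATE TABLE users (
    email    text NOT NULL,
    name     text NOT NULL,
    password bytea NOT NULL   -- the raw hash buffer from sodium-native, no string conversion
);

-- The statement text the application sends; email, name and the hash buffer are
-- passed separately as bind parameters, which removes the injection risk and any
-- problems caused by odd bytes or trailing padding in the hash.
INSERT INTO users (email, name, password) VALUES ($1, $2, $3);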

Lazarus + PostgreSQL: Why do blank textboxes get stored with single speech marks?

The language used is Lazarus Pascal and the DB is PostgreSQL.
I'm assigning values into parameters like this:
dbQuery_Supp.Params.ParamByName('pCity').AsString := txtCity.Text;
And this is written using an INSERT query to the DB.
Data gets stored correctly for fields with values. But for text boxes that have no data, I see single quotes ('') in the fields when browsed using pgAdmin.
My question:
I need to make sure that if no data is input in a textbox, the field for that value is left blank in the DB instead of containing single quotes. Traditionally (in VB) I'd check each textbox's value and only insert it if it had data. Is this the same thing to do in Lazarus, or is there a way around it? Since I'm writing the values using parameters, building a string that checks each field seems like extra work, so I'm just looking for a more efficient and convenient way if there is one.
Thanks!
It is pgAdmin that shows an empty string as '' in its data visualization widget.
Presumably that's to distinguish it from NULL, which is by default shown as an empty box (this can be changed in the preferences).
Compare with the output of psql if you want to be sure.
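A hedged way to check from psql, with placeholder table and column names:

SELECT name,
       city IS NULL AS city_is_null,
       city = ''    AS city_is_empty_string
FROM suppliers;

-- And if the intent really is to store NULL rather than an empty string,
-- existing rows can be converted after the fact:
UPDATE suppliers SET city = NULL WHERE city = '';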

Null db values and defaults

I have 2 fields that I'm adding to an existing database table that already has data in it. One is a bit and one is an int. If I am setting defaults for both, should I just set them to NOT NULL, since there is no case where they would be null?
If you will ever need to store data where you need the ability to indicate "we don't know" then you may consider allowing null values.
For example, I store data from remote sensors. When I am unable to retrieve the sensor data, like due to network problems, I use null.
If, however, you require that a value always be present, then you should use the NOT NULL constraint.
Yes, that would do the trick. If you set those columns as NOT NULL and you don't specify a default value, you'll definitely get an error from the DB. In fact, depending on the database, you may not even be able to add a NOT NULL column to a table that already has rows without supplying a default for the existing data.
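A minimal sketch of adding both columns, with placeholder names and SQL Server style syntax (the question mentions bit and int): because the table already has rows, the DEFAULT is what supplies a value for them, which lets the NOT NULL constraint be applied without an error.

ALTER TABLE my_table ADD is_active   bit NOT NULL DEFAULT 0;  -- existing rows are back-filled with 0
ALTER TABLE my_table ADD retry_count int NOT NULL DEFAULT 0;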