Can I somehow encode string field into base64 field using Tableau Desktop only?
I didn't find any function for this, probably there is a way to do it.
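One possible workaround, since there is no built-in base64 function in Tableau's calculation language as far as I know, is to push the encoding down to the data source with a RAWSQL calculated field. This is only a sketch: it assumes a PostgreSQL data source, and [MyStringField] is a made-up field name.

// Tableau calculated field; the base64 encoding is delegated to PostgreSQL
RAWSQL_STR("encode(convert_to(%1, 'UTF8'), 'base64')", [MyStringField])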
We are working towards migrating our databases from MSSQL to PostgreSQL. During this process we came across a table containing a password field of NVARCHAR type; this field's value was originally of VARBINARY type and was converted and stored as NVARCHAR.
For example: if I execute
SELECT HASHBYTES('SHA1','Password')
then it returns 0x8BE3C943B1609FFFBFC51AAD666D0A04ADF83C9D, and if this value is in turn converted to NVARCHAR, it returns text in the format "䏉悱゚얿괚浦Њ鴼".
Since PostgreSQL doesn't support VARBINARY, we have used BYTEA instead, and it returns binary data. But when we try to convert this binary data into VARCHAR type, it returns the hex format.
For example: if the same statement is executed in PostgreSQL
SELECT ENCODE(DIGEST('Password','SHA1'),'hex')
then it returns
8be3c943b1609fffbfc51aad666d0a04adf83c9d.
When we try to convert this encoded text into VARCHAR type, it returns the same result: 8be3c943b1609fffbfc51aad666d0a04adf83c9d
Is it possible to get the same result that we retrieved from MSSQL Server? As these are password fields, we do not intend to change the values. Please suggest what needs to be done.
It sounds like you're taking a byte array containing a cryptographic hash and you want to convert it to a string to do a string comparison. This is a strange way to do hash comparisons, but it might be possible depending on which encoding you were using on the MSSQL side.
If you have a byte array that can be converted to a string in the encoding you're using (i.e. it doesn't contain any invalid code points or sequences for that encoding), you can convert the byte array to a string as follows:
SELECT CONVERT_FROM(DIGEST('Password','SHA1'), 'latin1') AS hash_string;
hash_string
-----------------------------
\u008BãÉC±`\u009Fÿ¿Å\x1Afm+
\x04ø<\u009D
If you're using Unicode, this approach won't work at all, since arbitrary binary data can't be converted to Unicode: certain byte sequences are always invalid. You'll get an error like the following:
# SELECT CONVERT_FROM(DIGEST('Password','SHA1'), 'utf-8');
ERROR: invalid byte sequence for encoding "UTF8": 0x8b
Here's a list of valid string encodings in PostgreSQL. Find out which encoding you're using on the MSSQL side and try to match it to PostgreSQL. If you can, I'd recommend changing your business logic to compare byte arrays directly, since this will be less error-prone and should be significantly faster.
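For example, a direct bytea comparison might look like this (the users table and password_hash column are made-up names, and DIGEST comes from the pgcrypto extension):

-- Hypothetical schema: users(username TEXT, password_hash BYTEA)
SELECT username
FROM users
WHERE password_hash = DIGEST('Password', 'SHA1');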
then it returns 0x8BE3C943B1609FFFBFC51AAD666D0A04ADF83C9D, and if this
value is in turn converted to NVARCHAR, it returns text in the format
"䏉悱゚얿괚浦Њ鴼".
Based on that, MSSQL interprets these bytes as a text encoded in UTF-16LE.
With PostgreSQL and using only built-in functions, you cannot obtain that result because PostgreSQL doesn't use or support UTF-16 at all, for anything.
It also doesn't support NUL bytes in strings, and UTF-16-encoded text is full of NUL bytes.
This Q/A: UTF16 hex to text suggests several solutions.
Changing your business logic not to depend on UTF-16 would be your best long-term option, though. The hexadecimal representation, for instance, is simpler and much more portable.
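As a sketch of that portable approach: assuming SQL Server 2008 or later, CONVERT with style 2 produces hex without the 0x prefix, which matches PostgreSQL's encode(..., 'hex') once lowercased:

-- SQL Server: lowercase hex without the 0x prefix (style 2)
SELECT LOWER(CONVERT(VARCHAR(40), HASHBYTES('SHA1', 'Password'), 2));
-- PostgreSQL (pgcrypto): the same lowercase hex
SELECT ENCODE(DIGEST('Password', 'SHA1'), 'hex');
-- Both return 8be3c943b1609fffbfc51aad666d0a04adf83c9d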
A table in DB2 contains BLOB data. I need to convert it into a String so that it can be viewed in a readable format. I tried options like
getting the blob object and converting it to a byte array
a string buffer reader
sqoop import using the --map-column-java and --map-column-hive options.
Even after these conversions I am not able to view the data in a readable format. It's in an unreadable format like 1f8b0000..
Please suggest a solution on how to handle this scenario.
I think you need to look at the CAST function.
SELECT CAST(BLOB_VAR as VARCHAR(SIZE) CCSID UNICODE) as CHAR_FLD
Also, be advised that the maximum value of SIZE is 32K.
Let me know if you tried this.
1f8b0000 indicates the data is in gzip form (1f 8b is the gzip magic number), and therefore you have to unzip it before it will be readable.
How can I insert attachments into JSONB using PostgreSQL?
Is there any special key, like "_attachments": {}? Where in the manual can I find information about inserting files, binary data, or attachments?
This really has nothing to do with PostgreSQL itself; it's down to the JSON object-serialization format, rather than PostgreSQL's implementation of it.
JSON is a text-based serialization, so you cannot embed binary data in it directly.
You must encode it to a form that's valid encoded text with no null bytes, etc.
Typically you do this by encoding it as base64 or base85.
In PostgreSQL you'll want to use encode(some_bytea, 'base64') and the corresponding decode call. PostgreSQL doesn't have built-in base85 support.
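A minimal sketch of the round trip (the docs table and the attachment key are made-up names):

-- Store: base64-encode the bytea payload and embed it as ordinary JSON text
CREATE TABLE docs (id serial PRIMARY KEY, body jsonb);
INSERT INTO docs (body)
VALUES (jsonb_build_object('attachment', encode('\xdeadbeef'::bytea, 'base64')));
-- Retrieve: pull the text back out and decode it to bytea
SELECT decode(body ->> 'attachment', 'base64') FROM docs;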
See:
Binary Data in JSON String. Something better than Base64
So here is the problem: I'm using MongoDB in my project, so I have 24-character ObjectIds that use only the hexadecimal alphabet. In my project I make an HTTP request to a provider, and in this request I need to put a unique Id for callback purposes, but the provider allows only 20 characters for this id, and I don't know why.
So, my question is: with a 16-character alphabet (hex), there are 16^24 possible Mongo Ids, right?
Supposing I use an Id based on 64 different characters ([0-9][a-z][A-Z]-_) in the HTTP request,
correct me if I'm wrong, but I think there are 64^20 possible Ids.
So technically, it is possible to encode every possible MongoDB ObjectId with a corresponding Id, isn't it?
It seems to be a classic Base64 encoding, but mysteriously this does not work as I expected. I think I didn't understand how Base64 encoding works, because the generated strings are bigger than the original strings...
Do you think all of this is even possible, or did I totally miss something?
Thanks in advance!
EDIT:
One of my colleague tried something which seems to work.
Here is the Java code (using Apache Commons Codec):
import org.apache.commons.codec.binary.Base64;
import org.apache.commons.codec.binary.Hex;
// Decode the 24 hex chars into the 12 raw bytes they represent, then Base64 them:
byte[] decodedHex = Hex.decodeHex("53884594e4b0695f366f8128".toCharArray());
byte[] encodedHexB64 = Base64.encodeBase64(decodedHex);
System.out.println(new String(encodedHexB64)); // --> U4hFlOSwaV82b4Eo
For a reason unknown to me, doing this is not the same:
// Base64-encode the 24-character string itself (i.e. its ASCII bytes):
String anotherB64 = Base64.encodeBase64String("53884594e4b0695f366f8128".getBytes());
System.out.println(anotherB64);
And it prints : NTM4ODQ1OTRlNGIwNjk1ZjM2NmY4MTI4
MongoDB uses ObjectId as the default primary key for documents because it's fast to generate and very likely to be unique.
But you are not forced to use it as a primary key. You can use any BSON data type in the _id field as long as it is not an array. That being said, you can use your 20-char Id in the _id field.
EDIT:
From your original question I didn't know that you're using an existing DB. The _id field is immutable and it cannot be changed in an existing document.
If you only wanted to convert the existing ObjectId to something else that's 20 chars long the method you posted will work.
The second method produces a longer string because you're base64-encoding the 24-character string itself (24 ASCII bytes, which base64 expands to 32 characters) instead of the 12 raw bytes it represents (which base64 expands to only 16 characters); base64 always grows its input by a factor of about 4/3.
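The same distinction can be reproduced with PostgreSQL's encode/decode built-ins (shown purely as an illustration; the thread's own code is Java):

-- Decode the 24 hex characters to the 12 raw bytes first, then base64 them:
SELECT encode(decode('53884594e4b0695f366f8128', 'hex'), 'base64');
-- returns U4hFlOSwaV82b4Eo (16 characters)
-- Base64-encode the 24-character string itself (24 ASCII bytes):
SELECT encode(convert_to('53884594e4b0695f366f8128', 'UTF8'), 'base64');
-- returns NTM4ODQ1OTRlNGIwNjk1ZjM2NmY4MTI4 (32 characters)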
Is there any pair of encrypt and corresponding decrypt functions?
The functions in the PGCRYPTO library use hash algorithms, so they don't have decryption functions.
Also, when I am using the pgp_sym_encrypt() and pgp_sym_decrypt() functions,
pgp_sym_decrypt() gives the above error for a value encrypted with pgp_sym_encrypt().
I am using Postgres Plus Advanced Server 8.4.
Do I have to put \ before every escape-sequence character, or what?
Please provide a solution for how to access bytea data, and also how to put an encrypted value in
a table column and decrypt that same value.
Thanks
Tushar
If you encrypt/decrypt binary data you should use pgp_sym_encrypt_bytea and pgp_sym_decrypt_bytea functions.
The functions pgp_sym_encrypt and pgp_sym_decrypt are for textual data, which has to be encoded in the client encoding and must be convertible to the database encoding. So you cannot use them, for example, to encrypt images, PDFs, etc.
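A minimal round-trip sketch (the payload and passphrase are placeholders):

-- The _bytea variants work on bytea, so any binary payload survives unchanged:
SELECT pgp_sym_decrypt_bytea(
         pgp_sym_encrypt_bytea('\xdeadbeef'::bytea, 'mypassphrase'),
         'mypassphrase');
-- returns \xdeadbeef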