Decrypt data in a PostgreSQL query which was encrypted with C#

We have some C# code that encrypts and decrypts data for storage in a PostgreSQL database. The decryption code is as follows:
public string Decrypt(string val)
{
    var sb = new StringBuilder();
    string[] split = val.Split(new char[] { ' ' }, StringSplitOptions.RemoveEmptyEntries);
    foreach (string s in split)
    {
        sb.Append(Encoding.UTF8.GetString(Decode(Convert.FromBase64String(s))));
        sb.Append(" ");
    }
    sb.Remove(sb.Length - 1, 1); // Remove last space
    return sb.ToString();
}
private static byte[] Decode(byte[] encodedData)
{
    var symmetricAlgorithm = Aes.Create();
    symmetricAlgorithm.Key = HexToByteArray("<aes key>");

    var hashAlgorithm = new HMACSHA256();
    hashAlgorithm.Key = HexToByteArray("<hash key>");

    var iv = new byte[symmetricAlgorithm.BlockSize / 8];
    var signature = new byte[hashAlgorithm.HashSize / 8];
    var data = new byte[encodedData.Length - iv.Length - signature.Length];

    Array.Copy(encodedData, 0, iv, 0, iv.Length);
    Array.Copy(encodedData, iv.Length, data, 0, data.Length);
    Array.Copy(encodedData, iv.Length + data.Length, signature, 0, signature.Length);

    // validate the signature
    byte[] mac = hashAlgorithm.ComputeHash(iv.Concat(data).ToArray());
    if (!mac.SequenceEqual(signature))
    {
        // message has been tampered with
        throw new ArgumentException();
    }

    symmetricAlgorithm.IV = iv;
    using (var memoryStream = new MemoryStream())
    {
        using (var cryptoStream = new CryptoStream(memoryStream, symmetricAlgorithm.CreateDecryptor(), CryptoStreamMode.Write))
        {
            cryptoStream.Write(data, 0, data.Length);
            cryptoStream.FlushFinalBlock();
        }
        return memoryStream.ToArray();
    }
}
private static byte[] HexToByteArray(string hex)
{
    return Enumerable.Range(0, hex.Length)
        .Where(x => 0 == x % 2)
        .Select(x => Convert.ToByte(hex.Substring(x, 2), 16))
        .ToArray();
}
The requirement I have now is that we want to be able to decrypt within an SQL query. I have discovered the PGP_SYM_DECRYPT function, as well as some others like encode()/decode() for base64 strings and a decrypt_iv() function, but I am uncertain how to use these to decrypt our data.
Any crypto experts that could help me out here?
Alternatively, is there some equivalent of MSSQL's CLR functions for Postgres?

So what I'm inferring from the decryption code is the following:
Your encodedData is split up into three parts:
Initialization vector (IV)
Ciphertext
Signature
You are using AES to encrypt/decrypt the data and HMAC-SHA256 for the signature.
The AES block size in C# is fixed at 128 bits.
The signature is 256 bits long (hence SHA-256).
The IV is blockSize / 8 bytes long -> 16 bytes.
The signature is 32 bytes.
That means your encoded data is laid out as follows:
[IV 16 bytes][Ciphertext n bytes][Signature 32 bytes]
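As a sanity check, that layout can be sketched in stdlib Python (the 16/32 byte sizes come from the analysis above; the message here is dummy data, not real ciphertext):

```python
IV_LEN, SIG_LEN = 16, 32

def split_encoded(encoded: bytes):
    """Split [IV 16 bytes][ciphertext n bytes][signature 32 bytes] into parts."""
    iv = encoded[:IV_LEN]
    ciphertext = encoded[IV_LEN:len(encoded) - SIG_LEN]
    signature = encoded[len(encoded) - SIG_LEN:]
    return iv, ciphertext, signature

# dummy 64-byte message: 16-byte IV, 16-byte ciphertext, 32-byte signature
iv, ct, sig = split_encoded(bytes(64))
print(len(iv), len(ct), len(sig))  # 16 16 32
```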
To decrypt this with Postgres, you need to have the pgcrypto module enabled. Let's say we have a table Foo with a field data of type bytea which contains the encrypted data. By using the raw encryption functions of pgcrypto, you should be able to decrypt this, using the binary string operators to extract the parts:
(octet_length(data) - 16 - 32) should be the ciphertext length
substring(data from 1 for 16) should get the IV (note: substring positions are 1-based in Postgres)
substring(data from 17 for (octet_length(data) - 16 - 32)) should get the ciphertext
substring(data from (octet_length(data) - 31) for 32) should get the signature.
This results in:
SELECT decrypt_iv(
    substring(data from 17 for (octet_length(data) - 16 - 32)),
    '<decryption key as bytes>'::bytea,
    substring(data from 1 for 16),
    'aes'
)
FROM Foo
The signature is disregarded in this example, but you should be able to verify it in a similar way with the general hashing functions (pgcrypto's hmac()). If the decryption key is wrong, you will probably just get garbage data.
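The C# Decode above validates an HMAC-SHA256 over IV plus ciphertext before decrypting, so a server-side check would have to do the same. As a reference point, here is that check in stdlib Python (the key and data below are dummy placeholders, not the real values):

```python
import hashlib
import hmac

def verify_signature(iv: bytes, ciphertext: bytes, signature: bytes, key: bytes) -> bool:
    # HMAC-SHA256 over IV || ciphertext, constant-time compare against the stored MAC
    mac = hmac.new(key, iv + ciphertext, hashlib.sha256).digest()
    return hmac.compare_digest(mac, signature)

key = bytes.fromhex("00" * 32)      # placeholder HMAC key
iv, ct = bytes(16), b"dummy ciphertext"
good_sig = hmac.new(key, iv + ct, hashlib.sha256).digest()
print(verify_signature(iv, ct, good_sig, key))   # True
print(verify_signature(iv, ct, bytes(32), key))  # False
```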

Thanks to @JensV for his answer. We finally came up with the following:
CREATE OR REPLACE FUNCTION aes_cbc_mix_sha256_decrypt(data text, key text) RETURNS text
LANGUAGE plpgsql
AS
$$
DECLARE
    res text;
    dataHex text;
    iv bytea;
    aes_key bytea;
    encrypted_data bytea;
BEGIN
    SELECT decode(key, 'hex') INTO aes_key;
    SELECT encode(decode(data, 'base64'), 'hex') INTO dataHex;
    -- first 32 hex characters = 16-byte IV
    SELECT decode(SUBSTRING(dataHex FROM 1 FOR 32), 'hex') INTO iv;
    -- skip the IV (32 hex chars) and drop the trailing 32-byte signature (64 hex chars)
    SELECT decode(SUBSTRING(dataHex FROM 33 FOR (SELECT LENGTH(dataHex) - 32 - 64)), 'hex') INTO encrypted_data;
    SELECT encode(decrypt_iv(encrypted_data, aes_key, iv, 'aes-cbc'), 'escape') INTO res;
    RETURN res;
EXCEPTION WHEN others THEN
    RETURN data;
END;
$$;
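To see why the function slices at hex position 33 and drops the last 64 hex characters: each byte is two hex characters, so the 16-byte IV occupies hex characters 1–32 (SQL SUBSTRING is 1-based) and the 32-byte signature occupies the final 64. The same arithmetic in Python, on dummy data:

```python
import base64

raw = bytes(16) + b"0123456789abcdef" + bytes(32)  # dummy IV + ciphertext + signature
data = base64.b64encode(raw).decode()              # what gets stored as text

data_hex = base64.b64decode(data).hex()
iv = bytes.fromhex(data_hex[:32])                    # SUBSTRING(dataHex FROM 1 FOR 32)
ct = bytes.fromhex(data_hex[32:len(data_hex) - 64])  # FROM 33, length - 32 - 64
print(iv == bytes(16), ct == b"0123456789abcdef")    # True True
```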

Related

libpqxx C Aggregate Extension returns wrong data?

I am learning how to create C aggregate extensions and using libpqxx with C++ on the client side to process the data.
My toy aggregate extension has one argument of type bytea, and the state is also of type bytea. The following is the simplest example of my problem:
Server side:
PG_FUNCTION_INFO_V1(simple_func);
Datum simple_func(PG_FUNCTION_ARGS)
{
    bytea *new_state = (bytea *) palloc(128 + VARHDRSZ);
    memset(new_state, 0, 128 + VARHDRSZ);
    SET_VARSIZE(new_state, 128 + VARHDRSZ);
    PG_RETURN_BYTEA_P(new_state);
}
Client side:
std::basic_string<std::byte> buffer;
pqxx::connection c{"postgresql://user:simplepassword@localhost/contrib_regression"};
pqxx::work w(c);
c.prepare("simple_func", "SELECT simple_func( $1 ) FROM table");
pqxx::result r = w.exec_prepared("simple_func", buffer);
for (auto row : r)
{
    cout << " Result Size: " << row["simple_func"].size() << endl;
    cout << "Raw Result Data: ";
    for (int jj = 0; jj < row["simple_func"].size(); jj++)
        printf("%02" PRIx8, (uint8_t) row["simple_func"].c_str()[jj]);
    cout << endl;
}
The result on the client side prints :
Result Size: 258
Raw Result Data: 5c783030303030303030303030303030...
Where the 30 pattern repeats until the end of the string and the printed string in hex is 512 bytes.
I expected to receive an array of length 128 bytes where every byte is set to zero. What am I doing wrong?
The libpqxx version is 7.2 and PostgreSQL 12 on Ubuntu 20.04.
Addendum
SQL statement used to install the extension:
CREATE OR REPLACE FUNCTION agg_simple_func(state bytea, arg1 bytea)
RETURNS bytea
AS '$libdir/agg_simple_func'
LANGUAGE C IMMUTABLE STRICT;

CREATE OR REPLACE AGGREGATE simple_func(arg1 bytea)
(
    sfunc = agg_simple_func,
    stype = bytea,
    initcond = '\xFFFF'
);
The answer appears to be that the bytea type data on the client side must be retrieved as follows in the libpqxx library as of 7.0 (Not tested in earlier versions):
row[ "simple_func" ].as<std::basic_string<std::byte>>()
This retrieves the right bytea data without any conversions, string idiosyncrasies or unexpected behavior like I was seeing.
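For what it's worth, the 258-byte result in the question is exactly what you get if the bytea's *text* representation (Postgres hex output: `\x` followed by two hex digits per byte) is read as if it were the binary value. A quick stdlib-Python check of that interpretation:

```python
# Postgres hex text output for 128 zero bytes: "\x" followed by "00" * 128
text_repr = "\\x" + "00" * 128
print(len(text_repr))                        # 258 - matches the reported size
print(text_repr.encode("ascii").hex()[:16])  # 5c78303030303030 - matches the dump
```

`5c` is the backslash, `78` is `x`, and the repeating `30` pattern is the ASCII digit `0`.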
I recommend that you tackle these things one by one: first get the function to work, testing it with psql in interactive queries, then write the client code (or vice versa).
I can't speak about libpqxx, but I have to complain about your function: what you presented won't even compile, because you wrote DATUM in upper case and forgot headers and other important stuff.
This function will compile and run as you expect:
#include "postgres.h"
#include "fmgr.h"

PG_MODULE_MAGIC;

PG_FUNCTION_INFO_V1(simplest_func);

Datum simplest_func(PG_FUNCTION_ARGS)
{
    bytea *new_state = (bytea *) palloc(128 + VARHDRSZ);
    memset(new_state, 0, 128 + VARHDRSZ);
    SET_VARSIZE(new_state, 128 + VARHDRSZ);
    PG_RETURN_BYTEA_P(new_state);
}
The memset will work that way, but the better and more idiomatic and robust way to set the value of a varlena is
memset(VARDATA(new_state), 0, 128);
I have no idea how you got your result, but since the code you presented doesn't compile, I don't know what your function really looks like.

I get an error when I rotate the ciphertext vector

I want to rotate the product of two ciphertext vectors, but I get an error and I don't know how to solve it.
Rotating a single ciphertext vector works, but when the ciphertext is a product, an error occurs.
int main() {
    // poly_modulus_degree was missing from the original snippet
    size_t poly_modulus_degree = 8192;
    EncryptionParameters parms(scheme_type::BFV);
    parms.set_poly_modulus_degree(poly_modulus_degree);
    parms.set_coeff_modulus(CoeffModulus::BFVDefault(poly_modulus_degree));
    parms.set_plain_modulus(PlainModulus::Batching(poly_modulus_degree, 20));
    auto context = SEALContext::Create(parms);
    KeyGenerator keygen(context);
    PublicKey psk = keygen.public_key();
    SecretKey sk = keygen.secret_key();
    RelinKeys rlk = keygen.relin_keys();
    GaloisKeys glk = keygen.galois_keys();
    Encryptor encryptor(context, psk);
    Decryptor decryptor(context, sk);
    Evaluator evaluator(context);
    BatchEncoder batch_encoder(context);
    size_t slot_count = batch_encoder.slot_count();
    size_t row_size = slot_count / 2;
    mt19937 rnd(time(NULL));
    vector<uint64_t> v(slot_count, 0ull);
    for (int i = 0; i < 1000; i++) {
        v[i] = rnd() % 1000;
    }
    print_matrix(v, row_size);
    Plaintext pr;
    Ciphertext cr;
    batch_encoder.encode(v, pr);
    encryptor.encrypt(pr, cr);
    evaluator.multiply_inplace(cr, cr);
    decryptor.decrypt(cr, pr);
    batch_encoder.decode(pr, v);
    print_matrix(v, row_size);
    // error: what(): encrypted size must be 2
    evaluator.rotate_rows(cr, pow(2, 0), glk, cr);
    return 0;
}
It is complaining that the size of the ciphertext is larger than 2. You need to perform a relinearization on the ciphertext cr right after evaluator.multiply_inplace(cr,cr);, which will lower the size back down to 2:
...
evaluator.multiply_inplace(cr,cr);
evaluator.relinearize_inplace(cr, rlk);
...
You can find more information in Microsoft SEAL's source, relinkeys.h.

How to pack/unpack a byte in q/kdb

What I'm trying to do here is pack bytes like I can in C#:
string symbol = "T" + "\0";
byte orderTypeEnum = (byte)OrderType.Limit;
int size = -10;
byte[] packed = new byte[symbol.Length + sizeof(byte) + sizeof(int)]; // byte = 1, int = 4
Encoding.UTF8.GetBytes(symbol, 0, symbol.Length, packed, 0); // add the symbol
packed[symbol.Length] = orderTypeEnum; // add order type
Array.ConstrainedCopy(BitConverter.GetBytes(size), 0, packed, symbol.Length + 1, sizeof(int)); // add size
client.Send(packed);
Is there any way to accomplish this in q?
As for the unpacking, in C# I can easily do this:
byte[] fillData = client.Receive();
long ticks = BitConverter.ToInt64(fillData, 0);
int fillSize = BitConverter.ToInt32(fillData, 8);
double fillPrice = BitConverter.ToDouble(fillData, 12);
new
{
Timestamp = ticks,
Size = fillSize,
Price = fillPrice
}.Dump("Received response");
Thanks!
One way to do it is
symbol:"T\000"
orderTypeEnum: 123 / (byte)OrderType.Limit
size: -10i;
packed: "x"$symbol,("c"$orderTypeEnum),reverse 0x0 vs size / *
UPDATE:
To do the reverse you can use 1: function:
(8 4 8; "jif")1:0x0000000000000400000008003ff3be76c8b43958 / server data is big-endian
("jif"; 8 4 8)1:0x0000000000000400000008003ff3be76c8b43958 / server data is little-endian
/ ticks=1024j, fillSize=2048i, fillPrice=1.234
*) When using BitConverter.GetBytes() you should also check the value of BitConverter.IsLittleEndian to make sure you send bytes over the wire in the proper order. Contrary to popular belief, .NET is not always little-endian. However, the internal representation in kdb+ (the value returned by 0x0 vs ...) is always big-endian. Depending on your needs, you may or may not want to use reverse above.
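For comparison, the same pack/unpack layout can be sketched with Python's stdlib struct module (little-endian, mirroring the BitConverter calls; the order-type value 123 and the fill values below are made up for illustration):

```python
import struct

# pack: symbol "T\0", one order-type byte, little-endian int32 size
symbol = b"T\x00"
order_type = 123          # placeholder for (byte)OrderType.Limit
size = -10
packed = symbol + bytes([order_type]) + struct.pack("<i", size)
print(len(packed))        # 7 bytes: 2 + 1 + 4

# unpack: int64 ticks, int32 size, float64 price at offsets 0, 8, 12
fill = struct.pack("<qid", 1024, 2048, 1.234)
ticks, fill_size, fill_price = struct.unpack("<qid", fill)
print(ticks, fill_size, fill_price)   # 1024 2048 1.234
```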

VB6 Hashing SHA1 output not matched

I need some help with my problem here. I've searched and googled but still haven't found why my output doesn't match the expected output.
Data to hash:
0800210142216688003333311100000554478000000
Expected output:
DAAC526D4806C88CEDB8B7C6EA42A7442DE6E7DC
My output:
805C790E6BF39E3482067C44909EE126F9CBB878
I am using this function to generate the hash:
Public Function HashString(ByVal Str As String, Optional ByVal Algorithm As HashAlgorithm = SHA1) As String
    On Error Resume Next
    Dim hCtx As Long
    Dim hHash As Long
    Dim lRes As Long
    Dim lLen As Long
    Dim lIdx As Long
    Dim AbData() As Byte

    lRes = CryptAcquireContext(hCtx, vbNullString, vbNullString, PROV_RSA_FULL, CRYPT_VERIFYCONTEXT)
    If lRes <> 0 Then
        lRes = CryptCreateHash(hCtx, Algorithm, 0, 0, hHash)
        If lRes <> 0 Then
            lRes = CryptHashData(hHash, ByVal Str, Len(Str), 0)
            If lRes <> 0 Then
                lRes = CryptGetHashParam(hHash, HP_HASHSIZE, lLen, 4, 0)
                If lRes <> 0 Then
                    ReDim AbData(0 To lLen - 1)
                    lRes = CryptGetHashParam(hHash, HP_HASHVAL, AbData(0), lLen, 0)
                    If lRes <> 0 Then
                        For lIdx = 0 To UBound(AbData)
                            HashString = HashString & Right$("0" & Hex$(AbData(lIdx)), 2)
                        Next
                    End If
                End If
            End If
            CryptDestroyHash hHash
        End If
    End If
    CryptReleaseContext hCtx, 0
    If lRes = 0 Then
        MsgBox Err.LastDllError
    End If
End Function
and this is the command to call the function:
Dim received As String
Dim HASH As String
HASH = "0800210142216688003333311100000554478000000"
received = HashString(HASH)
Debug.Print ("HASH VALUE : " & received)
Thanks!
UPDATE:
I finally managed to get the expected output. I changed the hash generation to use the SHA-1 function from this website:
http://vb.wikia.com/wiki/SHA-1.bas
and I use this function to convert my hex string to a byte array:
Public Function HexStringToByteArray(ByRef HexString As String) As Byte()
    Dim bytOut() As Byte, bytHigh As Byte, bytLow As Byte, lngA As Long
    If LenB(HexString) Then
        ' reserve memory for the output buffer
        ReDim bytOut(Len(HexString) \ 2 - 1)
        ' jump by every two characters (byte positions are used for greater speed)
        For lngA = 1 To LenB(HexString) Step 4
            ' get the character value and decrease by 48
            bytHigh = AscW(MidB$(HexString, lngA, 2)) - 48
            bytLow = AscW(MidB$(HexString, lngA + 2, 2)) - 48
            ' move old A - F values down even more
            If bytHigh > 9 Then bytHigh = bytHigh - 7
            If bytLow > 9 Then bytLow = bytLow - 7
            ' equivalent to (bytHigh << 4) | bytLow
            bytOut(lngA \ 4) = (bytHigh * &H10) Or bytLow
        Next lngA
        ' return the output
        HexStringToByteArray = bytOut
    End If
End Function
and I use this command to get the expected output:
Dim received As String
Dim HASH As String
Dim intVal As Integer
Dim temp() As Byte
HASH = "08002101422166880033333111000005544780000000"
temp = HexStringToByteArray(HASH)
received = Replace(HexDefaultSHA1(temp), " ", "")
Debug.Print ("HASH VALUE : " & received)
And finally I got the same output as expected.
805c... is the SHA1 hash of the characters in your input string, i.e. '0', '8', '0', '0', ...
daac... is the SHA1 hash of the characters in your input string after conversion of each pair of hexadecimal digits to a byte, i.e. 0x08, 0x00, ...
Convert the input string to an array of bytes prior to hashing.
Your output is correct. This is SHA-1 using Python (note the bytes literal, required in Python 3):
>>> import hashlib
>>> s = hashlib.sha1(b'0800210142216688003333311100000554478000000')
>>> s.hexdigest()
'805c790e6bf39e3482067c44909ee126f9cbb878'
Where did you get the other SHA1 computation from?
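Both behaviours are easy to reproduce in Python 3, which makes the difference concrete. This uses the 44-digit string from the update, since the original 43-digit string has an odd length and cannot be hex-decoded:

```python
import hashlib

s = "08002101422166880033333111000005544780000000"  # 44 hex digits

# hash of the ASCII characters themselves ('0', '8', '0', '0', ...)
ascii_hash = hashlib.sha1(s.encode("ascii")).hexdigest()

# hash of the 22 bytes the hex string encodes (0x08, 0x00, ...)
byte_hash = hashlib.sha1(bytes.fromhex(s)).hexdigest()

print(ascii_hash)
print(byte_hash)
print(ascii_hash != byte_hash)  # True - hashing chars vs. decoded bytes differs
```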

How to calculate CheckSum in FIX manually?

I have a FixMessage and I want to calculate the checksum manually.
8=FIX.4.2|9=49|35=5|34=1|49=ARCA|52=20150916-04:14:05.306|56=TW|10=157|
The body length here is calculated:
35=5| + 34=1| + 49=ARCA| + 52=20150916-04:14:05.306| + 56=TW|
5 + 5 + 8 + 25 + 6 = 49 (correct; each field length includes its trailing SOH delimiter)
The checksum is 157 (10=157). How to calculate it in this case?
You need to sum every byte in the message up to but not including the checksum field. Then take this number modulo 256, and print it as a number of 3 characters with leading zeroes (e.g. checksum=13 would become 013).
Link from the FIX wiki: FIX checksum
An example implementation in C, taken from onixs.biz:
char *GenerateCheckSum(char *buf, long bufLen)
{
    static char tmpBuf[4];
    long idx;
    unsigned int cks;

    for (idx = 0L, cks = 0; idx < bufLen; cks += (unsigned int) buf[idx++]);
    sprintf(tmpBuf, "%03d", (unsigned int) (cks % 256));
    return (tmpBuf);
}
Ready-to-run C example adapted from here
8=FIX.4.2|9=49|35=5|34=1|49=ARCA|52=20150916-04:14:05.306|56=TW|10=157|
#include <stdio.h>

void GenerateCheckSum(char *buf, long bufLen)
{
    unsigned sum = 0;
    long i;
    for (i = 0L; i < bufLen; i++)
    {
        unsigned val = (unsigned) buf[i];
        sum += val;
        printf("Char: %02c Val: %3u\n", buf[i], val); // print value of each byte
    }
    printf("CheckSum = %03d\n", (unsigned) (sum % 256)); // print result
}

int main()
{
    char msg[] = "8=FIX.4.2\0019=49\00135=5\00134=1\00149=ARCA\00152=20150916-04:14:05.306\00156=TW\001";
    int len = sizeof(msg) - 1; // exclude the terminating NUL
    GenerateCheckSum(msg, len);
}
Points to Note
GenerateCheckSum takes the entire FIX message except CheckSum field
Delimiter SOH is written as \001 which has ASCII value 1
static void Main(string[] args)
{
    // 10=157
    // NOTE: a real FIX message uses SOH (\u0001) as the field delimiter;
    // with '|' in its place the computed checksum will not match 157
    string s = "8=FIX.4.2|9=49|35=5|34=1|49=ARCA|52=20150916-04:14:05.306|56=TW|";
    byte[] bs = GetBytes(s);
    int sum = 0;
    foreach (byte b in bs)
        sum = sum + b;
    int checksum = sum % 256;
}

// string to byte[]
static byte[] GetBytes(string str)
{
    // copies the raw UTF-16 chars; the interleaved zero bytes don't change the sum
    byte[] bytes = new byte[str.Length * sizeof(char)];
    System.Buffer.BlockCopy(str.ToCharArray(), 0, bytes, 0, bytes.Length);
    return bytes;
}
Using the BodyLength (tag 9) and CheckSum (tag 10) fields:
BodyLength is calculated over the bytes starting after the BodyLength field and ending before the CheckSum field.
CheckSum is calculated from the '8=' at the start of the message up to the SOH before the CheckSum field:
the binary value of each byte is summed, and only the least significant byte (modulo 256) of that sum is kept.
If the checksum has been calculated to be 274, then the modulo-256 value is 18 (256 + 18 = 274). This value would be transmitted as 10=018, where
"10=" is the tag for the checksum field.
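The rules above can be checked with a short Python sketch against the example message, writing SOH as \x01:

```python
SOH = "\x01"
fields = ["8=FIX.4.2", "9=49", "35=5", "34=1", "49=ARCA",
          "52=20150916-04:14:05.306", "56=TW"]
msg = SOH.join(fields) + SOH       # everything before the 10= field

body = SOH.join(fields[2:]) + SOH  # after 9=49, before 10=
print(len(body))                   # 49 - matches 9=49

checksum = sum(msg.encode("ascii")) % 256
print(f"10={checksum:03d}")        # 10=157 - matches the example
```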
In Java, there is a method in QuickFIX/J (note: the SOH delimiters appear to have been stripped from the sample string below):
String fixStringMessage = "8=FIX.4.29=12535=81=6090706=011=014=017=020=322=837=038=4.39=054=155=ALFAA99=20220829150=0151=06020=06021=06022=F9014=Y";
int checkSum = quickfix.MessageUtils.checksum(fixStringMessage);
System.out.println(checkSum);
Output: 127
Hope it can help you.