Swift 3 - AES Encryption Heimdall - Zero Padding

I am using Heimdall
https://github.com/henrinormak/Heimdall
to generate my 1024-bit RSA keys and to encrypt messages:
let heimdall = Heimdall(publicTag: publicTag, publicKeyData: data)
I UTF-8-encode and Base64-encode my string, then pass it to the encrypt method:
let utf8Encoded = self.mystring.data(using: String.Encoding.utf8)!
let base64Encoded = utf8Encoded.base64EncodedData()
let encrypted = heimdall.encrypt(base64Encoded)
print("encrypted \(encrypted!)") // -> 160 bytes !! why not 128
The encrypted output should be 128 bytes, not 160.
Can anybody help me get there?
How can I keep generating 1024-bit RSA keys and encrypting messages with those keys so that I end up with 128-byte arrays?
Thanks and Greetings !!

From the Heimdall docs (note on encryption/decryption):
The payload is built containing the encrypted AES key, followed by the encrypted message.
This payload is then Base64 encoded, which increases its length by about one third.
Thus the output is (the RSA-encrypted AES key + the AES-encrypted data + padding), Base64 encoded, so it will always be longer than a single 128-byte RSA block.
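As a rough illustration of that growth (a minimal Python sketch, independent of Heimdall): Base64 turns every 3 input bytes into 4 output characters, so even a single 128-byte RSA block comes out at 172 characters once encoded, before the encrypted message itself is counted.
import base64
import math

def base64_length(n_bytes):
    # Base64 emits 4 output characters per 3 input bytes, rounded up to a full group.
    return 4 * math.ceil(n_bytes / 3)

print(base64_length(128))                    # 172
print(len(base64.b64encode(b"\x00" * 128)))  # 172, measured directly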

Related

Generate public key from modulus and exponent (again)

I know that there are answers on SO already, but after trying a few of them (ex1, ex2) I still cannot produce the correct public key from a modulus and an exponent.
This is my Python 3 code:
from Crypto.PublicKey.RSA import construct
import urllib.parse
import base64
import re
def decode_base64(data, altchars=b'+/'):
    """Decode base64, padding being optional.

    :param data: Base64 data as an ASCII byte string
    :returns: The decoded byte string.
    """
    data = re.sub(rb'[^a-zA-Z0-9%s]+' % altchars, b'', data)  # normalize
    missing_padding = len(data) % 4
    if missing_padding:
        data += b'=' * (4 - missing_padding)
    return base64.b64decode(data, altchars)
e = int.from_bytes(decode_base64(b'AQAB'), 'big', signed=False)
decoded = decode_base64(b'tVKUtcx_n9rt5afY_2WFNvU6PlFMggCatsZ3l4RjKxH0jgdLq6CScb0P3ZGXYbPzXvmmLiWZizpb-h0qup5jznOvOr-Dhw9908584BSgC83YacjWNqEK3urxhyE2jWjwRm2N95WGgb5mzE5XmZIvkvyXnn7X8dvgFPF5QwIngGsDG8LyHuJWlaDhr_EPLMW4wHvH0zZCuRMARIJmmqiMy3VD4ftq4nS5s8vJL0pVSrkuNojtokp84AtkADCDU_BUhrc2sIgfnvZ03koCQRoZmWiHu86SuJZYkDFstVTVSR0hiXudFlfQ2rOhPlpObmku68lXw-7V-P7jwrQRFfQVXw')
n = int.from_bytes(decoded, 'big', signed=False)
rsaKey = construct((n, e))
pubKey = rsaKey.exportKey()
print(pubKey.decode('ascii'))
But whenever I try to verify a JWT with this key, I get a "signature_invalid" error.
Am I not decoding the binary-encoded bytes correctly?
---- UPDATE ----
As suggested in the comments, I've updated my code to URL-decode the bytes first, but I still get the same signature_invalid error as before.
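For what it's worth, here is a minimal sketch of how JWK-style values are usually decoded with just the standard library (URL-safe alphabet, padding restored). This only illustrates the decoding step; it is not necessarily the fix for the signature error:
import base64

def b64url_decode(data):
    # JWK fields use the URL-safe alphabet ('-' and '_') and omit the '=' padding,
    # so restore the padding before decoding.
    return base64.urlsafe_b64decode(data + b'=' * (-len(data) % 4))

e = int.from_bytes(b64url_decode(b'AQAB'), 'big')  # 65537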

How do I get to the same results as the Linux crypt and salt output?

I used the following command on my Ubuntu machine, "openssl passwd -crypt -salt pass book", to generate a salted password.
What hash is the output made up of, e.g. SHA-512, MD5, etc.? Also, I'm wondering how it's constructed. For example, is it made by hashing "passbook" together?
I need more information on what hashing/algorithm is being used to generate the output I see.
Thanks
The result provided by the openssl passwd app when using the -crypt algorithm seems to be the same as the result provided by the Linux/Unix crypt() function. You can verify this with the following (quick'n'dirty) code snippet:
#include <crypt.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    char *key = argv[1];
    char *salt = argv[2];
    char *enc = crypt(key, salt);

    printf("key = \"%s\", salt = \"%s\", enc = \"%s\"\n",
           key ? key : "NULL", salt ? salt : "NULL", enc ? enc : "NULL");
}
Result:
$ ./main book pass
key = "book", salt = "pass", enc = "pahzZkfwawIXw"
$ openssl passwd -crypt -salt pass book
pahzZkfwawIXw
The exact details of how the crypt() function works seem to be explained most clearly in its OS X man page, in particular:
Traditional crypt:
The first 8 bytes of the key are null-padded, and the low-order 7 bits of each character is used to form the 56-bit DES key.
The salt is a 2-character array of the ASCII-encoded salt. Thus, only 12 bits of salt are used. count is set to 25.
Algorithm:
The salt introduces disorder in the DES algorithm in one of 16777216 or 4096 possible ways (ie. with 24 or 12 bits: if bit i of the salt is set, then bits i and i+24 are swapped in the DES E-box output).
The DES key is used to encrypt a 64-bit constant, using count iterations of DES. The value returned is a null-terminated string, 20 or 13 bytes (plus null) in length, consisting of the salt, followed by the encoded 64-bit encryption.
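For comparison from Python, the same DES-based crypt(3) routine is exposed by the standard-library crypt module on Unix systems (deprecated since Python 3.11, removed in 3.13), so a quick sketch like the following should reproduce the value shown above:
import crypt  # Unix-only standard-library wrapper around crypt(3)

print(crypt.crypt("book", "pass"))  # expected: pahzZkfwawIXw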

I was wondering if someone could explain to me .decode and .encode in hashlib?

I understand that you have a hex string and perform SHA256 on it twice and then byte-swap the final hex string. The goal of this code is to find a Merkle Root by concatenating two transactions. I would like to understand what's going on in the background a bit more. What exactly are you decoding and encoding?
import hashlib
transaction_hex = "93a05cac6ae03dd55172534c53be0738a50257bb3be69fff2c7595d677ad53666e344634584d07b8d8bc017680f342bc6aad523da31bc2b19e1ec0921078e872"
transaction_bin = transaction_hex.decode('hex')
hash = hashlib.sha256(hashlib.sha256(transaction_bin).digest()).digest()
hash.encode('hex_codec')
'38805219c8ac7e9a96416d706dc1d8f638b12f46b94dfd1362b5d16cf62e68ff'
hash[::-1].encode('hex_codec')
'ff682ef66cd1b56213fd4db9462fb138f6d8c16d706d41969a7eacc819528038'
transaction_hex is a regular string of lowercase ASCII characters, and the decode() method with the 'hex' argument changes it into a binary string (the equivalent of a bytes object in Python 3) with bytes 0x93, 0xa0, etc. In C it would be an array of unsigned char, of length 64 in this case.
This byte string of length 64 is then hashed with SHA256, and its result (another binary string, 32 bytes long) is hashed again. So hash is a string of length 32. encode('hex_codec') is a synonym for encode('hex') and does the reverse: it outputs a lowercase ASCII hex string in which each raw byte (which is just a small integer) is replaced by the two-character string of its hexadecimal representation. Note that str.decode('hex') and encode('hex_codec') only exist in Python 2; in Python 3 you would use bytes.fromhex() and bytes.hex() instead. The final line reverses the 32-byte double hash and outputs it as hexadecimal again, a form I usually call "lowercase hex ASCII".
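Since the snippet above is Python 2 only, here is a rough Python 3 equivalent (a sketch using bytes.fromhex and bytes.hex in place of the removed 'hex' codec); it should print the same two hex strings shown above:
import hashlib

# Same transaction string as above, split only for readability.
transaction_hex = ("93a05cac6ae03dd55172534c53be0738a50257bb3be69fff2c7595d677ad5366"
                   "6e344634584d07b8d8bc017680f342bc6aad523da31bc2b19e1ec0921078e872")

transaction_bin = bytes.fromhex(transaction_hex)  # hex text -> 64 raw bytes
digest = hashlib.sha256(hashlib.sha256(transaction_bin).digest()).digest()

print(digest.hex())        # double SHA256 as lowercase hex
print(digest[::-1].hex())  # byte-reversed, as in the last line above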

How to convert [UInt8] to ASCII in swift 3.0

I have an MD5 hash as a 16-byte [UInt8] array, but I also need it as an ASCII value. How do I convert [UInt8] to ASCII in Swift 3.0?
Try something like this:
let asciiBytes: [UInt8] = [77,105,99,104,97,101,108]
let s = String(bytes: asciiBytes, encoding: .ascii)!
print(s) // prints "Michael"
Most data bytes do not have an ASCII representation; see the comment by Martin.
If one needs to present arbitrary data as ASCII, the common approaches are to encode it as Base64 or hexadecimal.
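For illustration (sketched in Python rather than Swift, just to show the idea), a 16-byte digest like the one in the question can be rendered as hexadecimal or Base64 text:
import base64
import hashlib

digest = hashlib.md5(b"example input").digest()  # 16 raw bytes, mostly not printable ASCII

print(digest.hex())                       # hexadecimal text representation
print(base64.b64encode(digest).decode())  # Base64 text representation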

AES decryption using pycrypto

As a self-study exercise, I'm trying to learn how to use some of the pycrypto library. I need to decrypt a ciphertext string in CBC mode using AES. The ciphertext, key, and IV are all given. Here is the code that I have written:
from Crypto.Cipher import AES
mode = AES.MODE_CBC
key = "a1a1a1a1a1a1a1a1a1a1a1a1a1a1a1a1"
ciphertext = "a1a1a1a1a1a1a1a1a1a1a1a1a1a1a1a1a1a1a1a1a1a1a1a1a1a1a1a1a1a1a1a1a1a1a1a1a1a1a1a1a1a1a1a1a1a1a1a1a1a1a1a1a1a1a1a1a1a1a1a1a1a1a1a1";
iv = ciphertext[:32]
ciphertext = ciphertext[32:]
decryptor = AES.new(key, mode, iv)
plaintext = decryptor.decrypt(ciphertext)
print plaintext
When I run this, I get the following error:
ValueError: IV must be 16 bytes long
I know that the IV string is 32 hex characters, and therefore 16 bytes. I think that this might be a typing problem, but I don't know how to correct it. Can anyone help?
Thank you!
Your strings contain only hex characters, but they are still plain strings, so every character counts.
So your IV string is 32 bytes long, because you sliced it out of the ciphertext as text.
I suspect you're right and it is down to typing. Convert the hex text into raw bytes, for example:
import binascii
iv = binascii.unhexlify(ciphertext[:32])
(A conversion to an integer, such as long(ciphertext[:32], 16), will not help here, because AES.new expects the IV as a 16-byte string.)
Tell the computer you are dealing with hex; at the moment it is treating it as a plain string.
iv = iv.decode('hex')  # Python 2 only; in Python 3 use bytes.fromhex(iv) or binascii.unhexlify(iv)
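Putting the pieces together, here is a minimal sketch of the whole decryption with all three values converted from hex text to raw bytes (PyCrypto-style API; the all-"a1" strings above are clearly placeholders, so the recovered plaintext will be meaningless):
from binascii import unhexlify
from Crypto.Cipher import AES

key_hex = "a1a1a1a1a1a1a1a1a1a1a1a1a1a1a1a1"
ciphertext_hex = "a1" * 64               # stand-in for the real hex-encoded ciphertext

key = unhexlify(key_hex)                 # 16 raw bytes -> AES-128
iv = unhexlify(ciphertext_hex[:32])      # first 16 raw bytes are the IV
ct = unhexlify(ciphertext_hex[32:])      # the rest is the actual ciphertext

plaintext = AES.new(key, AES.MODE_CBC, iv).decrypt(ct)
print(plaintext)                         # garbage (and still padded) for the placeholder input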