CryptoSwift with AES128 CTR Mode - Buggy counter increment?

I've encountered a problem with the CryptoSwift API (krzyzanowskim) when using AES128 in CTR mode: for specific counter values (between 0 and 25, namely 13 and 24), my test function (nullArrayBugTest()) produces a decrypted array whose count is not the expected 16!
The same thing happens even if I use the manually incremented "iv_13" (the IV with 13 already added to the last byte) instead of the default "iv_0" with counter 13...
Try it out to get an idea of what I mean.
import CryptoSwift

func nullArrayBugTest() {
    var ctr: CTR
    let nilArrayToEncrypt = Data(hex: "00000000000000000000000000000000")
    let key_ = Data(hex: "000a0b0c0d0e0f010203040506070809")
    let iv_0: Array<UInt8> = [0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f]
    //let iv_13: Array<UInt8> = [0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x1c]
    var decryptedNilArray = [UInt8]()

    for i in 0...25 {
        ctr = CTR(iv: iv_0, counter: i)
        do {
            let aes = try AES(key: key_.bytes, blockMode: ctr)
            decryptedNilArray = try aes.decrypt([UInt8](nilArrayToEncrypt))
            print("AES_testcase_\(i) for ctr: \(ctr) withArrayCount: \(decryptedNilArray.count)")
        } catch {
            print("De-/En-CryptData failed with: \(error)")
        }
    }
}
[Screenshot: output showing the buggy array counts]
Why I always need the decrypted array to have 16 values is not important :D.
Does anybody know why the aes.decrypt() function behaves the way I observed?
Thanks for your time.
Michael S.

CryptoSwift defaults to PKCS#7 padding. Your resulting plaintexts have invalid padding. CryptoSwift ignores padding errors, which IMO is a bug, but that's how it's implemented. (All the counters that you're considering "correct" should really have failed to decrypt at all.) (I talked this over with Marcin and he reminded me that even at this low level, it's normal to ignore padding errors to avoid padding-oracle attacks. I'd forgotten that I do it this way too....)
That said, sometimes the padding will be "close enough" that CryptoSwift will try to remove padding bytes. It usually won't be valid padding, but it'll be close enough for CryptoSwift's test.
As an example, your first counter creates the following padded plaintext:
[233, 222, 112, 79, 186, 18, 139, 53, 208, 61, 91, 0, 120, 247, 187, 254]
254 > 16, so CryptoSwift doesn't try to remove padding.
For a counter of 13, the following padded plaintext is returned:
[160, 140, 187, 255, 90, 209, 124, 158, 19, 169, 164, 110, 157, 245, 108, 12]
12 < 16, so CryptoSwift removes 12 bytes, leaving 4. (This is not how PKCS#7 padding works, but it's how CryptoSwift works.)
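For comparison, strict PKCS#7 removal also requires that every one of the trailing padLength bytes equals padLength. A minimal sketch of such a check (my own illustration, not CryptoSwift's actual implementation):

// Strict PKCS#7 unpadding: returns nil for invalid padding instead of guessing.
func pkcs7Unpad(_ bytes: [UInt8], blockSize: Int = 16) -> [UInt8]? {
    guard let last = bytes.last else { return nil }
    let padLength = Int(last)
    guard padLength >= 1, padLength <= blockSize, bytes.count >= padLength else { return nil }
    // All padLength trailing bytes must carry the value padLength.
    guard bytes.suffix(padLength).allSatisfy({ Int($0) == padLength }) else { return nil }
    return Array(bytes.dropLast(padLength))
}

Under this stricter check, the counter-13 plaintext would be rejected outright: it ends in 12, but the twelve trailing bytes are not all 12.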
The underlying problem is you're not decrypting something you encrypted. You're just running a static block through the decryption scheme.
If you don't want padding, you can request that:
let aes = try AES(key: key_.bytes, blockMode: ctr, padding: .noPadding)
This will return you what you're expecting.
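By contrast, if you encrypt and then decrypt with the same parameters, everything round-trips as expected. A minimal sketch reusing the question's key_ and iv_0 (test values only; see the warning below):

let plaintext: [UInt8] = [UInt8](repeating: 0, count: 16)
let aes = try AES(key: key_.bytes, blockMode: CTR(iv: iv_0), padding: .noPadding)
let ciphertext = try aes.encrypt(plaintext)  // 16 bytes
let roundTrip  = try aes.decrypt(ciphertext) // the original 16 zero bytes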
Just in case there's any confusion by other readers: this use of CTR is wildly insecure and no part of it should be copied. I'm assuming that the actual encryption code doesn't work anything like this.

I guess the encryption happened without padding applied, but you then use padding to decrypt. To fix that, use the same setting on both sides. That said, this one-liner is a solution (Rob Napier's answer is more detailed):
try AES(key: key_.bytes, blockMode: ctr, padding: .noPadding)

Related

WebCrypto AES-CBC outputting 256 bits instead of 128 bits

I'm playing with WebCrypto and I'm getting a confusing output.
The following test case encrypts a random 16-byte (128-bit) plaintext with a newly generated 128-bit key and a random 128-bit IV, but it outputs 32 bytes (256 bits).
If I remember the details of AES-CBC correctly, it should output 128-bit blocks.
function test() {
  var data = new Uint8Array(16);
  window.crypto.getRandomValues(data);
  console.log(data);
  window.crypto.subtle.generateKey(
    {
      name: "AES-CBC",
      length: 128,
    },
    false,
    ["encrypt", "decrypt"]
  )
  .then(function(key) {
    // returns a key object
    console.log(key);
    window.crypto.subtle.encrypt(
      {
        name: "AES-CBC",
        iv: window.crypto.getRandomValues(new Uint8Array(16)),
      },
      key,
      data
    )
    .then(function(encrypted) {
      console.log(new Uint8Array(encrypted));
    })
    .catch(function(err) {
      console.error(err);
    });
  })
  .catch(function(err) {
    console.error(err);
  });
}
Example output:
Uint8Array(16) [146, 207, 22, 56, 56, 151, 125, 174, 137, 69, 133, 36, 218, 114, 143, 174]
CryptoKey {
  algorithm: {name: "AES-CBC", length: 128}
  extractable: false
  type: "secret"
  usages: (2) ["encrypt", "decrypt"]
  __proto__: CryptoKey
}
Uint8Array(32) [81, 218, 52, 158, 115, 105, 57, 230, 45, 253, 153, 54, 183, 19, 137, 240, 183, 229, 241, 75, 182, 19, 237, 8, 238, 5, 108, 107, 123, 84, 230, 209]
Any idea what I've got wrong?
(Open to moving to crypto.stackexchange.com if more suitable.)
I'm testing on Chrome 71 on macOS at the moment.
Yes. The extra 16 bytes are the padding. Padding is added even when the message is an exact multiple of the block size; otherwise the decryption logic wouldn't know whether to look for padding at all.
The Web Cryptography API Specification says:
When operating in CBC mode, messages that are not exact multiples of the AES block size (16 bytes) can be padded under a variety of padding schemes. In the Web Crypto API, the only padding mode that is supported is that of PKCS#7, as described by Section 10.3, step 2, of [RFC2315].
This means that, unlike implementations in other languages (such as Java), where you can specify NoPadding when you know your input message is always a multiple of the block size (128 bits for AES), the Web Cryptography API forces you to use PKCS#7 padding.
If we look into RFC2315:
Some content-encryption algorithms assume the input length is a multiple of k octets, where k > 1, and let the application define a method for handling inputs whose lengths are not a multiple of k octets. For such algorithms, the method shall be to pad the input at the trailing end with k - (l mod k) octets all having value k - (l mod k), where l is the length of the input. In other words, the input is padded at the trailing end with one of the following strings:

01 -- if l mod k = k-1
02 02 -- if l mod k = k-2
.
.
.
k k ... k k -- if l mod k = 0

The padding can be removed unambiguously since all input is padded and no padding string is a suffix of another. This padding method is well-defined if and only if k < 256; methods for larger k are an open issue for further study.
Note: k k ... k k -- if l mod k = 0
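Applied to this question: the input is exactly one block, so l = 16 and k = 16, l mod k = 0, and a full extra block of 16 octets, each with value 0x10 (16), is appended before encryption. Encrypting the padded 32-byte input is what produces the 32-byte ciphertext.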
If you refer to the subtle.encrypt signature, you have no way to specify the padding mode. This means the decryption logic always expects the padding.
However, in your case, if you use the Web Cryptography API only for encryption and your Python app (with NoPadding) only for decryption, I think you can simply strip off the last 16 bytes from the cipher text before feeding it to the Python app. Here is a code sample just for demonstration purposes:
function test() {
  let plaintext = 'GoodWorkGoodWork';
  let encoder = new TextEncoder('utf8');
  let dataBytes = encoder.encode(plaintext);
  window.crypto.subtle.generateKey(
    {
      name: "AES-CBC",
      length: 128,
    },
    true,
    ["encrypt", "decrypt"]
  )
  .then(function(key) {
    crypto.subtle.exportKey('raw', key)
      .then(function(expKey) {
        console.log('Key = ' + btoa(String.fromCharCode(...new Uint8Array(expKey))));
      });
    let iv = new Uint8Array(16);
    window.crypto.getRandomValues(iv);
    let ivb64 = btoa(String.fromCharCode(...new Uint8Array(iv)));
    console.log('IV = ' + ivb64);
    window.crypto.subtle.encrypt(
      {
        name: "AES-CBC",
        iv: iv,
      },
      key,
      dataBytes
    )
    .then(function(encrypted) {
      console.log('Cipher text = ' + btoa(String.fromCharCode(...new Uint8Array(encrypted))));
    })
    .catch(function(err) {
      console.error(err);
    });
  })
  .catch(function(err) {
    console.error(err);
  });
}
The output of the above is:
IV = qW2lanfRo2H/3aSLzxIecA==
Key = 0LDBq5iz243HBTUE/lrM+A==
Cipher text = Wa4nIF0tt4PEBUChiH1KCkSOg6L2daoYdboEEf+Oh6U=
Now I take these as input, strip off the last 16 bytes of the cipher text, and still get the same message text after decryption using the following Java code:
package com.sapbasu.javastudy;

import java.nio.charset.StandardCharsets;
import java.util.Base64;

import javax.crypto.Cipher;
import javax.crypto.SecretKey;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;

public class EncryptCBC {

  public static void main(String[] arg) throws Exception {
    SecretKey key = new SecretKeySpec(Base64.getDecoder().decode(
        "0LDBq5iz243HBTUE/lrM+A=="), "AES");
    IvParameterSpec ivSpec = new IvParameterSpec(Base64.getDecoder().decode(
        "qW2lanfRo2H/3aSLzxIecA=="));
    Cipher cipher = Cipher.getInstance("AES/CBC/NoPadding");
    cipher.init(Cipher.DECRYPT_MODE, key, ivSpec);
    // Keep only the first 16-byte block; the dropped block holds only padding.
    byte[] cipherTextWoPadding = new byte[16];
    System.arraycopy(Base64.getDecoder().decode(
        "Wa4nIF0tt4PEBUChiH1KCkSOg6L2daoYdboEEf+Oh6U="),
        0, cipherTextWoPadding, 0, 16);
    byte[] decryptedMessage = cipher.doFinal(cipherTextWoPadding);
    System.out.println(new String(decryptedMessage, StandardCharsets.UTF_8));
  }
}
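Stripping the final block works because, in CBC, each ciphertext block is decrypted using only the key and the preceding ciphertext block (the IV for the first block). The last block here contains nothing but the encrypted padding, so dropping it leaves the remaining block fully decryptable.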

AES encryption under contiki on CC2650

I am currently trying to add AES encryption to an existing beacon demo that uses the TI CC2650 SensorTag. I am using the AES API provided by Contiki under core/lib.
My main looks like this:
static const uint8_t AES_key[16] = { 0xC0, 0xC1, 0xC2, 0xC3,
                                     0xC4, 0xC5, 0xC6, 0xC7,
                                     0xC8, 0xC9, 0xCA, 0xCB,
                                     0xCC, 0xCD, 0xCE, 0xCF }; // AES key
static uint8_t plain_text[16] = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13,
                                 14, 15, 16}; // Plaintext to be encrypted.
const struct aes_128_driver AES_128;
.
.
.
/* Note: both printfs below print the array's address, not its contents. */
printf("Plain_Text: %d \r\n", plain_text);
AES_128.set_key(AES_key);
AES_128.encrypt(plain_text);
printf("Encrypted_Text: %p\r\n", plain_text);
Unfortunately, when I run the code, the plaintext never changes. Using some extra prints I can see that the encrypt function is being called, but the output still never changes. Can someone please tell me what I am doing wrong here?
Please note that I already added the following line to my conf file:
#define AES_128_CONF aes_128_driver
Well, as kfx pointed out in the comments, the local declaration const struct aes_128_driver AES_128 was shadowing the global variable; removing that line fixes the problem.

Creating NSString from byte array generates wrong characters using Swift language

I'm trying to decode some Unicode strings from a binary file. I know that they are encoded as UTF-16, and they have a 'big endian' BOM (0xFFFE). But when I try to turn them into a string I end up with a bunch of Chinese characters.
var bytes:[UInt8] = [0x41, 0x00, 0x42, 0x00, 0x43, 0x00, 0x0E, 0xFE]
let text = NSString(bytes: &bytes, length: bytes.count, encoding:NSUTF16BigEndianStringEncoding)
print(text)
This prints Chinese ideograms and a [?] rather than "ABC‼︎", which is what (I believe) it should print.
I've tried different encodings, but nothing works correctly. Can anyone help?
The data sample you provided is coded little-endian; I don't know whether that is just an artifact of your sample data or not.
Well, there's probably something wrong with your input.
First, the BOM should be the first sequence in the input. Second, the order of the bytes you provided is reversed.
This example shows correct parsing:
var bytes:[UInt8] = [0xFF, 0xFE, 0x41, 0x00, 0x42, 0x00, 0x43, 0x00]
var text = NSString(bytes: &bytes, length: bytes.count, encoding:NSUTF16LittleEndianStringEncoding)!
print(text) // prints "ABC\n"
bytes = [0xFE, 0xFF, 0x00, 0x41, 0x00, 0x42, 0x00, 0x43]
text = NSString(bytes: &bytes, length: bytes.count, encoding:NSUTF16BigEndianStringEncoding)!
print(text) // "ABC\n"
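When the BOM is present, you can also let Foundation detect the byte order itself by using the endian-neutral encoding. A small sketch, relying on NSUTF16StringEncoding's documented BOM handling:

var bomBytes: [UInt8] = [0xFF, 0xFE, 0x41, 0x00, 0x42, 0x00, 0x43, 0x00]
// NSUTF16StringEncoding inspects the leading BOM to decide the byte order.
let detected = NSString(bytes: &bomBytes, length: bomBytes.count, encoding: NSUTF16StringEncoding)!
print(detected) // "ABC"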

Snappy-java uncompress fails for valid data

I am trying to uncompress the following ByteString using snappy-java:
ByteString(0xff, 0x06, 0x00, 0x00, 0x73, 0x4e, 0x61, 0x50, 0x70, 0x59, 1, 14, 0, 0, 38, -104, 43, -49, 0, 0, 0, 6, 0, 0, 0, 0, 79, 75)
It contains two frames: the first with chunk type 0xff (stream identifier) and length 6, and the second with chunk type 1 (uncompressed) and length 14. This is valid per the protocol spec found at http://code.google.com/p/snappy/source/browse/trunk/framing_format.txt
The code used to uncompress is here:
val c = ByteString(0xff, 0x06, 0x00, 0x00, 0x73, 0x4e, 0x61, 0x50, 0x70, 0x59, 1, 14, 0, 0, 38, -104, 43, -49, 0, 0, 0, 6, 0, 0, 0, 0, 79, 75)
Snappy.uncompress(c.toArray)
The code throws a FAILED_TO_UNCOMPRESS error, which comes from the native JNI layer. I am using Scala v2.11.3 and snappy-java v1.0.5.4.
Exception in thread "main" java.io.IOException: FAILED_TO_UNCOMPRESS(5)
at org.xerial.snappy.SnappyNative.throw_error(SnappyNative.java:78)
at org.xerial.snappy.SnappyNative.rawUncompress(Native Method)
at org.xerial.snappy.Snappy.rawUncompress(Snappy.java:395)
at org.xerial.snappy.Snappy.uncompress(Snappy.java:431)
at org.xerial.snappy.Snappy.uncompress(Snappy.java:407)
The FAILED_TO_UNCOMPRESS error occurs because Snappy.uncompress does not support framed input. The framing format was finalized only recently, and support for it was added in SnappyFramedInputStream. The source is located here.
The following code decompresses snappy frames:
import java.io.{ByteArrayInputStream, ByteArrayOutputStream}
import org.xerial.snappy.{Snappy, SnappyFramedInputStream}

def decompress(contents: Array[Byte]): Array[Byte] = {
  // SnappyFramedInputStream understands the framing layer that Snappy.uncompress does not.
  val is = new SnappyFramedInputStream(new ByteArrayInputStream(contents))
  val os = new ByteArrayOutputStream(Snappy.uncompressedLength(contents))
  is.transferTo(os)
  os.close()
  os.toByteArray
}

How to set sockaddr_in6::sin6_addr byte order to network byte order?

I am developing a network application using the socket APIs.
I want to set the byte order of the sin6_addr field of the sockaddr_in6 structure.
For 16-bit or 32-bit variables it is simple, using htons or htonl:
// IPv4
sockaddr_in addr;
addr.sin_port = htons(123);
addr.sin_addr.s_addr = htonl(123456);
But for this 128-bit variable, I don't know how to set network byte order:
// IPv6
sockaddr_in6 addr;
addr.sin6_port = htons(123);
addr.sin6_addr.s6_addr = ??? // 16 bytes with network byte order but how to set?
Some answers suggest applying htons 8 times (8 * 2 = 16 bytes) or htonl 4 times (4 * 4 = 16 bytes), but I don't know which way, if either, is correct.
Thanks.
The s6_addr member of struct in6_addr is defined as:
uint8_t s6_addr[16];
Since it is an array of uint8_t, rather than being a single 128-bit integer type, the issue of endianness does not arise: you simply copy from your source uint8_t [16] array to the destination. For example, to copy in the address 2001:888:0:2:0:0:0:2 you would use:
static const uint8_t myaddr[16] = { 0x20, 0x01, 0x08, 0x88, 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02 };
memcpy(addr.sin6_addr.s6_addr, myaddr, sizeof myaddr);
The usual thing would be to use one of the hostname lookup routines and use the result of that, which is already in network byte order. How come you're dealing with hardcoded numeric IP addresses at all?
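For example, a standard POSIX call such as inet_pton(AF_INET6, "2001:888:0:2::2", &addr.sin6_addr) converts the textual form directly into the 16 network-byte-order bytes, and getaddrinfo() likewise returns sockaddr_in6 structures whose sin6_addr is already in network byte order, so no manual byte swapping is ever needed.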