WebCrypto AES-CBC outputting 256 bits instead of 128 bits

I'm playing with WebCrypto and I'm getting a confusing output.
The following test case encrypts a random 16-byte (128-bit) plain text with a newly generated 128-bit key and a 128-bit random IV, but it outputs 32 bytes (256 bits).
If I remember the details of AES-CBC correctly, it should output 128-bit blocks.
function test() {
    var data = new Uint8Array(16);
    window.crypto.getRandomValues(data);
    console.log(data);
    window.crypto.subtle.generateKey(
        {
            name: "AES-CBC",
            length: 128,
        },
        false,
        ["encrypt", "decrypt"]
    )
    .then(function(key){
        // returns a key object
        console.log(key);
        window.crypto.subtle.encrypt(
            {
                name: "AES-CBC",
                iv: window.crypto.getRandomValues(new Uint8Array(16)),
            },
            key,
            data
        )
        .then(function(encrypted){
            console.log(new Uint8Array(encrypted));
        })
        .catch(function(err){
            console.error(err);
        });
    })
    .catch(function(err){
        console.error(err);
    });
}
Example output:
Uint8Array(16) [146, 207, 22, 56, 56, 151, 125, 174, 137, 69, 133, 36, 218, 114, 143, 174]
CryptoKey {
algorithm: {name: "AES-CBC", length: 128}
extractable: false
type: "secret"
usages: (2) ["encrypt", "decrypt"]
__proto__: CryptoKey
Uint8Array(32) [81, 218, 52, 158, 115, 105, 57, 230, 45, 253, 153, 54, 183, 19, 137, 240, 183, 229, 241, 75, 182, 19, 237, 8, 238, 5, 108, 107, 123, 84, 230, 209]
Any idea what I've got wrong?
(Open to moving to crypto.stackexchange.com if more suitable)
I'm testing on Chrome 71 on MacOS at the moment.

Yes. The extra 16 bytes are the padding. Even when the message length is an exact multiple of the block size, padding is added; otherwise the decryption logic wouldn't know whether the last block contains padding.
The Web Cryptography API Specification says:
When operating in CBC mode, messages that are not exact multiples of
the AES block size (16 bytes) can be padded under a variety of padding
schemes. In the Web Crypto API, the only padding mode that is
supported is that of PKCS#7, as described by Section 10.3, step 2, of
[RFC2315].
This means that, unlike other language implementations (like Java), where you can specify NoPadding when you know that your input message text is always a multiple of the block size (128 bits for AES), the Web Cryptography API forces you to use PKCS#7 padding.
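For comparison, here is the choice Java gives you in the JCA transformation string (my own illustration, not part of the spec; the class name PaddingChoice is made up). WebCrypto's AES-CBC always behaves like the padded variant:

import javax.crypto.Cipher;

public class PaddingChoice {
    public static void main(String[] args) throws Exception {
        // In Java the caller picks the padding scheme; WebCrypto offers no such choice.
        Cipher noPadding = Cipher.getInstance("AES/CBC/NoPadding");     // input must already be a multiple of 16 bytes
        Cipher pkcs7     = Cipher.getInstance("AES/CBC/PKCS5Padding");  // JCA's name for PKCS#7-style padding with AES
        System.out.println(noPadding.getAlgorithm() + " vs " + pkcs7.getAlgorithm());
    }
}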
If we look into RFC2315:
Some content-encryption algorithms assume the input length is a
multiple of k octets, where k > 1, and let the application define a
method for handling inputs whose lengths are not a multiple of k
octets. For such algorithms, the method shall be to pad the input at
the trailing end with k - (l mod k) octets all having value k - (l mod
k), where l is the length of the input. In other words, the input is
padded at the trailing end with one of the following strings:
01 -- if l mod k = k-1
02 02 -- if l mod k = k-2
.
.
.
k k ... k k -- if l mod k = 0
The padding can be removed unambiguously since all input is padded and
no padding string is a suffix of another. This padding method is
well-defined if and only if k < 256; methods for larger k are an open
issue for further study.
Note: k k ... k k -- if l mod k = 0
If you refer to the subtle.encrypt signature, you have no way to specify the padding mode. This means the decryption logic always expects the padding.
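The arithmetic for the test case above then works out as follows (a small sketch of the RFC rule, with my own variable names):

int k = 16;                  // AES block size in bytes
int l = 16;                  // plaintext length in the question
int padLen = k - (l % k);    // = 16: a full block of padding bytes, each with value 16
int cipherLen = l + padLen;  // = 32 bytes, which is exactly the Uint8Array(32) observed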
However, in your case, if you use the Web Cryptography API only for encryption and your Python app (with NoPadding) only for decryption, I think you can simply strip off the last 16 bytes from the cipher text before feeding it to the Python app (this works because your plain text is an exact multiple of the block size). Here is a code sample just for demonstration purposes:
function test() {
    let plaintext = 'GoodWorkGoodWork';
    let encoder = new TextEncoder('utf8');
    let dataBytes = encoder.encode(plaintext);
    window.crypto.subtle.generateKey(
        {
            name: "AES-CBC",
            length: 128,
        },
        true,
        ["encrypt", "decrypt"]
    )
    .then(function(key){
        crypto.subtle.exportKey('raw', key)
        .then(function(expKey) {
            console.log('Key = ' + btoa(String.fromCharCode(...new Uint8Array(expKey))));
        });
        let iv = new Uint8Array(16);
        window.crypto.getRandomValues(iv);
        let ivb64 = btoa(String.fromCharCode(...new Uint8Array(iv)));
        console.log('IV = ' + ivb64);
        window.crypto.subtle.encrypt(
            {
                name: "AES-CBC",
                iv: iv,
            },
            key,
            dataBytes
        )
        .then(function(encrypted){
            console.log('Cipher text = ' + btoa(String.fromCharCode(...new Uint8Array(encrypted))));
        })
        .catch(function(err){
            console.error(err);
        });
    })
    .catch(function(err){
        console.error(err);
    });
}
The output of the above is:
IV = qW2lanfRo2H/3aSLzxIecA==
Key = 0LDBq5iz243HBTUE/lrM+A==
Cipher text = Wa4nIF0tt4PEBUChiH1KCkSOg6L2daoYdboEEf+Oh6U=
Now I take these as input, strip off the last 16 bytes of the cipher text, and still get the same message text after decryption using the following Java code:
package com.sapbasu.javastudy;

import java.nio.charset.StandardCharsets;
import java.util.Base64;

import javax.crypto.Cipher;
import javax.crypto.SecretKey;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;

public class EncryptCBC {

    public static void main(String[] arg) throws Exception {
        SecretKey key = new SecretKeySpec(Base64.getDecoder().decode(
            "0LDBq5iz243HBTUE/lrM+A=="),
            "AES");
        IvParameterSpec ivSpec = new IvParameterSpec(Base64.getDecoder().decode(
            "qW2lanfRo2H/3aSLzxIecA=="));
        Cipher cipher = Cipher.getInstance("AES/CBC/NoPadding");
        cipher.init(Cipher.DECRYPT_MODE, key, ivSpec);
        byte[] cipherTextWoPadding = new byte[16];
        System.arraycopy(Base64.getDecoder().decode(
            "Wa4nIF0tt4PEBUChiH1KCkSOg6L2daoYdboEEf+Oh6U="),
            0, cipherTextWoPadding, 0, 16);
        byte[] decryptedMessage = cipher.doFinal(cipherTextWoPadding);
        System.out.println(new String(decryptedMessage, StandardCharsets.UTF_8));
    }
}


Convert a hexadecimal string into list of integers

Similar to this question here but for flutter/dart: Easier / better way to convert hexadecimal string to list of numbers in Python?
I have a string with a hexadecimal value like 09F911029D74E35BD84156C5635688C0 and I want to return [9, 249, 17, 2, 157, 116, 227, 91, 216, 65, 86, 197, 99, 86, 136, 192]. In Python I can just use list(binascii.unhexlify('09F911029D74E35BD84156C5635688C0')). Is there such a function for Flutter/Dart?
Try the convert package to decode or encode your hex values into the desired form:
import 'package:convert/convert.dart';

var input = '09F911029D74E35BD84156C5635688C0';
var result = hex.decode(input); // [9, 249, 17, 2, 157, 116, 227, 91, 216, 65, 86, 197, 99, 86, 136, 192]
Let me know if this works.

Array prefix(while:) looping different number of times for same logic. (unable to comprehend)

Here is a shot of the playground where I am trying to print all the even numbers until an odd one is reached, in 3 ways, and I don't understand why the loop counts shown are different.
From my understanding it should loop only 7 times, because there are 6 even numbers at the start and the 7th element is an odd number, which must terminate the loop. But we got 8 times in one case! Why?
Also, the second and third ways are quite literally the same, yet there is still a difference in the number of times the loop was executed.
This, I don't understand.
===== EDIT =======
I was asked for text form of the code so here it is
import UIKit

// First way
var arr = [34, 92, 84, 78, 90, 42, 5, 89, 64, 32, 30].prefix { num in
    return num.isMultiple(of: 2)
}
print(arr)

// Second way
var arr2 = [34, 92, 84, 78, 90, 42, 5, 89, 64, 32, 30].prefix(while: {$0.isMultiple(of: 2)})
print(arr2)

// Third way (almost the second way but directly printing this time)
print([34, 92, 84, 78, 90, 42, 5, 89, 64, 32, 30]
    .prefix(while: {$0.isMultiple(of: 2)}))
The "(x times)" message isn't really saying how many times a line was executed in a loop.
Think of it like this: sometimes the playground wants to output multiple messages on the same line, because two things are being evaluated on that line. This happens most often in loops, where each iteration evaluates a new thing, and Xcode wants to display that on the same line but can't, so it can only display "(2 times)".
You can also see this happen in other places, like closures that are on the same line:
let x = Optional.some(1).map { $0 + 1 }.map { String($0) } // (3 times)
Here, the 3 things that are evaluated are:
x
$0 + 1
String($0)
Another example:
func f() { print("x");print("y") } // (2 times)
f()
Normally, each print statement would have some text displayed to its right, wouldn't it? But since they are on the same line, Xcode can only show "(2 times)".
The same thing happens in this case:
var arr2 = [34, 92, 84, 78, 90, 42, 5, 89, 64, 32, 30].prefix(while: {$0.isMultiple(of: 2)})
There are 8 things to evaluate. The closure result is evaluated 7 times, plus once for the value of arr2.
If you put the variable declaration and the closure on separate lines, then Xcode can show the value of the variable on a separate line, which is why the number is one less in the two other cases.
To illustrate this graphically:
You are forcing Xcode to combine the two boxed messages into one single line. Xcode can't usefully fit all that text onto one single line, so the best it can do is "(8 times)".
I believe these are just artifacts of how playgrounds work. With a little re-formatting I get "7 times" in all three variations:
let data = [34, 92, 84, 78, 90, 42, 5, 89, 64, 32, 30]

let arr = data.prefix {
    $0.isMultiple(of: 2)
}
print(arr)

let arr2 = data.prefix(while: {
    $0.isMultiple(of: 2)
})
print(arr2)

print(data.prefix(while: {
    $0.isMultiple(of: 2)
}))

Hash is not working on Array<UInt8> datatype in Swift

I'm trying to encode a payload using the CBOR.encode method from SwiftCBOR, which gives me a result in Array<UInt8> format. When that result is hashed, it gives the result in Array<UInt8> format as well, but I need the hash of it.
payload = {productName:"Philips Hue White A19 4-Pack 60W Equivalent Dimmable LED Smart Bulb"}
let payloadBytes = CBOR.encode(payload)
print("payloadbytes:",payloadBytes); // payloadbytes: [120, 83, 123,..., 98, 34, 125]
print("hashed payload",payloadBytes.sha512()); //[176, 26, 154,..., 85, 75, 77]
The expected output of hashing the payloadBytes is
6097B04A89468E7AAA8A1784B9CBAF0D2E968AFFC7F5AFE3B0B30CDF44A79EB41464EC773D0D729FA7E6AD6F7462EDEA09B34177CDB0D0DF4A6DDF3C6117C01E

How do I get the per-spec output of a LV-MaxSonar LV-EZ0 rangefinder with Android Things?

I'm using this sensor with a Raspberry Pi B 3 and Android Things 1.0
I have it connected as per these instructions
The spec for its output suggests I should receive "an ASCII capital “R”, followed by three ASCII character digits representing the range in inches up to a maximum of 255, followed by a carriage return (ASCII 13)"
I have connected to the device and configured it as follows (connection parameters map to the "Serial, 0 to Vcc, 9600 Baud, 81N" in that spec, I think):
PeripheralManager manager = PeripheralManager.getInstance();
mDevice = manager.openUartDevice(name);
mDevice.setBaudrate(9600);
mDevice.setDataSize(8);
mDevice.setParity(UartDevice.PARITY_NONE);
mDevice.setStopBits(1);
mDevice.registerUartDeviceCallback(mUartCallback);
I am reading from its buffer in that callback as follows:
public void readUartBuffer(UartDevice uart) throws IOException {
    // "The output is an ASCII capital “R”, followed by three ASCII character digits
    // representing the range in inches up to a maximum of 255, followed by a carriage return
    // (ASCII 13)"
    ByteArrayOutputStream bout = new ByteArrayOutputStream();
    final int maxCount = 1024;
    byte[] buffer = new byte[maxCount];
    int total = 0;
    int cycles = 0;
    int count;
    bout.write(23);
    while ((count = uart.read(buffer, buffer.length)) > 0) {
        bout.write(buffer, 0, count);
        total += count;
        cycles++;
    }
    bout.write(0);
    byte[] buf = bout.toByteArray();
    String bufStr = Arrays.toString(buf);
    Log.d(TAG, "Got " + total + " in " + cycles + ":" + buf.length + "=>" + bufStr);
}

private UartDeviceCallback mUartCallback = new UartDeviceCallback() {
    @Override
    public boolean onUartDeviceDataAvailable(UartDevice uart) {
        // Read available data from the UART device
        try {
            readUartBuffer(uart);
        } catch (IOException e) {
            Log.w(TAG, "Unable to access UART device", e);
        }
        // Continue listening for more interrupts
        return true;
    }
};
When I connect the sensor and use this code I get readings back of the form:
05-10 03:59:59.198 1572-1572/org.tomhume.blah D/LVEZ0: Got 7 in 1:9=>[23, 43, 0, 6, -77, -84, 15, 0, 0]
05-10 03:59:59.248 1572-1572/org.tomhume.blah D/LVEZ0: Got 7 in 1:9=>[23, 43, 0, 6, 102, 101, 121, 0, 0]
05-10 03:59:59.298 1572-1572/org.tomhume.blah D/LVEZ0: Got 7 in 1:9=>[23, 43, 0, 6, 102, 99, 121, 0, 0]
The initial 23 and the final 0 on each line are values I have added. Between them, instead of the expected R\d\d\d\13, I'm getting 7 signed bytes. The variance in some of these byte values appears when I move my hand towards and away from the sensor - i.e. the values I'm getting back vary in a way I might expect, even though the output is completely wrong in form and size.
Any ideas what I'm doing wrong here? I suspect it's something extremely obvious, but I'm stumped. Examining the binary values themselves it doesn't look like bits are shifted around by e.g. a mistake in protocol configuration.
I can't test it on hardware right now, but the algorithm for reading serial data from the LV-MaxSonar LV-EZ0 rangefinder should be slightly different: first find the ASCII capital "R" in the UART buffer, because the LV-EZ0 starts sending data as soon as it is powered on and is not synchronized with the Raspberry Pi board, so the first received byte may not be "R", and one buffer may contain fewer or more than the 5 bytes of a single LV-MaxSonar response (it can hold several responses, or only part of one). Only then try to parse the 3 character digits representing the range in inches and the terminating CR symbol. If the 3 digit bytes and the CR are not all present, start looking for the capital "R" again, because that LV-MaxSonar response is broken (which can also happen). For an example, take a look at the GPS contrib driver, especially processBuffer() of NmeaGpsModule.java and processMessageFrame() of NmeaParser.java - the difference is that you have only one "sentence", and its start is not "GPRMC" but just the "R" symbol.
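A rough sketch of that framing logic could look like the following (my own illustrative names, not taken from the GPS driver; you would call it with the buffer and count from each uart.read()):

// Accumulates characters of the current "R###\r" frame across reads.
private final StringBuilder frame = new StringBuilder();

private void processBuffer(byte[] buffer, int count) {
    for (int i = 0; i < count; i++) {
        char c = (char) (buffer[i] & 0xFF);
        if (c == 'R') {
            // Start of a (possibly new) reading; drop anything half-parsed.
            frame.setLength(0);
            frame.append(c);
        } else if (frame.length() > 0) {
            if (c == '\r' && frame.length() == 4) {
                // Complete frame, e.g. "R123": parse the three digits as inches.
                int inches = Integer.parseInt(frame.substring(1));
                Log.d(TAG, "Range: " + inches + " inches");
                frame.setLength(0);
            } else if (Character.isDigit(c) && frame.length() < 4) {
                frame.append(c);
            } else {
                // Broken frame: resynchronize on the next 'R'.
                frame.setLength(0);
            }
        }
    }
}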
And another important thing: there seems to be no capital "R" (ASCII 82) in your response buffers, which may point to an issue with the LV-MaxSonar LV-EZ0 connection. Your wiring should be exactly like the top-right picture on page 6 of the datasheet: Independent Sensor Operation: Serial Output Sensor Operation. Also try, for example, connecting the LV-EZ0 to a USB<->UART cable with a power output (e.g. like that) and testing it in a terminal on your PC.

How to prove that one certificate is the issuer of another certificate

I have two certificates. One certificate is the issuer of the other.
How can I check with Java code that my issuer certificate is really the issuer?
I know that the AuthorityKeyIdentifier of my certificate and the SubjectKeyIdentifier of the issuer certificate must be the same. I checked and they are the same.
But with Java code I get this result:
CertificateFactory certFactory = CertificateFactory.getInstance("X.509");
InputStream usrCertificateIn = new FileInputStream("/usr.cer");
X509Certificate cert = (X509Certificate) certFactory.generateCertificate(usrCertificateIn);
InputStream SiningCACertificateIn = new FileInputStream("/siningCA.cer");
X509Certificate issuer = (X509Certificate) certFactory.generateCertificate(SiningCACertificateIn);
byte[] octets = (ASN1OctetString.getInstance(cert.getExtensionValue("2.5.29.35")).getOctets());
System.out.println(Arrays.toString(octets) + " bouncycastle, AuthorityKeyIdentifier");
System.out.println(Arrays.toString(cert.getExtensionValue("2.5.29.35")) + "java.security, AuthorityKeyIdentifier");
octets = ASN1OctetString.getInstance(issuer.getExtensionValue("2.5.29.14")).getOctets();
System.out.println((Arrays.toString(octets) + "bouncycastle, SubjectKeyIdentifie "));
System.out.println(Arrays.toString(issuer.getExtensionValue("2.5.29.14")) + "java.security, SubjectKeyIdentifie ");
And the result is this:
[48, 22, -128, 20, 52, -105, 49, -70, -24, 78, 127, -113, -25, 55, 39, 99, 46, 6, 31, 66, -55, -86, -79, 113] bouncycastle, AuthorityKeyIdentifier
[4, 24, 48, 22, -128, 20, 52, -105, 49, -70, -24, 78, 127, -113, -25, 55, 39, 99, 46, 6, 31, 66, -55, -86, -79, 113]java.security, AuthorityKeyIdentifier
And here is the other byte array, which MUST BE THE SAME, BUT IT IS NOT - extra bytes are added at the beginning of the array.
[4, 20, 52, -105, 49, -70, -24, 78, 127, -113, -25, 55, 39, 99, 46, 6, 31, 66, -55, -86, -79, 113]bouncycastle, SubjectKeyIdentifie
[4, 22, 4, 20, 52, -105, 49, -70, -24, 78, 127, -113, -25, 55, 39, 99, 46, 6, 31, 66, -55, -86, -79, 113]java.security, SubjectKeyIdentifie
Question 1)
Can I calculate the key identifiers to get the same arrays?
Question 2)
Is there another way to prove that one certificate is the issuer of another certificate?
AuthorityKeyIdentifier and SubjectKeyIdentifier are defined differently:
AuthorityKeyIdentifier ::= SEQUENCE {
    keyIdentifier             [0] KeyIdentifier           OPTIONAL,
    authorityCertIssuer       [1] GeneralNames            OPTIONAL,
    authorityCertSerialNumber [2] CertificateSerialNumber OPTIONAL }

SubjectKeyIdentifier ::= KeyIdentifier

KeyIdentifier ::= OCTET STRING
(sections 4.2.1.1 and 4.2.1.2 of RFC 5280)
Thus, merely comparing the extension values cannot work; instead you have to extract the KeyIdentifier contents and compare them, e.g. using BouncyCastle ASN.1 helper classes.
BTW, the actual key identifier bytes are only
52, -105, 49, -70, -24, 78, 127, -113, -25, 55, 39, 99, 46, 6, 31, 66, -55, -86, -79, 113
The 4, 20 before that indicates an OCTET STRING, 20 bytes long. In the AuthorityKeyIdentifier the 4 is replaced by the tag [0] (byte -128) due to implicit tagging.
The 48, 22 before that in your AuthorityKeyIdentifier means (constructed) SEQUENCE, 22 bytes long.
Etc. etc.
Thus,
Can I calculate the key identifiers to get the same arrays?
Yes, drill down to the actual KeyIdentifier OCTET STRING values.
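For example, something along these lines should work (a sketch building on the question's own code; it assumes the BouncyCastle classes AuthorityKeyIdentifier and SubjectKeyIdentifier from org.bouncycastle.asn1.x509, plus java.util.Arrays, and cert and issuer are the X509Certificate objects loaded above):

// Unwrap the extra OCTET STRING that getExtensionValue() adds, then parse the
// extension-specific structure and pull out the raw KeyIdentifier bytes.
byte[] akiOctets = ASN1OctetString.getInstance(
        cert.getExtensionValue("2.5.29.35")).getOctets();
byte[] akiKeyId = AuthorityKeyIdentifier.getInstance(akiOctets).getKeyIdentifier();

byte[] skiOctets = ASN1OctetString.getInstance(
        issuer.getExtensionValue("2.5.29.14")).getOctets();
byte[] skiKeyId = SubjectKeyIdentifier.getInstance(skiOctets).getKeyIdentifier();

System.out.println(Arrays.equals(akiKeyId, skiKeyId)); // true when the AKI matches the issuer's SKI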
Is there another way to prove that one certificate is the issuer of another certificate?
Well, you can check whether the certificate was signed by the private key associated with the assumed issuer certificate, by verifying its signature against that certificate's public key.
PS: Concerning the question in the comments
Is the key identifier's length always 20? Is it fixed? Maybe not?
No, it is not. The aforementioned RFC 5280 says:
For CA certificates, subject key identifiers SHOULD be derived from
the public key or a method that generates unique values. Two common
methods for generating key identifiers from the public key are:
(1) The keyIdentifier is composed of the 160-bit SHA-1 hash of the
value of the BIT STRING subjectPublicKey (excluding the tag,
length, and number of unused bits).
(2) The keyIdentifier is composed of a four-bit type field with
the value 0100 followed by the least significant 60 bits of
the SHA-1 hash of the value of the BIT STRING
subjectPublicKey (excluding the tag, length, and number of
unused bits).
Other methods of generating unique numbers are also acceptable.
I assume your CA uses method 1 (160 bit = 20 bytes) but this is merely a common method, not even something explicitly recommended, let alone required. Thus, no, you cannot count on the identifier being 20 bytes long.
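If you want to check which method your CA used, method 1 is easy to reproduce (a hedged sketch; SubjectPublicKeyInfo is the BouncyCastle class from org.bouncycastle.asn1.x509, MessageDigest comes from java.security, and issuer is the CA certificate from the question):

// Method 1: the key identifier is the SHA-1 hash of the subjectPublicKey
// BIT STRING contents (without tag, length and unused-bits octet).
SubjectPublicKeyInfo spki = SubjectPublicKeyInfo.getInstance(issuer.getPublicKey().getEncoded());
byte[] publicKeyBits = spki.getPublicKeyData().getBytes();
byte[] derivedKeyId = MessageDigest.getInstance("SHA-1").digest(publicKeyBits);
// If the CA used method 1, derivedKeyId equals the 20-byte KeyIdentifier shown above.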
PPS: Concerning the question in the comments
Isn't the signature the only way to truly prove one cert is issued by another?
That is no proof of the issuer/issued-by relation either; it merely proves (at least to a certain degree) that the private key associated with the assumed issuer certificate signed the inspected certificate, but there are situations where the same private/public key pair is used in multiple certificates.
In essence you need to do multiple complementary tests, and even then you have to trust the CA not to do weird stuff. Not long ago, e.g., Swisscom changed one of their CA certificates to include an additional extension or critical attribute (I'd have to look up the details; I think someone certifying them demanded that change), and by the certificate signature verification test old signer certificates now seem to be issued by the new CA certificate, even though the owners of the signer certificates might not be aware of that new extension/critical attribute.
So eventually real life simply is not as simple as one would like it to be...
To prove one certificate was issued by another, you should prove it was signed by the private key corresponding to the public key in the issuing certificate.
Let's call 2 certificates caCert and issuedCert. These are of type X509Certificate.
The Java code to prove issuedCert is signed by the entity represented by caCert is quite simple.
PublicKey caPubKey = caCert.getPublicKey();
issuedCert.verify(caPubKey);
If the verify method returns without throwing an exception, then issuedCert was signed by the private key corresponding to the public key in caCert.