2048-bit RSA signature key generation over the RF (contactless) interface

I'm conducting a key generation test for RSA signatures.
An error occurs during 2048-bit key generation over the contactless interface, and I want to know the reason.
My test varies two factors:
key size: 1024 bit, 2048 bit
interface: contact, contactless
The results of my test are as follows:
1024 bit, contact interface -> Success
1024 bit, contactless interface -> Success
2048 bit, contact interface -> Success
2048 bit, contactless interface -> FAIL (sometimes Success)
In the fourth case, the card returns SW 6300, which comes from the JCRE.
Someone said that a contactless terminal may cut the power while a 2048-bit key is being generated, but I'm not sure. Has anyone heard about that?

Question about Message Signaled Interrupts (MSI) on x86 LAPIC system

Hi, I'm writing a kernel and plan to use MSI interrupts for PCI devices.
However, I'm also quite confused by the documentation.
My understanding of MSI is as follows:
From the PCI device's point of view:
The documentation indicates that I need to find Capability ID = 0x05 to locate 3 registers: the Message Control (MCR), Message Address (MAR) and Message Data (MDR) registers.
The MCR provides control functionality for MSI interrupts, the MAR holds the physical address the PCI device will write to once an interrupt occurs, and the MDR forms the actual data it will write to that address.
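To check that I've understood the detection part, here is a rough sketch (pci_read_config is a hypothetical helper standing in for however config space is read, so all names here are my own):
# Sketch: walking the PCI capability list to find the MSI capability (ID 0x05).
# pci_read_config(offset, size) is a hypothetical config-space read helper.
def find_msi_capability(pci_read_config):
    status = pci_read_config(0x06, 2)
    if not (status & 0x10):                    # bit 4: capabilities list present
        return None
    ptr = pci_read_config(0x34, 1) & 0xFC      # capabilities pointer
    while ptr:
        cap_id = pci_read_config(ptr, 1)
        if cap_id == 0x05:                     # MSI capability found
            # MCR sits at ptr+2, MAR at ptr+4; the MDR follows at ptr+8,
            # or at ptr+0xC when the 64-bit address bit is set in the MCR.
            return ptr
        ptr = pci_read_config(ptr + 1, 1) & 0xFC  # next capability
    return None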
From the CPU's point of view:
The documentation shows that the Message Address register contains a fixed top of 0xFEE, followed by the destination ID (LAPIC ID) and other control bits, and that the Message Data register contains the interrupt vector along with fields such as the delivery and trigger mode.
After reading all of this, I am wondering: if the APIC ID is 0x0, would the Message Address conflict with the Local APIC memory mapping, given that the addresses FEE00000~FEE00010 are reserved?
In addition, is it true that the vector number in the MDR corresponds to the IDT vector number? In other words, if I set MAR = 0xFEE0000C (Destination ID = 0, using a logical APIC ID) and MDR = 0x0032 (edge trigger, vector = 50) and enable the MSI interrupt, then once the device issues an interrupt, would the CPU run the handler pointed to by IDT[50]? After that, do I write 0 to the EOI register to end it?
Finally, according to the documentation, the upper 32 bits of the MAR are not used? Can anyone help with this?
Thanks a lot!
Your understanding of how to detect and program MSI in a PCI (or PCIe) device is correct.*
The message address controls the destination (which CPU the interrupt is sent to), while the message data contains the vector number. For normal interrupts, all bits of the message data should be 0 except for the low 8 bits, which contain the vector.
The vector is an index into the IDT, so if the message data is 0x0032, the interrupt is delivered through entry 50 of the IDT.**
If the Destination ID in an interrupt message is 0, the Message Address of the MSI does match the default address of the local APIC, but they do not conflict, because the APIC can only be written by the CPU and MSIs can only be written by devices.
On x86 platforms, the upper 32 bits of the message address must be 0. This can be done by setting the upper part of the message address to 0 or by programming the device to use a 32-bit message address (in which case the upper message address register is not used). The PCI spec was designed to work with systems where 64-bit MSI addresses are used, but x86 systems never use the upper 32 bits of the message address.
Reprogramming the APIC base address by writing to the APIC_BASE MSR does not affect the address range used for MSI; it is always 0xFEExxxxx.
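To make the bit layouts concrete, here is a small sketch (the helper names are mine, not from any spec) that composes the MAR and MDR values used in the question:
# Sketch: composing x86 MSI Message Address / Message Data values.
def msi_address(dest_id, redirection_hint=0, logical_dest_mode=0):
    # 0xFEE in bits 31:20, destination ID in bits 19:12, RH in bit 3, DM in bit 2.
    return (0xFEE << 20) | ((dest_id & 0xFF) << 12) \
        | (redirection_hint << 3) | (logical_dest_mode << 2)

def msi_data(vector, delivery_mode=0, level_triggered=0):
    # Vector in bits 7:0, delivery mode in bits 10:8, trigger mode in bit 15.
    return (vector & 0xFF) | ((delivery_mode & 0x7) << 8) | (level_triggered << 15)

# The values from the question: destination ID 0, logical mode, vector 50.
assert msi_address(0, redirection_hint=1, logical_dest_mode=1) == 0xFEE0000C
assert msi_data(50) == 0x0032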
* You should also look at the MSI-X capability, because some devices support MSI-X but not MSI. MSI-X is a bit more flexible, which inevitably makes it a bit more complicated.
** When using the MSI capability, the message data isn't exactly the value in the Message Data Register (MDR). The MSI capability allows the device to use several contiguous vectors. When the device sends an interrupt message, it replaces the low bits of the MDR with a different value depending on the interrupt cause within the device.
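As an illustration of that substitution (the numbers are invented for the example): if the MDR is programmed with base vector 0x50 and the device has been granted four messages, it overwrites the low two bits with its interrupt cause:
# Sketch: with Multiple Message Enable granting 4 vectors, the device
# replaces the low log2(4) = 2 bits of the MDR with the interrupt cause.
mdr = 0x50  # base vector; must be aligned to the number of granted vectors
for cause in range(4):
    vector = (mdr & ~0x3) | cause
    print(f"cause {cause} -> vector {vector:#04x}")  # 0x50 .. 0x53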

What is the correct behavior of C_Decrypt in PKCS#11?

I am using C_Decrypt with the CKM_AES_CBC_PAD mechanism. I know that my ciphertext which is 272 bytes long should actually decrypt to 256 bytes, which means a full block of padding was added.
I know that, according to the standard, when invoking C_Decrypt with a NULL output buffer the function may return an output length somewhat longer than the actual required length. In particular, when padding is used this is understandable, as the function can't know how many padding bytes are in the final block without carrying out the actual decryption.
So the question is: given that I know I should get exactly 256 bytes back, as in the scenario explained above, does it make sense that I still get a CKR_BUFFER_TOO_SMALL error despite passing a 256-byte buffer? (To make it clear: I am indicating that this is the length of the output buffer in the appropriate output-buffer-length parameter; see the parameters of C_Decrypt to observe what I mean.)
I am encountering this behavior with a Safenet Luna device and am not sure what to make of it. Is it my code's fault for not querying for the length first by passing NULL in the output buffer, or is this a bug on the HSM/PKCS11 library side?
One more thing I should perhaps mention is that when I provide a 272-byte (256+16) output buffer, the call succeeds and I get back my expected plaintext, but also the padding block, i.e. 16 final bytes with the value 0x10. However, the output length is updated correctly to 256, not 272. This also proves that I am not accidentally using CKM_AES_CBC instead of CKM_AES_CBC_PAD, which I suspected for a moment as well :)
I have used the CKM_AES_CBC_PAD mechanism with C_Decrypt in the past. You have to make two calls to C_Decrypt (the first to get the size of the plaintext, the second to do the actual decryption). See the documentation here, which talks about determining the length of the buffer needed to hold the plaintext.
Below is the step-by-step code to show the behavior of decryption:
// Define the decryption mechanism
CK_MECHANISM mechanism = new CK_MECHANISM(CKM.AES_CBC_PAD);
// Will receive the size of the plaintext
LongRef lRefDec = new LongRef();
// Get ready to decrypt
CryptokiEx.C_DecryptInit(session_1, mechanism, key_handleId_in_hsm);
// 1st call to C_Decrypt: pass a null output buffer to query the plaintext size
CryptokiEx.C_Decrypt(session_1, your_cipher, your_cipher.length, null, lRefDec);
// Allocate a buffer large enough to hold the plaintext
byte[] clearText = new byte[(int) lRefDec.value];
// 2nd call to C_Decrypt: the actual decryption into clearText
CryptokiEx.C_Decrypt(session_1, your_cipher, your_cipher.length, clearText, lRefDec);
Sometimes decryption fails because the input to encryption misled the decryption algorithm (the encryption succeeds, but the corresponding decryption fails). So it is important not to send raw bytes directly to the encryption algorithm; encoding the input data with a UTF-8/16 scheme prevents it from being misinterpreted as network control bytes.

Counter block problem using AES-CTR decrypt from pycryptodome

So I am trying to decrypt a connection over SSH using pycryptodome.
I have the key and the IV, extracted from memory (I am working inside a virtual environment), which are 100% correct and were used for encrypting the data.
Now I want to decrypt the stuff afterwards.
My code looks as follows:
key="1A0A3EBF96277C6109632C5D96AC5AF890693AC829552F33769D6B1A4275EAE2"
iv="EB6444718D73887B1DF8E1D5E6C3ECFC"
key_hex=binascii_a2b_hex(key)
iv_hex=binascii_a2b_hex(iv)
ctr = Counter.new(128, prefix=iv_hex, initial_value = 0)
aes = AES.new(key, AES.MODE_CTR, counter = ctr)
decrypted = aes.decrypt(binascii.a2b_hex(cipher).rstrip())
print(decrypted)
The problem is now that the counter is too big (32 bytes) for the block size, which is 16 bytes in AES. However, I found out that you need the IV as the prefix in your counter if you want to decrypt AES-CTR, plus the initial_value set to 0.
Therefore I already use up 16 bytes with the prefix alone. When I now want to set the counter's initial value to 0, it does not work.
Is it even possible to decrypt AES-CTR with a 16-byte IV using pycryptodome? Or maybe one of you sees my error.
Any help would be much appreciated.
Thanks in advance!
Edit: Thanks to SquareRootOfTwentyThree I solved the pycryptodome problem. Unfortunately the decryption is still not working, so I opened a new thread: openssh/opensshportable, which key should I extract from memory?
As per Chapter 4 of RFC 4344, SSH uses SDCTR mode (stateful-decryption counter mode), which means that the counter block is a 128-bit counter, starting with the value encoded in the IV in network byte order, with no fixed parts (unlike NIST CTR mode).
With PyCryptodome, you do that with:
aes = AES.new(key_hex, AES.MODE_CTR, initial_value=iv_hex, nonce=b'')
Note: there seems to be an error in your code - you initialize the cipher with key (hexadecimal string) and not key_hex (bytes).
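Putting it together, a minimal corrected version of the snippet would be (ciphertext_hex stands in for the captured data, which isn't shown here):
import binascii
from Crypto.Cipher import AES

key_hex = binascii.a2b_hex("1A0A3EBF96277C6109632C5D96AC5AF890693AC829552F33769D6B1A4275EAE2")
iv_hex = binascii.a2b_hex("EB6444718D73887B1DF8E1D5E6C3ECFC")

# The whole 16-byte IV is the initial counter value; there is no fixed
# prefix, hence nonce=b''.
aes = AES.new(key_hex, AES.MODE_CTR, initial_value=iv_hex, nonce=b'')
decrypted = aes.decrypt(binascii.a2b_hex(ciphertext_hex))
print(decrypted)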

Device tree address and reg and property

I'm struggling to understand where to get the address of a device in a device tree. As an example, how do I know that I should set <0x00900000 0x20000> here?
Is memory-mapped I/O done in the hardware (the processor itself) or in software, and do I just have to pass the right address in the device tree?
Is the address hardcoded in the processor, or can I just set an arbitrary address? I cannot find anything in my reference manual about setting a certain address in the device tree.
These kinds of addresses can be found in the reference manual of the processor.
You can find the link here.
Take a look at chapter 48 (OCRAM On-chip RAM Memory Controller) and more specifically at section 48.2.1 (page 4118):
The total on-chip RAM size for the chip is 128 Kbytes, organized as 16K x 64 bits, mapped from 0x00900000 to 0x0091FFFF
This is where the values <0x00900000 0x20000> in the dtsi file come from: they are the base address and the size of the region.
These values are provided in the dts/dtsi files by the chip maker.
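As a quick sanity check (plain arithmetic, nothing board-specific), the second cell of the reg property is the region size, and it lines up with the range quoted from the manual:
# reg = <0x00900000 0x20000>: first cell = base address, second cell = size.
base = 0x00900000
size = 0x20000                    # 128 KiB, the "128 Kbytes" from the manual
assert size == 128 * 1024
# The last valid byte of the region matches the manual's 0x0091FFFF.
assert base + size - 1 == 0x0091FFFF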

iPhone 4 unlocking: NCK brute-force research

Every iPhone has a NORID (8 bytes) and a CHIPID (12 bytes) unique to each phone.
Where are these stored? NOR? Seczone? Can they be dumped?
An iPhone requires an NCK to unlock. From what I understand the NCK is 15 characters.
Is it numeric, alphabetic, or alphanumeric?
The security token used to check whether the NCK is valid is stored encrypted at +0x400 in the seczone.
Is this correct?
Based on what I've read from dogbert's blog, the security token is created using a method similar to the following pseudocode:
deviceKey = SHA1_hash(norID+chipID)
nckKey = custom_hash(norID, chipID, SHA1_hash(NCK), deviceKey)
rawSignature = generateSignature(SHA1_hash(norID+chipID), SHA1_hash(chipID))
Signature = RSA_encrypt(rawSignature, RSAkey)
security token = TEA_encrypt_cbc(Signature, nckKey)
Is the pseudocode correct? If it is, what is the custom hash that is being used? What is being used to generate the rawSignature? What is the RSAKey that is being used? Is it a public key that can be found on the phone?
If the above pseudocode is correct, then we would have to brute-force all 15-character combinations to find the correct NCK, right? Because even though we are able to recover the NORID and CHIPID, we will not be able to use that information to reduce the number of characters we need to find.
Correct?
Newer generations of iPhone OS contain a wildcardticket that is generated during the activation process,
but generating this should be no problem once we have the NCK, right? Correct?
The NOR ID is the hardware chip ID burned into the baseband chip of the device. I don't know where you are getting the 8 bytes from, but it is actually burned into the chip, and the size is 64 bytes for the iPhone 3G and 128 bytes for the iPhone 3GS.
The NCK is a 15-digit number (base 10, so it is not alphanumeric), i.e. the maximum NCK would be 999999999999999.
Your device key is wrong.
It should read:
deviceUniqueKey = SHA(NCK + CHIPID + NORID)
teaEncryptedData = &seczone[0x400]
rsaEncryptedData = TEA_DECRYPT(teaEncryptedData, deviceUniqueKey)
validRSAMessage = RSA_DECRYPT(rsaEncryptedData, rsaKey)
When your NCK produces a valid RSA message, you have found the correct NCK to unlock your device.
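To make that check concrete, here is a rough Python sketch of the brute force; tea_cbc_decrypt, rsa_decrypt, looks_like_valid_message and TOKEN_LEN are hypothetical placeholders, since none of them are specified here:
import hashlib

def find_nck(seczone, chip_id, nor_id, rsa_key):
    # 10**15 candidates: infeasible in practice; this only illustrates
    # the per-candidate check described above.
    tea_encrypted = seczone[0x400:0x400 + TOKEN_LEN]   # TOKEN_LEN is hypothetical
    for candidate in range(10**15):
        nck = str(candidate).zfill(15).encode()
        device_unique_key = hashlib.sha1(nck + chip_id + nor_id).digest()
        rsa_encrypted = tea_cbc_decrypt(tea_encrypted, device_unique_key)  # hypothetical
        message = rsa_decrypt(rsa_encrypted, rsa_key)                      # hypothetical
        if looks_like_valid_message(message):                              # hypothetical
            return nck
    return None
Even at millions of candidates per second, 10**15 attempts works out to decades, which is why knowing the NORID and CHIPID alone does not shorten the search.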
Here is the Python script that can decrypt iPhone baseband memory, so you will be able to get all the NCK tokens, like:
CHIP ID
NOR ID
IMEI hashes
TEA hashes
This script was only used for old basebands (the S-Gold chipset), but you can always make your own.
Also, here are some ways to dump the iPhone baseband to a file, using the iPhone core dump function or another script such as a NOR dumper. Hope this helps.