APDU: "Conditions of use not satisfied" (69 85) while calculate signature - certificate

With a Gemalto smart card (IAS ECC), I would like to compute a signature using a private key stored on the card. For this, I use the following APDU commands:
// Verify PIN
00 20 00 01 04 31 32 33 34
-> 90 00
// Create a context for security operation
00 22 41 B6 06 84 01 84 80 01 12
-> 90 00
// Set the hash of the document
00 2A 90 A0 14 HASH OF DOCUMENT
-> 69 85
// Calculating the signature
00 2A 9E 9A 80
-> 69 85
My problem is the following: the last two commands return the error code "69 85", meaning "Conditions of use not satisfied".
I have already tried several solutions, but I always get the same error. How can I resolve it? What can this code mean?
After some tests, I discovered something interesting. When I replace CLA "00" with "10", the smart card returns a different response:
// Create a context for security operation
00 22 41 B6 06 84 01 84 80 01 12
// Verify PIN
00 20 00 01 04 31 32 33 34
// Calculating the signature (I replace "00" by "10")
10 2A 9E 9A 23 30 21 30 09 06 05 2B 0E 03 02 1A 05 00 04 14 01 02 03 04 05 06 07 08 09 0A 0B 0C 0D 0E 0F 10 12 13 14 15
I don't know whether this is the right solution: the smart card returns "90 00", but it should return the content of my signature!
Thank you for your help!
Best regards

You are getting SW 6985 for
// Set the hash of the document
00 2A 90 A0 14 HASH OF DOCUMENT
-> 69 85
because you have not set the correct context in the current security environment.
Let me explain below.
First, you performed the VERIFY PIN command, which was successful:
// Verify PIN
00 20 00 01 04 31 32 33 34
-> 90 00
Then you performed the MSE SET command, where you set the security context. For this you have to understand how the SE works (please refer to section 3.5 of IAS ECC v1.01).
At personalisation time, the personalisation agent creates SDOs (Secure Data Objects) inside the card. The references to these SDOs are recorded in the SE (Security Environment) in the form of CRTs (Control Reference Templates).
// Create a context for security operation
00 22 41 B6 06 84 01 84 80 01 12
-> 90 00
Generally speaking, the MSE SET command will return SW 9000 even if the SDO reference is wrong: it only returns SW 6A80 when the template itself is malformed, not when the reference is wrong. (The SDO reference is passed in tag 84.)
After that you performed the PSO HASH command:
// Set the hash of the document
00 2A 90 A0 14 HASH OF DOCUMENT
-> 69 85
where the card returns SW 6985 (Conditions of use not satisfied). This indicates that the algorithm or the SDO reference used for calculating the hash may be wrong, which most likely happens because the SDO reference sent in the MSE SET command is not available on the card.
Detecting an error coming from MSE SET can be tricky, since it returns SW 9000.
For this kind of situation you have to check the personalisation file carefully and match the MSE SET command against the available SDO references and supported algorithms.
It may also be useful to put a default context (e.g., cryptographic algorithms or security operations) into the current SE in order to reduce the number of MSE SET exchanges.
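As a side note, it can help to drive this sequence from code and stop at the first status word that is not 90 00, so that a wrong SDO reference shows up at the PSO HASH step instead of being masked by the MSE SET's 90 00. A minimal sketch using javax.smartcardio from Scala, with the PIN, SDO reference and algorithm values copied from the question and a placeholder hash:

import javax.smartcardio.{CommandAPDU, TerminalFactory}

object SignatureFlow {
  def hex(s: String): Array[Byte] = s.split(" ").map(Integer.parseInt(_, 16).toByte)

  def main(args: Array[String]): Unit = {
    val terminal = TerminalFactory.getDefault.terminals().list().get(0) // first reader
    val card     = terminal.connect("*")
    val channel  = card.getBasicChannel

    // Transmit one APDU and fail fast when the status word is not 90 00.
    def send(name: String, apdu: Array[Byte]): Array[Byte] = {
      val resp = channel.transmit(new CommandAPDU(apdu))
      println(f"$name%-8s SW=${resp.getSW}%04X")
      require(resp.getSW == 0x9000, s"$name failed")
      resp.getData
    }

    send("VERIFY",   hex("00 20 00 01 04 31 32 33 34"))       // PIN "1234"
    send("MSE SET",  hex("00 22 41 B6 06 84 01 84 80 01 12")) // SDO ref 84, algorithm 12
    val docHash = new Array[Byte](20)                         // placeholder 20-byte document hash
    send("PSO HASH", hex("00 2A 90 A0 14") ++ docHash)
    val signature = send("PSO CDS", hex("00 2A 9E 9A 80"))    // Le = 0x80, 128-byte signature
    println(signature.map(b => f"$b%02X").mkString(" "))

    card.disconnect(false)
  }
}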


How to retrieve details of the console port used by BIOS using efivars?

As part of a Linux installation, I would like to set the console device properties (for example, console=ttyS0,115200n1) via the kernel cmdline on an Intel-based platform.
There is no VGA console, only serial consoles via a COM interface.
On these systems the BIOS already has the required settings to interact over the appropriate serial port.
I see that EFI has the variables ConIn, ConOut and ErrOut, which I can read from /sys/firmware/efi, but I am unable to decode their contents.
Is it possible to identify which COM port is being used by the BIOS by examining the EFI variables?
Example of the EFI variable on my box:
root@linux:~# efivar -p -n 8be4df61-93ca-11d2-aa0d-00e098032b8c-ConOut
GUID: 8be4df61-93ca-11d2-aa0d-00e098032b8c
Name: "ConOut"
Attributes:
Non-Volatile
Boot Service Access
Runtime Service Access
Value:
00000000 02 01 0c 00 d0 41 03 0a 00 00 00 00 01 01 06 00 |.....A..........|
00000010 00 1a 03 0e 13 00 00 00 00 00 00 c2 01 00 00 00 |................|
00000020 00 00 08 01 01 03 0a 18 00 9d 9a 49 37 2f 54 89 |...........I7/T.|
00000030 4c a0 26 35 da 14 20 94 e4 01 00 00 00 03 0a 14 |L.&5.. .........|
00000040 00 53 47 c1 e0 be f9 d2 11 9a 0c 00 90 27 3f c1 |.SG..........'?.|
00000050 4d 7f 01 04 00 02 01 0c 00 d0 41 03 0a 00 00 00 |M.........A.....|
00000060 00 01 01 06 00 00 1f 02 01 0c 00 d0 41 01 05 00 |............A...|
00000070 00 00 00 03 0e 13 00 00 00 00 00 00 c2 01 00 00 |................|
00000080 00 00 00 08 01 01 03 0a 18 00 9d 9a 49 37 2f 54 |............I7/T|
00000090 89 4c a0 26 35 da 14 20 94 e4 01 00 00 00 03 0a |.L.&5.. ........|
000000a0 14 00 53 47 c1 e0 be f9 d2 11 9a 0c 00 90 27 3f |..SG..........'?|
000000b0 c1 4d 7f ff 04 00 |.M.... |
root@linux:~#
The contents of the ConOut variable are described in the UEFI specification - current version (2.8B):
3.3 - globally defined variables:
| Name | Attribute | Description |
|---------|------------|------------------------------------------------|
| ConOut | NV, BS, RT | The device path of the default output console. |
For information about device paths, we have:
10 - Protocols — Device Path Protocol:
Apart from the initial description of device paths, table 44 shows you the Generic Device Path Node structure, from which we can start decoding the contents of the variable.
The type of the first node is 0x02, telling us this node describes an ACPI device path with a length of 0x000c bytes. Now jump down to 10.3.3 - ACPI Device Path and table 52, which tells us 1) that this is the right table (SubType is 0x01) and 2) that the default ConOut has a _HID of 0x0a0341d0 (the EISA ID for PNP0A03) and a _UID of 0.
The next node has a type of 0x01 - a Hardware Device Path, described further in 10.3.2, in this case table 46 (SubType is 0x01) for a PCI device path.
The next node describes a Messaging Device Path of type UART and so on...
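As a rough illustration of that decoding, the 4-byte generic node header (Type, SubType, 16-bit little-endian Length) is enough to split the variable into nodes. Below is a sketch in Scala that reads the variable through efivarfs, whose files prepend a 4-byte attribute mask that has to be skipped; the path is just the variable name plus the GUID from the question:

import java.nio.file.{Files, Paths}

object ConOutNodes extends App {
  // efivarfs prepends a 4-byte attribute bitmask to the variable data, so drop it.
  val raw = Files.readAllBytes(Paths.get(
    "/sys/firmware/efi/efivars/ConOut-8be4df61-93ca-11d2-aa0d-00e098032b8c")).drop(4)

  var off  = 0
  var done = false
  while (!done && off + 4 <= raw.length) {
    val nodeType = raw(off) & 0xff
    val subType  = raw(off + 1) & 0xff
    // Length is little-endian and covers the whole node, header included.
    val length = (raw(off + 2) & 0xff) | ((raw(off + 3) & 0xff) << 8)
    println(f"offset=0x$off%04x type=0x$nodeType%02x subType=0x$subType%02x length=$length")
    // Type 0x7f / SubType 0xff terminates the entire path; 0x7f / 0x01 only ends one
    // instance (ConOut may list several devices), so keep walking past it.
    done = length < 4 || (nodeType == 0x7f && subType == 0xff)
    off += length
  }
}

Matching the printed Type/SubType pairs against the tables in section 10.3 then identifies the ACPI, PCI and UART nodes described above.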
Still, this only tells you what UEFI considers to be its default console; SPCR is what an operating system is supposed to look at for serial consoles. Unfortunately, on x86 the Linux kernel ignores SPCR except for earlycon. I guess this is what you're trying to work around. It might be good to start a discussion on the kernel development lists about whether to fix that and have x86 work like ARM64.
In my case, since I know that the console port is a serial I/O port, I could get the details as follows:
a. Get hold of the /sys/firmware/acpi/tables/SPCR table.
b. Read the Address field of the Base Address Generic Address Structure (bytes 44-51 of the table). For an I/O-port console, only the low two bytes of the address matter.
Reference:
a. https://learn.microsoft.com/en-us/windows-hardware/drivers/serports/serial-port-console-redirection-table states:
Base Address (byte length 12, byte offset 40): the base address of the Serial Port register set, described using the ACPI Generic Address Structure. 0 = console redirection disabled.
Note:
COM1 (0x3F8) would be:
Integer Form: 0x 01 08 00 00 00000000000003F8
Viewed in Memory: 0x01080000F803000000000000
COM2 (0x2F8) would be:
Integer Form: 0x 01 08 00 00 00000000000002F8
Viewed in Memory: 0x01080000F802000000000000
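For completeness, steps a and b above boil down to a few lines. A sketch assuming the table is exposed at /sys/firmware/acpi/tables/SPCR, pulling the 8-byte Address field out of the Generic Address Structure:

import java.nio.file.{Files, Paths}

object SpcrBaseAddress extends App {
  val spcr = Files.readAllBytes(Paths.get("/sys/firmware/acpi/tables/SPCR"))
  // Address field of the Base Address Generic Address Structure: bytes 44..51, little-endian.
  val baseAddress = (0 until 8).map(i => (spcr(44 + i) & 0xffL) << (8 * i)).sum
  println(f"SPCR base address: 0x$baseAddress%x") // 0x3f8 -> COM1, 0x2f8 -> COM2
}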

Resolving contents of MiFare Ultralight NFC tag

I'm currently working with NFC/NDEF and I'm running into an issue where I'm unable to understand the data coming in. I have a general understanding of the NDEF standard and have looked over the MIFARE datasheet, so I'm able to pick out a few things, but there are a few bytes that are seemingly out of place and are puzzling me.
Here is the hexdump of the data on the tag, collected via nfc-mfultralight r:
00000000 04 02 2f a1 d2 11 5f 81 1d 48 00 00 e1 10 12 00 |../..._..H......|
00000010 01 03 a0 0c 34 03 1b 91 01 05 54 02 65 6e 68 69 |....4.....T.enhi|
00000020 11 01 05 54 02 65 6e 68 69 51 01 05 54 02 65 6e |...T.enhiQ..T.en|
00000030 68 69 fe 00 00 00 00 00 00 00 00 00 00 00 00 00 |hi..............|
I know the first 16 bytes (04 02 2f a1 d2 11 5f 81 1d 48 00 00 e1 10 12 00) are the NFC/MIFARE header (first 9 being the serial number/check bytes, 1 byte for internal, 2 for lock, and then final 4 are OTP bytes.)
Starting at byte 21 I can see the start of a TLV record with the Terminator TLV flag at the end (03 1b ... fe), indicating a record of NDEF type with length 27. This matches the length of the expected NDEF record.
However, I'm confused by bytes 16..20 (01 03 a0 0c 34). What are these?
It appears these bytes form a Lock Control TLV, which is part of the NFC Type 2 Tag specification (pages 10-11).
The bytes are laid out as follows:
0x01 - Lock Control TLV tag
0x03 - Length of the value field: 3 bytes
0xa0 - Position byte, made of two nibbles: the upper nibble (0b1010) is the page address and the lower nibble (0b0000) is the byte offset of the dynamic lock area within the tag
0x0c - Size byte: the lock area is 0b1100 = 12 bits long
0x34 - Page-control byte: the upper nibble gives the number of bytes each dynamic lock bit locks and the lower nibble gives the number of bytes per page, both as powers of two (2^3 = 8 bytes locked per bit, 2^4 = 16 bytes per page)
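For anyone decoding similar dumps, a small TLV walker over the user-memory area (everything after the 16-byte header) makes the layout visible. This is a sketch that only handles the one-byte length form and the TLV types relevant here:

object Type2TagTlvs extends App {
  // User memory from the dump in the question (header pages stripped).
  val userMem: Array[Byte] =
    ("01 03 a0 0c 34 03 1b 91 01 05 54 02 65 6e 68 69" +
     " 11 01 05 54 02 65 6e 68 69 51 01 05 54 02 65 6e" +
     " 68 69 fe").split("\\s+").map(Integer.parseInt(_, 16).toByte)

  var off  = 0
  var done = false
  while (!done && off < userMem.length) {
    (userMem(off) & 0xff) match {
      case 0x00 =>                       // NULL TLV: a single padding byte
        off += 1
      case 0xfe =>                       // Terminator TLV: end of the data area
        println("Terminator TLV")
        done = true
      case t =>
        val len   = userMem(off + 1) & 0xff // one-byte form; 0xff would mean a 3-byte length
        val value = userMem.slice(off + 2, off + 2 + len)
        val name = t match {
          case 0x01 => "Lock Control TLV"
          case 0x02 => "Memory Control TLV"
          case 0x03 => "NDEF Message TLV"
          case _    => f"TLV 0x$t%02x"
        }
        println(f"$name, length $len: " + value.map(b => f"$b%02x").mkString(" "))
        off += 2 + len
    }
  }
}

Running it against the dump prints the Lock Control TLV (a0 0c 34), the 27-byte NDEF Message TLV and the Terminator.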

Akka Stream TLS Server Logging & Troubleshooting

I'm using Akka Streams to create a TCP server using akka.stream.scaladsl.TLS with client certificate authentication. I'm working on creating an echo server as a first proof of concept.
I'm new to Scala/Akka/Akka Streams, so in the meantime I created a similar server and TCP client in Python to provide tooling for testing my work in Scala. The Python server/client are functional using client certificate authentication. When connecting to the server, the client takes the following steps:
Creates and configures an SSLContext
Creates a socket using socket.create_connection()
Wraps the socket with the SSLContext using SSLContext.wrap_socket(). This creates the peer connection
Once connected, prints the server certificate
Infinite loop asking for input and sending each input to the server
I believe I have the server completed using Akka Streams and akka.stream.scaladsl.TLS, but when I attempt to connect using my Python client, the client never gets past connecting to the peer using context.wrap_socket(sock, server_hostname=host). The server successfully binds the TCP connection and creates the corresponding IncomingConnection object. The client/server also never time out (the client just sits awaiting the handshake?).
My biggest problem is that I see no information from my TLS BidiFlow, akka.stream.scaladsl.TLS. I have no idea what step in the handshake I'm stuck at, which makes troubleshooting very difficult.
Is there any way to output some information throughout the TLS handshake process? It seems as though all of the functionality is encapsulated and I don't know if there's any way to troubleshoot.
Otherwise, I'm attempting to troubleshoot with openssl and get the following:
bash$ openssl s_client -connect myserver.com:443 -state -debug
CONNECTED(00000003)
SSL_connect:before/connect initialization
write to 0x7fd914100080 [0x7fd915001000] (318 bytes => 318 (0x13E))
0000 - 16 03 01 01 39 01 00 01-35 03 03 e3 ff 5d fb 26 ....9...5....].&
0010 - 15 e3 32 89 37 e2 cb 95-f5 00 bd df 13 3d ae a6 ..2.7........=..
0020 - d7 37 db 4e 80 19 63 ad-d6 6c f1 00 00 98 cc 14 .7.N..c..l......
0030 - cc 13 cc 15 c0 30 c0 2c-c0 28 c0 24 c0 14 c0 0a .....0.,.(.$....
0040 - 00 a3 00 9f 00 6b 00 6a-00 39 00 38 ff 85 00 c4 .....k.j.9.8....
0050 - 00 c3 00 88 00 87 00 81-c0 32 c0 2e c0 2a c0 26 .........2...*.&
0060 - c0 0f c0 05 00 9d 00 3d-00 35 00 c0 00 84 c0 2f .......=.5...../
0070 - c0 2b c0 27 c0 23 c0 13-c0 09 00 a2 00 9e 00 67 .+.'.#.........g
0080 - 00 40 00 33 00 32 00 be-00 bd 00 45 00 44 c0 31 .#.3.2.....E.D.1
0090 - c0 2d c0 29 c0 25 c0 0e-c0 04 00 9c 00 3c 00 2f .-.).%.......<./
00a0 - 00 ba 00 41 c0 11 c0 07-c0 0c c0 02 00 05 00 04 ...A............
00b0 - c0 12 c0 08 00 16 00 13-c0 0d c0 03 00 0a 00 15 ................
00c0 - 00 12 00 09 00 ff 01 00-00 74 00 0b 00 04 03 00 .........t......
00d0 - 01 02 00 0a 00 3a 00 38-00 0e 00 0d 00 19 00 1c .....:.8........
00e0 - 00 0b 00 0c 00 1b 00 18-00 09 00 0a 00 1a 00 16 ................
00f0 - 00 17 00 08 00 06 00 07-00 14 00 15 00 04 00 05 ................
0100 - 00 12 00 13 00 01 00 02-00 03 00 0f 00 10 00 11 ................
0110 - 00 23 00 00 00 0d 00 26-00 24 06 01 06 02 06 03 .#.....&.$......
0120 - ef ef 05 01 05 02 05 03-04 01 04 02 04 03 ee ee ................
0130 - ed ed 03 01 03 02 03 03-02 01 02 02 02 03 ..............
SSL_connect:unknown state
At which point openssl just hangs.
The Akka TLS support uses the built-in Java TLS support behind the scenes, so to get debug output for TLS you'll have to enable debugging for that. It can be done by passing a system property to the JVM when starting it, like so: -Djavax.net.debug=all
Ultimately I found that the ssl-config logging is very sparse and wasn't helpful in resolving my issue. It does provide some debugging information, but not much. Much better for debugging the TLS handshake is to use the -Djavax.net.debug=all flag when running the JVM. However, even this provides mixed results. For example, the resulting error I received was that the server couldn't find a matching cipher suite. Eventually I resolved my issue by realizing that when creating the input streams for my keystore/truststore I was specifying the path incorrectly.
Note for anyone coming across this: if you specify your keystore and truststore paths incorrectly, the resulting input streams will be null, and SSLContext.init will happily use them and produce an error that is unrelated to the keystore/truststore! This was very difficult to troubleshoot due to the poor error handling in SSLContext.
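One way to guard against that pitfall is to resolve the resources up front and fail loudly before anything reaches KeyStore.load or SSLContext.init; the TLS debug property can also be set programmatically. Below is a sketch, not the code from my server: the object and parameter names are made up, and the same store is used as both keystore and truststore just to keep it short.

import java.io.InputStream
import java.security.KeyStore
import javax.net.ssl.{KeyManagerFactory, SSLContext, TrustManagerFactory}

object TlsContext {
  def fromResources(keystorePath: String, password: String): SSLContext = {
    // Same effect as -Djavax.net.debug=all, as long as it runs before any TLS classes load.
    System.setProperty("javax.net.debug", "all")

    // getResourceAsStream returns null (it does not throw) when the path is wrong,
    // so check explicitly instead of letting SSLContext fail later with an unrelated error.
    def resource(path: String): InputStream = {
      val in = getClass.getResourceAsStream(path)
      require(in != null, s"classpath resource not found: $path")
      in
    }

    val keyStore = KeyStore.getInstance("JKS")
    keyStore.load(resource(keystorePath), password.toCharArray)

    val kmf = KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm)
    kmf.init(keyStore, password.toCharArray)
    val tmf = TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm)
    tmf.init(keyStore)

    val ctx = SSLContext.getInstance("TLS")
    ctx.init(kmf.getKeyManagers, tmf.getTrustManagers, null)
    ctx
  }
}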

determining hash function used in digital signature

I have a digital signature (RSA - PKCS#1). After decrypting it with the RSA public key I get the following 128 bytes
00 01 ff ff ff .. ff 00 30 31 30 0d 06 09 60 86 48 01 65 03 04 02 01 05 00 04 20 77 51 1b f4 d7 17 d7 ad 8c 2d e5 89 2a ca e0 6d a3 c0 7d 13 4d d7 b8 01 14 87 03 00 69 e4 9b b3
PKCS#1 padding removed, 51 bytes left:
30 31 30 0d 06 09 60 86 48 01 65 03 04 02 01 05 00 04 20 77 51 1b f4 d7 17 d7 ad 8c 2d e5 89 2a ca e0 6d a3 c0 7d 13 4d d7 b8 01 14 87 03 00 69 e4 9b b3
I would like to know two things about this:
Is it possible to determine the hash function used? The encoded algorithm ID should be prepended to the actual digest; is it possible to tell which algorithm it is from the raw bytes?
Where does the actual digest start (i.e., how long are the header and the digest)?
This appears to be EMSA-PKCS1-v1_5 as described in RFC 3447, which means that after removing the header and padding, you have a DER encoding of an AlgorithmIdentifier followed by the hash value itself.
From the RFC:
For the six hash functions mentioned in Appendix B.1, the DER
encoding T of the DigestInfo value is equal to the following:
[...]
SHA-256: (0x)30 31 30 0d 06 09 60 86 48 01 65 03 04 02 01 05 00 04 20 || H.
So in your example, the hash value is the SHA-256 hash starting 77511bf4d7....
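Both questions can also be answered mechanically by comparing the decoded block against the fixed DER prefixes listed in the RFC. A sketch in Scala covering the common hash functions, using the 51 bytes from the question:

object DigestInfoPrefix extends App {
  def hex(s: String): Array[Byte] = s.split("\\s+").map(Integer.parseInt(_, 16).toByte)

  // DER-encoded DigestInfo prefixes from RFC 3447, section 9.2, note 1.
  val prefixes = Map(
    "SHA-1"   -> hex("30 21 30 09 06 05 2b 0e 03 02 1a 05 00 04 14"),
    "SHA-256" -> hex("30 31 30 0d 06 09 60 86 48 01 65 03 04 02 01 05 00 04 20"),
    "SHA-384" -> hex("30 41 30 0d 06 09 60 86 48 01 65 03 04 02 02 05 00 04 30"),
    "SHA-512" -> hex("30 51 30 0d 06 09 60 86 48 01 65 03 04 02 03 05 00 04 40")
  )

  // The 51 bytes left after stripping the 00 01 ff..ff 00 padding (from the question).
  val digestInfo = hex(
    "30 31 30 0d 06 09 60 86 48 01 65 03 04 02 01 05 00 04 20 " +
    "77 51 1b f4 d7 17 d7 ad 8c 2d e5 89 2a ca e0 6d a3 c0 7d 13 " +
    "4d d7 b8 01 14 87 03 00 69 e4 9b b3")

  prefixes.collectFirst {
    case (name, p) if digestInfo.startsWith(p) => (name, digestInfo.drop(p.length))
  } match {
    case Some((name, hash)) => println(s"$name, digest = " + hash.map(b => f"$b%02x").mkString)
    case None               => println("No known DigestInfo prefix matched")
  }
}

Run against the bytes above, this reports SHA-256 and prints the 32-byte digest beginning 77511bf4d7.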

Webbit websocket ws:// connection works but wss:// handshake fails silently without any error?

I upgraded Webbit to 0.4.6 to use the new SSL support but immediately realized that all wss:// handshakes fail silently, and I don't have any errors to show for it. Chrome only reports a "success" response without an HTTP code or any other headers. I checked the server logs and it doesn't even register an "open" event.
The catch here is that any ws:// connection works great. So what could the possible problems be, and how can I get an error out of it? Could it be something wrong with the Java keystore and the SSL handshake?
Edit
I was able to find an OpenSSL command for a test handshake. Here's the output:
SSL_connect:before/connect initialization
SSL_connect:SSLv2/v3 write client hello A
SSL_connect:error in SSLv2/v3 read server hello A
Edit 2
I realized I could debug this further
CONNECTED(0000016C)
SSL_connect:before/connect initialization
write to 0x1f57750 [0x1f6a730] (210 bytes => 210 (0xD2))
0000 - 16 03 01 00 cd 01 00 00-c9 03 01 4f 6b 8d 68 63 ...........Ok.hc
0010 - 99 06 08 30 93 2a 42 88-f8 f1 c4 c5 dc 89 71 0b ...0.*B.......q.
0020 - b6 04 42 4e 11 79 b4 76-6c f7 66 00 00 5c c0 14 ..BN.y.vl.f..\..
0030 - c0 0a 00 39 00 38 00 88-00 87 c0 0f c0 05 00 35 ...9.8.........5
0040 - 00 84 c0 12 c0 08 00 16-00 13 c0 0d c0 03 00 0a ................
0050 - c0 13 c0 09 00 33 00 32-00 9a 00 99 00 45 00 44 .....3.2.....E.D
0060 - c0 0e c0 04 00 2f 00 96-00 41 00 07 c0 11 c0 07 ...../...A......
0070 - c0 0c c0 02 00 05 00 04-00 15 00 12 00 09 00 14 ................
0080 - 00 11 00 08 00 06 00 03-00 ff 01 00 00 44 00 0b .............D..
0090 - 00 04 03 00 01 02 00 0a-00 34 00 32 00 01 00 02 .........4.2....
00a0 - 00 03 00 04 00 05 00 06-00 07 00 08 00 09 00 0a ................
00b0 - 00 0b 00 0c 00 0d 00 0e-00 0f 00 10 00 11 00 12 ................
00c0 - 00 13 00 14 00 15 00 16-00 17 00 18 00 19 00 23 ...............#
00d2 - <SPACES/NULS>
SSL_connect:SSLv2/v3 write client hello A
read from 0x1f57750 [0x1f6fc90] (7 bytes => 0 (0x0))
12488:error:140790E5:SSL routines:SSL23_WRITE:ssl handshake failure:.\ssl\s23_lib.c:177:
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 0 bytes and written 210 bytes
---
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
---
Edit 3
OK, I've narrowed the problem down to Webbit initialization, but it doesn't throw any errors, so I could use some input on getting getResourceAsStream to work properly. Here's how the server is initialized:
def startWebSocketServer(webSocketHandler: PartialFunction[WebSocketEvent, Unit]) {
  val webServer = WebServers.createWebServer(port)
  try {
    webServer.setupSsl(getClass.getResourceAsStream("/keystore"), "webbit")
    webServer.add("/", new WebSocketEventAdapter(webSocketHandler))
    webServer.start
  } catch {
    case e => e.printStackTrace()
  }
}
Unfortunately setupSsl doesn't output any information, and I've tried both what I thought was the correct path and a deliberately fake path. In either case, I can't get an error. How on earth do I properly locate the path? Thanks!
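One quick way to rule out the resource path is to resolve it first and fail loudly when it is missing, since getResourceAsStream simply returns null for a bad path instead of throwing; "/keystore" is resolved against the classpath root (e.g. src/main/resources/keystore in an sbt or Maven layout). A sketch reusing the names from the snippet above:

def startWebSocketServer(webSocketHandler: PartialFunction[WebSocketEvent, Unit]): Unit = {
  val webServer = WebServers.createWebServer(port)
  try {
    val keystoreStream = getClass.getResourceAsStream("/keystore")
    // getResourceAsStream returns null rather than throwing when the path is wrong.
    if (keystoreStream == null) sys.error("keystore not found on the classpath")
    webServer.setupSsl(keystoreStream, "webbit")
    webServer.add("/", new WebSocketEventAdapter(webSocketHandler))
    webServer.start
  } catch {
    case e: Exception => e.printStackTrace()
  }
}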
The OMFG Answer
In a hysterical twist of fate, I found the problem. This particular issue took up 48 hours of my time, but the cause was not even code-related; it was a funny miscommunication.
As it turns out, another developer had copied our websocket code into a new file he was working on. All this time we were trying to debug code in a file that wasn't even executing at run time. Upon further investigation we scrolled to the bottom of a very long and completely different file, found the Webbit init code there, and it executed perfectly.
Moral of the story: don't commit an incomplete file to the master branch and point everyone there for debugging ;)