Determine HSM_KW header length from the prefix encoding

I have a sample HSM_KW message. It has a 3-character prefix that determines the length of the message header; in this sample message it is 00w. How can I convert this into a number to learn the header length?
Sample message:
\00w2551800114193044KW13U2B9989708BC34A58AE7F2A619DEC5C0008%E\00\01IPH\00

I would suspect \00 is a 0x00 byte and part of the message length. The header length is configurable from the console and is not sent in the message. I would advise consulting the Host Programmer's guide and your security personnel responsible for the HSM configuration.
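If those first two bytes really are a big-endian 16-bit length (only a suspicion; confirm it against the Host Programmer's guide), decoding them is trivial. A minimal C++ sketch:
#include <cstdint>
#include <cstdio>

// Treat the first two bytes of the frame as a big-endian 16-bit length.
// For the sample prefix \00 'w' (0x77) this yields 0x0077 = 119; whether
// that counts the header, the whole frame, or something else depends on
// the HSM's configured framing (an assumption you must verify).
uint16_t decode_length_prefix(const unsigned char* msg) {
    return static_cast<uint16_t>((msg[0] << 8) | msg[1]);
}

int main() {
    const unsigned char sample[] = { 0x00, 0x77 }; // "\00w"
    std::printf("%u\n", decode_length_prefix(sample)); // prints 119
}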

STUN MESSAGE-INTEGRITY dummy definition

In RFC 5389, the MESSAGE-INTEGRITY calculation includes the attribute itself, but with dummy content, and that dummy content is not defined.
How can MESSAGE-INTEGRITY be verified without knowing the dummy content's value?
Why would the MESSAGE-INTEGRITY calculation include itself?
Isn't it faster to calculate MESSAGE-INTEGRITY, and equally secure, if it didn't include itself?
Since the MESSAGE-INTEGRITY attribute itself is not part of the hash, you can append whatever you want for the last 20 bytes. Just replace it with the hash of all the bytes leading up to the attribute itself.
The algorithm is basically this:
1. Let L be the original size of the STUN message byte stream (20-byte header plus attributes; the header's MESSAGE LENGTH field counts only the part after the header, i.e. L - 20).
2. Append the 4-byte MESSAGE-INTEGRITY attribute header onto the STUN message, followed by 20 null bytes.
3. Adjust the LENGTH field of the STUN message header to account for these 24 new bytes.
4. Compute the HMAC-SHA1 of the first L bytes of the message (all but the 24 bytes you just appended).
5. Replace the 20 null bytes with the 20 bytes of the computed hash.
And as discussed in comments, the bytes don't have to be null bytes, they can be anything - since they aren't included in the hash computation.
There's an implementation of MESSAGE-INTEGRITY for both short-term and long-term credentials on my GitHub: here and here.
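A minimal sketch of those steps in C++, assuming OpenSSL for the HMAC-SHA1 (function and variable names are illustrative, not taken from the linked implementations):
#include <openssl/evp.h>
#include <openssl/hmac.h>
#include <cstdint>
#include <cstring>
#include <vector>

// Append a MESSAGE-INTEGRITY attribute to a serialized STUN message.
// msg holds the message without the attribute; key is the HMAC key
// derived from the short-term or long-term credential.
void append_message_integrity(std::vector<uint8_t>& msg,
                              const uint8_t* key, size_t keyLen) {
    const size_t L = msg.size(); // original message size

    // Attribute header: type 0x0008 (MESSAGE-INTEGRITY), length 20,
    // followed by 20 null placeholder bytes.
    const uint8_t attrHeader[4] = { 0x00, 0x08, 0x00, 0x14 };
    msg.insert(msg.end(), attrHeader, attrHeader + 4);
    msg.insert(msg.end(), 20, 0x00);

    // The LENGTH field (bytes 2-3 of the header) must count the new
    // 24 bytes before the hash is computed.
    uint16_t len = static_cast<uint16_t>((msg[2] << 8) | msg[3]);
    len = static_cast<uint16_t>(len + 24);
    msg[2] = static_cast<uint8_t>(len >> 8);
    msg[3] = static_cast<uint8_t>(len & 0xff);

    // HMAC-SHA1 over the first L bytes only, then overwrite the
    // placeholder bytes with the real hash.
    uint8_t hash[20];
    unsigned int hashLen = 0;
    HMAC(EVP_sha1(), key, static_cast<int>(keyLen),
         msg.data(), L, hash, &hashLen);
    std::memcpy(msg.data() + L + 4, hash, 20);
}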

What to sign for DTLSv1.0 Certificate Verify Message with RSA

I'm using DTLS v1.0 to communicate with a server. I'm having some trouble figuring out exactly what to do to generate the CertificateVerify message. I've been reading the RFCs (DTLS v1.0 and TLS 1.1, which DTLS v1.0 is based on), but they're somewhat non-specific when it comes to this particular message.
I see the structure of the message is as below, and I know the signature type is RSA.
struct {
    Signature signature;
} CertificateVerify;
The Signature type is defined in 7.4.3.
CertificateVerify.signature.md5_hash
    MD5(handshake_messages);
CertificateVerify.signature.sha_hash
    SHA(handshake_messages);
Based on what I've read, it seems to be a concatenation of the MD5 hash and the SHA-1 hash of all the previous handshake messages sent and received (up to and excluding this one), which is then RSA-signed.
The piece that's got me a bit confused though is how to assemble the messages to hash them.
Does it use each fragment piece or does it use the re-assembled messages? Also, what parts of the messages does it use?
The RFC for TLS 1.1 says
starting at client hello up to but not including this message,
including the type and length fields of the handshake messages
but what about the DTLS-specific parts like message_seq, fragment_offset, and fragment_length: do I include them?
UPDATE:
I have tried doing as the RFC for DTLS 1.2 shows (keeping the messages fragmented, using all the handshake fields including the DTLS-specific ones, and not including the initial ClientHello or HelloVerifyRequest messages), but I am still receiving "Bad Signature". I do believe I'm signing properly, so my belief is that I'm concatenating the data to be signed incorrectly.
For DTLS 1.2 this is defined explicitly. And reading RFC 4347, my impression is that RFC 6347 doesn't differ on this point; it merely clarifies the calculation.
RFC 6347, 4.2.6. CertificateVerify and Finished Messages
RFC 4347, 4.2.6. Finished Messages
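A sketch of the signing step itself, using OpenSSL and the raw PKCS#1 v1.5 signature that (D)TLS 1.0/1.1 uses for RSA (no DigestInfo wrapper). Assembling the transcript correctly is the part the RFCs above pin down; here it is assumed to already follow them:
#include <openssl/md5.h>
#include <openssl/rsa.h>
#include <openssl/sha.h>
#include <cstdint>
#include <vector>

// transcript is assumed to hold the concatenated handshake messages,
// each with its full DTLS handshake header (msg_type, length,
// message_seq, fragment_offset, fragment_length), unfragmented, and
// excluding the initial ClientHello/HelloVerifyRequest exchange
// (RFC 6347, sections 4.2.1 and 4.2.6).
std::vector<uint8_t> sign_certificate_verify(
        const std::vector<uint8_t>& transcript, RSA* rsa) {
    // The signed value is MD5(handshake_messages) || SHA1(handshake_messages).
    uint8_t digest[36];
    MD5(transcript.data(), transcript.size(), digest);       // bytes 0..15
    SHA1(transcript.data(), transcript.size(), digest + 16); // bytes 16..35

    // Raw PKCS#1 v1.5 over the 36 bytes; no hash OID is embedded.
    std::vector<uint8_t> sig(RSA_size(rsa));
    int n = RSA_private_encrypt(36, digest, sig.data(), rsa,
                                RSA_PKCS1_PADDING);
    sig.resize(n > 0 ? static_cast<size_t>(n) : 0);
    return sig;
}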

DKIM - partial signing (`l` parameter); reduce computational overhead

Does anyone have any experience signing only a part of a message body with DKIM?
The DKIM specification allows one to limit the amount of message body used in computing a signature; a tag in the email's DKIM-Signature header ('l=') indicates to recipient servers how much of the body is actually signed. I could reduce the amount of CPU consumed by my MTAs if I limited this, but it's obviously not worth the trouble if it causes deliverability problems.
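For illustration only (every value below is invented), a signature covering just the first 1024 bytes of the body would carry the tag like this:
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=example.com;
        s=selector1; l=1024; h=from:to:subject:date;
        bh=...; b=...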
In practice, are mail providers requiring the entire body be signed?
References:
rfc4871 - DomainKeys Identified Mail (DKIM)
...
A body length specified in the "l=" tag of the signature limits the
number of bytes of the body passed to the verification algorithm.
All data beyond that limit is not validated by DKIM. Hence,
verifiers might treat a message that contains bytes beyond the
indicated body length with suspicion, such as by truncating the
message at the indicated body length, declaring the signature invalid
(e.g., by returning PERMFAIL (unsigned content)), or conveying the
partial verification to the policy module.
opendkim.8
-L min[%+] - Instructs the verification code to fail messages for
which a partial signature was received.
Thanks for your input!

How much data to receive from server in SSL handshake before calling InitializeSecurityContext?

In our Windows C++ application I am using InitializeSecurityContext() on the client side to open an SChannel connection to a server running the stunnel SSL proxy. My code now works, but only with a hack I would like to eliminate.
I started with this sample code: http://msdn.microsoft.com/en-us/library/aa380536%28v=VS.85%29.aspx
In the sample code, look at SendMsg and ReceiveMsg. The first 4 bytes of any message sent or received indicates the message length. This is fine for the sample, where the server portion of the sample conforms to the same convention.
stunnel does not seem to use this convention. When the client is receiving data during the handshake, how does it know when to stop receiving and make another call to InitializeSecurityContext()?
This is how I structured my code, based on what I could glean from the documentation:
1. Call InitializeSecurityContext, which returns an output buffer.
2. Send the output buffer to the server.
3. Receive the response from the server.
4. Call InitializeSecurityContext(server_response), which returns an output buffer.
5. If SEC_E_INCOMPLETE_MESSAGE, go back to step 3; if SEC_I_CONTINUE_NEEDED, go back to step 2.
I expected InitializeSecurityContext in step 4 to return SEC_E_INCOMPLETE_MESSAGE if not enough data was read from the server in step 3. Instead, I get SEC_I_CONTINUE_NEEDED but an empty output buffer. I have experimented with a few ways to handle this case (e.g. go back to step 3), but none seemed to work and more importantly, I do not see this behavior documented.
In step 3 if I add a loop that receives data until a timeout expires, everything works fine in my test environment. But there must be a more reliable way.
What is the right way to know how much data to receive in step 3?
SChannel is different from the Negotiate security package. You need to receive at least 5 bytes, which is the SSL/TLS record header size:
struct {
    ContentType type;
    ProtocolVersion version;
    uint16 length;
    opaque fragment[TLSPlaintext.length];
} TLSPlaintext;
ContentType is 1 byte, ProtocolVersion is 2 bytes, and the record length is a 2-byte field. Once you read those 5 bytes, SChannel will return SEC_E_INCOMPLETE_MESSAGE and tell you exactly how many more bytes to expect:
SEC_E_INCOMPLETE_MESSAGE
Data for the whole message was not read from the wire.
When this value is returned, the pInput buffer contains a SecBuffer structure with a BufferType member of SECBUFFER_MISSING. The cbBuffer member of SecBuffer contains a value that indicates the number of additional bytes that the function must read from the client before this function succeeds.
Once you get this output, you know exactly how much to read from the network.
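A minimal sketch of pulling that count out of the input descriptor after an SEC_E_INCOMPLETE_MESSAGE return (names are illustrative):
#include <windows.h>
#define SECURITY_WIN32
#include <security.h>

// After InitializeSecurityContext returns SEC_E_INCOMPLETE_MESSAGE, the
// pInput descriptor may carry a SECBUFFER_MISSING buffer whose cbBuffer
// says how many more bytes to recv() before calling again.
unsigned long bytes_missing(const SecBufferDesc& inDesc) {
    for (unsigned long i = 0; i < inDesc.cBuffers; ++i) {
        if (inDesc.pBuffers[i].BufferType == SECBUFFER_MISSING)
            return inDesc.pBuffers[i].cbBuffer;
    }
    return 0; // no hint given; read more and retry
}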
I found the problem.
I found this sample:
http://www.codeproject.com/KB/IP/sslsocket.aspx
I was missing the handling of SECBUFFER_EXTRA (line 987 SslSocket.cpp)
The SChannel SSP returns SEC_E_INCOMPLETE_MESSAGE from both InitializeSecurityContext and DecryptMessage when not enough data is read.
A SECBUFFER_MISSING buffer is returned from DecryptMessage with a cbBuffer value giving the number of bytes still needed. But in practice I did not use the "missing data" value: the documentation indicates it is not guaranteed to be correct, and is only a hint developers can use to reduce the number of calls.
InitalizeSecurityContext MSDN doc:
While this number is not always accurate, using it can help improve performance by avoiding multiple calls to this function.
So I unconditionally read more data into the same buffer whenever SEC_E_INCOMPLETE_MESSAGE was returned, reading multiple bytes at a time from the socket.
Some extra input-buffer management was required to append the newly read data and keep the lengths right. DecryptMessage will modify the input buffers' cbBuffer properties when it fails, which surprised me.
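A sketch of that re-initialization (buf and bufUsed are the assumed accumulation buffer and its fill level):
#include <windows.h>
#define SECURITY_WIN32
#include <security.h>

// Re-populate the SecBuffer array before every DecryptMessage attempt,
// since a failed call leaves the array modified.
SECURITY_STATUS try_decrypt(CtxtHandle* ctx, char* buf, unsigned long bufUsed) {
    SecBuffer bufs[4] = {};
    bufs[0].BufferType = SECBUFFER_DATA;
    bufs[0].pvBuffer   = buf;
    bufs[0].cbBuffer   = bufUsed;
    bufs[1].BufferType = SECBUFFER_EMPTY;
    bufs[2].BufferType = SECBUFFER_EMPTY;
    bufs[3].BufferType = SECBUFFER_EMPTY;

    SecBufferDesc desc = { SECBUFFER_VERSION, 4, bufs };
    return DecryptMessage(ctx, &desc, 0, nullptr);
}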
Printing out the buffers and return result after calling InitializeSecurityContext shows the following:
read socket:bytes(5).
InitializeSecurityContext:result(80090318). // SEC_E_INCOMPLETE_MESSAGE
inBuffers[0]:type(2),bytes(5).
inBuffers[1]:type(0),bytes(0). // no indication of missing data
outBuffer[0]:type(2),bytes(0).
read socket:bytes(74).
InitializeSecurityContext:result(00090312). // SEC_I_CONTINUE_NEEDED
inBuffers[0]:type(2),bytes(79). // notice 74 + 5 from before
inBuffers[1]:type(0),bytes(0).
outBuffer[0]:type(2),bytes(0).
And for the DecryptMessage function, input is always in dataBuf[0], with the rest zeroed.
read socket:bytes(5).
DecryptMessage:len 5, bytes(17030201). // SEC_E_INCOMPLETE_MESSAGE
DecryptMessage:dataBuf[0].BufferType 4, 8 // notice input buffer modified
DecryptMessage:dataBuf[1].BufferType 4, 8
DecryptMessage:dataBuf[2].BufferType 0, 0
DecryptMessage:dataBuf[3].BufferType 0, 0
read socket:bytes(8).
DecryptMessage:len 13, bytes(17030201). // SEC_E_INCOMPLETE_MESSAGE
DecryptMessage:dataBuf[0].BufferType 4, 256
DecryptMessage:dataBuf[1].BufferType 4, 256
DecryptMessage:dataBuf[2].BufferType 0, 0
DecryptMessage:dataBuf[3].BufferType 0, 0
read socket:bytes(256).
DecryptMessage:len 269, bytes(17030201). // SEC_E_OK
We can see my TLS server peer is sending the TLS record header (5 bytes) in one packet, then 8 more bytes of the Application Data record, then the remaining Application Data payload in a third.
You must read some arbitrary amount the first time, and when you receive SEC_E_INCOMPLETE_MESSAGE, you must look in the pInput SecBufferDesc for a SECBUFFER_MISSING and read its cbBuffer to find out how many bytes you are missing.
This problem was doing my head in today, as I was attempting to modify my handshake myself and having the same problem the other commenters were having, i.e. not finding a SECBUFFER_MISSING. I do not want to interpret the TLS packet myself, and I do not want to unconditionally read some unspecified number of bytes. I found the solution to that, so I'm going to address their comments, too.
The confusion here is because the API is confusing. Ordinarily, to read the output of InitializeSecurityContext, you look at the content of the pOutput parameter (as defined in the signature). It's that SecBufferDesc that contains the SECBUFFER_TOKEN etc to pass to AcceptSecurityContext.
However, in the case where InitializeSecurityContext returns SEC_E_INCOMPLETE_MESSAGE, the SECBUFFER_MISSING is returned in the pInput SecBufferDesc, in place of the SECBUFFER_ALERT SecBuffer that was passed in.
The documentation does say this, but not in a way that clearly contrasts this case against the SEC_I_CONTINUE_NEEDED and SEC_E_OK cases.
This answer also applies to AcceptSecurityContext.
From MSDN, I'd presume SEC_E_INCOMPLETE_MESSAGE is returned when not enough data has been received from the server at that moment. Instead, SEC_I_CONTINUE_NEEDED is returned with InBuffers[1] indicating the amount of unread data (note that some data has been processed and must be skipped) and OutBuffers containing nothing.
So the algorithm is:
1. If SEC_I_CONTINUE_NEEDED is returned, check the type of InBuffers[1].
2. If it is SECBUFFER_EXTRA, handle it (move the last InBuffers[1].cbBuffer bytes to the beginning of the input buffer, as sketched below) and jump to the next recv & InitializeSecurityContext iteration.
3. If OutBuffers is not empty, send its contents to the server.
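A sketch of that SECBUFFER_EXTRA handling (buf and bufUsed are the assumed accumulation buffer and its fill level; the function returns the new fill level):
#include <windows.h>
#define SECURITY_WIN32
#include <security.h>
#include <cstring>

// Leftover, unprocessed handshake bytes are reported as SECBUFFER_EXTRA
// in InBuffers[1]; they sit at the end of the input, so slide them to
// the front before the next recv().
unsigned long keep_extra(const SecBuffer& in1, char* buf, unsigned long bufUsed) {
    if (in1.BufferType == SECBUFFER_EXTRA) {
        std::memmove(buf, buf + bufUsed - in1.cbBuffer, in1.cbBuffer);
        return in1.cbBuffer;
    }
    return 0; // everything was consumed
}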

What are those "garbage" 16 bytes at the beginning of an unencrypted EncryptedData tag from an encrypted ws-security SOAP message? (WCF)

I'm inspecting a WCF request message in order to implement part of the WS-Security standard to have iPhone <-> WCF intercommunication (I'm using certificate security over basicHttpBinding).
After reading the xmlenc-core standard I could decrypt both the SignedInfo and the Body tags, but I see 16 bytes at the beginning of both decrypted values that I cannot account for.
I created a sample application according to the standard in order to send requests from the iPhone to a self-hosted WCF service, but it keeps responding "An error occurred when verifying security for the message".
The only thing I don't know how to implement is the handling of those 16 bytes. Does anybody know what to use for them?
Thanks
When using Triple-DES and AES, the ciphertext is prefixed by the IV. So when decrypting, use the first 16 bytes of the value as the IV and then perform the AES-CBC decryption on the remaining bytes. My guess is that you have forgotten this and are therefore decrypting the IV as well (which yields garbage).
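A sketch of that decryption with OpenSSL, assuming AES-128-CBC (match whatever algorithm the EncryptionMethod element actually names). Note that XML Encryption pads with ISO 10126 (last byte = pad length) rather than PKCS#7, so padding is stripped manually here:
#include <openssl/evp.h>
#include <cstdint>
#include <vector>

// data is the raw (base64-decoded) CipherValue; key is the AES key.
std::vector<uint8_t> decrypt_cipher_value(const std::vector<uint8_t>& data,
                                          const uint8_t* key) {
    const uint8_t* iv = data.data();      // first 16 bytes: the IV
    const uint8_t* ct = data.data() + 16; // the rest: real ciphertext
    int ctLen = static_cast<int>(data.size()) - 16;

    std::vector<uint8_t> out(ctLen);
    int len1 = 0, len2 = 0;
    EVP_CIPHER_CTX* ctx = EVP_CIPHER_CTX_new();
    EVP_DecryptInit_ex(ctx, EVP_aes_128_cbc(), nullptr, key, iv);
    EVP_CIPHER_CTX_set_padding(ctx, 0); // XML-Enc padding is not PKCS#7
    EVP_DecryptUpdate(ctx, out.data(), &len1, ct, ctLen);
    EVP_DecryptFinal_ex(ctx, out.data() + len1, &len2);
    EVP_CIPHER_CTX_free(ctx);

    // Strip XML Encryption padding: its length is in the last byte.
    size_t total = static_cast<size_t>(len1 + len2);
    if (total > 0 && out[total - 1] <= 16) total -= out[total - 1];
    out.resize(total);
    return out;
}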