TLS sequence numbers

I'm working on a college paper about TLS, and I am asked why the TLS sequence number counter is a 64-bit number when TLS only uses a 32-bit sequence number in its messages. I've looked around for a while, even checked the RFC, and I have found nothing so far. Can anyone help me?

Looks to me like the question is just plain wrong. TLS uses 64-bit sequence numbers, and these are implicit (i.e. not transmitted as part of TLS messages).
Maybe the original question is confusing SQNs in TLS with SQNs in IPsec: there, 32-bit sequence numbers are included in ESP and AH header fields, but 64-bit extended sequence numbers (ESNs) are permitted by the relevant RFCs.

I take it the following quote from RFC 2246, page 74, first paragraph, fifth sentence is an insufficient answer?
Since sequence numbers are 64 bits long, they should never overflow.

There can be - and often are - differences between the wording of the specification and any particular conforming implementation. English is an imprecise language for algorithm specification.
You fail to specify whether the implementation you are looking at never overflows into bit 33, or if you've just not seen it happen. Claiming that you have seen the counter wrap modulo 2^32 would be a different claim altogether.

Please first understand what you are asking. What is a TLS message? Are you referring to TLS records? TLS uses a 64-bit counter for record messages and this number is not included in the TLS records. It is used implicitly.
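To make "used implicitly" concrete, here is a rough sketch (the record layout and MAC input below are simplified illustrations, not the exact RFC 2246 structures): each side keeps its own 64-bit counter per direction, mixes it into the MAC for every record, then increments it, but never writes it into the bytes that go on the wire.

import java.nio.ByteBuffer

// Rough sketch of an implicit TLS record sequence number. The record header,
// MAC callback and field layout are simplified for illustration; only the
// behaviour of the 64-bit counter itself is the point here.
class RecordWriter(macKey: Array[Byte], mac: (Array[Byte], Array[Byte]) => Array[Byte]) {
  private var seq: Long = 0L                                     // implicit 64-bit counter

  def protect(recordHeader: Array[Byte], fragment: Array[Byte]): Array[Byte] = {
    val seqBytes = ByteBuffer.allocate(8).putLong(seq).array()   // 8 bytes = 64 bits
    val tag = mac(macKey, seqBytes ++ recordHeader ++ fragment)  // the counter is authenticated...
    seq += 1                                                     // ...and bumped once per record,
    recordHeader ++ fragment ++ tag                              // ...but never transmitted itself
  }
}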

Is the Minecraft protocol between servers and clients based on any standard?

TL;DR
I'm trying to talk to a Minecraft server with a client written in Scala, using Akka I/O over TCP, and would like to know if Minecraft uses an official, standardised protocol in its network communication?
Minecraft's own documentation covers the contents of each packet, but fails to explain how the packets themselves are encoded, or even how they should be formed.
A little back story
As part of a personal project that I'm working on, written in Scala, I need to create an interface capable of mocking a Minecraft client, and performing actions against a Minecraft server. After weeks of research, I came across a lot of Java libraries that were almost what I was looking for, but none that quite suited my exact needs; long story short, I did the classic, "Oh well, why not write it myself and enjoy the learning curve"...
The issue
The Minecraft protocol documentation is thorough in some respects, but lacking in others; many assumptions are made throughout, and a lot of key information is missing or even incorrect, a detailed network specification being the most notable gap in my case.
One attempt to talk to the Minecraft server had me playing around with Google's protocol buffers, using ScalaPB to compile them to usable case classes, but the data types were a pain to resolve between Google's own documentation and Minecraft's.
message Handshake {
  <type?> protocolVersion = 1;
  <type?> host = 2;
  <type?> port = 3;
  <type?> nextState = 4;
}
The host is a string, so that's an easy win, but both the protocolVersion and nextState are variable integers, which are not encoded as expected when I compared them with valid packets generated by another client with identical contents (I've been using a third-party library to compare the hexadecimal output of encoded packets).
My hacky solution
In a last-ditch attempt to achieve my goals, I've simply written methods like the one below (this is also a first iteration, so be kind!) to generate the desired encoding for each of the types declared in Minecraft's documentation that are not supported natively in Scala. Although this works, it just smells like I'm missing something potentially obvious that others might know about.
import scala.collection.mutable.ArrayBuffer

def toVarint(x: Int): Array[Byte] = {
  // Emit 7 bits per byte, least-significant group first, setting the high
  // bit on every byte except the last ("more bytes follow" flag).
  var number = x
  val output = ArrayBuffer[Byte]()
  while ((number & ~0x7F) != 0) {          // != 0 (not > 0) so negative values terminate too
    output += ((number & 0x7F) | 0x80).toByte
    number >>>= 7                          // unsigned shift right
  }
  output += number.toByte
  output.toArray
}
This is old, but I'll answer anyway! The wiki you referenced has (at least now) a section on the packet format (both before & after compression), as well as explanations including example code for how to handle their varint & varlong types!
To summarise, each packet is length prefixed with a varint (in both compression modes), so you just need to read a varint from the stream, allocate that much space & read that many bytes from the stream to the buffer.
Each byte of a Minecraft varint has an "another byte to follow?" flag bit, followed by 7 bits of actual data. Those 7 bits are just the bits of a standard int, so on receive you essentially just strip the flag bits and splice the rest into a standard int type.
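As a minimal sketch of that receive side (readVarInt is my own name, and DataInputStream here just stands in for whatever stream or buffer your Akka I/O layer hands you):

import java.io.DataInputStream

// Reads a Minecraft-style VarInt: 7 data bits per byte, least-significant
// group first, with the high bit acting as the "more bytes follow" flag.
def readVarInt(in: DataInputStream): Int = {
  var result = 0
  var shift  = 0
  var more   = true
  while (more) {
    val b = in.readUnsignedByte()              // throws EOFException if the stream ends early
    result |= (b & 0x7F) << shift              // splice in the next 7 data bits
    shift  += 7
    if (shift > 35) throw new IllegalStateException("VarInt is too long")
    more = (b & 0x80) != 0                     // high bit set => another byte follows
  }
  result
}

Reading a whole packet is then just readVarInt for the length, followed by reading exactly that many bytes into a buffer.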

What is the data format for the device address using libMPSSE I2C?

I am attempting to use libMPSSE to perform I2C communications. The example code listed in the attached document connects to a 24LC024H EEPROM device.
The address for the device used in the example, as defined in its documentation, is 1010XXX_ where the X's are configurable. In the example's associated diagram you can see the values are configured to be 1. It also states that the R/W bit (_) should not be included, meaning the address passed to the library should be 10101110. The address actually used in the example code is 0x57, which is 01010111.
I do not see how we got from A to B here. I cannot figure out how to format the address of the device I am trying to communicate with, nor can I find any documentation spelling it out. The only documentation on the address parameter says:
Address of the I2C slave. This is a 7bit value and it
should not contain the data direction bit, i.e. the
decimal value passed should be always less than 128
This is confusing since the data direction bit is usually the LSB.
I was updating my question to clarify what the address should be, and a coincidence in the editor caused the answer to smack me in the face.
By "should not be included" they do not mean that the bit should be zero, but rather that it should be completely nonexistent. To them this means shifting the address bits down by one place to remove it from the LSB position. It also implies that the MSB should always be zero, even though that's not explicitly defined anywhere.

Length of ciphertext produced by RSAES_OAEP_Encryptor?

My use of the Crypto++ library has gone very well, but I have a small question...
If I use RSAES_OAEP_Encryptor & RSAES_OAEP_Decryptor everything is fine. (I'm using a 2048-bit key from PEM files generated by OpenSSL).
My question is this: Will the length of ciphertext produced by encryptor.Encrypt(...) always equal decryptor.FixedCiphertextLength(), or can it be less than that? I only ask as this is in a library used by a number of applications and I need to sanity check parameters.....
BTW. Is there any faster way to encrypt/decrypt using RSA which maintains at least the same level of security provided by OAEP? With a 1024-bit key, on an example test box, averaged over 1000 iterations, I'm finding it takes about 80 µs to encrypt a short string and 1.03 ms (12 times longer) to decrypt; with a 2048-bit key encryption takes 190 µs and decryption 4.3 ms (22 times longer). I know that RSA decryption is slow, but.... the system is running XP Pro SP3/Xeon E5520 and was compiled with VS2008 with /MD rather than /MT. I can't use a shorter key than 2048 bits for compliance reasons...
Many thanks
Nick
Length of ciphertext produced by RSAES_OAEP_Encryptor?
In the case of RSA, I believe FixedPlaintextLength() and FixedCiphertextLength() call MaxPreImage() and MaxImage(). MaxPreImage() and MaxImage(), in turn, return n - 1.
Will the length of ciphertext produced by encryptor.Encrypt(...) always equal decryptor.FixedCiphertextLength(), or can it be less than that?
It depends on the cryptosystem being used. Usually, it's the size of the key that determines whether FixedCiphertextLength() holds (and not the size of the plaintext). In the case of RSA/OAEP and others like ElGamal, I believe it holds.
I think the class of interest here is the PK_CryptoSystem Class Reference. Classes like RSAES_OAEP_Encryptor inherit from PK_CryptoSystem, and that's where FixedCiphertextLength() and friends come from.
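For a rough sanity check, here is the arithmetic behind RSA/OAEP sizes (this is the general PKCS#1 relationship, not a Crypto++ call, and the SHA-1 hash size is an assumption on my part):

// Back-of-the-envelope sizes for RSA/OAEP with an n-bit modulus.
val modulusBits   = 2048
val k             = modulusBits / 8          // modulus size in bytes
val hLen          = 20                       // assuming SHA-1 (20-byte digest) as the OAEP hash
val ciphertextLen = k                        // the ciphertext always occupies the full k bytes
val maxPlaintext  = k - 2 * hLen - 2         // OAEP overhead is 2*hLen + 2 bytes
println(s"ciphertext: $ciphertextLen bytes, max plaintext: $maxPlaintext bytes")  // 256 and 214

So for a 2048-bit key you should always see 256-byte ciphertexts, regardless of how short the plaintext is.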
With a 1024-bit key, on an example test box, averaged over 1000 iterations, I'm finding it takes about 80 µs to encrypt a short string and 1.03 ms (12 times longer) to decrypt; with a 2048-bit key encryption takes 190 µs and decryption 4.3 ms (22 times longer)
This is a different question, but...
In the case of encryption or verification, the public exponent is used. The public exponent is, by default, 65537 (IIRC). That's got a low Hamming weight (a high density of 0's), so square-and-multiply exponentiation routines run relatively fast.
On the other hand, decryption and signing use the private exponent, which should have roughly an even mix of 1's and 0's. There are lots of squares and multiplies to perform, and those take extra time.
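A toy square-and-multiply sketch (nothing like Crypto++'s optimised internals, which also use the CRT, but enough to show why the exponent's bit pattern matters): every exponent bit costs a squaring, and only the 1 bits cost an extra multiply, so a sparse exponent like 65537 (binary: a 1, fifteen 0's, then a 1) is far cheaper than a dense, random private exponent.

// Right-to-left square-and-multiply modular exponentiation.
def modPow(base: BigInt, exponent: BigInt, modulus: BigInt): BigInt = {
  var result = BigInt(1)
  var b = base.mod(modulus)
  var e = exponent
  while (e > 0) {
    if (e.testBit(0)) result = (result * b).mod(modulus)   // extra multiply only on 1 bits
    b = (b * b).mod(modulus)                               // squaring happens for every bit
    e = e >> 1
  }
  result
}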
Taking advantage of those little timing differences is where side channels come from. If you are not careful, the NSA will thank you.
I can't use a shorter key than 2048-bits for compliance reasons
A 2048-bit modulus is about 10x slower than a 1024-bit modulus. It's why so many folks were reluctant to move from 1024-bit, and why 1024-bit is still kind of preferred.
Peter Gutmann has this to say about it in his Engineering Security book (p. 229):
Another example [of broken threat models] occurred with keys in certificates, for which browser vendors (in response to NIST requirements) forced CAs to switch from 1024-bit to 2048-bit keys, with anything still using 1024-bit keys being publicly denounced as insecure. As discussed in “Problems” on page 1, the bad guys didn’t even notice whether their fraudulent certificates were being signed with, or contained, 2048-bit keys or not.

What potential error can be caused by retrieving and changing a value in the same line?

I was asked a question about what would happen if I try to retrieve a reference value and then try to change it within the same line of code. My answer was that nothing will happen, because when I have tried this before I did not encounter any compiler errors (at least in C# or Java).
What is the real answer to this?
Here is an example in pseudocode:
Module main()
    Call changeNumber(10)
End Module

Module changeNumber(Integer Ref number)
    Set number = number * number
    Display number
End Module
(PS. Sorry for not formatting/creating this post correctly. I'm having a bit of an issue here.)
There would be no unusual side effects, if that's what you're asking. The language specifications dictate a specific order of execution (number * number is evaluated, then set to number), which prevents any issues from occurring.
Nothing would happen, in your particular pseudo code.
In reference to a question you asked after this question -
Actually, there could be issues in some rare instances, but it would depend on how space is allocated for the number and on the language you are using. Consider this: you explicitly declare the data type as an int, but the accepted input is a large number (or a large negative number), and you then multiply it by itself. The result can exceed the space allocated for an int, which overflows; in C, signed overflow is undefined behaviour, while higher-level languages may wrap silently or raise a runtime error, and the compiler will not always catch this case for you.
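A small sketch of both points above in plain Scala (which has no "ref" parameters, so a local var stands in for the by-reference integer; the numbers are just illustrative):

var number = 10
number = number * number        // the right-hand side is evaluated first, then assigned
println(number)                 // 100 - no unusual side effects

var big = 100000
big = big * big                 // 10,000,000,000 does not fit in a 32-bit Int...
println(big)                    // ...so it silently wraps to 1410065408 in Scala/Java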

Socket Protocol Fundamentals

Recently, while reading a Socket Programming HOWTO the following section jumped out at me:
But if you plan to reuse your socket for further transfers, you need to realize that there is no "EOT" (End of Transfer) on a socket. I repeat: if a socket send or recv returns after handling 0 bytes, the connection has been broken. If the connection has not been broken, you may wait on a recv forever, because the socket will not tell you that there's nothing more to read (for now). Now if you think about that a bit, you'll come to realize a fundamental truth of sockets: messages must either be fixed length (yuck), or be delimited (shrug), or indicate how long they are (much better), or end by shutting down the connection. The choice is entirely yours, (but some ways are righter than others).
This section highlights 4 possibilities for how a socket "protocol" may be written to pass messages. My question is, what is the preferred method to use for real applications?
Is it generally best to include message size with each message (presumably in a header), as the article more or less asserts? Are there any situations where another method would be preferable?
The common protocols either specify length in the header, or are delimited (like HTTP, for instance).
Keep in mind that this also depends on whether you use TCP or UDP sockets. Since TCP sockets are reliable you can be sure that you get everything you shoved into them. With UDP the story is different and more complex.
These are indeed our choices with TCP. HTTP, for example, uses a mix of the second, third, and fourth options (a double newline ends the request/response headers, which might contain a Content-Length header or indicate chunked encoding, or they might say Connection: close and not give you the content length but expect you to rely on reading until EOF.)
I prefer the third option, i.e. self-describing messages, though fixed-length is plain easy when suitable.
If you're designing your own protocol then look at other people's work first; there might already be something similar out there that you could either use 'as is' or repurpose and adjust. For example, ISO 8583 (financial transactions), HTTP and POP3 all do things differently, but in ways that are proven to work... In fact it's worth looking at these things anyway, as you'll learn a lot about how real-world protocols are put together.
If you need to write your own protocol then, IMHO, prefer length prefixed messages where possible. They're easy and efficient to parse for the receiver but possibly harder to generate if it is costly to determine the length of the data before you begin sending it.
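As a minimal sketch of length-prefixed framing over TCP (using plain java.io streams; the 4-byte big-endian prefix is an arbitrary choice here, not a requirement):

import java.io.{DataInputStream, DataOutputStream}

// Each message is a 4-byte length prefix followed by exactly that many bytes.
def sendMessage(out: DataOutputStream, payload: Array[Byte]): Unit = {
  out.writeInt(payload.length)   // the length prefix
  out.write(payload)             // the payload itself
  out.flush()
}

def receiveMessage(in: DataInputStream): Array[Byte] = {
  val length = in.readInt()      // read the prefix first
  val buffer = new Array[Byte](length)
  in.readFully(buffer)           // blocks until the whole payload has arrived
  buffer
}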
The decision should depend on the data you want to send (what it is, how it is gathered). If the data is fixed length, then fixed-length packets will probably be best. If the data can easily be split into delimited entities (no escaping needed), then delimiting may be good. If you know the data size when you start sending the data piece, then length-prefixing may be even better. If the data sent is always single characters, or even single bits (e.g. "on"/"off"), then anything more than fixed-size one-character messages will be overkill.
Also think about how the protocol may evolve. EOL-delimited strings are good as long as they do not contain EOL characters themselves. Fixed length may be good until the data needs to be extended with optional parts, etc.
I do not know if there is a preferred option. In our real-world situation (client-server application), we use the option of sending the total message length as one of the first pieces of data. It is simple and works for both our TCP and UDP implementations. It makes the logic reasonably "simple" when reading data in both situations. With TCP, the amount of code is fairly small (by comparison). The UDP version is a bit (understatement) more complex but still relies on the size that is passed in the initial packet to know when all data has been sent.