Should I implement a packet delimiter with an AES-encrypted TCP stream?

Recently I have been trying to design a custom protocol for IoT devices with Python Twisted. Since most IoT devices (end nodes) are not powerful enough to support TLS, I have to implement a modified lightweight transport security layer. However, AES-128/256 is used for data encryption, the same as in standard TLS.
It is well known that TCP is a stream, so delivered messages need delimiters in the TCP stream. In text-oriented protocols like HTTP/FTP, \r\n is used. For binary messages, we should define our own structure, such as TLV, where V stands for the payload data. That can be implemented with Netty in Java or Twisted in Python.
When the TCP payload is encrypted, things become complicated. In theory, the plaintext should be encrypted with AES and then carried on the TCP stream. That is easy for text messages, but for binary messages the Type/Length fields are encrypted as well. Furthermore, AES is a block-oriented algorithm, which means part of one message may need to be combined with the next message before it can be encrypted/decrypted. Although AES could act as a transparent transport, the slicing and parsing of binary messages is hard to implement.
There is another way: we can leave the T and L fields of the TLV as plaintext while keeping V encrypted with AES. But it only works for binary data, and it needs AES padding as well.
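For concreteness, a rough sketch of this second approach in Python (using PyCryptodome; the one-byte type field, the helper names and the frame layout are only illustrative, not a finished protocol) could look like:

import os, struct
from Crypto.Cipher import AES
from Crypto.Util.Padding import pad, unpad

def encode_frame(msg_type, payload, key):
    iv = os.urandom(16)                      # fresh IV per message
    ct = AES.new(key, AES.MODE_CBC, iv).encrypt(pad(payload, AES.block_size))
    body = iv + ct
    # T and L stay in the clear so the receiver can slice the stream
    return struct.pack(">BI", msg_type, len(body)) + body

def decode_frame(body, key):
    # called after the receiver has already read the plaintext T and L fields
    iv, ct = body[:16], body[16:]
    return unpad(AES.new(key, AES.MODE_CBC, iv).decrypt(ct), AES.block_size)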
Are there any suggestions or references, including code or projects? Thanks!

AES is a block cipher, so read data in chunks of the block size. On the transmitter side, your AES encryption code should add padding as necessary; that should be part of the AES implementation. (I would recommend that you use AES-CBC for this application.) On the receiver, just read data in chunks of the block size.
In Java I would use a DataInputStream from my socket and do a readFully().
import java.io.DataInputStream;
import java.net.Socket;

Socket socket;  // assume the socket is already connected
DataInputStream ins = new DataInputStream(socket.getInputStream());
while (connectionOpen) {
    byte[] aes = new byte[blockSize];
    ins.readFully(aes);
    // Decrypt received data in aes
    ...
}
Since you use readFully() above, you ensure you always receive data of the correct size, i.e. the block size.
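A rough Python equivalent of the readFully() idea is a small helper that loops until exactly one block has arrived (plain blocking sockets shown here, with sock assumed to be an already-connected socket; with Twisted you would buffer inside dataReceived instead):

def recv_exact(sock, n):
    # socket.recv may return fewer bytes than requested, so keep reading
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed the connection")
        buf += chunk
    return buf

block = recv_exact(sock, 16)  # one AES block, to be decrypted as in the Java loop above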

Related

Do I need multiple EVP_CIPHER_CTX structures?

I have a single-threaded client/server application that needs to do both encryption and decryption of their network communication. I plan on using OpenSSL's EVP API and AES-256-CBC.
Some sample pseudo-code I found from a few examples:
// key is 256 bits (32 bytes) when using EVP_aes_256_*()
// I think iv is the same size as the block size, 128 bits (16 bytes)...is it?
1: EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
2: EVP_CipherInit_ex(ctx, EVP_aes_256_cbc(), NULL, key, iv, 1); //0=decrypt, 1=encrypt
3: EVP_CipherUpdate(ctx, outbuf, &outlen, inbuf, inlen);
4: EVP_CipherFinal_ex(ctx, outbuf + outlen, &tmplen);
5: outlen += tmplen;
6: EVP_CIPHER_CTX_cleanup(ctx);
7: EVP_CIPHER_CTX_free(ctx);
The problem is that, from all these examples, I'm not sure what needs to be done at every encryption/decryption and what I should only do once on startup.
Specifically:
At line 1, do I create this EVP_CIPHER_CTX just once and keep re-using it until the application ends?
Also at line 1, can I re-use the same EVP_CIPHER_CTX for both encryption and decryption, or am I supposed to create 2 of them?
At line 2, should the IV be re-set at every packet I'm encrypting? Or do I set the IV just once, and then let it continue forever?
What if I'm encrypting UDP packets, where a packet can easily go missing or be received out-of-order: am I correct in thinking CBC won't work, or is this where I need to reset the IV at the start of every packet I send out?
Sorry for reviving an old thread, but I noticed the following error in the accepted answer:
At line 1, do I create this EVP_CIPHER_CTX just once and keep re-using it until the application ends?
You create it once per use. That is, as you need to encrypt, you use the same context. If you need to encrypt a second stream, you would use a second context. If you needed to decrypt a third stream, you would use a third context.
Also at line 1, can I re-use the same EVP_CIPHER_CTX for both encryption and decryption, or am I supposed to create 2 of them?
No, see above.
This is not necessary. From the man page for OpenSSL:
New code should use EVP_EncryptInit_ex(), EVP_EncryptFinal_ex(), EVP_DecryptInit_ex(), EVP_DecryptFinal_ex(),
EVP_CipherInit_ex() and EVP_CipherFinal_ex() because they can reuse an existing context without allocating and freeing it up on each call.
In other words, you need to re-initialize the context each time before you use it, but you can certainly use the same context over and over again without creating (allocating) a new one.
I have a single-threaded client/server application that needs to do both encryption and decryption of their network communication. I plan on using OpenSSL's EVP API and AES-256-CBC.
If you are using the SSL_* functions from libssl, then you will likely never touch the EVP_* APIs.
At line 1, do I create this EVP_CIPHER_CTX just once and keep re-using it until the application ends?
You create it once per use. That is, as you need to encrypt, you use the same context. If you need to encrypt a second stream, you would use a second context. If you needed to decrypt a third stream, you would use a third context.
Also at line 1, can I re-use the same EVP_CIPHER_CTX for both encryption and decryption, or am I supposed to create 2 of them?
No, see above.
The ciphers will have different states.
At line 2, should the IV be re-set at every packet I'm encrypting? Or do I set the IV just once, and then let it continue forever?
No. You set the IV once and then forget about it. That's part of the state the context object manages for the cipher.
What if I'm encrypting UDP packets, where a packet can easily go missing or be received out-of-order: am I correct in thinking CBC won't work...
If you are using UDP, it's up to you to detect these sorts of problems. You'll probably end up reinventing TCP.
Encryption alone is usually not enough. You also need to ensure authenticity and integrity. You don't operate on data that's not authentic. That's what keeps getting SSL/TLS and SSH in trouble.
For example, here's the guy who wrote the seminal paper on authenticated encryption with respect to IPSec, SSL/TLS and SSH weighing in on the Authenticate-then-Encrypt (AtE) scheme used by SSL/TLS: Last Call: (Encrypt-then-MAC for TLS and DTLS) to Proposed Standard:
The technical results in my 2001 paper are correct but the conclusion regarding SSL/TLS is wrong. I assumed that TLS was using fresh IVs and that the MAC was computed on the encoded plaintext, i.e. Encode-Mac-Encrypt while TLS is doing Mac-Encode-Encrypt which is exactly what my theoretical example shows is insecure.
For authenticity, you should forgo CBC mode and switch to GCM mode. GCM is an authenticated encryption mode, and it combines confidentiality and authenticity into one mode so you don't have to combine primitives (like AES/CBC with an HMAC).
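Purely as an illustration of that recommendation (using Python's cryptography package rather than the OpenSSL C API from the question), an AEAD mode bundles confidentiality and the authentication tag; the 12-byte nonce must be unique per message under a given key:

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(key)
nonce = os.urandom(12)                             # never reuse a nonce with the same key
ct = aesgcm.encrypt(nonce, b"packet payload", b"associated header data")
pt = aesgcm.decrypt(nonce, ct, b"associated header data")  # raises InvalidTag on tampering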
or is this where I need to reset the IV at the start of every packet I send out?
No, you set the IV once and then forget about it.
The problem is that, from all these examples, I'm not sure what needs to be done at every encryption/decryption and what I should only do once on startup.
Create this once: EVP_CIPHER_CTX
Call this once for setup: EVP_CipherInit
Call this as many times as you'd like: EVP_CipherUpdate
Call this once for cleanup: EVP_CipherFinal
The OpenSSL wiki has quite a few examples of using the EVP_* interfaces. See EVP Symmetric Encryption and Decryption, EVP Authenticated Encryption and Decryption and EVP Signing and Verifying.
All the examples use the same pattern: Init, Update and then Final. It does not matter if it's encryption or hashing.
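For comparison only, the same Init/Update/Final shape shows up in Python's cryptography package (this is an analogy, not the EVP_* C API itself):

import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key, iv = os.urandom(32), os.urandom(16)
enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()  # "Init"
ct = enc.update(b"0123456789abcdef")                          # "Update", call as often as needed
ct += enc.update(b"0123456789abcdef")
ct += enc.finalize()                                          # "Final", once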
Related: this should be of interest to you: EVP Authenticated Encryption and Decryption. It's sample code from the OpenSSL wiki.
Related: you can find copies of Viega, Messier and Chandra's Network Security with OpenSSL online. You might consider hunting down a copy and getting familiar with some of its concepts.
Sorry for reviving an old thread too, but LazerSharks asked twice about the EVP cipher context in the comments. I don't have enough reputation points to add comments, which is why I'm giving an answer here. (Even now, a Google search doesn't turn up the needed information.)
From the "Network Security with OpenSSL" book by Pravir Chandra, Matt Messier, John Viega:
Before we can begin encrypting or decrypting, we must allocate and initialize a cipher context. The cipher context is a data structure that keeps track of all relevant state for the purposes of encrypting or decrypting data over a period of time. For example, we can have multiple streams of data encrypted in CBC mode. The cipher context will keep track of the key associated with each stream and the internal state that needs to be kept between messages for CBC mode. Additionally, when encrypting with a block-based cipher mode, the context object buffers data that doesn't exactly align to the block size until more data arrives, or until the buffer is explicitly flushed, at which point the data is usually padded as appropriate.
The generic cipher context type is EVP_CIPHER_CTX. We can initialize one, whether it was allocated dynamically or statically, by calling EVP_CIPHER_CTX_init, like so:
EVP_CIPHER_CTX *x = (EVP_CIPHER_CTX *)malloc(sizeof(EVP_CIPHER_CTX));
EVP_CIPHER_CTX_init(x);
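The buffering behavior the book describes is easy to see from Python's cryptography package as well (shown here only as an illustration, not the C API): partial blocks are held back until a full block is available.

import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

enc = Cipher(algorithms.AES(os.urandom(32)), modes.CBC(os.urandom(16))).encryptor()
print(len(enc.update(b"x" * 10)))   # 0  -- less than one 16-byte block, so it is buffered
print(len(enc.update(b"x" * 10)))   # 16 -- a full block is now available and emitted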
Just to complement bacchuswng's answer: I have recently been experimenting with LibreSSL, and it seems there is no problem with reusing an EVP_CIPHER_CTX, but you need to make sure you call EVP_CIPHER_CTX_reset or EVP_CIPHER_CTX_cleanup before starting another encryption/decryption. This is because EVP_EncryptInit / EVP_DecryptInit will clear the context's memory, effectively leaking the cipher_data the context was previously using.
Some ciphers don't need to allocate cipher_data, but the one I was testing (EVP_aes_128_gcm) does need 680 bytes, which can grow out of control quite rapidly.
I can't tell if this is the same behavior with OpenSSL but since documentation for this library is way too hard to come by, I figured I'd share this little quirk (bug perhaps?).

How do I retrieve file data over a socket in Go?

I've got two small programs communicating nicely over a socket, where the receiving side is in Go. Everything works peachy when my messages are tiny enough to fit in the 1024-byte buffer and can be received in a single Read from the connection, but now I want to transfer data for an image that is 100k or more. I'm assuming the correct solution is not to increase the buffer until any image can fit inside it.
Pseudo-go:
var buf = make([]byte,1024)
conn, err := net.Dial("tcp", ":1234")
for {
r, err := conn.Read(buf[0:])
go readHandler(string(buf[0:r]),conn)
}
How can I improve my socket read routine to accept both simple messages of a few bytes and also larger data? Bonus points if you can turn the total image data into an io.Reader for use in image.Decode.
I have no direct experience with TCP in Go, but it seems to me that you have fallen victim to a quite typical misunderstanding of the guarantees TCP offers.
The thing is, in contrast with, say, UDP and SCTP, TCP does not have the concept of message boundaries because it is stream-oriented. That means TCP transports opaque streams of bytes, and you have very little control over how that stream is "chunked" on the receiving side.
I suspect what you observe as "sending a 100k+ message" is the runtime/network library on the sender side "deceiving" you by consuming your "message" into its internal buffers and then streaming it in whatever chunks the OS's TCP stack allows (on ubiquitous hardware/software it's usually about 8k). The sizes of the pieces in which the receiver gets that stream are completely undefined; the only thing defined is the ordering of the bytes in the stream, which is preserved.
Hence it might turn out that you have to reconsider your approach to receiving data. The exact approach varies depending on the nature of the data being streamed:
The easiest way (if you have control over the application-level protocol) is to pass the length of the following "message payload" in a special length field of fixed format. Then de-streaming a whole message is a two-step process: 1) read enough bytes to get the length field, decode it, and check the value for sanity, then 2) read that many following bytes and be done with it.
If you have no control over the app-level protocol, parsing messages becomes more involved and usually requires some sort of complicated state machine.
For more info, look at this and this.
You can use io.ReadFull to read a []byte of a specific length. This assumes that you know beforehand how many bytes you need to read.
As for image.Decode, it should be possible to pass the conn directly to the image.Decode function. This assumes that you do not perform any reads from the connection until the image is decoded.
Your code
for {
r, err := conn.Read(buf[0:])
go readHandler(string(buf[0:r]),conn)
}
seems to be suggesting that the goroutine you are starting is reading from conn. This doesn't seem like a good idea, because you will end up having multiple concurrent reads from the connection (without any control over the order in which the reads happen): one in the for loop, another one in readHandler.

Erlang gen_tcp:recv data length

I use gen_tcp:recv(Socket, 0) for receiving data, but I can only receive 1418 bytes at a time. How can I receive as much data as was sent?
In gen_tcp:recv(Socket, 0) you are asking the kernel: "Give me all the data that is available right now in the receive buffer." The kernel is also free to give you less, however. Even on a rather fast link, you will probably hit slow start on the TCP connection, so in the beginning you will not get much data.
The solution is to do your own buffering. You will have to eat data from the underlying socket until you have enough to construct a message. It is quite common for binary protocols to implement their own kind of messaging on top of the stream due to this.
For the longer term record: A common message format is to encode a message as:
decode(Bin) when is_binary(Bin) ->
    <<Len:32/integer, Rest/binary>> = Bin,
    <<Payload:Len/binary, Remaining/binary>> = Rest,
    {msg, {Len, Payload}, Remaining}.
That is, messages are 4 bytes representing a 32-bit big-endian integer, followed by the payload, where the length of the payload is given by the integer. This format, and others like it, are so common that Erlang includes optimized parsers for it directly in the C layer. To get access to these, you set options on the socket through inet:setopts/2; in our case we set {packet, 4}. Then we can get messages by setting {active, once} on the socket and waiting for the next message. When it arrives, we can set {active, once} again on the socket to get the next message, and so on. There is an example in the documentation of gen_tcp (erl -man gen_tcp if you have the Erlang man pages installed appropriately).
Other common formats are ASN.1 or even HTTP headers(!).
Tricks
It is often beneficial to create a separate process that can encode and decode your message format and then pass the data on to the rest of the system. Usually a good solution in Erlang is to demux incoming data as fast as possible and hand it to a process which can then handle the rest of the problem.

Using Crypto++ on the iPhone SDK with PyCrypto on App Engine

I'm trying to encrypt HTTP requests using Crypto++ and decrypt them with PyCrypto on the App Engine server end. Using ARC4 encryption, I can successfully encrypt and decrypt on the iPhone end, but when I try decrypting on App Engine, the result is garbled. The ciphertext after encrypting on the client is the same as the text received on the server when I check logging, so if they are visually the same, why would decrypting fail?
I thought maybe it has something to do with the encoding of the NSString, as I find I need to call encode() on the ciphertext on the server end before decrypting, just to avoid decrypt() failing when it attempts to encode the ciphertext as ASCII. I have a separate post that delves a bit into this. Can anyone offer some advice?
crypto++ / pycrypto with google app engine
Update:
I have discovered that the ciphertext resulting from encrypting in C++ with Crypto++ is not the same as the ciphertext from encrypting in Python with PyCrypto. Could there be something I'm doing wrong when initializing the keys? I do something like:
ARC4::Encryption enc("a");
in C++. And in Python I do:
testobj=ARC4.new('a')
The %-encoded resulting ciphertext is different in C++ than in Python. I noticed that in C++ I can pass a second parameter for the key length, which I guessed should be 1 for "a", and that results in a different ciphertext than when passing no parameter. The %-encoded result was still different from the Python encoding, though.
Does anything look particularly amiss with my init perhaps?
I've discovered that the problem was not with the initialization of either crypto implementation, but rather with mistakenly trying to stuff the encrypted ciphertext into an NSString, which can't simply take raw binary data with no particular encoding. The trick was to encode the data in base64 or base16 so that it is readable, then use unhexlify on the server end before decrypting.
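On the App Engine side, that decoding step could look roughly like this with PyCrypto (the base16/hex variant; decrypt_request is just an illustrative helper name, and the key "a" mirrors the example above):

from binascii import unhexlify
from Crypto.Cipher import ARC4

def decrypt_request(hex_ciphertext, key=b"a"):
    raw = unhexlify(hex_ciphertext)     # undo the transport-safe base16 encoding
    return ARC4.new(key).decrypt(raw)   # RC4 decryption is the same operation as encryption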

Socket Protocol Fundamentals

Recently, while reading a Socket Programming HOWTO the following section jumped out at me:
But if you plan to reuse your socket for further transfers, you need to realize that there is no "EOT" (End of Transfer) on a socket. I repeat: if a socket send or recv returns after handling 0 bytes, the connection has been broken. If the connection has not been broken, you may wait on a recv forever, because the socket will not tell you that there's nothing more to read (for now). Now if you think about that a bit, you'll come to realize a fundamental truth of sockets: messages must either be fixed length (yuck), or be delimited (shrug), or indicate how long they are (much better), or end by shutting down the connection. The choice is entirely yours, (but some ways are righter than others).
This section highlights 4 possibilities for how a socket "protocol" may be written to pass messages. My question is, what is the preferred method to use for real applications?
Is it generally best to include message size with each message (presumably in a header), as the article more or less asserts? Are there any situations where another method would be preferable?
The common protocols either specify length in the header, or are delimited (like HTTP, for instance).
Keep in mind that this also depends on whether you use TCP or UDP sockets. Since TCP sockets are reliable you can be sure that you get everything you shoved into them. With UDP the story is different and more complex.
These are indeed our choices with TCP. HTTP, for example, uses a mix of the second, third, and fourth options (a double newline ends the request/response headers, which might contain a Content-Length header or indicate chunked encoding, or they might say Connection: close and not give you the content length, expecting you to rely on reading until EOF).
I prefer the third option, i.e. self-describing messages, though fixed-length is plain easy when suitable.
If you're designing your own protocol then look at other people's work first; there might already be something similar out there that you could either use as-is or repurpose and adjust. For example, ISO-8583 for financial transactions, HTTP, and POP3 all do things differently, but in ways that are proven to work... In fact it's worth looking at these anyway, as you'll learn a lot about how real-world protocols are put together.
If you need to write your own protocol then, IMHO, prefer length prefixed messages where possible. They're easy and efficient to parse for the receiver but possibly harder to generate if it is costly to determine the length of the data before you begin sending it.
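To make the length-prefixed option concrete, a minimal Python sketch (a 4-byte big-endian length header; send_msg, recv_msg and recv_all are hypothetical helper names) might look like:

import struct

def send_msg(sock, payload):
    sock.sendall(struct.pack(">I", len(payload)) + payload)

def recv_all(sock, n):
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("connection closed mid-message")
        buf += chunk
    return buf

def recv_msg(sock):
    (length,) = struct.unpack(">I", recv_all(sock, 4))  # read the fixed-size header first
    return recv_all(sock, length)                       # then read exactly that many bytes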
The decision should depend on the data you want to send (what it is, and how it is gathered). If the data is fixed length, then fixed-length packets will probably be best. If the data can easily be split into delimited entities (no escaping needed), then delimiting may be good. If you know the data size when you start sending a piece of data, then length-prefixing may be even better. If the data sent is always single characters, or even single bits (e.g. "on"/"off"), then anything other than fixed-size one-character messages would be overkill.
Also think how the protocol may evolve. EOL-delimited strings are good as long as they do not contain EOL characters themselves. Fixed length may be good until the data may be extended with some optional parts, etc.
I do not know if there is a preferred option. In our real-world situation (client-server application), we use the option of sending the total message length as one of the first pieces of data. It is simple and works for both our TCP and UDP implementations. It makes the logic reasonably "simple" when reading data in both situations. With TCP, the amount of code is fairly small (by comparison). The UDP version is a bit (understatement) more complex but still relies on the size that is passed in the initial packet to know when all data has been sent.