mina 2.0.13 + websocket + buffer.capacity

Background:
I am sending data over a MINA web socket (https://issues.apache.org/jira/browse/DIRMINA-907), which already includes the patch for the buffer issue.
I usually send JSON messages of around 50 bytes through the web socket; sometimes they go up to 70 bytes.
The problem is:
Initially, IoBuffer.capacity() is 2048, and it shrinks over successive reads: 2048 -> 2048 -> 1024 -> 1024 -> 512 -> 512 -> 256 -> 256 -> 128 -> 128 -> 64 -> 64 -> 64 -> 64.
Once the buffer has shrunk, a 70-byte JSON message is split into two messages in messageReceived(IoSession session, Object message). Is there any way I can solve this problem?
I can store the incomplete message, but that raises other issues, such as receiving 2 JSON messages at once, or 1 valid JSON message followed by 1 invalid fragment.
Thanks.

I'm assuming you are using TCP with MINA. When a big message is sent over TCP, it gets split into smaller chunks to fit into packets. TCP makes sure that the data arrives in the correct order at its destination.
What's happening is that your JSON message gets split while being sent over the network. Even though it arrives at the destination in order, you need to put the chunks back together yourself.
The MINA user guide has a good example of how to accomplish this; it should help you out. You can find it here.
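MINA's answer to this is a cumulative decoder (the user guide example builds on CumulativeProtocolDecoder), but the idea is language-independent. Here is a minimal Python sketch, assuming the sender delimits JSON messages with a newline; handle() is a hypothetical application callback:

import json

buffer = b""

def on_bytes(chunk):
    # Accumulate raw TCP bytes; emit each complete newline-delimited
    # JSON message. Bytes after the last newline stay buffered until
    # the next chunk arrives.
    global buffer
    buffer += chunk
    while b"\n" in buffer:
        line, buffer = buffer.split(b"\n", 1)
        handle(json.loads(line))  # handle() is the application's callback (hypothetical)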

Finally, I solved it by adding
acceptor.getSessionConfig().setMinReadBufferSize(2048);
to my code.
Setting the minimum read buffer prevents the message from being split into 2 pieces.
I know that it is not a perfect solution, but my messages won't be larger than 2 KB.
jython234's suggested solution does not fit my need.

Related

TCP connection and a different buffer size for a client and a server

What will happen if I establish a connection between a client and a server and configure a different buffer size for each of them?
This is my client's code:
import socket,sys
TCP_IP = sys.argv[1]
TCP_PORT = int(sys.argv[2])
BUFFER_SIZE = 1024
MESSAGE = "World! Hello, World!"
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((TCP_IP, TCP_PORT))
s.send(MESSAGE)
data = s.recv(BUFFER_SIZE)
s.close()
print "received data:", data
Server's code:
import socket,sys
TCP_IP = '0.0.0.0'
TCP_PORT = int(sys.argv[1])
BUFFER_SIZE = 5
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind((TCP_IP, TCP_PORT))
s.listen(1)
while True:
    conn, addr = s.accept()
    print 'New connection from:', addr
    while True:
        data = conn.recv(BUFFER_SIZE)
        if not data: break
        print "received:", data
        conn.send(data.upper())
    conn.close()
Does that mean I will be limited to only 5 bytes, so I won't be able to receive the full packet and will lose 1024 - 5 bytes?
Or does it mean I am able to get only packets of 5 bytes, which means that instead of receiving one packet of 1024 bytes as the client sent it, I'd have to divide 1024 by 5 and get 204.8 packets (?), which sounds impossible.
What, in general, is happening in that code?
Thanks.
Your arguments are based on the assumption that a single send should match a single recv. But this is not the case. TCP is a byte stream, not a message-based protocol. This means all that matters are the transferred bytes, and it does not matter whether one or 10 recv calls are needed to read 50 bytes.
Apart from that, send is not guaranteed to send the full buffer either. It might only send part of the buffer, i.e. the sender actually needs to check the return code to find out how much of the given buffer was sent now and how much needs to be retried later.
And note that the underlying "packet" is again a different thing. A send of 2000 bytes will usually need multiple packets (depending on the maximum transmission unit of the underlying data link layer). But this does not mean that one also needs multiple recv calls. If all 2000 bytes have already been transferred to the OS-level receive buffer at the recipient, they can be read at once, even though they traveled in multiple packets.
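To make the partial-send point concrete, here is a minimal Python sketch of a send loop that checks the return code (Python's built-in socket.sendall() does essentially this for you):

def send_all(sock, data):
    # send() may hand only part of the buffer to the OS, so loop
    # until every byte has been accepted.
    sent = 0
    while sent < len(data):
        n = sock.send(data[sent:])
        if n == 0:
            raise ConnectionError("socket connection broken")
        sent += n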
Your socket won't lose the remaining 1024 - 5 (1019) bytes. They are just stored in the socket's receive buffer, ready to be read again, so all you need to do is read from the socket again. The size of the buffer you read into is up to you, and you are not limited to 5 bytes; you are just limiting each single read to 5 bytes. So for 1024 bytes you have to read 204 times, plus one more read for the final 4 bytes. Remember that a read may return fewer bytes than the buffer size, and an empty result means the peer has closed the connection and no more bytes are available.
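As a sketch of what that looks like for the server above, assuming the client actually sent 1024 bytes: with a 5-byte read buffer, the server still collects everything, just across many recv() calls:

received = b""
while len(received) < 1024:
    chunk = conn.recv(5)   # at most 5 bytes per call
    if not chunk:          # empty result: peer closed the connection
        break
    received += chunk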

SwiftNIO: Sent package partially received

I have developed a client and a server using SwiftNIO. I have no problem sending packages of any size between 12 and 1000 bytes, but when the server sends a pack of 528 bytes, the client receives only 512 bytes. I'm trying to figure out why this happens. Does anyone know if there is any way to set a minimum ByteBuffer capacity, or am I missing something?
Thanks to all.
Assuming you're using TCP (that is, using ClientBootstrap), you cannot expect that the boundaries of messages sent by the server will be reflected in your reads. TCP is "stream-oriented": this means that the messages don't have boundaries at all, they behave just like a stream of data. In the NIO case, that means you would expect to see another read shortly after that contains more data.
The initial ByteBuffer capacity used for reads is controlled by the RecvByteBufferAllocator used by the Channel. This can be overridden:
ClientBootstrap(group: group)
    .channelOption(ChannelOptions.recvAllocator,
                   value: AdaptiveRecvByteBufferAllocator(minimum: 1024, initial: 1024, maximum: 65536))
The standard defaults for the AdaptiveRecvByteBufferAllocator in NIO 2.23.0 are a minimum size of 64 bytes, an initial size of 1024 bytes, and a maximum size of 65536 bytes. In general we don't recommend overriding these defaults unless you need to: for TCP NIO will ensure the buffer is appropriately sized for the reads we're seeing.

Examine data in callout driver for FWPM_LAYER_EGRESS_VSWITCH_TRANSPORT_V4 layer in WFP

I am writing a callout driver for Hyper-V 2012, where I need to filter packets sent from virtual machines.
I added a filter at the FWPM_LAYER_EGRESS_VSWITCH_TRANSPORT_V4 layer in WFP. The callout function receives a packet buffer, which I typecast to NET_BUFFER_LIST. I do the following to get the data pointer:
pNetBuffer = NET_BUFFER_LIST_FIRST_NB((NET_BUFFER_LIST*)pClassifyData->pPacket);
pContiguousData = NdisGetDataBuffer(pNetBuffer, NET_BUFFER_DATA_LENGTH(pNetBuffer), 0, 1, 0);
I have a simple client-server application to test the packet data. The client is on a VM and the server is another machine. As I observed, data sent from client to server is truncated and some garbage is added at the end. There is no issue sending messages from server to client. If I don't add this layer filter, the client and server work without any issue.
The callout function receives metadata which includes ipHeaderSize and transportHeaderSize. Both of these values are zero. Are these correct values, or should they be non-zero?
Can somebody help me extract the data from the packet in the callout function and forward it safely to further layers?
Thank you.
These are TCP packets. I looked into the size and offset information; the problem seems consistent across packets.
I checked below values in (NET_BUFFER_LIST*)pClassifyData->pPacket.
NET_BUFFER_LIST -> NetBufferListHeader -> NetBufferListData -> FirstNetBuffer -> NetBufferHeader -> NetBufferData -> CurrentMdl -> MappedSystemVa
Only the first 24 bytes are sent correctly; the rest is garbage.
For example, the total size of the packet is 0x36 + 0x18 = 0x4E. I don't know what is in the first 0x36 bytes, which are constant for all packets. Is it a TCP/IP header? (0x36 is 54 bytes, which matches a 14-byte Ethernet header plus a 20-byte IPv4 header plus a 20-byte TCP header.) The second part, 0x18 bytes, is the actual data I sent.
I even tried the NdisQueryMdl() API to retrieve data from the MDL list.
So on the receiver side I get only 24 correct bytes and the rest is garbage. How do I read the full buffer from a NET_BUFFER_LIST?

How much data to receive from server in SSL handshake before calling InitializeSecurityContext?

In our Windows C++ application I am using InitializeSecurityContext() on the client side to open an SChannel connection to a server which is running the stunnel SSL proxy. My code now works, but only with a hack I would like to eliminate.
I started with this sample code: http://msdn.microsoft.com/en-us/library/aa380536%28v=VS.85%29.aspx
In the sample code, look at SendMsg and ReceiveMsg. The first 4 bytes of any message sent or received indicate the message length. This is fine for the sample, where the server portion conforms to the same convention.
stunnel does not seem to use this convention. When the client is receiving data during the handshake, how does it know when to stop receiving and make another call to InitializeSecurityContext()?
This is how I structured my code, based on what I could glean from the documentation:
1. call InitializeSecurityContext which returns an output buffer
2. Send output buffer to server
3. Receive response from server
4. call InitializeSecurityContext(server_response) which returns an output buffer
5. if SEC_E_INCOMPLETE_MESSAGE, go back to step 3,
if SEC_I_CONTINUE_NEEDED go back to step 2
I expected InitializeSecurityContext in step 4 to return SEC_E_INCOMPLETE_MESSAGE if not enough data was read from the server in step 3. Instead, I get SEC_I_CONTINUE_NEEDED but an empty output buffer. I have experimented with a few ways to handle this case (e.g. going back to step 3), but none seemed to work and, more importantly, I do not see this behavior documented.
In step 3 if I add a loop that receives data until a timeout expires, everything works fine in my test environment. But there must be a more reliable way.
What is the right way to know how much data to receive in step 3?
SChannel is different from the Negotiate security package. You need to receive at least 5 bytes, which is the SSL/TLS record header size:
struct {
    ContentType type;
    ProtocolVersion version;
    uint16 length;
    opaque fragment[TLSPlaintext.length];
} TLSPlaintext;
ContentType is 1 byte, ProtocolVersion is 2 bytes, and the record length is 2 bytes. Once you read those 5 bytes, SChannel will return SEC_E_INCOMPLETE_MESSAGE and will tell you exactly how many more bytes to expect:
SEC_E_INCOMPLETE_MESSAGE
Data for the whole message was not read from the wire.
When this value is returned, the pInput buffer contains a SecBuffer structure with a BufferType member of SECBUFFER_MISSING. The cbBuffer member of SecBuffer contains a value that indicates the number of additional bytes that the function must read from the client before this function succeeds.
Once you get this output, you know exactly how much to read from the network.
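As a language-neutral illustration (a Python sketch, not SChannel API code), this is what reading exactly one TLS record using that 5-byte header looks like:

import struct

def recv_exact(sock, n):
    # Loop on recv() until exactly n bytes have arrived.
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("connection closed mid-record")
        buf += chunk
    return buf

def read_tls_record(sock):
    # Record header: type (1 byte), version (2 bytes), length (2 bytes),
    # all big-endian.
    header = recv_exact(sock, 5)
    _content_type, _version, length = struct.unpack("!BHH", header)
    return header + recv_exact(sock, length)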
I found the problem.
I found this sample:
http://www.codeproject.com/KB/IP/sslsocket.aspx
I was missing the handling of SECBUFFER_EXTRA (line 987 of SslSocket.cpp).
The SChannel SSP returns SEC_E_INCOMPLETE_MESSAGE from both InitializeSecurityContext and DecryptMessage when not enough data is read.
A SECBUFFER_MISSING message type is returned from DecryptMessage with a cbBuffer value of the amount of desired bytes.
But in practice, I did not use the "missing data" value. The documentation indicates the value is not guaranteed to be correct, and is only a hint that developers can use to reduce calls.
From the InitializeSecurityContext MSDN doc:
While this number is not always accurate, using it can help improve performance by avoiding multiple calls to this function.
So I unconditionally read more data into the same buffer whenever SEC_E_INCOMPLETE_MESSAGE was returned, reading multiple bytes at a time from the socket.
Some extra input-buffer management was required to append newly read data and keep the lengths right. DecryptMessage will modify the input buffers' cbBuffer fields when it fails, which surprised me.
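The loop I ended up with has roughly this shape. The Python below is only a sketch, and try_decrypt() is a hypothetical stand-in for the InitializeSecurityContext / DecryptMessage call, not a real binding:

buf = b""
while True:
    status, consumed = try_decrypt(buf)   # hypothetical wrapper around the SSPI call
    if status == "SEC_E_INCOMPLETE_MESSAGE":
        chunk = sock.recv(4096)           # read more; the chunk size is arbitrary
        if not chunk:
            raise ConnectionError("peer closed mid-record")
        buf += chunk                      # append and keep the lengths right
        continue
    buf = buf[consumed:]                  # keep any unconsumed (extra) bytes
    break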
Printing out the buffers and return result after calling InitializeSecurityContext shows the following:
read socket:bytes(5).
InitializeSecurityContext:result(80090318). // SEC_E_INCOMPLETE_MESSAGE
inBuffers[0]:type(2),bytes(5).
inBuffers[1]:type(0),bytes(0). // no indication of missing data
outBuffer[0]:type(2),bytes(0).
read socket:bytes(74).
InitializeSecurityContext:result(00090312). // SEC_I_CONTINUE_NEEDED
inBuffers[0]:type(2),bytes(79). // notice 74 + 5 from before
inBuffers[1]:type(0),bytes(0).
outBuffer[0]:type(2),bytes(0).
And for the DecryptMessage function, input is always in dataBuf[0], with the rest zeroed.
read socket:bytes(5).
DecryptMessage:len 5, bytes(17030201). // SEC_E_INCOMPLETE_MESSAGE
DecryptMessage:dataBuf[0].BufferType 4, 8 // notice input buffer modified
DecryptMessage:dataBuf[1].BufferType 4, 8
DecryptMessage:dataBuf[2].BufferType 0, 0
DecryptMessage:dataBuf[3].BufferType 0, 0
read socket:bytes(8).
DecryptMessage:len 13, bytes(17030201). // SEC_E_INCOMPLETE_MESSAGE
DecryptMessage:dataBuf[0].BufferType 4, 256
DecryptMessage:dataBuf[1].BufferType 4, 256
DecryptMessage:dataBuf[2].BufferType 0, 0
DecryptMessage:dataBuf[3].BufferType 0, 0
read socket:bytes(256).
DecryptMessage:len 269, bytes(17030201). // SEC_E_OK
We can see my TLS server peer is sending the TLS record header (5 bytes) in one packet, then the TLS message (8 bytes for Application Data), then the Application Data payload in a third.
You must read some arbitrary amount the first time, and when you receive SEC_E_INCOMPLETE_MESSAGE, you must look in the pInput SecBufferDesc for a SECBUFFER_MISSING and read its cbBuffer to find out how many bytes you are missing.
This problem was doing my head in today, as I was attempting to handle the handshake myself and was hitting the same problem the other commenters were having, i.e. not finding a SECBUFFER_MISSING. I do not want to interpret the TLS packet myself, and I do not want to unconditionally read some unspecified number of bytes. I found the solution to that, so I'm going to address their comments, too.
The confusion here is because the API is confusing. Ordinarily, to read the output of InitializeSecurityContext, you look at the content of the pOutput parameter (as defined in the signature). It's that SecBufferDesc that contains the SECBUFFER_TOKEN etc to pass to AcceptSecurityContext.
However, in the case where InitializeSecurityContext returns SEC_E_INCOMPLETE_MESSAGE, the SECBUFFER_MISSING is returned in the pInput SecBufferDesc, in place of the SECBUFFER_ALERT SecBuffer that was passed in.
The documentation does say this, but not in a way that clearly contrasts this case against the SEC_I_CONTINUE_NEEDED and SEC_E_OK cases.
This answer also applies to AcceptSecurityContext.
From MSDN, I'd presumed SEC_E_INCOMPLETE_MESSAGE is returned when not enough data has been received from the server so far. Instead, SEC_I_CONTINUE_NEEDED is returned with InBuffers[1] indicating the amount of unread data (note that some data has been processed and must be skipped) and OutBuffers containing nothing.
So the algorithm is:
If SEC_I_CONTINUE_NEEDED is returned, check the type of InBuffers[1]
If it is SECBUFFER_EXTRA, handle it (move the last InBuffers[1].cbBuffer bytes to the beginning of the input buffer, as sketched below) and jump to the next recv & InitializeSecurityContext iteration
If OutBuffers is not empty, send its contents to the server
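A minimal Python sketch of that SECBUFFER_EXTRA shuffle, where used_len and extra_len stand for the filled length of the input buffer and InBuffers[1].cbBuffer (the names are illustrative, not SSPI API):

def keep_extra(in_buf, used_len, extra_len):
    # The last extra_len bytes of the filled region were not consumed
    # by InitializeSecurityContext; move them to the front so the next
    # recv() appends right after them.
    return bytearray(in_buf[used_len - extra_len:used_len])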

limitation of the reception buffer

I established a connection with a client this way:
gen_tcp:listen(1234,[binary,{packet,0},{reuseaddr,true},{active,false},{recbuf,2048}]).
This code performs message processing:
loop(Socket) ->
    inet:setopts(Socket, [{active,once}]),
    receive
        {tcp, Socket, Data} ->
            handle(Data),
            loop(Socket);
        {Pid, Cmd} ->
            gen_tcp:send(Socket, Cmd),
            loop(Socket);
        {tcp_closed, Socket} ->
            % ...
            ok
    end.
My OS is Windows. When the size of the message is 1024 bytes, I lose bytes in Data, and the server sends ACK + FIN to the client.
I believed that Erlang was limited to 1024 bytes, therefore I set recbuf.
Where is the problem: Erlang, Windows, or hardware?
Thanks.
You may be setting the receive buffer far too small. Erlang certainly isn't limited to a 1024 byte buffer. You can check for yourself by doing the following in the shell:
{ok, S} = gen_tcp:connect("www.google.com", 80, [{active,false}]),
O = inet:getopts(S, [recbuf]),
gen_tcp:close(S),
O.
On Mac OS X I get a default receive buffer size of about 512 KB.
With {packet, 0} parsing, you'll receive TCP data in whatever chunks the network stack chooses to deliver, so you have to do message boundary parsing and buffering yourself. Do you have a reliable way to check message boundaries in the wire protocol? If so, receive the TCP data and append it to a buffer variable until you have a complete message. Then call handle on the complete message and remove it from the buffer before continuing (see the sketch below).
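For instance, if the wire protocol prefixed each message with a 2-byte big-endian length (an illustrative framing, not taken from the question), the buffer handling would reduce to something like this Python sketch; an Erlang version would follow the same shape:

import struct

def feed(buffer, chunk):
    # Append newly received bytes, then peel off every complete
    # length-prefixed message. Whatever is left stays in the buffer
    # until more data arrives.
    buffer += chunk
    messages = []
    while len(buffer) >= 2:
        (length,) = struct.unpack("!H", buffer[:2])
        if len(buffer) < 2 + length:
            break                    # incomplete message: keep buffering
        messages.append(buffer[2:2 + length])
        buffer = buffer[2 + length:]
    return buffer, messages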
We could probably help you more if you gave us some information on the client and the protocol in use.