How to read the whole message with Chilkat socket?

I need to get the whole message (response), but socket.ReceiveBytes() returns just part of the message. I tried to loop it, but it fails with a timeout when there are no bytes left to receive.
List<byte> lb = new List<byte>();
byte[] receivedMsg = socket.ReceiveBytes();
while (receivedMsg.Length > 0)
{
    lb.AddRange(receivedMsg);
    receivedMsg = socket.ReceiveBytes();
}
So, how can I check whether there are bytes to read? How can I read the whole message?

Since it's a Chilkat implementation, you should probably contact the developer. But I found this post that could help: http://www.cknotes.com/?p=302
Ultimately, you need to know how much to read from the socket to constitute a whole message. For example, if the overlying protocol is a portmapper, then you know that you are expecting messages in the format that the RFC specifies (http://tools.ietf.org/html/rfc1833).
If you are rolling your own protocol over a socket connection, then use the method from the Chilkat blog post: put the size of the total message in the first 4 bytes. A sketch of that approach follows.
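As a sketch of that length-prefix framing (shown with a plain java.net.Socket rather than Chilkat's API, so the class and method names here are illustrative):

import java.io.DataInputStream;
import java.io.IOException;
import java.net.Socket;

public class FramedReader {
    // Reads one length-prefixed message: the first 4 bytes carry the
    // payload size, followed by exactly that many payload bytes.
    public static byte[] readMessage(Socket socket) throws IOException {
        DataInputStream in = new DataInputStream(socket.getInputStream());
        int length = in.readInt();      // 4-byte big-endian length prefix
        byte[] payload = new byte[length];
        in.readFully(payload);          // blocks until the full message arrives
        return payload;
    }
}

The receiver never has to guess where a message ends: the prefix says exactly how many bytes make up one whole message.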

Related

PlayWS: calculate the size of an HTTP call without consuming the stream

I'm currently using the PlayWS HTTP client, which returns an Akka stream. From my understanding, I can consume the stream and turn it into a Byte[] to calculate the size. However, this also consumes the stream and I can't use it anymore. Any way around this?
I think there are two different aspects to this question.
1. You want to know the size of the server response in advance, to prepare a buffer. Unfortunately there is no guaranteed way to do this. The HTTP 1.1 spec explicitly allows a transfer mode in which the server does not know the size of the response in advance: chunked transfer encoding. See also this quote from 3.3.1. Transfer-Encoding:
A recipient MUST be able to parse the chunked transfer coding
(Section 4.1) because it plays a crucial role in framing messages
when the payload body size is not known in advance.
Section 3.3.3. Message Body Length specifies how the length of a message body is determined; besides the aforementioned chunked transfer encoding, it also contains the rather unhelpful:
Otherwise, this is a response message without a declared message
body length, so the message body length is determined by the
number of octets received prior to the server closing the
connection.
This is kept for backward compatibility and its use is discouraged, but it is still legal.
Still, in many real-world scenarios you can use the Content-Length header field that the server may return. There is a catch here as well: if gzip Content-Encoding is used, then Content-Length will contain the size of the compressed body.
To sum up: in the general case you can't get the size of the message body before you have fully received the server response, i.e., in code terms, before you perform a blocking call on the response. You may try to use Content-Length, and it may or may not help in your specific case (see the sketch after this list).
2. You already have a fully downloaded response (or you are OK with blocking on your StreamedResponse) and you want to process it by first getting the size and only then processing the actual data. In that case you may first use the getBodyAsBytes method, which returns an IndexedSeq[Byte] and thus has a size, and then convert it into a new Source using Source.single, which is exactly what the default (i.e. non-streaming) implementation of getBodyAsSource does.
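Illustrating the first aspect: a minimal sketch of checking a declared Content-Length without consuming the body, in plain Java rather than the PlayWS API (the URL is a placeholder):

import java.net.HttpURLConnection;
import java.net.URL;

public class ContentLengthProbe {
    public static void main(String[] args) throws Exception {
        URL url = new URL("https://example.com/");   // placeholder URL
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        // Headers are available once the response starts; the body has
        // not been consumed at this point.
        long declared = conn.getContentLengthLong(); // -1 if not declared (e.g. chunked)
        System.out.println("Declared Content-Length: " + declared);
        conn.disconnect();
    }
}

Note that -1 is a perfectly normal outcome here, which is exactly why the answer hedges on Content-Length.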

Receiving data from a Lua TCP socket without a data size

I've been working on a TCP socket connection to a game server. The big problem here is that the game server sends the data without any separators (it sends the packet length inside the data), making it impossible to use socket:receive("*a") or "*l". The data received from the server does not have a fixed size and is sent in hex format. I'm using this solution:
while true do
    -- wait up to 0.2s for either socket to become readable
    local rect, r, st = socket.select({_S.sockets.main, _S.sockets.bulle}, nil, 0.2)
    for i, con in ipairs(rect) do
        -- read a single byte from each readable socket
        resp, err, part = con:receive(1)
        if resp ~= nil then
            dataRecv = dataRecv .. resp
        end
    end
end
As you can see, I can only get all the data from the socket by reading one byte at a time and appending it to a string, which is not a good approach since I have two sockets to read. Is there a better way to receive data from this socket?
I don't think there is any other option; usually in a situation like this the client reads a header of a specific length to figure out how much it needs to read from the rest of the stream. Some protocols combine newlines and the length; for example, HTTP uses line separators for headers, with one of the headers specifying the length of the content that follows them.
Still, you don't need to read the stream one character at a time, as you can switch to non-blocking reads and request any number of characters. If there is not enough to read, you'll get the partially read content plus "timeout" signaled, which you can handle in your logic; from the documentation:
In case of error, the method returns nil followed by an error message
which can be the string 'closed' in case the connection was closed
before the transmission was completed or the string 'timeout' in case
there was a timeout during the operation. Also, after the error
message, the function returns the partial result of the transmission.
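In LuaSocket terms, that means calling con:settimeout(0) and then requesting a larger count with con:receive(n), keeping the partial result when "timeout" is signaled. The same non-blocking pattern in Java NIO, purely as an analogue (the question itself is Lua):

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;

public class NonBlockingRead {
    // Non-blocking read: returns however many bytes were immediately
    // available (possibly 0) instead of blocking for a full request.
    public static int readAvailable(SocketChannel channel, ByteBuffer buf) throws IOException {
        channel.configureBlocking(false);
        return channel.read(buf);   // 0 if nothing is buffered, -1 on close
    }
}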

Read from Half Open Socket

I am trying to connect to the Apple Push Notification Service, which uses a simple binary protocol over TCP protected with TLS (or SSL). The protocol indicates that when an error is encountered (there are about 10 well-defined error conditions), APNS will send back an error response and then close the connection. This results in a half-closed socket because the remote peer closed the socket. I can see it's a graceful shutdown because, in tcpdump, APNS sends a FIN and RST.
Out of all the error conditions, I can deal with most before sending, with validation. The situation in which this fails is when a notification is sent to an invalid device token, which cannot be dealt with that easily because the tokens could be malformed. Tokens are opaque 32-byte values that are provided by APNS to a device and then registered with me. I have no way of knowing whether a token is valid when it is submitted to my service. Presumably APNS checksums the tokens in some way that lets it validate them quickly.
Anyway, I did what I thought was the right thing:
a. open socket
b. try writing
c. if write failed, read the error response
Unfortunately, this doesn't seem to work. I figure APNS is sending an error response and I am not reading it back right, or I am not setting the socket up right. I have tried the following techniques:
Use a separate thread per socket that tries to read the response, if any, every 5ms or so.
Use a blocking read after write failure.
Use a final read after remote disconnect.
I have tried this with C# + .NET 4.5 on Windows and Java 1.7 on Linux. In both cases, I never seem to get the error response, and the socket indicates that no data is available to read.
Are half-closed sockets supported on these operating systems and/or frameworks? There isn't anything that seems to indicate either way.
I know that the way I am setting things up works correctly because if I use a valid token with a valid notification, those do get delivered.
In response to one of the comments, I am using the enhanced notification format so a response should arrive from APNS.
Here is the code I have for C#:
X509Certificate certificate = new X509Certificate(@"Foo.cer", "password");
X509CertificateCollection collection = new X509CertificateCollection();
collection.Add(certificate);

Socket socket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
socket.Connect("gateway.sandbox.push.apple.com", 2195);

NetworkStream stream = new NetworkStream(socket, System.IO.FileAccess.ReadWrite, false);
stream.ReadTimeout = 1000;
stream.WriteTimeout = 1000;

SslStream sslStream = new SslStream(stream, true,
    new RemoteCertificateValidationCallback(ValidateServerCertificate), null);
sslStream.AuthenticateAsClient("gateway.sandbox.push.apple.com", collection,
    SslProtocols.Default, false);
sslStream.ReadTimeout = 10000;
sslStream.WriteTimeout = 1000;

// Task rdr = Task.Factory.StartNew(this.reader);
// rdr is used for a parallel read of the socket, sleeping 5ms between reads.
// Not used now, but another alternative that was tried.

Random r = new Random(DateTime.Now.Second);
byte[] buffer = new byte[32];
r.NextBytes(buffer);                 // random bytes: an intentionally invalid token
byte[] resp = new byte[6];
String erroneousToken = toHex(buffer);
TimeSpan t = (DateTime.UtcNow - new DateTime(1970, 1, 1));
int timestamp = (int)t.TotalSeconds;
try
{
    for (int i = 0; i < 1000; ++i)
    {
        // build the notification; the format is published in the APNS docs.
        var not = new ApplicationNotificationBuilder().withToken(buffer).withPayload(
            @"{""aps"": {""alert"":""foo"",""sound"":""default"",""badge"":1}}").withExpiration(
            timestamp).withIdentifier(i + 1).build();
        sslStream.Write(not);        // send the built notification
        sslStream.Flush();
        Console.Out.WriteLine("Sent message # " + i);
        int rd = sslStream.Read(resp, 0, 6);
        if (rd > 0)
        {
            Console.Out.WriteLine("Found response: " + rd);
            break;
        }
        // doesn't really matter how fast or how slow we send
        Thread.Sleep(500);
    }
}
catch (Exception ex)
{
    Console.Out.WriteLine("Failed to write ...");
    int rd = sslStream.Read(resp, 0, 6);
    if (rd > 0)
    {
        Console.Out.WriteLine("Found response: " + rd);
    }
}
// rdr.Wait(); change to non-infinite timeout to allow the error reader to terminate
I implemented the server side for APNS in Java and had problems reading the error responses reliably (meaning: never missing any error response), but I do manage to get error responses.
You can see this related question, though it has no adequate answer.
If you never manage to read the error response, there must be something wrong with your code.
Using a separate thread for reading worked for me, though it was not 100% reliable.
Using a blocking read after a write failure is what Apple suggests, but it doesn't always work. It's possible that you send 100 messages, the first has an invalid token, and only after the 100th message do you get a write failure. At that point it is sometimes too late to read the error response from the socket.
I'm not sure what you mean there.
If you want to guarantee that the reading of the error responses works, you should try to read after each write, with a sufficient timeout. This, of course, is not practical for production use (since it's incredibly slow), but you can use it to verify that your code for reading and parsing the error response is correct. You can also use it to iterate over all the device tokens you have and find all the invalid ones, in order to clean your DB.
You didn't post any code, so I don't know what binary format you are using to send messages to APNS. If you are using the simple format (which starts with a 0 byte and has no message ID), you won't get any responses from Apple.
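For reference, the enhanced-format error response is 6 bytes: a command byte (always 8), a status code, and the 4-byte identifier of the offending notification. A minimal Java sketch of parsing it (the class name is illustrative):

import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;

public class ApnsErrorResponse {
    // Parses the 6-byte enhanced-format error response.
    public static void readError(InputStream raw) throws IOException {
        DataInputStream in = new DataInputStream(raw);
        int command = in.readUnsignedByte();   // always 8 for an error response
        int status = in.readUnsignedByte();    // e.g. 8 = invalid device token
        int identifier = in.readInt();         // id of the notification that failed
        System.out.println("command=" + command
                + " status=" + status + " id=" + identifier);
    }
}

The identifier tells you which notification in a batch failed; anything sent after it was discarded by APNS and must be resent.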

DatagramPacket can receive data?

I'm reading through my textbook and I see this:
The first constructor:
public DatagramPacket(byte ibuf[], int ilength)
constructs a DatagramPacket for receiving packets of length ilength.
Is this just odd wording, or do DatagramPackets actually receive data as well as send it? I always thought DatagramPackets were just classes containing the information you would send between DatagramSockets.
DatagramPacket does not send or receive data by itself. Instead, it is used by DatagramSocket in two ways:
It is used by DatagramSocket.receive(DatagramPacket packet) which populates packet with some received data,
or it is used by DatagramSocket.send(DatagramPacket packet) to send the data contained in packet.
Hope this helps.
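A minimal sketch of both uses (the address and port are just examples):

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

public class DatagramDemo {
    public static void main(String[] args) throws Exception {
        DatagramSocket socket = new DatagramSocket();

        // Sending: the packet carries both the data and the destination.
        byte[] out = "hello".getBytes("UTF-8");
        socket.send(new DatagramPacket(out, out.length,
                InetAddress.getByName("localhost"), 9876)); // example address

        // Receiving: the packet is just a buffer that receive() fills in.
        byte[] in = new byte[1024];
        DatagramPacket packet = new DatagramPacket(in, in.length);
        socket.receive(packet); // blocks until a datagram arrives
        System.out.println(new String(packet.getData(), 0,
                packet.getLength(), "UTF-8"));
    }
}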
Check the Javadoc. DatagramPackets are used for both sending and receiving. See DatagramSocket.receive().

How much data to receive from server in SSL handshake before calling InitializeSecurityContext?

In our Windows C++ application I am using InitializeSecurityContext() on the client side to open an SChannel connection to a server that is running the stunnel SSL proxy. My code now works, but only with a hack I would like to eliminate.
I started with this sample code: http://msdn.microsoft.com/en-us/library/aa380536%28v=VS.85%29.aspx
In the sample code, look at SendMsg and ReceiveMsg. The first 4 bytes of any message sent or received indicate the message length. This is fine for the sample, where the server portion of the sample conforms to the same convention.
stunnel does not seem to use this convention. When the client is receiving data during the handshake, how does it know when to stop receiving and make another call to InitializeSecurityContext()?
This is how I structured my code, based on what I could glean from the documentation:
1. call InitializeSecurityContext which returns an output buffer
2. Send output buffer to server
3. Receive response from server
4. call InitializeSecurityContext(server_response) which returns an output buffer
5. if SEC_E_INCOMPLETE_MESSAGE, go back to step 3,
if SEC_I_CONTINUE_NEEDED go back to step 2
I expected InitializeSecurityContext in step 4 to return SEC_E_INCOMPLETE_MESSAGE if not enough data was read from the server in step 3. Instead, I get SEC_I_CONTINUE_NEEDED but an empty output buffer. I have experimented with a few ways to handle this case (e.g. going back to step 3), but none seemed to work, and more importantly, I do not see this behavior documented.
In step 3 if I add a loop that receives data until a timeout expires, everything works fine in my test environment. But there must be a more reliable way.
What is the right way to know how much data to receive in step 3?
SChannel is different from the Negotiate security package. You need to receive at least 5 bytes, which is the SSL/TLS record header size:
struct {
    ContentType type;
    ProtocolVersion version;
    uint16 length;
    opaque fragment[TLSPlaintext.length];
} TLSPlaintext;
ContentType is 1 byte, ProtocolVersion is 2 bytes, and then you have a 2-byte record length. Once you read those 5 bytes, SChannel will return SEC_E_INCOMPLETE_MESSAGE and will tell you exactly how many more bytes to expect:
SEC_E_INCOMPLETE_MESSAGE
Data for the whole message was not read from the wire.
When this value is returned, the pInput buffer contains a SecBuffer structure with a BufferType member of SECBUFFER_MISSING. The cbBuffer member of SecBuffer contains a value that indicates the number of additional bytes that the function must read from the client before this function succeeds.
Once you get this output, you know exactly how much to read from the network.
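As a sketch of that framing logic (in Java for illustration; the original question is Windows C++):

import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;

public class TlsRecordReader {
    // Reads one complete TLS record: the 5-byte header (type, version,
    // length) followed by exactly 'length' bytes of fragment.
    public static byte[] readRecord(InputStream raw) throws IOException {
        DataInputStream in = new DataInputStream(raw);
        byte[] header = new byte[5];
        in.readFully(header);
        int length = ((header[3] & 0xFF) << 8) | (header[4] & 0xFF);
        byte[] record = new byte[5 + length];
        System.arraycopy(header, 0, record, 0, 5);
        in.readFully(record, 5, length);   // the rest of the record
        return record;
    }
}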
I found the problem.
I found this sample:
http://www.codeproject.com/KB/IP/sslsocket.aspx
I was missing the handling of SECBUFFER_EXTRA (line 987 of SslSocket.cpp).
The SChannel SSP returns SEC_E_INCOMPLETE_MESSAGE from both InitializeSecurityContext and DecryptMessage when not enough data is read.
A SECBUFFER_MISSING buffer type is returned from DecryptMessage with a cbBuffer value holding the number of desired bytes.
But in practice, I did not use the "missing data" value. The documentation indicates the value is not guaranteed to be correct and is only a hint developers can use to reduce calls.
From the InitializeSecurityContext MSDN doc:
While this number is not always accurate, using it can help improve performance by avoiding multiple calls to this function.
So I unconditionally read more data into the same buffer whenever SEC_E_INCOMPLETE_MESSAGE was returned, reading multiple bytes at a time from the socket.
Some extra input-buffer management was required to append the newly read data and keep the lengths right. DecryptMessage will modify the input buffers' cbBuffer properties when it fails, which surprised me.
Printing out the buffers and return result after calling InitializeSecurityContext shows the following:
read socket:bytes(5).
InitializeSecurityContext:result(80090318). // SEC_E_INCOMPLETE_MESSAGE
inBuffers[0]:type(2),bytes(5).
inBuffers[1]:type(0),bytes(0). // no indication of missing data
outBuffer[0]:type(2),bytes(0).
read socket:bytes(74).
InitializeSecurityContext:result(00090312). // SEC_I_CONTINUE_NEEDED
inBuffers[0]:type(2),bytes(79). // notice 74 + 5 from before
inBuffers[1]:type(0),bytes(0).
outBuffer[0]:type(2),bytes(0).
And for the DecryptMessage function, input is always in dataBuf[0], with the rest zeroed.
read socket:bytes(5).
DecryptMessage:len 5, bytes(17030201). // SEC_E_INCOMPLETE_MESSAGE
DecryptMessage:dataBuf[0].BufferType 4, 8 // notice input buffer modified
DecryptMessage:dataBuf[1].BufferType 4, 8
DecryptMessage:dataBuf[2].BufferType 0, 0
DecryptMessage:dataBuf[3].BufferType 0, 0
read socket:bytes(8).
DecryptMessage:len 13, bytes(17030201). // SEC_E_INCOMPLETE_MESSAGE
DecryptMessage:dataBuf[0].BufferType 4, 256
DecryptMessage:dataBuf[1].BufferType 4, 256
DecryptMessage:dataBuf[2].BufferType 0, 0
DecryptMessage:dataBuf[3].BufferType 0, 0
read socket:bytes(256).
DecryptMessage:len 269, bytes(17030201). // SEC_E_OK
We can see my TLS server peer is sending the TLS record header (5 bytes) in one packet, then 8 more bytes of the Application Data record, then the remaining 256 bytes of payload in a third.
You must read some arbitrary amount the first time, and when you receive SEC_E_INCOMPLETE_MESSAGE, you must look in the pInput SecBufferDesc for a SECBUFFER_MISSING and read its cbBuffer to find out how many bytes you are missing.
This problem was doing my head in today, as I was attempting to modify my handshake myself and having the same problem the other commenters were having, i.e. not finding a SECBUFFER_MISSING. I did not want to interpret the TLS packet myself, and I did not want to unconditionally read some unspecified number of bytes. I found the solution, so I'm going to address their comments too.
The confusion here is because the API is confusing. Ordinarily, to read the output of InitializeSecurityContext, you look at the content of the pOutput parameter (as defined in the signature). It's that SecBufferDesc that contains the SECBUFFER_TOKEN etc. to pass to AcceptSecurityContext.
However, in the case where InitializeSecurityContext returns SEC_E_INCOMPLETE_MESSAGE, the SECBUFFER_MISSING is returned in the pInput SecBufferDesc, in place of the SECBUFFER_ALERT SecBuffer that was passed in.
The documentation does say this, but not in a way that clearly contrasts this case against the SEC_I_CONTINUE_NEEDED and SEC_E_OK cases.
This answer also applies to AcceptSecurityContext.
From MSDN, I'd presume SEC_E_INCOMPLETE_MESSAGE is returned when not enough data has been received from the server at that moment. Instead, SEC_I_CONTINUE_NEEDED is returned with InBuffers[1] indicating the amount of unread data (note that some data has been processed and must be skipped) and OutBuffers containing nothing.
So the algorithm is:
If SEC_I_CONTINUE_NEEDED is returned, check the type of InBuffers[1].
If it is SECBUFFER_EXTRA, handle it (move the trailing InBuffers[1].cbBuffer bytes to the beginning of the input buffer) and jump to the next recv & InitializeSecurityContext iteration; a sketch of that bookkeeping follows this list.
If OutBuffers is not empty, send its contents to the server.
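The SSPI calls themselves are Windows-specific, but the SECBUFFER_EXTRA step is plain buffer bookkeeping. A minimal sketch of just that step (in Java for illustration; the class, method, and parameter names are hypothetical):

public class HandshakeBuffer {
    // After a handshake step, 'extraLen' unprocessed bytes sit at the end
    // of the 'dataLen' valid bytes in 'buf' (reported via SECBUFFER_EXTRA).
    // Move them to the front so the next recv() can append after them.
    // Returns the new count of valid bytes in 'buf'.
    public static int keepExtra(byte[] buf, int dataLen, int extraLen) {
        System.arraycopy(buf, dataLen - extraLen, buf, 0, extraLen);
        return extraLen;
    }
}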