MSMQ reading 1 MB or larger messages - msmq

I have an MSMQ message of about 2 MB sent from server A to server B. Server B does receive it, but when the Windows service reads it (the timeout is 5000 seconds) to insert it into the database, the insert never happens. The message disappears off the queue, though.
I suspect this might be happening with other (smaller) messages as well.
Any ideas on where to debug, which settings to check, etc.?
Edit:
Dim m As Message
Using msgQ As New MessageQueue(queueToRead, QueueAccessMode.Receive)
    msgQ.Formatter = New XmlMessageFormatter(New Type() {GetType(System.String)})
    Dim msgs As MessageEnumerator = msgQ.GetMessageEnumerator2()
    While msgs.MoveNext(TimeSpan.FromMilliseconds(queueTimeout))
        m = msgQ.ReceiveById(msgs.Current.Id, TimeSpan.FromMilliseconds(100))
        UDTT = m.Label
        recordSet = m.Body
        'Log transaction to DB here
    End While
End Using

Related

Send and read data through socket from fiber

I'm trying to figure out how to send/read data through a socket. On the remote server I start netcat -l 4444, and from the local machine I send text data with echo "test" | netcat remote.host 4444. This always works fine.
Trying to reproduce it in Crystal:
require "socket"
HOST = "remote.host"
PORT = 4444
ch_request = Channel(String).new
ch_response = Channel(String).new
spawn do
socket = TCPSocket.new(HOST, PORT)
loop do
select
when request = ch_request.receive
socket << request
socket.flush
end
if response = socket.gets
ch_response.send response
end
end
end
sleep 0.1
ch_request.send "hello"
loop do
select
when response = ch_response.receive
pp response
end
end
The idea is that I send data to a channel, read it in the first loop, and then send it to the socket. The same thing in reverse order is needed to read from the socket in the second loop.
In practice this is not what happens. On the local side, after connecting, I get "test" but can't send anything back. From the remote side I can send to the local side, but locally I only get an empty string once and nothing more after that.
What does this behavior mean, and how can I achieve what I planned?
You didn't show this, but I suppose you have a second implementation using TCPServer for the netcat -l equivalent.
You need to use separate fibers for reading from and writing to the socket and the channels. It's a bit hard to judge exactly what happens without seeing the server, but I suppose you end up in a deadlock where both sides are waiting on input from the other side or the user, but cannot proceed to actually send or read anything. In other words, you interlocked the sending and receiving parts, requiring the other side to carefully react and interplay so as not to lock up the client. This is obviously a brittle approach.
Instead, you should make sure that no fiber does more than one operation in its loop. One receives from the socket and forwards that to a channel; a second receives from a channel and forwards that to the socket; a third receives from the reader-side channel and prints (or does whatever you want with) the data; and the last one fills the sender channel. This way no operation can block any of the others. Of course, one of those fibers can simply be the main program fiber.
In the server you additionally need one fiber that accepts the client connections and spawns the sender and receiver loops for each.
Finally, note that a select statement with a single when branch has no effect; you can make the call directly. select is useful if you need to read or write to multiple channels concurrently in the same fiber. For example, if you had multiple channels providing data to be sent out to a socket, you would use select so that the messages are not corrupted by two fibers writing to the same socket at the same time. An additional use case for select is sending to or receiving from a channel with a timeout.
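For readers more used to threads than Crystal fibers, here is a rough Java analogue of the structure described above, with one worker per direction and blocking queues in place of channels. This is an illustrative sketch only; the host, port, and line-based framing are assumptions, not part of the original question.
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class SocketPump {
    public static void main(String[] args) throws Exception {
        BlockingQueue<String> requests = new LinkedBlockingQueue<>();
        BlockingQueue<String> responses = new LinkedBlockingQueue<>();
        Socket socket = new Socket("remote.host", 4444);

        // Writer: takes requests from the queue and writes them to the socket.
        Thread writer = new Thread(() -> {
            try (PrintWriter out = new PrintWriter(socket.getOutputStream(), true)) {
                while (true) {
                    out.println(requests.take());
                }
            } catch (Exception ignored) {
            }
        });

        // Reader: reads lines from the socket and puts them on the response queue.
        Thread reader = new Thread(() -> {
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(socket.getInputStream()))) {
                String line;
                while ((line = in.readLine()) != null) {
                    responses.put(line);
                }
            } catch (Exception ignored) {
            }
        });

        writer.start();
        reader.start();

        // The main thread plays the remaining two roles: it fills the request
        // queue and consumes the responses.
        requests.put("hello");
        System.out.println(responses.take());
    }
}
Each worker blocks only on its own queue or on the socket, so a stalled reader can never prevent a write and vice versa.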
For anyone looking for an answer to a similar question: the final result I wanted looks like this:
# Open a socket to simulate the remote server: `netcat -v -4 -l 4444`
require "socket"

HOST = "remote.host"
PORT = 4444

# FYI: In real life this is packed into a class and I use class variables instead of constants.
TUBE_REQUEST  = Channel(String).new
TUBE_RESPONSE = Channel(String).new
SOCKET        = TCPSocket.new(HOST, PORT)

# Writer fiber: forwards requests from the channel to the socket.
spawn do
  until SOCKET.closed?
    if request = TUBE_REQUEST.receive
      SOCKET << request
      SOCKET.flush
    end
  end
end

# Reader fiber: forwards lines from the socket to the response channel.
spawn do
  until SOCKET.closed?
    if response = SOCKET.gets
      TUBE_RESPONSE.send response
    end
  end
end

sleep 0.1

def receive_response
  TUBE_RESPONSE.receive
end

def send(message, wait_for_response = true)
  TUBE_REQUEST.send message
  receive_response if wait_for_response
end

send("command with response")
send("command with new line and response\n")
send("command without new line and response", false)
It sends each command, waits for an answer from the remote side (except for the last command), and then issues the next command.

In UWP StreamSocket, can I read data with a timeout and leave the connection open if the timeout elapses

As I couldn't find any way to peek for data (read data without consuming the buffer), as asked in How to peek StreamSocket for data in UWP apps, I'm now trying to make my own "peek", but still with no luck.
I don't see how I can read data from a StreamSocket in a manner that lets me use timeouts and leaves the connection usable in case the timeout elapses.
In the end, the problem is as follows. In my, let's say, IMAP client, I get a response from a server, and if this response is negative, I need to wait a bit to see if the server immediately sends yet another response (sometimes the server can do that, with extra details on the error or even a zero packet to close the connection). If the server didn't send another response, I'm fine, just leaving the method and returning to the caller. The caller can then send more data to the stream, receive more responses, etc.
So, after sending a request and getting the initial response, I need in some cases to read the socket once again with a very small timeout interval, and if no data arrives, just do nothing.
You can use a CancellationTokenSource to generate a timeout and stop an async operation.
The DataReader consumes the data from the input stream of the StreamSocket. Its LoadAsync() method returns when there is at least one byte of data. Here, we add a cancellation source that cancels the asynchronous task after 1 second, stopping DataReader.LoadAsync() if no data has been consumed.
var stream = new StreamSocket();
var inputStream = stream.InputStream;
var reader = new DataReader(inputStream);
reader.InputStreamOptions = InputStreamOptions.Partial;

while (true)
{
    try
    {
        var timeoutSource = new CancellationTokenSource(TimeSpan.FromSeconds(1));
        var data = await reader.LoadAsync(1).AsTask(timeoutSource.Token);
        while (reader.UnconsumedBufferLength > 0)
        {
            var read = reader.ReadUInt32();
        }
    }
    catch (TaskCanceledException)
    {
        // timeout
    }
}
Do not forget that disposing the DataReader will close the stream and the connection.

Socket not receiving both messages from server (almost at same time)

So I've got this issue here: I'm using an openfl.net.Socket to connect to my server and receive messages from it.
The problem is that the server sends two messages almost at the same time and my socket appears to read only one. I tried putting a breakpoint on the second message and releasing it just after the first one is handled (effectively a sleep of about 0.5 seconds), and then my client receives both messages; but when both are sent at almost the same time, I get only one... Tips?
socket.addEventListener(ProgressEvent.SOCKET_DATA, onResponse);

function onResponse(e:ProgressEvent):Void
{
    trace("response");
    if (socket.bytesAvailable > 0)
    {
        var size:Int = socket.readInt();
        var domainId:Int = socket.readInt();
        var messageId:Int = socket.readInt();
        var count:Int = socket.readInt();
        var socketData:String = socket.readUTFBytes(socket.bytesAvailable);
        trace("RECEIVE: " + socketData);
        var message:Message = Message.JSONToMessage(socketData);
        Domain.processMessage(message);
    }
}
I hope I made myself clear.
So, on the receiving side, a single recv can hand you all the data that was sent from the other end.
One thing you should know about TCP is that it does not maintain message boundaries. It in fact does not know what a "message" is; it's a byte-stream protocol. Three sends here can result in three recvs at the other end, or even a single recv for the full exchange of data.
Applications using TCP should construct "messages" out of what TCP hands over to them. TCP just ensures the data is delivered in the order it was sent and tries its best to deliver the segments to the receiver. It is up to the application protocol to define what should be done with the data.
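To make this concrete, here is a minimal Java sketch of one common way to build such messages: a length prefix followed by the payload, similar to the size field the question's code already reads. The class name and exact frame layout are illustrative assumptions; with an event-driven socket like openfl.net.Socket, the same idea means buffering incoming bytes and only parsing a message once all of its bytes have arrived, looping in case several complete messages are buffered in one event.
import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;

// Illustrative sketch: frame messages with a 4-byte length prefix so it does
// not matter how the bytes were grouped into TCP segments or read calls.
public class LengthPrefixedReader {
    private final DataInputStream in;

    public LengthPrefixedReader(InputStream raw) {
        this.in = new DataInputStream(raw);
    }

    /** Reads one complete message, blocking until all of its bytes have arrived. */
    public byte[] readMessage() throws IOException {
        int size = in.readInt();   // 4-byte, big-endian length prefix
        byte[] payload = new byte[size];
        in.readFully(payload);     // keeps reading until 'size' bytes are available
        return payload;
    }
}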

How to tell the server that the client has finished its output without shutting down the output stream

I have the following code that creates a client socket to send/receive data:
val socket: Socket = new Socket(InetAddress.getByName("127.0.0.1"), 7777)
val inputStream = socket.getInputStream()
val bufferSource = new BufferedSource(inputStream)
val out = new PrintStream(socket.getOutputStream())

var data = "Hello Everyone"
out.println(data)
out.flush()
socket.shutdownOutput() // <-- the line in question

val in = bufferSource.getLines()
if (in.hasNext) {
  println(in.next())
}
If I don't call socket.shutdownOutput(), I won't get the data from the server,
because the server side is still waiting for input. Therefore I have to shut down the output stream.
But once the output is shut down, it cannot be reopened, so I have to create a new socket for sending new data.
That means sending a single record requires creating a whole new socket, which is really awkward.
Is there any other way to tell the server that the output is finished without shutting down the output stream?
Thanks in advance!
The problem is that the server doesn't know when to stop reading, process the request, and reply.
What you need here is an application-level protocol that dictates how the server and clients are to communicate: what a command is, how a response is formatted, and so on.
This could be a line-oriented protocol, where each new line represents a message (in general the message delimiter could be any other character sequence that does not appear in the messages).
Or it could use fixed-length messages, or messages prepended with the message length (or type) to let the other side know how much data to expect. A sketch of the line-oriented variant follows.
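As an illustration of the line-oriented variant, here is a minimal Java sketch (not the asker's code; the host, port, and message contents are assumptions). Each request and each response is a single newline-terminated line, so the server knows when a message is complete and the client never has to shut down its output stream; the same socket can be reused for further records.
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;

public class LineClient {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket("127.0.0.1", 7777);
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true); // autoflush
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(socket.getInputStream()))) {

            out.println("Hello Everyone");        // first request: one line
            System.out.println(in.readLine());    // first response: one line

            out.println("Another record");        // the same socket can be reused
            System.out.println(in.readLine());
        }
    }
}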

Read from Half Open Socket

I am trying to connect to the Apple Push Notification Service (APNS), which uses a simple binary protocol over TCP protected with TLS (or SSL). The protocol indicates that when an error is encountered (there are about 10 well-defined error conditions), APNS will send back an error response and then close the connection. This results in a half-closed socket because the remote peer closed its side. I can see it's a graceful shutdown because APNS sends a FIN and RST, as observed with tcpdump.
Out of all the error conditions, I can deal with most before sending, through validation. The case where this fails is when a notification is sent to an invalid device token, which cannot be dealt with that easily because the tokens could be malformed. Tokens are opaque 32-byte values that are provided by APNS to a device and then registered with me. I have no way of knowing whether a token is valid when it is submitted to my service. Presumably APNS checksums the tokens in some way so that it can validate them quickly.
Anyway,
I did what I thought was the right thing:
a. open socket
b. try writing
c. if the write failed, read the error response
Unfortunately, this doesn't seem to work. I figure APNS is sending an error response and I am either not reading it back correctly or not setting the socket up right. I have tried the following techniques:
1. Use a separate thread per socket to try to read the response, if any, every 5 ms or so.
2. Use a blocking read after a write failure.
3. Use a final read after the remote disconnect.
I have tried this with C# + .NET 4.5 on Windows and Java 1.7 on Linux. In either case, I never seem to get the error response, and the socket indicates that no data is available to read.
Are half-closed sockets supported on these operating systems and/or frameworks? There isn't anything that seems to indicate it either way.
I know that the way I am setting things up works correctly, because if I use a valid token with a valid notification, those do get delivered.
In response to one of the comments: I am using the enhanced notification format, so a response should arrive from APNS.
Here is the code I have for C#:
X509Certificate certificate = new X509Certificate(@"Foo.cer", "password");
X509CertificateCollection collection = new X509CertificateCollection();
collection.Add(certificate);

Socket socket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
socket.Connect("gateway.sandbox.push.apple.com", 2195);

NetworkStream stream = new NetworkStream(socket, System.IO.FileAccess.ReadWrite, false);
stream.ReadTimeout = 1000;
stream.WriteTimeout = 1000;

var sslStream = new SslStream(stream, true,
    new RemoteCertificateValidationCallback(ValidateServerCertificate), null);
sslStream.AuthenticateAsClient("gateway.sandbox.push.apple.com", collection,
    SslProtocols.Default, false);
sslStream.ReadTimeout = 10000;
sslStream.WriteTimeout = 1000;

// Task rdr = Task.Factory.StartNew(this.reader);
// rdr is used for a parallel read of the socket, sleeping 5 ms between reads.
// Not used now, but another alternative that was tried.

Random r = new Random(DateTime.Now.Second);
byte[] buffer = new byte[32];
r.NextBytes(buffer);
byte[] resp = new byte[6];
String erroneousToken = toHex(buffer);
TimeSpan t = (DateTime.UtcNow - new DateTime(1970, 1, 1));
int timestamp = (int)t.TotalSeconds;
try
{
    for (int i = 0; i < 1000; ++i)
    {
        // Build the notification; the format is published in the APNS docs.
        var not = new ApplicationNotificationBuilder().withToken(buffer).withPayload(
            @"{""aps"": {""alert"":""foo"",""sound"":""default"",""badge"":1}}").withExpiration(
            timestamp).withIdentifier(i + 1).build();
        sslStream.Write(not);
        sslStream.Flush();
        Console.Out.WriteLine("Sent message # " + i);
        int rd = sslStream.Read(resp, 0, 6);
        if (rd > 0)
        {
            Console.Out.WriteLine("Found response: " + rd);
            break;
        }
        // Doesn't really matter how fast or how slow we send.
        Thread.Sleep(500);
    }
}
catch (Exception ex)
{
    Console.Out.WriteLine("Failed to write ...");
    int rd = sslStream.Read(resp, 0, 6);
    if (rd > 0)
    {
        Console.Out.WriteLine("Found response: " + rd);
    }
}
// rdr.Wait(); change to a non-infinite timeout to allow the error reader to terminate
I implemented the server side for APNS in Java and had problems reading the error responses reliably (meaning never missing any error response), but I do manage to get error responses.
You can see this related question, though it has no adequate answer.
If you never manage to read the error response, there must be something wrong with your code.
Using a separate thread for reading (your first technique) worked for me, though it is not 100% reliable.
A blocking read after a write failure (your second technique) is what Apple suggests, but it doesn't always work. It's possible that you send 100 messages, the first one has an invalid token, and only after the 100th message do you get a write failure. At that point it is sometimes too late to read the error response from the socket.
As for a final read after the remote disconnect (your third technique), I'm not sure what you mean there.
If you want to guarantee that reading the error responses works, you should try to read after each write, with a sufficient timeout. This, of course, is not practical in production (since it's incredibly slow), but you can use it to verify that your code for reading and parsing the error response is correct. You can also use it to iterate over all the device tokens you have and find all the invalid ones, in order to clean up your DB. A sketch of this read-after-each-write approach follows.
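Here is a minimal Java sketch of that read-after-each-write approach, assuming the notifications have already been encoded as enhanced-format frames (see the next sketch) and that the TLS client certificate setup is done elsewhere; the class name, host/port parameters, and timeouts are illustrative, not part of the original answer.
import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;
import java.net.SocketTimeoutException;
import javax.net.ssl.SSLSocketFactory;

// Illustrative sketch: after every write, try to read the 6-byte APNS error
// response (command, status, 4-byte identifier) with a short timeout before
// sending the next frame. Slow, but useful for verifying error-response
// parsing or for weeding out invalid tokens offline.
public class ApnsErrorProbe {
    public static void probe(String host, int port, byte[][] frames) throws Exception {
        SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
        try (Socket socket = factory.createSocket(host, port)) {
            socket.setSoTimeout(2000);            // read timeout per attempt
            OutputStream out = socket.getOutputStream();
            InputStream in = socket.getInputStream();

            for (byte[] frame : frames) {
                out.write(frame);
                out.flush();
                byte[] resp = new byte[6];
                try {
                    int read = in.read(resp);
                    if (read > 0) {
                        System.out.println("Error response, status = " + (resp[1] & 0xFF));
                        return;                   // APNS closes the connection after an error
                    }
                    if (read < 0) {
                        System.out.println("Connection closed by APNS");
                        return;
                    }
                } catch (SocketTimeoutException e) {
                    // No error response within the timeout; assume the frame was accepted.
                }
            }
        }
    }
}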
You didn't post any code, so I don't know what binary format you are using to send messages to APNS. If you are using the simple format (that starts with a 0 byte and has no message ID), you won't get any responses from Apple.
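For completeness, here is a hedged sketch of what an enhanced-format frame looks like (command byte 1, a 4-byte identifier, a 4-byte expiry, then the token and the JSON payload, each prefixed with a 2-byte length); the helper name is hypothetical. Only this format carries the identifier that APNS echoes back in its 6-byte error response; the simple format (command byte 0) gets no response at all.
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

// Hypothetical helper building one legacy "enhanced" APNS frame.
public final class EnhancedNotification {
    public static byte[] encode(int identifier, int expiryEpochSeconds,
                                byte[] token, String jsonPayload) throws IOException {
        byte[] payload = jsonPayload.getBytes(StandardCharsets.UTF_8);
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bos);
        out.writeByte(1);                 // command: enhanced notification format
        out.writeInt(identifier);         // echoed back in the error response
        out.writeInt(expiryEpochSeconds); // expiry as seconds since the epoch
        out.writeShort(token.length);     // token length (normally 32)
        out.write(token);
        out.writeShort(payload.length);   // payload length
        out.write(payload);
        return bos.toByteArray();
    }
}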