Node.js: how to flush socket?

I'm trying to flush a socket before sending the next chunk of the data:
var net = require('net');
net.createServer(function(socket) {
  socket.on('data', function(data) {
    console.log(data.toString());
  });
}).listen(54358, '127.0.0.1');
var socket = net.createConnection(54358, '127.0.0.1');
socket.setNoDelay(true);
socket.write('mentos');
socket.write('cola');
This, however, doesn't work despite the setNoDelay option: it prints "mentoscola" instead of "mentos\ncola". How do I fix this?

Looking over the WritableStream API and the associated example, it seems that you should set your breaks or delimiters yourself.
exports.puts = function (d) {
  process.stdout.write(d + '\n');
};
Because your socket is a stream, data is written and read without your direct control, and #write won't alter your data or insert breaks between writes on your behalf, since you could be streaming a large piece of information over the socket and might want to use other delimiters.
I'm definitely no expert in this area, but that seems like the logical answer to me.
Edit: This is a duplicate of Nodejs streaming, and the conclusion there was the same as the answer above: working with streams isn't line-by-line; set your own delimiters.
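A minimal sketch of that advice, assuming a newline-delimited protocol is acceptable here (and that the payload itself never contains '\n'): append the delimiter on every write and split on it when reading.
var net = require('net');

// Server side: buffer incoming bytes and split on the delimiter ourselves.
net.createServer(function (socket) {
  var buffered = '';
  socket.on('data', function (chunk) {
    buffered += chunk.toString();
    var parts = buffered.split('\n');
    buffered = parts.pop();            // keep any trailing partial message
    parts.forEach(function (message) {
      console.log(message);            // "mentos", then "cola"
    });
  });
}).listen(54358, '127.0.0.1');

// Client side: terminate every logical message with the delimiter.
var client = net.createConnection(54358, '127.0.0.1');
client.write('mentos\n');
client.write('cola\n');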

Maybe all data written in the same tick is sent as a single batch.
Maybe, on the receiving side, Node combines the separate data segments before emitting the data event.
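If you want to test that hypothesis against the code in the question, one crude check is to push the second write into a later tick. This is purely diagnostic; even if the chunks then arrive separately, TCP gives no such guarantee, so it is not a substitute for delimiters or length prefixes.
// Reusing the client socket from the question above.
socket.write('mentos');
setImmediate(function () {
  socket.write('cola'); // may now arrive as a separate 'data' event, but TCP does not promise it
});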

Related

Reading from Socket Stream Blocking After Retrieval

I'm currently attempting to read an incoming message from a client socket that, prior to the procedure below, has already been connected to the server socket. The procedure below outputs the message, one character at a time, as it retrieves it from the stream.
The problem is that, when the stream is out of information, the call to Ada.Streams.Read is blocking, and stops the application flow completely. According to some examples, it would appear as though Offset should be set to 0 automatically at the end of the stream, but that never happens. Instead the application stops at the call to Read.
procedure Read_From (Channel : Sockets.Stream_Access) is
   use Ada.Text_IO;
   use Ada.Streams;

   Data   : Stream_Element_Array (1 .. 1);
   Offset : Stream_Element_Offset;
begin
   loop
      Read (Channel.all, Data, Offset);
      exit when Offset = 0;
      Put (Character'Val (Data (1)));
   end loop;
   -- The application never reaches this point.
   New_Line;
   Put_Line ("Finished reading from client!");
end Read_From;
-- #param Channel `GNAT.Sockets.Stream (Client_Socket)`
I've also attempted the same process with GNAT.Sockets.Receive_Socket, but the same issue remains: the application flow is stopped completely, presumably awaiting further information from the stream, even though there is nothing more to retrieve.
Any pointers in the right direction would be highly appreciated!
Normally, you’d read a (binary) message from a stream knowing how much data needed to be read, so you could read until you’d got that much.
But, if you’re reading a text message from an externally-defined source, as it might be an HTTP request, there needs to be some terminator sequence so you can read character-by-character until you’ve read the terminator. In the case of an HTTP request, that’s a CR/LF/CR/LF sequence. Or it could be a null-terminated C string, in which case you’d be looking for the ASCII.NUL.
The Ada way to transfer variable-length text is to use String’Output/String’Input (see ARM 13.13.2(18)ff). What happens for a String (an array of Character) is that first the bounds are sent, then the content; on reception, the bounds are read, a String with those bounds is created, and the required number of bytes are read into the new String, which is then returned.
Basically, that's how Ada streams work: the end of the stream is only reported when the connection itself ends, not when you merely reach the current end of a buffer.
If you want to be able to interrupt reading, you have to use another representation of the connection than GNAT.Sockets.Stream_Access.
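The bounds-then-content scheme described above is essentially length-prefix framing. As a rough illustration of the same idea outside Ada (sketched in Node.js-style JavaScript, the language used elsewhere on this page; the 4-byte big-endian prefix is an illustrative choice, not what Ada actually puts on the wire):
// Sender: write the payload length first, then the payload itself.
function sendString(socket, text) {
  var body = Buffer.from(text, 'utf8');
  var header = Buffer.alloc(4);
  header.writeUInt32BE(body.length, 0);
  socket.write(Buffer.concat([header, body]));
}

// Receiver: read the 4-byte length, then exactly that many bytes, and stop;
// no terminator scanning and no guessing where the message ends.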

How can I clear QLocalSocket?

I have a problem clearing a QLocalSocket.
I'm sending and receiving image data through QLocalServer/QLocalSocket.
But in the receiving program, memory usage grows heavily because the image data piles up in memory.
So I want to clean up the socket once the data has been read, but there seems to be no such function in the QLocalSocket reference.
How can I clear the socket?
It looks like the only way to avoid this behaviour is to close the socket (not the named pipe server) and open it again once you have received enough data. Also note that just closing the socket and reusing the same instance (i.e. a socket created on the stack) caused me a lot of trouble.
I am doing it the following way.
On the sender side you have:
dataSocket->write((char*)data, sizeof(data));
dataSocket->disconnectFromServer();
and on the client side:
void LocalSocketClient::requestNewFrame()
{
    if (socket) {
        socket->disconnect();
        socket->deleteLater();
    }
    socket = new QLocalSocket();
    dataStream.setDevice(socket);
    connect(socket, &QLocalSocket::disconnected, this, &LocalSocketClient::requestNewFrame);
    connect(socket, &QLocalSocket::readyRead, this, &LocalSocketClient::readSocket);
    socket->connectToServer(NAMED_PIPE_NAME, QIODevice::ReadOnly);
}

void LocalSocketClient::readSocket()
{
    if (dataStream.readRawData((char*)&currentFrame, sizeof(currentFrame)) > 0) {
        // A complete frame has been read into currentFrame; process it here.
    }
}
where currentFrame is a predefined struct matching your data.
This is not the most elegant solution in my opinion, and I am still investigating how to avoid the endless new/deleteLater cycle. But without it I was getting random write errors on the sender side (it looks like the Qt event loop was deleting the socket handle once the socket was closed but not deleted, corrupting the socket's private data).

Invalid field in source data: 0 TCP_Message

I'm using protobuf-net to serialize and deserialize TCP_Message objects.
I've tried all the suggestions I've found here, so I really don't know where the mistake is.
Serialization is done server-side, and deserialization in a client-side application.
Serialization code:
public void MssGetCardPersonalInfo(out RCPersonalInfoRecord ssPersonalInfoObject, out bool ssResult) {
    ssPersonalInfoObject = new RCPersonalInfoRecord(null);
    TCP_Message msg = new TCP_Message(MessageTypes.GetCardPersonalInfo);
    MemoryStream ms = new MemoryStream();
    ProtoBuf.Serializer.Serialize(ms, msg);
    _tcp_Client.Send(ms.ToArray());
    _waitToReadCard.Start();
    _stopWaitHandle.WaitOne();
And the deserialization code:
private void tpcServer_OnDataReceived(Object sender, byte[] data, TCPServer.StateObject clientState)
{
    TCP_Message message = new TCP_Message();
    MemoryStream ms = new MemoryStream(data);
    try
    {
        //ms.ToArray();
        //ms.GetBuffer();
        //ms.Position = 0;
        ms.Seek(0, SeekOrigin.Begin);
        message = Serializer.Deserialize<TCP_Message>(ms);
    }
    catch (Exception ex)
    {
        EventLog.WriteEntry(_logSource, "Error deserializing: " + ex.Message, EventLogEntryType.Error, 103);
    }
As you can see, I've tried a bunch of different approaches, now commented out.
I have also tried to deserialize using DeserializeWithLengthPrefix, but it didn't work either.
I'm a bit of a noob at this, so if you could help me I would really appreciate it.
Thanks.
The first thing to look at here is: is the data you receive the data you send. Until you can answer "yes" to that, all other questions are moot. It is very easy to confuse network code and end up reading partial frames, etc. As a crude debugger test:
Debug.WriteLine(Convert.ToBase64String(ms.GetBuffer(), 0, (int)ms.Length));
should work. If the two base-64 strings are not identical, then you aren't working with the same data. This can happen for a range of reasons, including packet splitting and combining. You need to keep in mind that in a stream, what you send is not what you get - at least, not down to the fragment level. You might "send" data as:
one bundle of 20 bytes
one bundle of 10 bytes
but at the receiving end, it would be entirely legitimate to read:
1 byte
22 bytes
7 bytes
All that TCP guarantees is the order and accuracy of the bytes. It says nothing about their breakdown in terms of chunks. When writing network code, there are basically 2 approaches:
have one thread that synchronously reads from the stream into a local buffer (doesn't scale well)
async code (very scalable), but accept that you're going to have to do a lot of "do I have a complete frame? if not, append to an input buffer; if so, process any available frame data (could be multiple), then shuffle any incomplete data to the start of the buffer" (see the sketch below)
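As a minimal sketch of that second approach (in Node.js rather than C#, and not protobuf-net itself; the port number and the 4-byte big-endian length prefix are arbitrary illustrative choices): accumulate bytes until a complete length-prefixed frame is present, then hand the exact payload to the deserializer.
var net = require('net');

net.createServer(function (socket) {
  var pending = Buffer.alloc(0);
  socket.on('data', function (chunk) {
    pending = Buffer.concat([pending, chunk]);
    // A frame is: 4-byte big-endian length, followed by that many payload bytes.
    while (pending.length >= 4) {
      var frameLength = pending.readUInt32BE(0);
      if (pending.length < 4 + frameLength) break;   // incomplete frame: wait for more data
      var payload = pending.slice(4, 4 + frameLength);
      pending = pending.slice(4 + frameLength);      // shuffle leftovers to the front
      handleFrame(payload);                          // deserialize the complete frame here
    }
  });
}).listen(5000, '127.0.0.1');

function handleFrame(payload) {
  console.log('got a complete frame of', payload.length, 'bytes');
}
On the .NET side, protobuf-net's SerializeWithLengthPrefix/DeserializeWithLengthPrefix pair plays a similar framing role.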

Mirth is reading too slow from disk

I am using Mirth version 3.0.1. I am reading a file (using a File Reader) that has 34,000 records. Every record has 45 columns and is pipe (|) separated. Mirth is taking too much time to read the file from disk. Mirth is installed on the same server where the file is located. Earlier, I was facing a Java heap space issue, which I resolved by setting -Xms1024m -Xmx4096m in the files mcserver.vmoptions and mcservice.vmoptions. Now I have to solve the reading performance issue. Please find attached the channel for the same.
The answer to this problem is highly dependent on the solution itself. As an example, if you are doing transformations when you benchmark, it might be that the problem is not with reading the files, but rather with doing massive amounts of filtering and transformations in Mirth. Since Mirth converts everything you configure into basically one gigantic Javascript that executes on the server, it might just as well be that this is causing the performance problem. Pre-processor scripts might also create a problem if you do something that causes Mirth to read the whole file.
It might also be that your 34,000 lines contain huge quantities of information, simply making the file very big and expensive to process. If every record in the file is supposed to create a new message within Mirth, you might also want to check the batch settings for the reader.
And in addition to this, the performance of the read operations from disk is of course affected a lot by the infrastructure and hardware of the platform itself. You did mention that you are reading the files locally and that you had to increase the memory for Mirth. All of this could of course be a problem in itself. To make a benchmark you would want to compare this to something else. Maybe write a small Java program to just read the file to compare performance outside of Mirth.
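For that last suggestion, the benchmark can be very small. A rough sketch in Node.js (the file path is a placeholder; a small Java program, as suggested, would serve equally well):
var fs = require('fs');
var readline = require('readline');

// Time how long it takes just to stream the file line by line, with no
// transformation at all, to separate raw disk I/O from Mirth overhead.
var start = Date.now();
var lineCount = 0;

readline.createInterface({ input: fs.createReadStream('/path/to/records.txt') })
  .on('line', function () { lineCount++; })
  .on('close', function () {
    console.log('Read ' + lineCount + ' lines in ' + (Date.now() - start) + ' ms');
  });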
Thanks for the suggestions.
I have used router.routeMessage('channelName', 'PartOfMsg') to route 5,000 records at a time (from one channel to a second channel) out of the file of 34,000 records. This has helped to read from the file faster while processing the records at the same time.
For the Mirth community, below is the code to route messages from one channel to another; this solution also fits the requirement of processing a bulk of records in batches.
In the source transformer:
debug = "ON";
XML.ignoreWhitespace = true;
logger.debug('Inside source transformer "SplitFileIntoFiles" of channel: SplitFile');
var
subSegmentCounter = 0,
xmlMessageProcessCounter = 0,
singleFileLimit = 5000,
isError = false,
xmlMessageProcess = new XML(<delimited><row><column1></column1><column2></column2></row></delimited>),
newSubSegment = <row><column1></column1><column2></column2></row>,
totalPatientRecords = msg.children().length();
logger.debug('Total number of records found in the patient input file are: ');
logger.debug(totalPatientRecords);
try{
for each (seg in msg.children())
{
xmlMessageProcess.appendChild(newSubSegment);
xmlMessageProcess['row'][xmlMessageProcessCounter] = msg['row'][subSegmentCounter];
if (xmlMessageProcessCounter == singleFileLimit -1)
{
logger.debug('Now sending the 5000 records to the next channel from channel DOR Batch File Process IHI');
router.routeMessage('DOR SendPatientsToMedicare',xmlMessageProcess);
logger.debug('After sending the 5000 records to the next channel from channel DOR Batch File Process IHI');
xmlMessageProcessCounter = 0;
delete xmlMessageProcess['row'];
}
subSegmentCounter++;
xmlMessageProcessCounter++;
}// End of FOR loop
}// End of try block
catch (exception)
{
logger.error('The exception has been raised in source transformer "SplitFileIntoFiles" of channel: SplitFile');
logger.error(exception);
globalChannelMap.put('isFailed',true);
globalChannelMap.put('errDesc',exception);
return true;
}
if (xmlMessageProcessCounter > 1)
{
try
{
logger.debug('Now sending the remaining records to the next channel from channel DOR Batch File Process IHI');
router.routeMessage('DOR SendPatientsToMedicare',xmlMessageProcess);
logger.debug('After sending the remaining records to the next channel from channel DOR Batch File Process IHI');
delete xmlMessageProcess['row'];
}
catch (exception)
{
logger.error('The exception has been raised in source transformer "SplitFileIntoFiles" of channel: SplitFile');
logger.error(exception);
globalChannelMap.put('isFailed',true);
globalChannelMap.put('errDesc',exception);
return true;
}
}
return true;
// End of JavaScript
Hope this helps.

socket receive loop never returns

I have a loop that reads from a socket in Lua:
socket = nmap.new_socket()
socket:connect(host, port)
socket:set_timeout(15000)
socket:send(command)

repeat
  response, data = socket:receive_buf("\n", true)
  output = output .. data
until data == nil
Basically, the last line of the data does not contain a "\n" character, so it is never read from the socket, and the loop just hangs and never completes. I need it to return whenever the "\n" delimiter is not found. Does anyone know a way to do this?
Cheers
Update: included the socket code above.
Update 2: OK, I have got around the initial problem of waiting for a "\n" character by using the receive_bytes method.
New code:
-- socket set as above
repeat
  data = nil
  response, data = socket:receive_bytes(5000)
  output = output .. data
until data == nil
return output
This works and I get the large complete block of data back. But I need to reduce the buffer size from 5000 bytes, as this is used in a recursive function and memory usage could get very high. I'm still having problems with my "until" condition however, and if I reduce the buffer size to a size that will require the method to loop, it just hangs after one iteration.
Update 3: I have gotten around this problem using string.match and receive_bytes. I take in at least 80 bytes at a time. Then string.match checks whether the data variable contains a certain pattern, and if so the loop exits. It's not the cleanest solution, but it works for what I need it to do. Here is the code:
repeat
  response, data = socket:receive_bytes(80)
  output = output .. data
until string.match(data, "pattern")
return output
I believe the only way to deal with this situation in a socket is to set a timeout.
The following link has a little bit of info, but it's about an HTTP socket: lua http socket timeout
There is also this one (9.4 - Non-Preemptive Multithreading): http://www.lua.org/pil/9.4.html
And this question: http://lua-list.2524044.n2.nabble.com/luasocket-howto-read-write-Non-blocking-TPC-socket-td5792021.html
A good discussion of sockets can be found at this link:
http://nitoprograms.blogspot.com/2009/04/tcpip-net-sockets-faq.html
It's .NET, but the concepts are general.
See Update 3. Because the last part of the data always matches the same pattern, I can read in a block of bytes and check each time whether that block contains the pattern. If it does, that means it is the end of the data, so I append it to the output variable and exit.