NodeJS: What is the proper way to handle TCP socket streams? Which delimiter should I use?

From what I understood here, "V8 has a generational garbage collector. Moves objects around randomly. Node can't get a pointer to raw string data to write to socket." So I shouldn't store data that comes from a TCP stream in a string, especially if that string grows bigger than Math.pow(2,16) bytes. (I hope I'm right so far.)
What, then, is the best way to handle all the data coming from a TCP socket? So far I've been trying to use _:_:_ as a delimiter because I think it's somehow unique and won't clash with the rest of the data.
A sample of the incoming data would be: something_:_:_maybe a large text_:_:_ maybe tons of lines_:_:_more and more data
This is what I tried to do:
net = require('net');
var server = net.createServer(function (socket) {
    socket.on('connect', function () {
        console.log('someone connected');
        buf = new Buffer(Math.pow(2, 16)); // new buffer with size 2^16
        socket.on('data', function (data) {
            if (data.toString().search('_:_:_') === -1) { // if there's no separator in the data that just arrived...
                buf.write(data.toString()); // ...write it to the buffer; it's part of another message that will come
            } else { // if there is a separator in the data that arrived
                parts = data.toString().split('_:_:_'); // the first part is the end of a previous message, the last part is the start of a message to be completed in the future; parts between separators are independent messages
                if (parts.length == 2) {
                    msg = buf.toString('utf-8', 0, 4) + parts[0];
                    console.log('MSG: ' + msg);
                    buf = (new Buffer(Math.pow(2, 16))).write(parts[1]);
                } else {
                    msg = buf.toString() + parts[0];
                    for (var i = 1; i <= parts.length - 1; i++) {
                        if (i !== parts.length - 1) {
                            msg = parts[i];
                            console.log('MSG: ' + msg);
                        } else {
                            buf.write(parts[i]);
                        }
                    }
                }
            }
        });
    });
});
server.listen(9999);
Whenever I console.log('MSG: ' + msg), it prints the whole buffer, so it's useless for checking whether anything worked.
How can I handle this data the proper way? Would the lazy module work, even though this data is not line-oriented? Is there some other module to handle streams that are not line-oriented?

It has indeed been said that there's extra work going on because Node has to take that buffer and then push it into V8/cast it to a string. However, doing a toString() on the buffer isn't any better. There's no good solution to this right now, as far as I know, especially if your end goal is to get a string and fool around with it. It's one of the things Ryan mentioned at NodeConf as an area where work needs to be done.
As for the delimiter, you can choose whatever you want. Many binary protocols instead include a fixed header with a known structure, which usually contains a length field. That way you slice off the known header and learn the size of the rest of the data without iterating over the entire buffer. With a scheme like that, you can use a tool like:
node-binary - https://github.com/substack/node-binary
node-ctype - https://github.com/rmustacc/node-ctype
As an aside, buffers can be accessed via array syntax, and they can also be sliced apart with .slice().
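Putting those pieces together, here's a minimal sketch of length-prefixed framing in Node. The 4-byte big-endian length header is an assumption for illustration, not a standard:
var net = require('net');

net.createServer(function (socket) {
    var pending = Buffer.alloc(0); // bytes received but not yet framed

    socket.on('data', function (chunk) {
        pending = Buffer.concat([pending, chunk]);
        // A frame is a 4-byte big-endian length followed by that many payload bytes.
        while (pending.length >= 4) {
            var length = pending.readUInt32BE(0);
            if (pending.length < 4 + length) { break; } // wait for the rest of the frame
            var message = pending.slice(4, 4 + length);
            pending = pending.slice(4 + length);
            console.log('MSG: ' + message.toString('utf8'));
        }
    });
}).listen(9999);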
Lastly, check here: https://github.com/joyent/node/wiki/modules -- find a module that parses a simple TCP protocol and seems to do it well, and read some code.

You should use the new streams2 API: http://nodejs.org/api/stream.html
Here are some very useful examples: https://github.com/substack/stream-handbook
https://github.com/lvgithub/stick
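As a concrete example, a small Transform stream (streams2) can do the delimiter splitting for you. This is a sketch reusing the _:_:_ delimiter from the question; Splitter is an illustrative name, not a library class:
var stream = require('stream');
var util = require('util');

// Splits an incoming byte stream into messages separated by '_:_:_'.
function Splitter() {
    stream.Transform.call(this, { readableObjectMode: true });
    this.buffered = '';
}
util.inherits(Splitter, stream.Transform);

Splitter.prototype._transform = function (chunk, encoding, done) {
    this.buffered += chunk.toString('utf8');
    var parts = this.buffered.split('_:_:_');
    this.buffered = parts.pop(); // the last part may be incomplete; keep it for later
    for (var i = 0; i < parts.length; i++) {
        this.push(parts[i]);
    }
    done();
};

Splitter.prototype._flush = function (done) {
    if (this.buffered.length > 0) { this.push(this.buffered); } // emit any trailing message
    done();
};

// Usage: socket.pipe(new Splitter()).on('data', function (msg) { console.log('MSG: ' + msg); });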

Related

How to differentiate an abbreviation in a voice response in Dialogflow

I have integrated my Dialogflow agent with Google Assistant. There is a welcome intent that asks you to choose one of the options:
Choose any of the sports
1. NBA
2. NHL
3. FIH
The Assistant reads that response as individual letters (as an abbreviation). But when I produce the same response from my webhook, it is not read as individual letters (the response is not treated as an abbreviation) and is read together as one word. How can I achieve this? Am I missing something in the response?
You likely want to make sure you're sending back SSML in your response, rather than sending back text and letting it be converted to speech, and specifically to mark the abbreviations using the <say-as> tag, telling it to interpret the contents as characters.
So you might send it back as something like:
<speak>
Are you interested in learning more about
the <say-as interpret-as="characters">NBA</say-as>,
the <say-as interpret-as="characters">NHL</say-as>
or the <say-as interpret-as="characters">FIH</say-as>?
</speak>
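If your webhook uses the actions-on-google Node.js client library, returning SSML could look roughly like this (a sketch; the intent name 'Choose Sport' is illustrative, and the library treats strings wrapped in <speak> tags as SSML):
const { dialogflow } = require('actions-on-google');
const app = dialogflow();

app.intent('Choose Sport', (conv) => {
    // Wrapping the response in <speak> tags makes it SSML rather than plain text.
    conv.ask('<speak>' +
        'Are you interested in learning more about ' +
        'the <say-as interpret-as="characters">NBA</say-as>, ' +
        'the <say-as interpret-as="characters">NHL</say-as> ' +
        'or the <say-as interpret-as="characters">FIH</say-as>?' +
        '</speak>');
});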
The small pronunciation differences with and without SSML are serious problems, so I wrap everything in a <speak>...</speak> pair. I also give each response a unique number, plus a test hook that decides whether speech 'counts', so there is a way to test things, and a hook so an intent can be triggered for 'repeat that, please'. The point is to use sayUsual for everything ordinary:
// Mostly SSML start/end character kit, as globals
const startSp = "<speak>", endSp = "</speak>";

// Handle "Can you repeat that?" well
var vfSpokenByMe = "";

// VF near-globals: what was said, etc.
var repeatPossible = {}; repeatPossible.vf = ""; repeatPossible.n = 0;

// An answer from this app to the human, in text
function absorbMachineVf( intentNumber, aKind, aStatement )
{
    // Numbers above 9000 are reserved for 'repeats'
    if( intentNumber > 9000 ) { return; }
    // Machine to say this; a number for intents too
    repeatPossible.vf = aStatement; repeatPossible.n = intentNumber;
}

// Usual way to say a thing
function sayUsual( n, speechAgent, somethingToSay )
{
    // Work with an answer of any sort
    absorbMachineVf( n, 'usual', somethingToSay );
    // Sometimes we are just pretending (testingNow is a global test flag defined elsewhere), so
    if( !testingNow )
    { speechAgent.add( startSp + somethingToSay + endSp ); }
    // Make what we said as an answer available 'for sure' to the rest of the code
    vfSpokenByMe = somethingToSay; // even in simulation
}

Combining parts of Stream

I've got an observable watching a log that is continuously being written to. Each line is a new onNext call. Sometimes the log outputs a single log item over multiple lines. Detecting this is easy; I just can't find the right Rx call.
I'd like to find a way to collect the single log items into a List of lines, and onNext the list when the single log item is complete.
Buffer doesn't seem right as this isn't time based, it's algorithm based.
GroupBy might be what I want, but the documentation is confusing for it. It also seems that the observables it creates probably won't have onComplete called until the completion of the source observable.
This solution can't delay the log much (preferably not at all). I need to be reading the log as close to real time as possible, and order matters.
Any push in the right direction would be great.
This is a typical reactive parsing problem. You could use Rxx Parsers, or for a native solution you can build your own state machine, either with Scan or by defining an async iterator. Scan is preferable for simple parsers and often follows a Scan-Where-Select pattern.
Async iterator state machine example: Turnstile
Scan parser example (untested):
IObservable<string> lines = ReadLines();
IObservable<IReadOnlyList<string>> parsed = lines.Scan(
    new
    {
        ParsingItem = (IEnumerable<string>)null,
        Item = (IEnumerable<string>)null
    },
    (state, line) =>
        // I'm assuming here that items never span lines partially.
        IsItem(line)
            ? IsItemLastLine(line)
                ? new
                  {
                      ParsingItem = (IEnumerable<string>)null,
                      Item = (state.ParsingItem ?? Enumerable.Empty<string>()).Concat(new[] { line })
                  }
                : new
                  {
                      ParsingItem = (state.ParsingItem ?? Enumerable.Empty<string>()).Concat(new[] { line }),
                      Item = (IEnumerable<string>)null
                  }
            : new
              {
                  ParsingItem = (IEnumerable<string>)null,
                  Item = (IEnumerable<string>)new[] { line }
              })
    .Where(result => result.Item != null)
    .Select(result => result.Item.ToList().AsReadOnly());
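For what it's worth, the same Scan-Where-Select shape ports almost one-to-one to RxJS on the JavaScript side (a sketch; isItem and isItemLastLine are assumed predicates, as above):
const { scan, filter, map } = require('rxjs/operators');

// lines$ is an Observable of log lines (strings).
const parsed$ = lines$.pipe(
    scan((state, line) => {
        // Same assumption: items never span lines partially.
        if (!isItem(line)) {
            return { parsing: null, item: [line] };
        }
        const soFar = (state.parsing || []).concat([line]);
        return isItemLastLine(line)
            ? { parsing: null, item: soFar }   // item complete; emit it
            : { parsing: soFar, item: null };  // still accumulating
    }, { parsing: null, item: null }),
    filter((state) => state.item !== null),
    map((state) => state.item)
);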

Receiving data in node.js

I have a Java program sending data over a socket to my node.js application. I want to be able to obtain all of the data, which is information from an SQLite database, and send it off to something else.
I've found that something like the following can work, but it seems unreliable: data goes missing, and sometimes it doesn't show up at all.
stream.addListener('data', function (data) {
    buffer.write(data.toString());
});
On a side note, I need the socket to stay open, so I can't use the "end" event.
I don't have any real attachment to stream.addListener, so I can use something else if it works the way I want. Basically, what I'm asking is: what is the most effective way to obtain data from a socket using node.js?
P.S. thank you for your time
The data event is not guaranteed to have all the data sent to it in one go. You'll need to build up a buffer over multiple events and watch for delimiters of some kind (newlines, null characters, whatever you feel). Here's an example from a project where I'm parsing data from IRC (converted from CoffeeScript); parseData is the event handler for the data event (e.g. socket.on('data', this.parseData);):
IrcConnection.prototype.parseData = function (data) {
    var line, lines, i;
    // 'data' may arrive as a Buffer; convert it and normalise line endings
    data = data.toString().replace(/\r\n/g, "\n");
    this.buffer += data;
    lines = this.buffer.split("\n");
    this.buffer = "";
    /* Put the last line back in the buffer if it was incomplete */
    if (lines[lines.length - 1] !== '') {
        this.buffer = lines[lines.length - 1];
    }
    /* Remove the final \n or incomplete line from the array */
    lines = lines.splice(0, lines.length - 1);
    for (i = 0; i < lines.length; i++) {
        line = lines[i];
        this.emit('raw', line);
    }
};
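For completeness, this assumes the connection object initialises this.buffer and wires up the handler somewhere. A minimal sketch of that setup (the constructor shape is assumed, not taken from the original project) might be:
var util = require('util');
var EventEmitter = require('events').EventEmitter;

function IrcConnection(socket) {
    EventEmitter.call(this);
    this.buffer = ''; // holds any trailing partial line between 'data' events
    socket.on('data', this.parseData.bind(this));
}
util.inherits(IrcConnection, EventEmitter);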

Protovis - dealing with a text source

Let's say I have a text file with lines like these:
[4/20/11 17:07:12:875 CEST] 00000059 FfdcProvider W com.test.ws.ffdc.impl.FfdcProvider logIncident FFDC1003I: FFDC Incident emitted on D:/Prgs/testing/WebSphere/AppServer/profiles/ProcCtr01/logs/ffdc/server1_3d203d20_11.04.20_17.07.12.8755227341908890183253.txt com.test.testserver.management.cmdframework.CmdNotificationListener 134
[4/20/11 17:07:27:609 CEST] 0000005d wle E CWLLG2229E: An exception occurred in an EJB call. Error: Snapshot with ID Snapshot.8fdaaf3f-ce3f-426e-9347-3ac7e8a3863e not found.
com.lombardisoftware.core.TeamWorksException: Snapshot with ID Snapshot.8fdaaf3f-ce3f-426e-9347-3ac7e8a3863e not found.
at com.lombardisoftware.server.ejb.persistence.CommonDAO.assertNotNull(CommonDAO.java:70)
Is there any way to easily import a data source such as this into Protovis? If not, what would be the easiest way to parse it into a JSON format? For example, the first entry might be parsed like so:
[
    {
        "Date": "4/20/11 17:07:12:875 CEST",
        "Status": "00000059",
        "Msg": "FfdcProvider W com.test.ws.ffdc.impl.FfdcProvider logIncident FFDC1003I"
    }
]
Thanks, David
Protovis itself doesn't offer any utilities for parsing text files, so your options are:
Use Javascript to parse the text into an object, most likely using regex.
Pre-process the text using the text-parsing language or utility of your choice, exporting a JSON file.
Which you choose depends on several factors:
Is the data somewhat static, or are you going to be running this on a new or dynamic file each time you look at it? With static data, it might be easiest to pre-process; with dynamic data, this may add an annoying extra step.
How much data do you have? Parsing a 20K text file in Javascript is totally fine; parsing a 2MB file will be really slow, and will cause the browser to hang while it's working (unless you use Workers).
If there's a lot of processing involved, would you rather put that load on the server (by using a server-side script for pre-processing) or on the client (by doing it in the browser)?
If you wanted to do this in Javascript, based on the sample you provided, you might do something like this:
// Assumes var text = 'your text';
// Use the utility of your choice to load your text file into the
// variable (e.g. jQuery.get()), or just paste it in.

var lines = text.split(/[\r\n\f]+/),
    // regex to match the beginning of a log entry
    patt = /^\[(\d\d?\/\d\d?\/\d\d? \d\d:\d\d:\d\d:\d{3} [A-Z]+)\] (\d{8})/,
    items = [],
    currentItem;

// loop through the lines in the file
lines.forEach(function (line) {
    // look for the beginning of a log entry
    var initialData = line.match(patt);
    if (initialData) {
        // start a new item, using the captured matches
        currentItem = {
            Date: initialData[1],
            Status: initialData[2],
            Msg: line.substr(initialData[0].length + 1)
        };
        items.push(currentItem);
    } else {
        // this is a continuation of the last item
        currentItem.Msg += "\n" + line;
    }
});

// items now contains an array of objects with your data

Simplest way to process a list of items in a multi-threaded manner

I've got a piece of code that opens a data reader and for each record (which contains a url) downloads & processes that page.
What's the simplest way to make it multi-threaded so that, let's say, there are 10 slots which can be used to download and process pages simultaneously, and as slots become available the next rows are read, and so on?
I can't use WebClient.DownloadDataAsync
Here's what I have tried to do, but it hasn't worked (i.e. the worker delegate is never run):
using (IDataReader dr = q.ExecuteReader())
{
    ThreadPool.SetMaxThreads(10, 10);
    int workerThreads = 0;
    int completionPortThreads = 0;
    while (dr.Read())
    {
        do
        {
            ThreadPool.GetAvailableThreads(out workerThreads, out completionPortThreads);
            if (workerThreads == 0)
            {
                Thread.Sleep(100);
            }
        } while (workerThreads == 0);
        Database.Log l = new Database.Log();
        l.Load(dr);
        ThreadPool.QueueUserWorkItem(delegate(object threadContext)
        {
            Database.Log log = threadContext as Database.Log;
            Scraper scraper = new Scraper();
            dc.Product p = scraper.GetProduct(log, log.Url, true);
            ManualResetEvent done = new ManualResetEvent(false);
            done.Set();
        }, l);
    }
}
You do not normally need to play with the Max threads (I believe it defaults to something like 25 per proc for worker, 1000 for IO). You might consider setting the Min threads to ensure you have a nice number always available.
You don't need to call GetAvailableThreads either. You can just start calling QueueUserWorkItem and let it do all the work. Can you repro your problem by simply calling QueueUserWorkItem?
You could also look into the Task Parallel Library, which has helper methods to make this kind of work more manageable and easier.