WinRT writing to TCP stream not working - sockets

I have started developing a "WinRT" app ("Metro"-style apps for Windows 8). The app should read and write some data via a TCP stream. Reading works fine, but writing does not. Below is the code using the full .NET Framework, which works:
var client = new TcpClient();
client.Connect(IPAddress.Parse("192.168.178.51"), 60128);
var stream = client.GetStream();
var writer = new StreamWriter(stream);
writer.WriteLine("ISCP\0\0\0\x10\0\0\0.....");
writer.Flush();
In comparison the following code does not work:
var tcpClient = new StreamSocket();
await tcpClient.ConnectAsync(new HostName("192.168.178.51"), "60128");
var writer = new DataWriter(tcpClient.OutputStream);
writer.WriteString("ISCP\0\0\0\x10\0\0\0....");
writer.FlushAsync();
WriteString returns the correct length of the string (25), yet the other end does not receive the correct command. In Wireshark I also see a correct packet for the full .NET version, but not for the WinRT version.
How to fix this?

After your call to writer.WriteString() you need to actually commit the data that is now in the buffer by calling writer.StoreAsync().
Any call to writer.WriteXxx() only stores data in memory. Once you call writer.StoreAsync(), that in-memory data is sent.
My guess is that StreamWriter.WriteLine does this for you in a single call.
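For illustration, a sketch of the question's WinRT snippet with the missing call added (the payload string is the one from the question):
var tcpClient = new StreamSocket();
await tcpClient.ConnectAsync(new HostName("192.168.178.51"), "60128");
var writer = new DataWriter(tcpClient.OutputStream);
writer.WriteString("ISCP\0\0\0\x10\0\0\0....");
await writer.StoreAsync(); // commits the buffered bytes to the socket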

MimeKit .Net Core : The target process exited with code -1073741819 while evaluating the function "MimeKit.MimeMessage.ToString"

I am using MimeKit/MailKit to forward mail in my .NET Core app.
The process exits with the following error:
The target process exited with code -1073741819 while evaluating the function "MimeKit.MimeMessage.ToString"
While executing these lines:
var builder = new BodyBuilder();
builder.TextBody = forwardMail.Body ?? string.Empty;
builder.Attachments.Add(new MessagePart { Message = message });
message.Body = builder.ToMessageBody();
It happens every time. Why is this, and how can I fix it?
You are making the message recursive.
message.Body = something that embeds message
When you call ToString() on it, the message is first written to a MemoryStream (and from there converted into a string), and the MemoryStream buffer keeps growing without bound because there is no end to a recursive message.
You likely meant to embed a different message, but your code has a body part of the message pointing back to the top-level message itself, resulting in an infinite loop when it gets written out.
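A minimal sketch of the usual fix, assuming message is the original mail being forwarded and forwardMail is your own model (the addresses below are placeholders): build a new MimeMessage for the forward and assign the builder's output to that, never back to message itself.
var forward = new MimeMessage();
forward.From.Add(new MailboxAddress("Forwarder", "me@example.com")); // placeholder address
forward.To.Add(new MailboxAddress("Recipient", "you@example.com")); // placeholder address
forward.Subject = "Fwd: " + message.Subject;
var builder = new BodyBuilder();
builder.TextBody = forwardMail.Body ?? string.Empty;
builder.Attachments.Add(new MessagePart { Message = message }); // embed the original message
forward.Body = builder.ToMessageBody(); // assign to the new message, not to 'message'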

Meteor: uploading file from client to Mongo collection vs file system vs GridFS

Meteor is great, but it lacks native support for traditional file uploading. There are several options to handle file uploads:
From the client, data can be sent using:
Meteor.call('saveFile',data) or collection.insert({file:data})
'POST' form or HTTP.call('POST')
In the server, the file can be saved to:
a mongodb file collection by collection.insert({file:data})
file system in /path/to/dir
mongodb GridFS
What are the pros and cons of these methods, and how best to implement them? I am aware that there are also other options, such as saving to a third-party site and obtaining a URL.
You can achieve file uploading with Meteor without using any additional packages or a third party:
Option 1: DDP, saving file to a mongo collection
/*** client.js ***/
// assign a change event to the input tag
'change input': function(event, template){
  var file = event.target.files[0]; // assuming 1 file only
  if (!file) return;
  var reader = new FileReader(); // create a reader according to the HTML5 File API
  reader.onload = function(event){
    var buffer = new Uint8Array(reader.result); // convert to binary
    Meteor.call('saveFile', buffer);
  };
  reader.readAsArrayBuffer(file); // read the file as an arraybuffer
}

/*** server.js ***/
Files = new Mongo.Collection('files');
Meteor.methods({
  'saveFile': function(buffer){
    Files.insert({data: buffer});
  }
});
Explanation
First, the file is grabbed from the input using the HTML5 File API. A reader is created with new FileReader, and the file is read via readAsArrayBuffer. If you console.log this arraybuffer it prints {}, and DDP can't send it over the wire, so it has to be converted to a Uint8Array.
When you pass this to Meteor.call, Meteor automatically runs EJSON.stringify on the Uint8Array and sends it over DDP. You can check the websocket traffic in the Chrome console; you will see a string resembling base64.
On the server side, Meteor calls EJSON.parse() and converts it back to a buffer.
Pros
Simple, no hacky workarounds, no extra packages
Sticks to the Data on the Wire principle
Cons
More bandwidth: the resulting base64 string is ~33% larger than the original file
File size limit: can't send big files (limit ~16 MB?)
No caching
No gzip or compression yet
Takes up lots of memory if you publish files
Option 2: XHR, post from client to file system
/*** client.js ***/
// assign a change event to the input tag
'change input': function(event, template){
  var file = event.target.files[0];
  if (!file) return;
  var xhr = new XMLHttpRequest();
  xhr.open('POST', '/uploadSomeWhere', true);
  xhr.onload = function(event){...};
  xhr.send(file);
}

/*** server.js ***/
var fs = Npm.require('fs');
// using the internal webapp or iron:router
WebApp.connectHandlers.use('/uploadSomeWhere', function(req, res){
  //var start = Date.now()
  var file = fs.createWriteStream('/path/to/dir/filename');
  file.on('error', function(error){...});
  file.on('finish', function(){
    res.writeHead(...);
    res.end(); // end the response
    //console.log('Finish uploading, time taken: ' + (Date.now() - start));
  });
  req.pipe(file); // pipe the request to the file
});
Explanation
The file is grabbed on the client, an XHR object is created, and the file is sent to the server via 'POST'.
On the server, the data is piped into the underlying file system. Before saving, you can additionally determine the filename, perform sanitisation, check whether it already exists, etc.
Pros
Takes advantage of XHR 2, so you can send an arraybuffer; no new FileReader() is needed, unlike in option 1
An arraybuffer is less bulky than a base64 string
No size limit; I sent a file of ~200 MB on localhost with no problem
File system is faster than mongodb (more of this later in benchmarking below)
Cachable and gzip
Cons
XHR 2 is not available in older browsers (e.g. below IE10), though of course you can implement a traditional <form> POST instead. I used xhr = new XMLHttpRequest() rather than HTTP.call('POST') because the current HTTP.call in Meteor is not yet able to send an arraybuffer (correct me if I am wrong).
/path/to/dir/ has to be outside Meteor, otherwise writing a file in /public triggers a reload
Option 3: XHR, save to GridFS
/*** client.js ***/
// same as option 2

/*** version A: server.js ***/
var db = MongoInternals.defaultRemoteCollectionDriver().mongo.db;
var GridStore = MongoInternals.NpmModule.GridStore;
WebApp.connectHandlers.use('/uploadSomeWhere', function(req, res){
  //var start = Date.now()
  var file = new GridStore(db, 'filename', 'w');
  file.open(function(error, gs){
    file.stream(true); // true will close the file automatically once piping finishes
    file.on('error', function(e){...});
    file.on('end', function(){
      res.end(); // send the end response
      //console.log('Finish uploading, time taken: ' + (Date.now() - start));
    });
    req.pipe(file);
  });
});

/*** version B: server.js ***/
var db = MongoInternals.defaultRemoteCollectionDriver().mongo.db;
var GridStore = Npm.require('mongodb').GridStore; // also need to add Npm.depends({mongodb:'2.0.13'}) in package.js
WebApp.connectHandlers.use('/uploadSomeWhere', function(req, res){
  //var start = Date.now()
  var file = new GridStore(db, 'filename', 'w').stream(true); // start the stream
  file.on('error', function(e){...});
  file.on('end', function(){
    res.end(); // send the end response
    //console.log('Finish uploading, time taken: ' + (Date.now() - start));
  });
  req.pipe(file);
});
Explanation
The client script is the same as in option 2.
According to the last line of Meteor 1.0.x's mongo_driver.js, a global object called MongoInternals is exposed; you can call defaultRemoteCollectionDriver() to get the current database db object, which the GridStore requires. In version A, the GridStore is also exposed by MongoInternals. The mongo driver used by current Meteor is v1.4.x.
Then, inside a route, you can create a new write object by calling var file = new GridStore(...) (API). You then open the file and create a stream.
I also included a version B. In this version, the GridStore is loaded from a newer mongodb driver via Npm.require('mongodb'); this driver is the latest, v2.0.13 as of this writing. The new API doesn't require you to open the file; you can call stream(true) directly and start piping.
Pros
Same as in option 2: sent as an arraybuffer, with less overhead than the base64 string in option 1
No need to worry about file name sanitisation
Separation from the file system: no need to write to a temp dir, and the db can be backed up, replicated, sharded, etc.
No need to implement any other package
Cachable and can be gzipped
Stores much larger sizes than a normal mongo collection
Uses pipe to reduce memory load
Cons
Unstable Mongo GridFS: I included version A (mongo 1.x) and B (mongo 2.x). In version A, when piping large files > 10 MB, I got lots of errors, including corrupted files and unfinished pipes. This problem is solved in version B using mongo 2.x; hopefully Meteor will upgrade to mongodb 2.x soon.
API confusion: in version A, you need to open the file before you can stream, but in version B, you can stream without calling open. The API doc is also not very clear, and the stream API is not 100% interchangeable with Npm.require('fs'): in fs you listen for file.on('finish'), but in GridFS you listen for file.on('end') when writing finishes.
GridFS doesn't provide write atomicity, so if there are multiple concurrent writes to the same file, the final result may be very different.
Speed: Mongo GridFS is much slower than the file system.
Benchmark
As you can see in options 2 and 3, I included var start = Date.now() and, when the write ends, console.log the elapsed time in ms. Below are the results, measured on a dual-core machine with 4 GB RAM and an HDD, running Ubuntu 14.04.
file size    GridFS (ms)    FS (ms)
100 KB       50             2
1 MB         400            30
10 MB        3,500          100
200 MB       80,000         1,240
You can see that FS is much faster than GridFS: a 200 MB file takes ~80 s with GridFS but only ~1.2 s with FS. I haven't tried an SSD; the results may differ. In real life, however, bandwidth may dictate how fast a file streams from client to server; achieving a 200 MB/s transfer speed is not typical, whereas a transfer speed of ~2 MB/s (GridFS) is more the norm.
Conclusion
By no means is this comprehensive, but it should help you decide which option best fits your needs.
DDP is the simplest and sticks to the core Meteor principle, but the data is bulkier, not compressible during transfer, and not cachable. This option may still be good if you only need small files.
XHR coupled with the file system is the 'traditional' way: stable API, fast, 'streamable', compressible, cachable (ETag etc.), but the files need to live in a separate folder.
XHR coupled with GridFS gives you the benefits of replica sets and scalability, no touching of the file system dir, support for large files and for many files if the file system restricts their number, and it is also cachable and compressible. However, the API is unstable, you get errors on multiple writes, and it's s..l..o..w..
Hopefully Meteor DDP will soon support gzip, caching, etc., and GridFS will get faster...
Just to add on to option 1, regarding viewing the file: I did it without EJSON.
<template name='tryUpload'>
  <p>Choose file to upload</p>
  <input name="upload" class='fileupload' type='file'>
</template>

Template.tryUpload.events({
  'change .fileupload': function(event, template){
    console.log('change & view');
    var f = event.target.files[0]; // assuming 1 file only
    if (!f) return;
    var r = new FileReader();
    r.onload = function(event){
      var buffer = new Uint8Array(r.result); // convert to binary
      var asString = String.fromCharCode.apply(null, buffer); // view the bytes as text
      console.log(asString);
      //Meteor.call('saveFiles', buffer);
    };
    r.readAsArrayBuffer(f);
  }
});

How to get application process to wait until the socket has data to read using libevent bufferevents?

I'm working with libevent for the first time, and I'm having an issue getting my application to wait until the read callback is called instead of running continuously. I am using bufferevents as well. Essentially, I am trying to avoid the sleep in my main application loop and instead have the OS wake up the process (via libevent) when there is data to be read off the socket. Does anyone know how to do this? I found that in an alpha build of libevent you can set a base event loop to EVLOOP_NO_EXIT_ON_EMPTY, but from looking at the libevent code I believe that would just spin and use up my whole processor. I also read in this question that it is a bad idea to set a socket to blocking on Windows, which is why I haven't gone that route either. I will tag this with libuv and libev too, since they are similar and might contribute to my solution.
You have to use the following API. Some of it may be outdated; you can search for the newer equivalents.
struct event_base *base;
struct event g_eve;

base = event_init();
// after binding the socket, register it for read events using the API below
event_set(&g_eve, SockFd, EV_READ | EV_PERSIST, CallbackFunction, &g_eve);
event_add(&g_eve, NULL);
event_base_dispatch(base); // blocks; the process sleeps until the OS reports a readable socket

How to specify ADO.NET connection timeouts of less than a second?

Connection time outs are specified in the connectionString in web.config file like this:
"Data Source=dbs;Initial Catalog=db;"+"Integrated Security=SSPI;Connection Timeout=30"
The time is in seconds, but I want to specify a connection timeout in milliseconds, say 500 ms. How can I do that?
Edit 1: I want to do this to create a ping method which simply checks whether the database is reachable or not.
Edit 2: I have been searching for similar solutions, and this answer mentioned specifying the timeout in milliseconds, so I was intrigued and wanted to find out how it can be done.
Firstly, please make sure that you are using non-pooled connections, to ensure that you always get a fresh connection; you can do this by adding Pooling=false to your connection string. For both of the solutions below, I would also recommend adding Connection Timeout=1, just to ensure that ADO.NET does not needlessly continue opening the connection after your application has given up.
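For example, a sketch of such a connection string (reusing the question's placeholder server and database names):
Data Source=dbs;Initial Catalog=db;Integrated Security=SSPI;Pooling=false;Connection Timeout=1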
For .Net 4.5 you can use the new OpenAsync method and a CancellationToken to achieve a short timeout (e.g. 500ms):
using (var tokenSource = new CancellationTokenSource())
using (var connection = new SqlConnection(connectionString))
{
tokenSource.CancelAfter(500);
await connection.OpenAsync(tokenSource.Token);
}
When this times out, you should see the Task returned by OpenAsync go to the canceled state, which will result in a TaskCanceledException.
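For instance, you could observe the timeout like this (a sketch wrapping the OpenAsync call from above):
try
{
    await connection.OpenAsync(tokenSource.Token);
}
catch (TaskCanceledException)
{
    // the connection attempt exceeded the 500 ms budget
}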
For .Net 4.0 you can wrap the connection open in a Task and wait on that for the desired time:
var openTask = Task.Factory.StartNew(() =>
{
using (var connection = new SqlConnection(connectionString))
{
connection.Open();
}
});
openTask.ContinueWith(task =>
{
// Need to observe any exceptions here - perhaps you might log them?
var ignored = task.Exception;
}, TaskContinuationOptions.OnlyOnFaulted);
if (!openTask.Wait(500))
{
// Didn't complete
Console.WriteLine("Fail");
}
In this example, openTask.Wait(500) returns a bool that indicates whether the Task completed within the timeout. Please be aware that in .Net 4.0 you must observe all exceptions thrown inside tasks, otherwise they will cause your program to crash.
If you need examples for versions of .Net prior to 4.0 please let me know.

I'm sending a command to a serial COM port in C# and not getting data back, but when I use Putty I get data - what am I doing wrong?

I have a C# application which I'm writing to try to automate data extraction from a serial device. As the title of my question says, I have tried the exact same commands in PuTTY and I get data back. Could somebody please tell me what I have missed, so that I can get the same data out with my C# application?
Basically, I need to connect to COM6 at a speed/baud of 57600 and send the command "UH" (without quotes). I should be presented with a few lines of text data, which currently only works in PuTTY.
As a quick test, I threw this together:
private SerialPort serialPort = new SerialPort();
private void getHistory_Click(object sender, EventArgs e)
{
    serialPort.DataReceived += new SerialDataReceivedEventHandler(serialPort_DataReceived);
    serialPort.PortName = "COM6";
    serialPort.BaudRate = 57600;
    serialPort.Open();
    if (serialPort.IsOpen)
    {
        serialPort.Write("UH");
    }
}

private void serialPort_DataReceived(object sender, SerialDataReceivedEventArgs e)
{
    string result = serialPort.ReadExisting();
    Invoke(new MethodInvoker(delegate { textbox1.AppendText(result); }));
}
The DataReceived event does get fired, but it only returns the "UH" I sent, with no further data. Any help with this problem would be highly appreciated!
Justin
Well, without further details of the device in question it is hard to say for sure, but two things spring to mind:
Firstly, what comms protocol does the device require? You have set up the baud rate, but there is no mention of data bits, parity, or stop bits. I think the .NET serial port class defaults to 8,N,1. If your device is the same then you should be fine; if it is not, then it won't work.
Secondly, does the device require any kind of terminator on the data to mark a complete packet? Commonly the data sent is appended with a carriage return and a line feed (0x0D and 0x0A), or perhaps it has a prefix of STX (0x02) and a suffix of ETX (0x03).
Any message that the device responds with is likely to be in the same format too.
I don't know how PuTTY works, but check its setup and see whether it is appending anything to the message you type, and which protocol settings it uses. HyperTerminal does this too, so you could also test with that.
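To illustrate both points, here is a sketch of the setup code with explicit protocol settings and a terminator appended (the 8,N,1 settings and the CR/LF are assumptions; check the device's manual for the real values):
serialPort.PortName = "COM6";
serialPort.BaudRate = 57600;
serialPort.DataBits = 8;             // assumed; match the device's protocol
serialPort.Parity = Parity.None;     // assumed
serialPort.StopBits = StopBits.One;  // assumed
serialPort.Open();
serialPort.Write("UH\r\n");          // CR+LF terminator, in case the device expects one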