I am looking for the best way to transfer files from the Compact Framework to a server via REST. I have a web service I created using .NET Web API. I've looked at several SO questions and other sites that deal with sending files, but none of them seem to work for what I need.
I am trying to send media files from WM 6 and 6.5 devices to my REST service. While most of the files are less than 300 KB, the odd few may be 2-10 megabytes or so. Does anyone have some snippets I could use to make this work?
Thanks!
I think this is the minimum for sending a file:
using (var fileStream = File.Open(@"\file.txt", FileMode.Open, FileAccess.Read, FileShare.Read))
{
    HttpWebRequest request = (HttpWebRequest)HttpWebRequest.Create("http://www.destination.com/path");
    request.Method = "POST"; // or PUT, depending on what the server expects
    request.ContentLength = fileStream.Length; // see the note below

    using (var requestStream = request.GetRequestStream())
    {
        int bytes;
        byte[] buffer = new byte[1024]; // any reasonable buffer size will do
        while ((bytes = fileStream.Read(buffer, 0, buffer.Length)) > 0)
        {
            requestStream.Write(buffer, 0, bytes);
        }
    }

    try
    {
        using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
        {
            // success; inspect response.StatusCode here if needed
        }
    }
    catch (WebException ex)
    {
        // failure
    }
}
Note: HTTP needs a way to know when you're "done" sending data. There are three ways to achieve this:
Set request.ContentLength as in the example, because we know the size of the file before sending anything.
Set request.SendChunked to send the data in chunks, each prefixed with its own size (see the sketch after this list).
You could also set request.AllowWriteStreamBuffering to write to an in-memory buffer, but I wouldn't recommend wasting that much memory on the Compact Framework.
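A minimal sketch of the chunked alternative, assuming SendChunked is supported by the Compact Framework version you target and the server accepts chunked transfer encoding:

// Chunked upload: no ContentLength needed, and nothing is buffered in memory.
HttpWebRequest request = (HttpWebRequest)HttpWebRequest.Create("http://www.destination.com/path");
request.Method = "POST";
request.SendChunked = true;                 // each chunk carries its own size
request.AllowWriteStreamBuffering = false;  // don't buffer the whole file in RAM

using (var fileStream = File.Open(@"\file.txt", FileMode.Open, FileAccess.Read, FileShare.Read))
using (var requestStream = request.GetRequestStream())
{
    byte[] buffer = new byte[1024];
    int bytes;
    while ((bytes = fileStream.Read(buffer, 0, buffer.Length)) > 0)
    {
        requestStream.Write(buffer, 0, bytes);
    }
}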
I'm working with C#, .NET Core, and NEventStore (version 9.0.1), trying to evaluate the various persistence options it supports out of the box.
More specifically, when using the Mongo persistence, the payload is stored without any compression being applied.
Note: payload compression works as expected with the SQL persistence of NEventStore, but not with the Mongo persistence.
I'm using the below code to create the event store and initialize:
private IStoreEvents CreateEventStore(string connectionString)
{
    var store = Wireup.Init()
        .UsingMongoPersistence(connectionString, new NEventStore.Serialization.DocumentObjectSerializer())
        .InitializeStorageEngine()
        .UsingBsonSerialization()
        .Compress()
        .HookIntoPipelineUsing()
        .Build();
    return store;
}
And I'm using the code below for storing the events:
public async Task AddMessageTostore(Command command)
{
    using (var stream = _eventStore.CreateStream(command.Id))
    {
        stream.Add(new EventMessage { Body = command });
        stream.CommitChanges(Guid.NewGuid());
    }
}
The workaround I did: by implementing the PreCommit(CommitAttempt attempt) and Select methods of IPipelineHook and using gzip compression logic, compression of events in MongoDB was achieved.
Attaching data store images of both the SQL and Mongo persistence:
So, the questions are:
Is there some other option or setting I'm missing, so that the events get compressed while saving (the fluent way of calling the Compress method)?
Is the workaround mentioned above sensible, or does it add a performance overhead?
I also faced the same issue while using NEventStore.Persistence.MongoDB.
Even when I used the fluent Compress method, payload compression did not happen for the Mongo persistence the way it does for the SQL persistence.
Finally, I achieved the compression/decompression by customizing the logic inside the PreCommit(CommitAttempt attempt) and Select(ICommit committed) methods.
Code used for compression:
using (var stream = new MemoryStream())
{
    using (var compressedStream = new GZipStream(stream, CompressionMode.Compress))
    {
        var serializer = new JsonSerializer
        {
            TypeNameHandling = TypeNameHandling.None,
            ReferenceLoopHandling = ReferenceLoopHandling.Ignore
        };
        // JSON-serialize the payload straight into the gzip stream
        var writer = new JsonTextWriter(new StreamWriter(compressedStream));
        serializer.Serialize(writer, this);
        writer.Flush(); // flushes the JSON writer and its underlying StreamWriter
    }
    // disposing the GZipStream completes the compressed data in the MemoryStream
    return stream.ToArray();
}
Code used for decompression:
using (var stream = new MemoryStream(bytes))
using (var decompressedStream = new GZipStream(stream, CompressionMode.Decompress))
{
    var serializer = new JsonSerializer
    {
        TypeNameHandling = TypeNameHandling.None,
        ReferenceLoopHandling = ReferenceLoopHandling.Ignore
    };
    // read the JSON payload back out of the gzip stream
    var reader = new JsonTextReader(new StreamReader(decompressedStream));
    var body = serializer.Deserialize(reader, type);
    return body as Command;
}
I'm not sure if this is the right approach, or whether it will have any impact on the performance of event store operations such as insert and select.
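For reference, a sketch of how that logic can be hooked into the pipeline. It assumes the PipelineHookBase helper class from NEventStore is available (otherwise implement IPipelineHook directly), and CompressCommand/DecompressCommand are placeholder names wrapping the gzip code shown above:

public class GzipPipelineHook : PipelineHookBase
{
    // Called before a commit is persisted: replace each event body with its gzipped bytes.
    public override bool PreCommit(CommitAttempt attempt)
    {
        foreach (var eventMessage in attempt.Events)
        {
            // CompressCommand: hypothetical wrapper around the gzip code above, returns byte[]
            eventMessage.Body = CompressCommand((Command)eventMessage.Body);
        }
        return true; // continue with the (now compressed) commit
    }

    // Called for every commit read back: restore the original body.
    public override ICommit Select(ICommit committed)
    {
        foreach (var eventMessage in committed.Events)
        {
            // DecompressCommand: hypothetical wrapper around the gunzip code above
            eventMessage.Body = DecompressCommand((byte[])eventMessage.Body);
        }
        return committed;
    }
}

// Wired in at setup time (other wireup calls omitted):
var store = Wireup.Init()
    .UsingMongoPersistence(connectionString, new NEventStore.Serialization.DocumentObjectSerializer())
    .HookIntoPipelineUsing(new GzipPipelineHook())
    .Build();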
In this question, Mirth HTTP POST request with Parameters using Javascript, I used a variation of the first answer. The code is shown below.
I'm running this code for a file that has nearly 46,000 rows, which equates to about 46,000 requests hitting our external server. I'm noticing that Mirth is making requests to our API endpoint at about 1.6 per second. This is unusually slow, and I would like some help understanding whether this is related to Mirth or to the code above. Can repeated imports in a for loop cause slowdowns? Or is there a specific Mirth setting that limits the number of requests sent?
The version of Mirth is 3.12.0.
I started the process at 2:27 PM and it's expected to finish by almost 8:41 PM tonight, which is ridiculously slow.
// Skip the first header row
for (i = 1; i < msg['row'].length(); i++) {
    col1 = msg['row'][i]['column1'].toString();
    col2...
    ...
    // Insert into results if the file and sample aren't already present
    InsertIntoDatabase();
}
function InsertIntoDatabase() {
    with (JavaImporter(
        org.apache.commons.io.IOUtils,
        org.apache.http.client.methods.HttpPost,
        org.apache.http.client.entity.UrlEncodedFormEntity,
        org.apache.http.impl.client.HttpClients,
        org.apache.http.message.BasicNameValuePair,
        com.google.common.io.Closer)) {
        var closer = Closer.create();
        try {
            var httpclient = closer.register(HttpClients.createDefault());
            var httpPost = new HttpPost('http://<server_name>/InsertNewCorrection');
            var postParameters = [
                new BasicNameValuePair("col1", col1),
                new BasicNameValuePair(...
                ...
            ];
            httpPost.setEntity(new UrlEncodedFormEntity(postParameters, "UTF-8"));
            httpPost.setHeader('Content-Type', 'application/x-www-form-urlencoded');
            var response = closer.register(httpclient.execute(httpPost));
            var is = closer.register(response.entity.content);
            result = IOUtils.toString(is, 'UTF-8');
        } finally {
            closer.close();
        }
    }
    return result;
}
I am new to Vert.x and I am using the Vert.x filesystem API to read a file of large size.
vertx.fileSystem().readFile("target/classes/readme.txt", result -> {
    if (result.succeeded()) {
        System.out.println(result.result());
    } else {
        System.err.println("Oh oh ..." + result.cause());
    }
});
But all the RAM is consumed while reading, and the memory is not even released after use. The Vert.x filesystem API documentation also warns:
Do not use this method to read very large files or you risk running out of available RAM.
Is there any alternative to this?
To read a large file you should open an AsyncFile:
OpenOptions options = new OpenOptions();
fileSystem.open("myfile.txt", options, res -> {
    if (res.succeeded()) {
        AsyncFile file = res.result();
    } else {
        // Something went wrong!
    }
});
An AsyncFile is a ReadStream, so you can use it together with a Pump to copy the bytes to a WriteStream:
Pump.pump(file, output).start();
file.endHandler((r) -> {
    System.out.println("Copy done");
});
There are different kinds of WriteStream, such as AsyncFile, net sockets, HTTP server responses, etc.
To read/process a large file in chunks you need to use the open() method, which will return an AsyncFile on success. On this AsyncFile you call setReadBufferSize() (or don't; the default is 8192) and attach a handler(), which will be passed a Buffer of at most the size of the read buffer you just set.
In the example below I have also attached an endHandler() to print a final newline to stay in line with the sample code you provided in the question:
vertx.fileSystem().open("target/classes/readme.txt", new OpenOptions().setWrite(false).setCreate(false), result -> {
    if (result.succeeded()) {
        result.result()
            .setReadBufferSize(READ_BUFFER_SIZE)
            .handler(data -> System.out.print(data.toString()))
            .endHandler(v -> System.out.println());
    } else {
        System.err.println("Oh oh ..." + result.cause());
    }
});
You need to define READ_BUFFER_SIZE somewhere of course.
The reason is that internally .readFile calls Files.readAllBytes.
What you should do instead is create a stream out of your file and pass it to a Vert.x handler:
try (InputStream stream = new FileInputStream("target/classes/readme.txt")) {
    // Your handling here
}
I am trying to implement the Facebook X-FACEBOOK-PLATFORM SASL mechanism so I can integrate Facebook Chat into my application over XMPP.
Here is the code:
var ak = "my app id";
var sk = "access token";
var aps = "my app secret";
using (var client = new TcpClient())
{
    client.Connect("chat.facebook.com", 5222);
    using (var writer = new StreamWriter(client.GetStream()))
    using (var reader = new StreamReader(client.GetStream()))
    {
        // Write for the first time
        writer.Write("<stream:stream xmlns=\"jabber:client\" xmlns:stream=\"http://etherx.jabber.org/streams\" version=\"1.0\" to=\"chat.facebook.com\"><auth xmlns=\"urn:ietf:params:xml:ns:xmpp-sasl\" mechanism=\"X-FACEBOOK-PLATFORM\" /></stream:stream>");
        writer.Flush();
        Thread.Sleep(500);

        // I am pretty sure the following works, or at least it's not what causes the error
        var challenge = Encoding.UTF8.GetString(Convert.FromBase64String(XElement.Parse(reader.ReadToEnd()).Elements().Last().Value)).Split('&').Select(s => s.Split('=')).ToDictionary(s => s[0], s => s[1]);
        var response = new SortedDictionary<string, string>() { { "api_key", ak }, { "call_id", DateTime.Now.Ticks.ToString() }, { "method", challenge["method"] }, { "nonce", challenge["nonce"] }, { "session_key", sk }, { "v", "1.0" } };
        var responseString1 = string.Format("{0}{1}", string.Join(string.Empty, response.Select(p => string.Format("{0}={1}", p.Key, p.Value)).ToArray()), aps);

        byte[] hashedResponse1 = null;
        using (var prov = new MD5CryptoServiceProvider()) hashedResponse1 = prov.ComputeHash(Encoding.UTF8.GetBytes(responseString1));

        var builder = new StringBuilder();
        foreach (var item in hashedResponse1) builder.Append(item.ToString("x2"));

        var responseString2 = Convert.ToBase64String(Encoding.UTF8.GetBytes(string.Format("{0}&sig={1}", string.Join("&", response.Select(p => string.Format("{0}={1}", p.Key, p.Value)).ToArray()), builder.ToString().ToLower())));

        // Write for the second time
        writer.Write(string.Format("<response xmlns=\"urn:ietf:params:xml:ns:xmpp-sasl\">{0}</response>", responseString2));
        writer.Flush();
        Thread.Sleep(500);

        MessageBox.Show(reader.ReadToEnd());
    }
}
I shortened the code as much as possible, because I think my SASL implementation (whether it works or not, I haven't had a chance to test it yet) is not what causes the error.
I get the following exception thrown at my face: "Unable to read data from the transport connection: An established connection was aborted by the software in your host machine." (error code 10053, System.Net.Sockets.SocketError.ConnectionAborted).
It happens every time I try to read from the client's stream for the second time. As you can see, I pause the thread here so the Facebook server has enough time to answer me; I used an asynchronous approach before and encountered exactly the same thing, so I decided to try it synchronously first. In any case, the actual SASL mechanism implementation really shouldn't cause this: if I don't try to authenticate right away, but instead send a request to see which mechanisms the server supports and then select a mechanism in another round of reading and writing, it fails; when I send the mechanism-selection XML right away, that works, and it fails on whatever I send second.
So the conclusion is the following: I open the socket connection, write to it, read from it (the first read works both sync and async), write to it a second time, and try to read from it a second time, and there it always fails. Clearly, then, the problem is with the socket connection itself. I tried using a new StreamReader for the second read, but to no avail. This is rather unpleasant, since I would really like to implement a facade over NetworkStream with a "Received" event, or something like Send(string data, Action<string> responseProcessor), to get some comfort working with that stream. I already had such an implementation, but it also failed on the second read.
Thanks for your suggestions.
Edit: Here is the code of the facade over NetworkStream. The same thing happens when using this asynchronous approach, although a couple of hours ago it worked, except that the second response returned the same string as the first. I can't figure out what I changed in the meantime, or how.
public void Send(XElement fragment)
{
    if (Sent != null) Sent(this, new XmppEventArgs(fragment));

    byte[] buffer = new byte[1024];
    AsyncCallback callback = null;
    callback = (a) =>
    {
        var available = NetworkStream.EndRead(a);
        if (available > 0)
        {
            StringBuilder.Append(Encoding.UTF8.GetString(buffer, 0, available));
            NetworkStream.BeginRead(buffer, 0, buffer.Length, callback, buffer);
        }
        else
        {
            var args = new XmppEventArgs(XElement.Parse(StringBuilder.ToString()));
            if (Received != null) Received(this, args);
            StringBuilder = new StringBuilder();
            // NetworkStream.BeginRead(buffer, 0, buffer.Length, callback, buffer);
        }
    };
    NetworkStream.BeginRead(buffer, 0, buffer.Length, callback, buffer);

    NetworkStreamWriter.Write(fragment);
    NetworkStreamWriter.Flush();
}
The reader.ReadToEnd() call consumes everything until end-of-stream, i.e. until the TCP connection is closed.
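A minimal sketch of reading just what the server has sent so far, instead of waiting for the connection to close (the helper name is mine; it assumes you read from the NetworkStream directly and uses System.Net.Sockets and System.Text):

// Returns whatever bytes are currently available as a string; blocks only
// until the first bytes arrive, not until the connection is closed.
static string ReadAvailable(NetworkStream stream)
{
    var sb = new StringBuilder();
    byte[] buffer = new byte[4096];
    do
    {
        int read = stream.Read(buffer, 0, buffer.Length); // blocks for the first chunk
        if (read == 0) break;                             // remote side closed the stream
        sb.Append(Encoding.UTF8.GetString(buffer, 0, read));
    } while (stream.DataAvailable);                       // drain anything already buffered
    return sb.ToString();
}

A real XMPP client would keep reading until it has a complete stanza, but this is enough to see the server's replies without the connection having to close first.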
A friend of mine came to me with a problem: when using the NetworkStream class on the server end of the connection, if the client disconnects, NetworkStream fails to detect it.
Stripped down, his C# code looked like this:
List<TcpClient> connections = new List<TcpClient>();
TcpListener listener = new TcpListener(7777);
listener.Start();
while (true)
{
    if (listener.Pending())
    {
        connections.Add(listener.AcceptTcpClient());
    }
    TcpClient deadClient = null;
    foreach (TcpClient client in connections)
    {
        if (!client.Connected)
        {
            deadClient = client;
            break;
        }
        NetworkStream ns = client.GetStream();
        if (ns.DataAvailable)
        {
            BinaryFormatter bf = new BinaryFormatter();
            object o = bf.Deserialize(ns);
            ReceiveMyObject(o);
        }
    }
    if (deadClient != null)
    {
        deadClient.Close();
        connections.Remove(deadClient);
    }
    Thread.Sleep(0);
}
The code works, in that clients can successfully connect and the server can read data sent to it. However, if the remote client calls tcpClient.Close(), the server does not detect the disconnection - client.Connected remains true, and ns.DataAvailable is false.
A search of Stack Overflow provided an answer - since Socket.Receive is not being called, the socket is not detecting the disconnection. Fair enough. We can work around that:
foreach (TcpClient client in connections)
{
    client.ReceiveTimeout = 0;
    if (client.Client.Poll(0, SelectMode.SelectRead))
    {
        int bytesPeeked = 0;
        byte[] buffer = new byte[1];
        bytesPeeked = client.Client.Receive(buffer, SocketFlags.Peek);
        if (bytesPeeked == 0)
        {
            deadClient = client;
            break;
        }
        else
        {
            NetworkStream ns = client.GetStream();
            if (ns.DataAvailable)
            {
                BinaryFormatter bf = new BinaryFormatter();
                object o = bf.Deserialize(ns);
                ReceiveMyObject(o);
            }
        }
    }
}
(I have left out exception handling code for brevity.)
This code works; however, I would not call this solution "elegant". The other solution to the problem I am aware of is to spawn a thread per TcpClient and let the BinaryFormatter.Deserialize (née NetworkStream.Read) call block, which would detect the disconnection correctly, as sketched below. That does, however, have the overhead of creating and maintaining a thread per client.
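For illustration, a sketch of that thread-per-client approach under the same assumptions as the original code (ReceiveMyObject is the existing handler; the exception types are the ones a dropped connection typically surfaces as):

// One background thread per client; the blocking Deserialize detects the disconnect.
var thread = new Thread(() =>
{
    NetworkStream ns = client.GetStream();
    BinaryFormatter bf = new BinaryFormatter();
    try
    {
        while (true)
        {
            object o = bf.Deserialize(ns); // blocks until a whole object arrives
            ReceiveMyObject(o);
        }
    }
    catch (IOException) { /* connection aborted */ }
    catch (SerializationException) { /* stream ended mid-object, i.e. the client went away */ }
    finally
    {
        client.Close();
    }
});
thread.IsBackground = true;
thread.Start();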
I get the feeling that I'm missing some secret, awesome answer that would retain the clarity of the original code, but avoid the use of additional threads to perform asynchronous reads. Though, perhaps, the NetworkStream class was never designed for this sort of usage. Can anyone shed some light?
Update: Just to clarify, I'm interested in whether the .NET framework has a solution that covers this use of NetworkStream (i.e. polling and avoiding blocking). Obviously it can be done; the NetworkStream could easily be wrapped in a supporting class that provides the functionality. It just seems strange that the framework essentially requires you either to use threads to avoid blocking on NetworkStream.Read, or to peek on the socket itself to check for disconnections. Almost like it's a bug, or a potential lack of a feature. ;)
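For what it's worth, a sketch of the kind of wrapper alluded to above: one outstanding BeginRead per client instead of a dedicated thread, with EndRead returning 0 (or throwing) signalling the disconnect. The class and event names here are mine:

public class ClientConnection
{
    private readonly TcpClient _client;
    private readonly NetworkStream _stream;
    private readonly byte[] _buffer = new byte[4096];

    public event Action<byte[], int> DataReceived; // buffer, bytes read
    public event Action Disconnected;

    public ClientConnection(TcpClient client)
    {
        _client = client;
        _stream = client.GetStream();
        _stream.BeginRead(_buffer, 0, _buffer.Length, OnRead, null); // one pending read per client
    }

    private void OnRead(IAsyncResult ar)
    {
        int read;
        try { read = _stream.EndRead(ar); }
        catch (IOException) { read = 0; } // aborted connections surface here

        if (read == 0)
        {
            if (Disconnected != null) Disconnected(); // remote side closed or dropped
            _client.Close();
            return;
        }

        if (DataReceived != null) DataReceived(_buffer, read);
        _stream.BeginRead(_buffer, 0, _buffer.Length, OnRead, null); // queue the next read
    }
}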
Is the server expecting to be sent multiple objects over the same connection? If so, I don't see how this code will work, as there is no delimiter being sent that signifies where the first object ends and the next one begins.
If only one object is being sent and the connection is closed afterwards, then the original code would work.
There has to be a network operation initiated in order to find out whether the connection is still active. What I would do, instead of deserializing directly from the network stream, is buffer the data into a MemoryStream first. That would allow me to detect when the connection was lost. I would also use message framing to delimit multiple responses on the stream.
NetworkStream ns = client.GetStream();
BinaryReader br = new BinaryReader(ns);

// Message framing. First, read the number of bytes to expect.
int objectSize = br.ReadInt32();
if (objectSize == 0)
    break; // client disconnected

// Buffer the whole frame into memory before deserializing.
byte[] buffer = new byte[objectSize];
int index = 0;
int read = ns.Read(buffer, index, Math.Min(objectSize, 1024));
while (read > 0)
{
    objectSize -= read;
    index += read;
    read = ns.Read(buffer, index, Math.Min(objectSize, 1024));
}

if (objectSize > 0)
{
    // client aborted the connection in the middle of the stream
    break;
}
else
{
    BinaryFormatter bf = new BinaryFormatter();
    using (MemoryStream ms = new MemoryStream(buffer))
    {
        object o = bf.Deserialize(ms);
        ReceiveMyObject(o);
    }
}
Yeah, but what if you lose the connection before getting the size, i.e. right before the following line?
// Message framing. First, read the number of bytes to expect.
int objectSize = br.ReadInt32();
ReadInt32() will block the thread indefinitely.
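A sketch of one way to guard against that, assuming the polling loop still has the TcpClient at hand (TcpClient.Available reports how many bytes are already buffered locally; the helper name is mine):

// Only attempt to read the length prefix once all four of its bytes have arrived,
// so the poll loop never blocks here.
static bool TryReadFrameLength(TcpClient client, BinaryReader br, out int objectSize)
{
    objectSize = 0;
    if (client.Available < sizeof(int))
        return false;            // not enough data yet; check again on the next poll
    objectSize = br.ReadInt32(); // safe: the length bytes are already buffered
    return true;
}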