How do I start and stop the flow of a connection to an internet server?

I'm using an ESP8266 with ESP8266WiFi and ESP8266HTTPClient libraries. My app doesn't have enough memory to download the entire JSON file that I need, but all I really need is a few fields from it, so I can discard most of it as I read it in.
What I don't understand is how to start, stop, or otherwise slow down the incoming data so that I can process it and pick out what I need as it comes in from the server. I have to use a fairly small buffer when I make the connection due to memory limitations caused by the rest of the program.
Is there a way to fill the buffer from the server, pause the transmission, process and clear the data in the buffer, and then resume the transmission until the whole JSON file is processed?

Sounds like you will want to use a streaming JSON parser. There are a couple of forks of such a library on GitHub. https://github.com/mrfaptastic/json-streaming-parser2 seems to be the one still maintained.
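A point worth spelling out: you don't have to pause the server explicitly. If you simply stop reading from the socket, the unread data backs up in the TCP receive window and the server is throttled automatically until you read again. Below is a minimal sketch assuming the standard ESP8266HTTPClient/WiFiClient API; feedParser() is a hypothetical stand-in for whatever streaming parser you pick (json-streaming-parser2 is fed one character at a time via its parse() call, if I remember its examples correctly).

    // Sketch only: the HTTPClient/WiFiClient calls are the standard ESP8266 core API;
    // feedParser() is a hypothetical stand-in for the streaming parser you choose.
    #include <ESP8266WiFi.h>
    #include <ESP8266HTTPClient.h>

    void feedParser(const uint8_t* data, size_t len);  // e.g. loop over the bytes and hand each one to the parser

    void fetchJson(const char* url) {
      WiFiClient client;
      HTTPClient http;
      http.begin(client, url);

      if (http.GET() == HTTP_CODE_OK) {
        WiFiClient* stream = http.getStreamPtr();
        uint8_t buf[128];                       // small buffer is fine
        int remaining = http.getSize();         // -1 if the server sent no Content-Length

        while (http.connected() && (remaining > 0 || remaining == -1)) {
          size_t avail = stream->available();
          if (avail) {
            size_t toRead = avail < sizeof(buf) ? avail : sizeof(buf);
            size_t n = stream->readBytes(buf, toRead);
            feedParser(buf, n);                 // process and discard before reading more
            if (remaining > 0) remaining -= n;
          }
          yield();                              // keep the WiFi stack serviced
        }
      }
      http.end();
    }

Because you only read a small chunk at a time and process it before reading the next one, the rest of the JSON file simply waits in the network stack and on the server side; no explicit pause/resume is needed.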

Related

Where does video_player keep network files, and can you keep multiple ones?

I've been working with the video_player package from Flutter. I'm mostly testing videos loaded from a URL. My code is very similar to the examples.
Everywhere I read about it, I see that this library does not support caching of videos. But what exactly does this mean? What exactly is happening behind the scenes, and how would the behaviour change if caching were actually implemented? How is this different from buffering? Are the video files simply downloaded to the device?
If yes, then where are those files kept?
One additional question: how can I check the network consumption caused by such a connection? I've tried using Dev Tools, but the network tab is always empty.
One last thing: is it possible to pre-initialize the next videos, so that when we want to switch between them they are already partially pre-loaded?
You can use a package that helps you manage caching: flutter_cache_manager
https://pub.dev/packages/flutter_cache_manager
You can use this in combination with video_player. However, you would have to download the whole file first to then be able to retrieve it for video_player to consume.
An idea would be to stream the video and also download a copy locally. This, however, would consume more data than just downloading and caching the video first, then playing it locally.
As for how to check network consumption, I am not sure.
Starting with the theory:
1. A cache is a high-speed storage area, while a buffer is a normal storage area in RAM for temporary storage.
2. A cache is made from static RAM, which is faster than the slower dynamic RAM used for a buffer.
3. A buffer is mostly used for input/output processes, while a cache is used during read and write processes to and from the disk.
4. A cache can also be a section of the disk, while a buffer is only a section of RAM.
5. A buffer can be used in keyboards to edit typing mistakes, while a cache cannot.
When it comes to video buffering, just look at the YouTube app: you can see the buffer being built as the grey line grows ahead of the red line. Information stored this way is mostly not accessible at all, as Android uses a combination of RAM allocations for both caching and buffering as it sees fit for the currently active process.
Technically you could try pre-loading different videos by starting and pausing all of them at once, but I cannot imagine how much tampering with system memory control that would take; even YouTube doesn't work like that.

Can too many WSASend in short time be a problem?

I'm making a simple MMORPG server with IOCP.
I implemented a simple movement function, so I tested it with dummy clients (also IOCP).
Everything works fine only when a few clients are connected. After around 500~1000 clients are connected, some dummy clients occasionally read weird data. I checked that the server sends the data I expected, but when it comes to the dummy clients reading it, they read random data.
My guess is that it could be related to the operating system's recv buffer being overflowed, but I'm only guessing right now... I have no idea how to check this.
Any suggestion would be much appreciated!
The problem with too many WSASends doesn't usually manifest as corrupted data; that's more likely to be a bug in your code. Perhaps your problem is caused by failing to correctly manage the lifetime of the buffer that is being used to send the data? It needs to stay stable until you get the completion for the WSASend call. If you reuse it sooner than that, you will corrupt the data being sent (see the sketch below).
The reason this may show up when you have lots of WSASends outstanding to lots of clients is that the send operations may be taking longer to complete, which makes it more likely that your bug will be hit...
It doesn't matter how many WSASends you issue as long as your clients are able to receive the data as fast as you can send it. As soon as you are sending faster than they can receive, there will be problems. I address these problems in this answer.
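To make the buffer-lifetime point concrete, here is a minimal Win32/C++ sketch (error handling trimmed; SendContext and the function names are just for this example) where the bytes being sent live inside a per-operation context that is only freed when the completion for that exact WSASend is dequeued:

    #include <winsock2.h>
    #include <windows.h>
    #include <cstring>

    struct SendContext {
        WSAOVERLAPPED overlapped;   // recovered via CONTAINING_RECORD in the completion handler
        WSABUF        wsaBuf;
        char          data[4096];   // the bytes being sent live here until completion
    };

    bool PostSend(SOCKET s, const char* payload, size_t len) {
        if (len > sizeof(SendContext::data)) return false;

        SendContext* ctx = new SendContext{};   // value-init zeroes the OVERLAPPED
        memcpy(ctx->data, payload, len);
        ctx->wsaBuf.buf = ctx->data;
        ctx->wsaBuf.len = static_cast<ULONG>(len);

        DWORD bytesSent = 0;
        int rc = WSASend(s, &ctx->wsaBuf, 1, &bytesSent, 0, &ctx->overlapped, nullptr);
        if (rc == SOCKET_ERROR && WSAGetLastError() != WSA_IO_PENDING) {
            delete ctx;                         // the send failed outright; safe to free now
            return false;
        }
        return true;                            // do NOT free or reuse ctx here
    }

    // IOCP worker thread: after GetQueuedCompletionStatus() hands back this overlapped.
    void OnSendComplete(WSAOVERLAPPED* overlapped) {
        SendContext* ctx = CONTAINING_RECORD(overlapped, SendContext, overlapped);
        delete ctx;                             // only now is the buffer free for reuse
    }

If your code instead hands WSASend a buffer that something else can touch before the completion arrives, you get exactly the kind of "random data" the dummy clients are seeing.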

Asynchronous Socket Communication & Heap fragmentation

I wrote a multithreaded socket server application which accepts over 1000 concurrent connections. Recently the application crashed; after analyzing the dump files, I found that the app had crashed due to heap corruption. I found the same issue discussed in the following links.
.NET Does NOT Have Reliable Asynchronouos Socket Communication?
http://support.microsoft.com/kb/947862
The discussion also suggests three solutions:
The network application should have an upper bound on the number of outstanding asynchronous IO that it posts.
Use Microsoft CCR
Use TPL
Due to the time factor, I thought I would stick with #1, but I don't have a clear picture of how to implement this. Can someone give me a good starting point, please?
Also, has anyone used async code with the TPL to solve this issue?
You mean a better starting point than the blog posting that I linked to in the answer that you refer to?
The issue is this:
Memory and other per-operation resources that are used during an async write are often "in use" until the remote peer's TCP stack acks the data and the local stack can complete your async write operation to tell you that you can reuse your buffer.
The local peer has no control over this as it's all governed by the speed at which the remote peer reads data from its socket and the congestion on the link between the two peers.
Because of the above you need to have a hard limit on the amount of async writes that you have outstanding at any one time. You can track this by incrementing a counter just before you issue an async write and decrementing it in the completion handler.
What you do once you hit that limit is up to you. In the original article I favour a queue into which the data to be written is placed; this queue can then be used as a source of data as write completions occur. Once the queue is empty you can send normally again (this counter-plus-queue bookkeeping is sketched in code below). Of course, this simply moves the problem - you still have a memory resource that's controlled by the remote peer (the queued data), but at least you aren't also tying up other OS resources (non-paged pool, I/O page lock limit, etc.).
You could simply stop your peer sending when you reach your limit - and now the API that you build over the async API needs to have a 'can't send at the moment, try again later' return from a send which previously used to always "work".
If you're doing this I would also seriously look at avoiding the pinned memory issue by allocating a large block of buffers in one contiguous block and using them from the pool.
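To make option #1 concrete, here is a minimal sketch of the counter-plus-queue bookkeeping. The question is about .NET, but the pattern is language-agnostic, so this is written in C++ with issueAsyncWrite() as a hypothetical stand-in for your real async send (BeginSend, SendAsync, WSASend, ...); call onWriteCompleted() from its completion callback.

    #include <cstddef>
    #include <deque>
    #include <mutex>
    #include <utility>
    #include <vector>

    class BoundedSender {
    public:
        explicit BoundedSender(std::size_t maxOutstanding)
            : maxOutstanding_(maxOutstanding) {}

        // Application code calls this whenever it wants to send.
        void send(std::vector<char> data) {
            std::lock_guard<std::mutex> lock(mutex_);
            if (outstanding_ < maxOutstanding_) {
                ++outstanding_;
                issueAsyncWrite(std::move(data));     // post the real async write
            } else {
                pending_.push_back(std::move(data));  // park it until a write completes
            }
        }

        // Called from the async-write completion handler.
        void onWriteCompleted() {
            std::lock_guard<std::mutex> lock(mutex_);
            if (!pending_.empty()) {
                issueAsyncWrite(std::move(pending_.front()));  // outstanding_ stays the same
                pending_.pop_front();
            } else {
                --outstanding_;
            }
        }

    private:
        void issueAsyncWrite(std::vector<char> data);  // hypothetical: wraps the actual async API

        std::mutex mutex_;
        std::size_t maxOutstanding_;
        std::size_t outstanding_ = 0;
        std::deque<std::vector<char>> pending_;
    };

The pending queue itself still needs an upper bound (or some way to push back on whoever is producing the data); otherwise, as noted above, you have only moved the problem.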
First, that's a very old KB article. How are you sure you have that particular problem?
Then, as Hans Passant answers in the SO question, if you write bad async code, it will bite you. If you don't take care of your resources (and memory buffers are resources), a concurrent program will face memory errors.
It's very hard to write good concurrent code using raw threads, and the TPL does make it easier, but it won't fix the bugs you already have. In fact, unless you identify your current problems, you are likely to transfer them to the version that uses the TPL.
Without knowing the specific problem that caused your application to crash, I can only make some suggestions:
Use BufferManager to reuse memory buffers instead of allocating new ones.
Use a queue to store requests and process them asynchronously instead of starting a new thread for each request.
There are other techniques you can use as well, depending on the type of application you are building. E.g., you could use TPL Dataflow to break processing into independent steps.
As for CCR, there is not much point in using it outside Robotics Studio. TPL contains most of the relevant functionality you need to write concurrent apps.

Which to use when? NSURLConnection vs. lower level socket API

I am developing an iPhone application which streams data (e.g. ECG data points) from a server and plots it on the screen -- i.e., live streaming. For that purpose, I am using NSURLConnection.
The problem I am facing is that, since the data is coming in so quickly from the server, the cache buffer grows rapidly, causing the displayed data to lag behind the actual data coming from the server. After some time, the application becomes too slow and gets a memory warning.
So my question is, how should I handle this data coming from the server? Should I continue with NSURLConnection or go for lower-level socket programming?
I propose you implement some sort of flow control:
The simplest approach is to drop data if your buffers are full. For video streams, frames can be dropped. I don't know whether the same is possible with your data.
Another approach is to switch from the event-based API of NSURLConnection (where the framework controls when you have to react) to the lower-level CFSocket API, where you can read data when you are ready for it. It's more low-level, requires a separate thread, and needs some extra logic, like going to sleep when the buffer is full and being woken up when the main thread has displayed more data and made more space in the buffer. With this approach you are basically building on top of TCP's flow control mechanism (a sketch of this idea follows below).
Yet another approach would be to use another network protocol where you have more control about the amount of data being sent.
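As a rough illustration of the second option (reading only when you're ready, so that TCP's receive window throttles the sender), here is a sketch with plain BSD sockets in C++ rather than CFSocket; the class name is made up for the example and synchronization between the reader thread and the display code is omitted.

    // While pump() is not called, unread bytes pile up in the kernel's receive
    // buffer, the TCP window shrinks, and the sender is throttled automatically.
    #include <sys/socket.h>
    #include <sys/types.h>
    #include <cstddef>
    #include <vector>

    class PacedReader {
    public:
        PacedReader(int fd, std::size_t maxBuffered)
            : fd_(fd), maxBuffered_(maxBuffered) {}

        // Call from the reader thread whenever the display side has drained data.
        // Returns false when the connection is closed or an error occurs.
        bool pump() {
            while (buffer_.size() < maxBuffered_) {          // only read while we have room
                char chunk[4096];
                ssize_t n = recv(fd_, chunk, sizeof(chunk), 0);
                if (n <= 0) return false;                    // closed or error
                buffer_.insert(buffer_.end(), chunk, chunk + n);
            }
            return true;   // buffer full: stop reading and let TCP flow control do the rest
        }

        std::vector<char>& buffer() { return buffer_; }      // drained by the plotting code

    private:
        int fd_;
        std::size_t maxBuffered_;
        std::vector<char> buffer_;
    };

The same shape carries over to CFSocket or any other low-level socket API: the key is that you, not the framework, decide when the next read happens.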
I would use ASIHTTPRequest streaming. You can implement the delegate method request:didReceiveData: to get your data in chunks as it comes in, deflate it if needed, parse it, and display it. If you need caching, you can always save it to a file.

iPhone network performance

I have a question and I am very open to suggestions (even very odd ones!)
I am writing an iPhone app, which does a request (URL with parameters) to a server. As a response, the iPhone receives XML. All is well.
Right now, I am looking to improve my application's speed by measuring the time it takes to perform certain tasks. I've found that, of all the tasks performed (downloading, XML parsing, sending the request, handling the request, parsing objects from XML), downloading the actual XML takes the longest.
Now, my XML files are very, very, very easy and very, very, very small. I use them primarily to read RSS-like data and show them in a UITableView.
My app works very well, and there is nothing that feels really slow, but there is one application in the App Store right now which does something very similar to my application, but is way faster and feels more 'snappy', if you know what I mean. It also has the great feature to load the headlines one by one from the RSS-feed.
Currently I'm experimenting with gzip compression of my data, but the compression only makes my data half the size and it doesn't seem to do any real good for performance. The main thing is that the data has to be downloaded before it gets parsed. It would be very cool to have a 'stream' of data which is parsed as it comes in. That way, I could do two jobs almost simultaneously and load headlines one by one (making user interaction more attractive).
Does anyone have an idea of how to improve my performance? Either great compression tips or entirely different ways to communicate with the server... all are welcome!
UPDATE: putting the latency and responsiveness of the server aside, how could I get a source of XML to be 'streamed' to my iPhone (downloaded byte by byte) and parsed at the same time? Right now it is a linear process of downloading -> parsing -> showing, but it could become semi-parallel by downloading & parsing at the same time (and showing each item when it is done downloading, instead of loading them all at the same time into a UITableView).
Assuming you're using NSXMLParser's initWithContentsOfURL:, that's probably part of the problem. It seems like that downloads the entire contents of the URL, then hands it off to the parser to parse all at once.
Despite the fact that the NSXMLParser is an event-driven parser, it doesn't seem to support streaming data to the parser in an incremental manner. You could, of course, replace NSXMLParser with some other parsing library that handles incremental data in a more sensible way.
An alternative would be to use NSURLConnection, and create a new NSXMLParser and re-parse the data each time some data comes in, in the connection:didReceiveData: method of your NSURLConnection's delegate. You'd have to write some extra code to ignore the extra events from re-parsing the beginning of the file more than once.
This seems like it'd be more work than just grabbing some other library and adapting it, but maybe not, depending on how you're handling the downstream creation of your table data.
If NSMutableArray is the underlying data structure of your UITableView, you should try using initWithContentsOfURL:. The format on the server needs to be Apple's "plist" XML, which is easy to generate. I'm guessing that if Cocoa already has resources acquired for processing XML, it would be quicker to use them instead of creating your own XML parser instance.
How are you parsing the XML data? The need to parse XML as you receive it is the whole reason pull parsing was invented, and there is the NSXMLParser event-driven parser that operates on a stream of data...
That also means compression of the data is counterproductive.
For very small amounts of XML, the issue of speed is probably mostly about connection latency.
Thus, another big factor could be the DNS lookup. You could do that yourself once beforehand and cache the IP, perhaps re-checking only on failure to connect to your server...
Since your XML files are very small (just 8k or so, I imagine?), I don't think you'd see a big performance boost trying to parse them as they come in. You can use an asynchronous NSURLConnection to do that, but I can't see it helping very much with a small file. Have you considered the time it takes to generate the XML file on the server? Are you using PHP to create the XML, or accessing a static file? For testing purposes, it might be interesting to access static files and see how that compares.
I ran into a problem like this a while ago, and it turned out the mySQL database on my server was being slow, and it really had nothing to do with my app.
Also, use Instruments to look at the amount of memory that is allocated while you're downloading and parsing an XML document. Some XML parsers load the entire XML document into memory and then allow you to index randomly into the document's elements, while others provide step by step parsing only. The step-by-step method is much better for limited devices like the iPhone and it's probably what you're using, but a quick look at memory consumption should tell you.