I want to send logs to a server periodically, every 30 seconds.
For performance, I want to use a different thread via the compute function.
But Timer is not working inside compute.
Any suggestions for running a task periodically on a different thread in Flutter?
You could just use an Isolate directly. It's what compute does under the hood.
However, I don't think sending information to a server would block your UI thread too much.
On top of that, if you're using app state to determine the log message I would just keep it in the main Isolate.
I would probably wrap the (Material|Widget|Cupertino)App in a StatefulWidget and add a Timer.periodic in the initState.
Also note that communicating with an Isolate means your message has to be copied into its memory, whereas sending an HTTP request to a server is usually async and doesn't block the UI anyway.
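A minimal sketch of that setup (sendLogToServer() and HomeScreen are placeholders for your own HTTP call and root screen, not part of the original answer):

import 'dart:async';

import 'package:flutter/material.dart';

// Minimal sketch: a StatefulWidget wrapping the MaterialApp starts a
// periodic timer in initState. sendLogToServer() is a placeholder for
// your own async HTTP call; HomeScreen is a placeholder root screen.
class AppRoot extends StatefulWidget {
  const AppRoot({Key? key}) : super(key: key);

  @override
  State<AppRoot> createState() => _AppRootState();
}

class _AppRootState extends State<AppRoot> {
  Timer? _logTimer;

  @override
  void initState() {
    super.initState();
    // Fire every 30 seconds; the HTTP request itself is async and
    // won't block the UI thread.
    _logTimer = Timer.periodic(const Duration(seconds: 30), (_) {
      sendLogToServer();
    });
  }

  @override
  void dispose() {
    _logTimer?.cancel();
    super.dispose();
  }

  @override
  Widget build(BuildContext context) => MaterialApp(home: HomeScreen());
}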
I just want to precache some endpoint calls at the beginning of the app so that certain requests load faster later, since the backend service is really slow.
I've tried using Isolates for this, but it seems Hive's cache doesn't support that, so even if I make the request on another Isolate, the same request still ends up taking the same time (20-30 seconds) when I pull it a second time, where it should already be cached.
Since I read this isn't supported yet, and I personally think it's critical, I moved to simply calling the endpoints while the app is loading a bunch of other stuff on the main thread. I don't want to delay the app any further, so I just want to preload 5 endpoints so that the next time I request them they respond faster.
1. First Approach, with Isolates
I'm calling this inside a function
final requestsToPrecache = events
    .map((entityId) => _dataRepository.getEventTable(entityId: entityId))
    .toList();

compute(precacheFutureOperations, requestsToPrecache);
Then this is the function I'm passing through compute to run in the Isolate:
void precacheFutureOperations(List<Future> functions) {
  Future.wait(functions);
}
2. Second Approach, just not awaiting the network requests: I trigger the requests so they get cached, without waiting for the responses, since I only want to execute them and have the results in cache before they are used. So I just call:
precacheFutureOperations(requestsToPrecache);
Both approaches successfully trigger the endpoints I want to precache; I am monitoring that using Proxyman. The weird part is that the responses don't get cached this way. Only after I call the request normally from the Detail screen does it actually get cached as expected when I re-enter the screen.
What can I do to precache multiple requests at the beginning of the app?
I have a Web API stateless service that creates an Actor which does some long-running processing via a reminder (fire and forget). The Actor stores its own progress in local state. I am unable to get the progress of that long-running process due to the single-threaded nature of the Actor: any call to the method that gets the progress will wait until the long-running process has completed. Does anyone have a solution for this (without using an external data source)?
If you simply wish to get the current state of your Actor without having to wait for an Actor lock, you can use the underlying ActorService that is hosting the Actors to query the state without interrupting, or being blocked by, the long-running Actor method.
The ActorService hosting Actors is really just a StatelessService (with some bells and whistles), and you can communicate with it the same way you would communicate with any Service: add an IService interface to it and then use IServiceProxy to talk to it. This SO answer shows how you can do that: How to get state from service fabric actor without waiting for other methods to complete?
If you want to get progress along the way, even during the execution of your Actor method, you can force a save of the state changes in the middle of your long-running execution by calling SaveStateAsync.
You could create a ProgressTrackingActor and periodically update it from the existing Actor. Query the ProgressTrackingActor for progress.
You can use an ActorReference to indicate which Actor to query progress for, or use the same ActorId value.
I am working on a chat client. To get new messages (or post a new one) I have to perform a GET (or POST) request. All new messages are stored via Core Data. At the moment I don't know how to implement this in the most optimal way.
My thoughts:
At the view controller's init stage, create a background thread which periodically checks for new messages (with a short period if the conversation is active, and a period of about 60 seconds if not). If there are new messages, we store them in the DB and signal the delegate that there are new messages to display.
A friend suggested using performSelector:afterDelay:, but I don't understand how to use it in my app.
Something else?
Thanks in advance.
Don't use performSelector:afterDelay:. Using NSTimer is much better (as the trigger for starting the next download). Also, use NSOperationQueue to manage your background tasks. Create a custom NSOperation that you can instantiate to carry out your request process. When you create a new operation to check for new messages, check whether one is already in progress (there is no point having multiple requests in flight at the same time).
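A rough sketch of that arrangement (MessageFetchOperation, fetchQueue, and pollTimer are placeholder names, not from the original answer):

- (void)startPolling {
    // Background queue that runs the fetch operations off the main thread.
    self.fetchQueue = [[NSOperationQueue alloc] init];
    // NSTimer on the main run loop triggers each new download attempt.
    self.pollTimer = [NSTimer scheduledTimerWithTimeInterval:60.0
                                                      target:self
                                                    selector:@selector(pollTimerFired:)
                                                    userInfo:nil
                                                     repeats:YES];
}

- (void)pollTimerFired:(NSTimer *)timer {
    // Skip this tick if a fetch is already in progress.
    if (self.fetchQueue.operationCount > 0) {
        return;
    }
    [self.fetchQueue addOperation:[[MessageFetchOperation alloc] init]];
}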
Other notes:
Make sure you consider the threading with regards to the Core Data store (having the operation call back to the main thread with the results will probably be easiest as the result data will always be relatively small).
If you have lots of messages being sent and you want to show constant status (like Skype does, showing you when someone is typing) you would need to use sockets to keep the connection alive the whole time (the cost of new connections each time would be prohibitive).
I just got asked to reduce the traffic made by my GWT app. There is one method that checks for status.
This method is an asynchronous call wrapped in a Timer. I know web apps are stateless and all that, but I do wonder if there is some other way to do this, or if everyone has a Timer wrapped around a call when they need this kind of behaviour.
You can check out gwteventservice. It claims to have a way to push server events and notify the client.
I have a feeling these might be implemented as long-running (hanging) client-to-server RPC calls which time out after an interval (say 20 seconds) and are then re-made. The server returns the callback if an event happens in the meantime.
I haven't used it personally, but I know of people using it to push events to the client. Have a look at the docs. If my assumption is correct, the idea is to send an RPC call to the server which does not return (hangs). If an event happens on the server, the server responds and that RPC call returns with the event. If there is no event, the call will time out in 20 seconds. Then a new call is made to the server, which hangs in the same way until there is an event.
What this achieves is that it reduces the number of calls to the server to either one per event (if there is one) or one call every 20 seconds (if there isn't). It looks like the 20-second interval can be configured.
I imagine that if there is no event the amount of data sent back will be minimal - it might be possible to cancel the callback entirely, or have it fail without data transfer, but I don't really know.
Here is another resource on ServerPush - which is likely what's implemented by gwteventservice.
Running on Google App Engine, you could use the Channel technology:
http://code.google.com/intl/en-US/appengine/docs/java/channel/overview.html
If you need the client to get the status from the server, then you pretty much have to make a call to the server to get its status.
You could look at reducing the size of some of your messages
You could wind back the timer so the status call goes out less often
You could "optimise" so that the timer gets reset when some other client/server IO happens (i.e. if the server replies you know it is ok, so you don't need to send the next status request).
Let's say that if I read from www.example.com/number, I get a random number. In my iPhone app, I want to be able to continuously read from that address and display the new number on the screen after each request is finished. Let's also assume that I want this process to start as soon as the view loads. Lastly, as a side-note, I'm using ASIHTTPRequest to simplify the web requests.
Approach 1: In my viewDidLoad method I could synchronously read from the URL in a loop (execution will not continue until I get a response from the HTTP request). Pros: the requests are serial and I have full control to respond to each one. Cons: the UI never gets updated because I never exit the function and give control back to the run time loop. Clearly, this is not a good solution.
Approach 2: In my viewDidLoad method I create a timer which calls a fetchURL function once per second. Pros: each request is in a separate thread, and the UI updates after each request is finished. Cons: the requests are in separate threads, and cannot be controlled well. For example, if there is a connection timeout on the first request, I want to be able to display an error popup, and not have any further requests happen until settings are changed. However, with this approach, if it takes 3 seconds to timeout, two additional requests will have already been started in that time. If I just slow down the timer, then data comes in too slowly when the connection is working well.
It seems like there should be some approach which merges the benefits of the first two approaches I mentioned. I would like a way to decide whether or not to send the next request based on the result of the previous request.
Approach 3: I considered using a timer which fires more quickly (say every .25 seconds), but have the timer's function check a flag to see what to do next. So, if the previous request has finished, it sends a new request (unless there was an error). Otherwise, if the previous request has not finished, the timer's function returns without sending a new request. By firing this timer more quickly, you would get better response time, but the flag would let me get the synchronization I wanted.
It seems like Approach 3 would do what I want, but it also seems a little forced. Does anyone have a suggestion for a better approach to this, or is something like Approach 3 the best way to do it?
You could do this using GCD with less code and using fewer resources. This is how you could do it:
In viewDidLoad call a block asynchronously (using dispatch_async) that does the following:
Load the data with a synchronous call and handle timeouts if it failed.
If successful, inform the main thread to update the UI.
Queue a new block to run after a delay that does the same thing (using dispatch_after).
To call back to the main thread from another thread I can think of these methods:
If you want to update a custom view, you can call setNeedsDisplay from your block.
Otherwise, you can queue a block on the "main queue", which is a queue running on the main thread. You get this queue by calling dispatch_get_main_queue, and then treat it like any other queue (for example, you can add your block by calling dispatch_async).
If you don't want to use blocks, you can use NSObject's performSelectorOnMainThread:withObject:waitUntilDone: method.
See GCD Reference for more details.
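A minimal sketch of that loop (fetchRandomNumber:, showConnectionErrorAlert, and updateLabelWithNumber: are placeholders for your own synchronous fetch, error UI, and display code):

- (void)fetchNextNumber {
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        NSError *error = nil;
        // Hypothetical synchronous fetch of www.example.com/number.
        NSNumber *number = [self fetchRandomNumber:&error];

        dispatch_async(dispatch_get_main_queue(), ^{
            if (error != nil) {
                [self showConnectionErrorAlert]; // stop the loop and surface the error
                return;
            }
            [self updateLabelWithNumber:number];

            // Only after this request has finished do we queue the next one.
            dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(1.0 * NSEC_PER_SEC)),
                           dispatch_get_main_queue(), ^{
                [self fetchNextNumber];
            });
        });
    });
}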
That said, you should never keep performing small requests this frequently (except for specific tasks like fetching game data or something). It will severely reduce battery life by keeping the antenna from sleeping.
I believe an NSOperation is what you need. Use the number 1 solution above, but place the code in your NSOperation's main method. Something like this:
The .h file
@interface MyRandomNumberFetcher : NSOperation {
}
@end
The .m file
@implementation MyRandomNumberFetcher

- (void)main {
    // This is where you start the web service calls.
}

@end
I'd also recommend adding a reference to the UI controller so your operation queue class can call it back when it's appropriate.
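For instance, the main method might look something like this (the controller property, didFetchNumber: selector, and fetchRandomNumberSynchronously helper are assumptions for illustration, not part of the original answer):

- (void)main {
    if ([self isCancelled]) {
        return;
    }

    // Hypothetical synchronous request to www.example.com/number.
    NSNumber *number = [self fetchRandomNumberSynchronously];

    // Hand the result back to the UI controller on the main thread.
    [self.controller performSelectorOnMainThread:@selector(didFetchNumber:)
                                      withObject:number
                                   waitUntilDone:NO];
}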
Here's another suggestion. Create an NSOperationQueue that will run your requests on a different thread. If you find you need to refresh the UI, call performSelectorOnMainThread. When the request completes, create another request and add it to the queue. Set the queue to run only one operation at a time.
This way you'll never have two requests running at the same time.
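A rough sketch of that setup (MessageRequestOperation, requestQueue, and refreshUI are placeholder names):

- (void)setUpRequestQueue {
    self.requestQueue = [[NSOperationQueue alloc] init];
    self.requestQueue.maxConcurrentOperationCount = 1; // run one request at a time
}

- (void)enqueueNextRequest {
    // Hypothetical NSOperation subclass that performs the network request.
    MessageRequestOperation *op = [[MessageRequestOperation alloc] init];
    __weak typeof(self) weakSelf = self;
    op.completionBlock = ^{
        // completionBlock runs on a background thread, so hop to the main
        // thread for UI work, then schedule the next request.
        [weakSelf performSelectorOnMainThread:@selector(refreshUI)
                                   withObject:nil
                                waitUntilDone:NO];
        [weakSelf enqueueNextRequest];
    };
    [self.requestQueue addOperation:op];
}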