In Flutter, if I create a Timer, will the supplied callback get executed while the application is in the background?
I think the documentation is not clear on this, or I might have missed it (please supply a link if you find it anywhere).
I just tried this with a timer set to 10 seconds and it works fine.
I assume this approach is not very reliable and that other methods* should be used instead. I think that if the app is paused or terminated by the operating system to preserve battery or due to low memory, nothing will be executed. But for my situation this could be an acceptable approach.
* I know there are isolates, so I guess I could spawn the timer inside one of those. The downside is that an isolate's entry point must be a top-level function (or static method), so it would have no access to my application data in my scenario. Then there are various scheduling packages etc., but I'm trying to avoid those for now. I know about background tasks, but I'm really looking for an answer about code execution using timers.
I am trying to figure out how to use a background thread to execute a command every 4 hours.
I have never created anything like this before, so I have only been reading about it so far. One of the things I have read is this:
"Threads tie up physical memory and critical system resources"
So in that case, would it be a bad idea to have a thread that checks the time and then executes my method? Or is there a better option? I have read about GCD (Grand Central Dispatch), but I am not sure if it is applicable, as I think it's more for concurrent requests, not something that repeats over and over again checking the time.
Or, finally, is there something I have completely missed that lets you execute a request every 4 hours?
Any help would be greatly appreciated.
There is a maximum time background processes are allowed to run (10 minutes), which would make your approach difficult. Your next best option is to calculate the time of the next event and save that timestamp somewhere. Then, if the app is running at or after that time, it can carry out whatever action you want.
This might help:
http://www.audacious-software.com/2011/01/ios-background-processing-limits/
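Here is a minimal sketch of the save-a-timestamp approach described above, in Swift for illustration only; the key name, the 4-hour interval, and the runIfDue/scheduleNextRun helpers are all invented for this example, not part of any framework.

import Foundation

let nextRunKey = "nextScheduledRun"          // hypothetical UserDefaults key
let interval: TimeInterval = 4 * 60 * 60     // 4 hours

// Call after the work has been done, to record when it should next run.
func scheduleNextRun() {
    let next = Date().addingTimeInterval(interval)
    UserDefaults.standard.set(next.timeIntervalSince1970, forKey: nextRunKey)
}

// Call on launch (or when returning to the foreground); performs the work
// only if the saved time has already passed.
func runIfDue(work: () -> Void) {
    let saved = UserDefaults.standard.double(forKey: nextRunKey)
    guard saved > 0 else { scheduleNextRun(); return }   // first launch: just schedule
    if Date().timeIntervalSince1970 >= saved {
        work()
        scheduleNextRun()
    }
}

The point is that nothing has to run in the background at all; the app simply notices, the next time it is actually executed, that the event time has passed.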
I think it would be good to make use of a timestamp and post a notification for when the time reaches four hours from now.

Multithreading is not a good way to do this, because essentially you would be running a loop for four hours, eating clock cycles. Thanks to the magic of operating systems this would not eat up an entire core or anything silly like that, but it would be continuously computed if it were allowed to run. That would be a vast waste of resources, so it is not allowed.

GCD was not really meant for this kind of thing either. It was meant to allow concurrency to smooth out UI interaction and to complete tasks more efficiently; a 4-hour loop would be inefficient. Think of concurrency as a tool for something like being able to interact with a table while its content is being loaded or changed; GCD blocks make this very easy when used correctly. GCD and other multithreading facilities give you tools to do calculations in the background, interact with databases, and deal with requests without ever affecting the user's experience. Many people who are much smarter than me have written extensively on what multithreading/multitasking is and what it is good for.

In a way, posting a notification for a future time is a form of multitasking without the nastiness of constantly executing blocks through GCD to wait for the 4-hour period. It is possible to do it the GCD way: you could execute a block that watches the time for less than the maximum length of a thread's lifetime, and when its execution is over, dispatch it again until the desired time is reached. But this is a bad way of doing it. Post a notification to the notification center; it's easy and will accomplish your goal without having to deal with the complexity of multithreading yourself.

You can register to observe a time-change notification and it will fire, but this requires your application to be active or in the background. I cannot guarantee the OS won't kill your application, but if it sits quietly with a small memory footprint in the "background" state, its notification center observation will remain active and function as intended.
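One way to read the suggestion above is to schedule a local notification for four hours from now. The answer predates the current API, so this sketch uses the UserNotifications framework as a stand-in; it assumes notification permission has already been requested, and the identifier and text are placeholders.

import UserNotifications

// Schedule a one-shot local notification four hours from now.
func scheduleFourHourReminder() {
    let content = UNMutableNotificationContent()
    content.title = "Time to check in"                 // placeholder text
    content.body = "Four hours have elapsed."

    let trigger = UNTimeIntervalNotificationTrigger(timeInterval: 4 * 60 * 60,
                                                    repeats: false)
    let request = UNNotificationRequest(identifier: "four-hour-check",   // placeholder id
                                        content: content,
                                        trigger: trigger)
    UNUserNotificationCenter.current().add(request) { error in
        if let error = error { print("Could not schedule: \(error)") }
    }
}

The system handles the timing, so nothing in the app has to keep running a loop to watch the clock.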
I am trying to use an iPhone for some sort of measurement, and I need an event that fires every millisecond (or faster); this event will turn the LED on/off.
I have tried a few options to make this work, without any success. If I use an NSTimer event, the maximum frequency I can reach is around 100 Hz.
If I turn the LED on/off in a simple for loop, the frequency goes up to about 300 Hz, but that seems to be the maximum. It is also not purely periodic: a context switch happens and I get control back much later.
I also experimented with the Mach API, but no luck. Do you think it is even possible to turn the LED on/off at a frequency of around 1 kHz?
NSTimer has a limited resolution as you have discovered.
Apps are not designed to be real-time systems, and system processes can (and do) run from time to time, which may periodically delay your processing (memory alerts, system functions, etc.). These delays may not manifest in obvious ways in typical apps, but if you are attempting tight timing loops you will observe them more frequently.
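To make that jitter visible, here is a small sketch (not from the question) using a GCD timer asked to fire every millisecond, logging how far each tick drifts from its schedule; toggleLED() is a hypothetical placeholder. Even with a strict timer you will typically see drift that rules out a reliable 1 kHz loop.

import Foundation

// A 1 ms GCD timer that measures the real interval between ticks.
let queue = DispatchQueue.global(qos: .userInteractive)
let timer = DispatchSource.makeTimerSource(flags: .strict, queue: queue)
var last = DispatchTime.now().uptimeNanoseconds

timer.schedule(deadline: .now(), repeating: .milliseconds(1), leeway: .nanoseconds(0))
timer.setEventHandler {
    let now = DispatchTime.now().uptimeNanoseconds
    let elapsedMicros = (now - last) / 1_000
    last = now
    if elapsedMicros > 1_500 {                 // tick arrived more than ~0.5 ms late
        print("late tick: \(elapsedMicros) µs since the previous one")
    }
    // toggleLED()   // hypothetical: whatever drives the LED would go here
}
timer.resume()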
I am new to objective-c/cocoa programming. I am making an application which needs to constantly sync with a server and keep its view updated.
Now, in a nutshell, here's what I thought of: set up an NSTimer to trigger every second or two, contact the server, and if there is a change, update the view. Is this a good way of doing it?
I have read elsewhere that you can have a thread running in the background which monitors the changes and updates the view. I have never worked with threads before, and I know they can be quite troublesome and that you need a good amount of experience with memory management to get the most out of them.
I have one month to get this application done. What do you guys recommend? Just use an NSTimer and do it the way I thought of, or learn multithreading and get it done that way (but keep in mind my time frame)?
Thanks!
I think using a separate thread in this case would be too much. You need threads when there is some task that runs for a considerable amount of time and could freeze your app.
In your case, do this (a sketch follows the steps):
1. Create a timer and call some method (say, update) every N seconds.
2. In update, send an asynchronous request to the server and check for any changes.
3. Download the data using the NSURLConnection delegate and parse it. Note: if there is a chance you can receive a huge amount of data from the server and its processing can take a long time (for example, parsing 2 MB of XML data), then you do need to perform that in a separate thread.
4. Update all listeners (the appropriate view controllers, for example) with the processed data.
5. Continue polling using the timer.
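Here is a rough sketch of those steps in Swift, for illustration only: it uses Timer and URLSession in place of the NSTimer/NSURLConnection mentioned above, and the endpoint URL, notification name, and ServerPoller type are all invented for the example.

import Foundation

final class ServerPoller {
    private var timer: Timer?
    private let endpoint = URL(string: "https://example.com/status")!   // placeholder URL

    // Step 1: fire update() every N seconds.
    func start(interval: TimeInterval = 2) {
        timer = Timer.scheduledTimer(withTimeInterval: interval, repeats: true) { [weak self] _ in
            self?.update()
        }
    }

    // Steps 2-4: asynchronous request, parse, notify listeners.
    private func update() {
        URLSession.shared.dataTask(with: endpoint) { data, _, error in
            guard let data = data, error == nil else { return }
            // Heavy parsing could be pushed to a background queue here if needed.
            DispatchQueue.main.async {
                NotificationCenter.default.post(name: Notification.Name("ServerDataDidChange"),
                                                object: nil,
                                                userInfo: ["data": data])
            }
        }.resume()
    }

    func stop() { timer?.invalidate() }   // stop polling when no longer needed
}

Because the network request itself is asynchronous, the main thread is never blocked, which is exactly why a dedicated thread is unnecessary here.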
Think about requirements. The most relevant questions, IMO, are:
does your application have to get new data while running in background?
does your application need to be responsive, that is, not sluggish when it's fetching new data?
I guess the answer to the first question is probably no. If you are updating a view based on the data, it is only necessary to fetch the data when the view is visible. You cannot guarantee that you will always be able to fetch data in the background anyway, because iOS can kill your application at any time. Either way, from your application's perspective, multithreading is not relevant to this question: whether you update only in the foreground or also in the background, your application needs no more than one thread for it.
Multithreading is relevant rather to the second question. If your application has to remain responsive while fetching data, then you will have to run your fetching code on a detached thread. What is more important here is that updates to the user interface (views, for example) must happen back on the main thread.
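A minimal GCD sketch of that pattern, with hypothetical fetchLatestData() and refreshViews(with:) functions standing in for the real work:

import Foundation

func fetchLatestData() -> [String] { return [] }     // placeholder for the slow network fetch
func refreshViews(with data: [String]) { }           // placeholder for the UI update

func reloadWithoutBlockingUI() {
    DispatchQueue.global(qos: .userInitiated).async {
        let newData = fetchLatestData()               // slow work, off the main thread
        DispatchQueue.main.async {
            refreshViews(with: newData)               // UI must be touched on the main thread
        }
    }
}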
Learning multithreading in general is indeed a real task, but the iOS SDK provides a lot of help. Learning how to use an operation queue (I guess that's the easiest to learn, though not necessarily the easiest to use) wouldn't take many days. In a month you can definitely finish the job.
Again, however, think clearly why you would need multithreading.
I find that most game development requires a main game loop, but I don't know why it's necessary. Couldn't we implement an event listener and respond to every user action? Animations (etc.) could then be played when an event occurs.
What is the purpose of a main game loop?
The argument that you "need a loop because otherwise what calls the event listener" does not hold water. Admittedly on any mainstream OS, you do indeed have such a loop, and event listeners do work that way, but it is entirely possible to make an interrupt driven system that works without any loops of any kind.
But you still would not want to structure a game that way.
The thing that makes a loop the most appealing solution is that your loop becomes what in real-time programming is referred to as a 'cyclic executive'. The idea is that you can make the relative execution rates of the various system activities deterministic with respect to one another. The overall rate of the loop may be controlled by a timer, and that timer may ultimately be an interrupt, but with modern OS's, you will likely see evidence of that interrupt as code that waits for a semaphore (or some other synchronization mechanism) as part of your "main loop".
So why do you want deterministic behavior? Consider the relative rates of processing of your user's inputs and the baddies AIs. If you put everything into a purely event based system, there's no guarantee that the AIs won't get more CPU time than your user, or the other way round, unless you have some control over thread priorities, and even then, you're apt to have difficulty keeping timing consistent.
Put everything in a loop, however, and you guarantee that your AIs' timelines proceed in a fixed relationship to your user's time. This is accomplished by making a call out from your loop to give the AIs a timeslice in which to decide what to do, a call out to your user input routines to poll the input devices and find out how your user wants to behave, and a call out to do your rendering.
With such a loop, you have to watch that you are not spending more time processing each pass than actually goes by in real time. If you're trying to cycle your loop at 100 Hz, all of your loop's processing had better finish in under 10 ms, otherwise your system is going to get jerky. In real-time programming this is called overrunning your time frame. A good system will let you monitor how close you are to overrunning, and you can then mitigate the processing load however you see fit.
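For illustration, here is a bare-bones sketch of such a 100 Hz cyclic loop with overrun detection, in Swift; pollInput(), updateAI(), and render() are empty placeholders for the real work.

import Foundation

let frameBudget: TimeInterval = 1.0 / 100.0        // 10 ms per pass

func pollInput() { }     // placeholder: read the input devices
func updateAI()  { }     // placeholder: give the AIs their timeslice
func render()    { }     // placeholder: draw the frame

func runCyclicExecutive() {
    while true {
        let start = Date()
        pollInput()
        updateAI()
        render()
        let used = Date().timeIntervalSince(start)
        if used > frameBudget {
            print("overran the frame by \((used - frameBudget) * 1000) ms")  // too much work this pass
        } else {
            Thread.sleep(forTimeInterval: frameBudget - used)                // give back unused time
        }
    }
}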
An event listener is also dependent on some invocation loop whether you see it or not. Who else is going to call the listener?
Building an explicit game loop gives you absolute control on what's going on so you won't be dependent on whatever some toolkit/event handling library does in its event loop.
A game loop (highly simplified) looks like this:
initialise
do
    input
    update
    render
loop
clean up
This will happen every frame the game is drawn. So for games that run at 60fps the above is performed sixty times every second.
This means the game runs smoothly, the game stays in sync, and the updates/draws per cycle happen frequently enough. Animation is simply a trick of the eye: objects are drawn at a sequence of positions, and when the frames are played quickly enough they appear to be travelling smoothly between those positions.
If you were to update only on user input, the game would react only when the user was providing input. Other game components, such as AI-controlled game objects, would not act on their own. A loop is therefore the easiest and best way of updating a game.
It's not true that every kind of game requires a dedicated main game loop.
Action games need such a loop due to frequent object updates and the precision needed for game input.
On the other hand, I implemented a minesweeper game and I used window messages for the notifications.
It's because current operating systems aren't fully event-based. Even though things are often represented as events, you still have to create a loop where you wait for the next event and process it, indefinitely (the Windows message loop, for example). Unix signals are probably the closest thing to OS-level events, but they're not really efficient enough for things like this.
In practical terms, as other people have indicated, a loop is needed.
However, your idea is theoretically sound. You don't need a loop. You need event-based operations.
At a simple level, you can conceptualize the CPU as having several timers:
one fires on the rising edge of a 60 Hz clock and calls the blitting code;
another might tick at 60 kHz, rendering the latest updates of the objects in the game world to the blitter buffer;
another might tick at 10 kHz, taking input from the user (pretty high resolution, lol);
another might be the game 'heartbeat', ticking at 60 MHz; AI and physics might operate at heartbeat time.
Of course these timers can be tuned.
Practically, what would be happening is that your code would look (somewhat elided) like this:
void int_handler1();    // e.g. the 60 Hz handler that calls the blitting code
// ... more handlers, one per timer ...

int main()
{
    // install interrupt handlers
    // configure timer frequencies and other settings

    // all real work happens in the handlers; main just idles
    while (1)
        ;
}
The nature of games is that they're typically simulations, and are not just reacting based on external events but on internal processes as well. You could represent these internal processes by recurring events instead of polling, but they're practically equivalent:
schedule(updateEvent, 33ms)

function updateEvent:
    for monster in game:
        monster.update()
    render()

vs:

while 1:
    for monster in game:
        monster.update()
    wait(33ms)
    render()
Interestingly, pyglet implements the event-based method instead of the more traditional loop. And while this works well a lot of the time, sometimes it causes performance problems or unpredictable behaviour caused by the clock resolution, vsync, etc. The loop is more predictable and easier to understand (unless you come from an exclusively web programming background, perhaps).
Any program that can just sit there indefinitely and respond to the user's input needs some kind of loop. Otherwise it will simply reach the end of the program and exit.
The main loop calls the event listener. If you are lucky enough to have an event-driven operating system or window manager, the event loop resides there. Otherwise, you write a main loop to mediate the "impedance mismatch" between system-call interfaces based on I/O, poll, or select, and a traditional event-driven application.
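As a toy illustration of that point (all names invented, in Swift for consistency with the other sketches here): the listeners only ever run because a loop somewhere pulls events off a queue and calls them.

enum Event { case keyDown(Character), quit }

// Stand-in for whatever poll()/select() or the window system would hand us.
var pendingEvents: [Event] = [.keyDown("a"), .quit]

// The registered event listeners.
let listeners: [(Event) -> Void] = [
    { event in print("listener saw \(event)") }
]

// The mediating main loop: take the next event, hand it to every listener.
mainLoop: while true {
    guard !pendingEvents.isEmpty else { continue }   // a real loop would block in poll/select here
    let event = pendingEvents.removeFirst()
    for listener in listeners { listener(event) }
    if case .quit = event { break mainLoop }
}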
P.S. Since you tagged your question with functional-programming, you might want to check out Functional Reactive Programming, which does a great job connecting high-level abstractions to low-level, event-based implementations.
A game needs to run in real-time, so it works best if it is running on one CPU/core continuously. An event-driven application will typically yield the CPU to another thread when there is no event in the queue. There may be a considerable wait before the CPU switches back to your process. In a game, this would mean brief stalls and jittery animation.
Two reasons:
Even event-driven systems usually need a loop of some kind that reads events from a queue and dispatches them to a handler, so you end up with an event loop (in Windows, for example) anyway, and might as well extend it.
For the purposes of animation you would need to handle some kind of event for every frame of the animation. You could certainly do this with a timer or some kind of idle event, but you would probably end up creating those in some kind of loop anyway, so it's just easier to use the loop directly.
I've seen systems that do handle it all using events: they have a frame listener that listens for an event dispatched at the start of each frame. They still have a tiny game loop internally, but it does little more than handle windowing-system events and create frame events.