iPhone Timer Frequency

I am trying to use the iPhone for a kind of measurement, and I need an event that fires every millisecond (or faster); this event will turn the LED on and off.
I have tried a few options to make this work, without any success. If I use an NSTimer, the maximum frequency I can reach is around 100 Hz.
If I turn the LED on/off in a simple for loop, the frequency goes up to about 300 Hz, but that seems to be the maximum. It is also not purely periodic: a context switch happens and I get control back much later.
I also experimented with the Mach APIs, but no luck. Do you think it is even possible to toggle the LED at a frequency of around 1 kHz?

NSTimer has a limited resolution as you have discovered.
iOS apps are not designed to be real-time systems, and system activity (memory warnings, housekeeping tasks, etc.) can and does occur from time to time, delaying your processing. This may not manifest in obvious ways in ordinary apps, but if you are running tight timing loops you will observe the interruptions much more often.
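If you want to get as close to 1 kHz as iOS will allow, a GCD timer on a high-priority queue is one option. This is only a rough sketch (toggleLED is a made-up placeholder for your torch code), and it still gives no hard real-time guarantee:

import Foundation

// Sketch only: a dispatch timer with tight leeway gets closer to 1 kHz than
// NSTimer, but the OS can still preempt you, so expect jitter.
func toggleLED() { /* AVCaptureDevice torch on/off would go here */ }

let queue = DispatchQueue(label: "led.timer", qos: .userInteractive)
let timer = DispatchSource.makeTimerSource(queue: queue)
timer.schedule(deadline: .now(), repeating: .milliseconds(1), leeway: .microseconds(50))
timer.setEventHandler { toggleLED() }
timer.resume()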

Related

OpenGL VSync / NSTimer issues on macOS

I'm trying to set up a simple OpenGL game on macOS, using an NSTimer to set up a run loop as explained here. The idea is to create a repeating timer with a very small (~1ms) time interval and rely on vsync to regulate the frame rate.
I'm setting my NSOpenGLContext swap interval to a value of 1, which should enable vsync. I was under the impression that this would cause NSOpenGLContext.flushBuffer to block, but this doesn't seem to be the case. My render code is firing off much more frequently than 60 times per second.
The document I linked has been marked as retired, but all the official documentation I've read suggests that it's possible to throttle an NSTimer loop to the display's refresh rate somehow. I haven't been able to get this working though, and I'm wondering if this approach is no longer viable.
Am I missing something? In a modern project, is it better just to go with a CVDisplayLink?
My understanding is that it's unlikely that an NSTimer will fire more frequently than about 10-20 times per second, and with timer coalescing you aren't likely to get guaranteed fire times appropriate for this type of application. For example, one answer to this question points out that the docs say:
Because of the various input sources a typical run loop manages, the effective resolution of the time interval for a timer is limited to on the order of 50-100 milliseconds. If a timer’s firing time occurs during a long callout or while the run loop is in a mode that is not monitoring the timer, the timer does not fire until the next time the run loop checks the timer. Therefore, the actual time at which the timer fires potentially can be a significant period of time after the scheduled firing time.
CVDisplayLink is definitely the preferred solution. It's quite simple to use, too.
And to answer your other question, yes something has changed. The OS has changed how timers are handled to help improve energy performance. I worked on an app that used the method you're suggesting up until about OS 10.9 or 10.10. Once that came out we had to rethink our strategy because timers worked differently.
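For reference, a minimal CVDisplayLink setup looks roughly like this (renderFrame is a placeholder for your own drawing code):

import CoreVideo

// Drive rendering from the display's refresh instead of an NSTimer.
func renderFrame() { /* draw, then flushBuffer() on your NSOpenGLContext */ }

var displayLink: CVDisplayLink?
CVDisplayLinkCreateWithActiveCGDisplays(&displayLink)
if let link = displayLink {
    CVDisplayLinkSetOutputHandler(link) { _, _, _, _, _ in
        renderFrame()            // called once per refresh, on a CoreVideo thread
        return kCVReturnSuccess
    }
    CVDisplayLinkStart(link)
}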

Unity hard real time synchronization

I need to synchronize a Unity app to a 3rd-party app where time synchronization is crucial (1-2 ms variance max).
The way this is done today (without Unity) is by getting priority from the OS scheduler with a dedicated app, which ensures a constant delay.
A constant delay is good enough, as it can be used in the data analysis, which is not done in real time. Today the constant delay is measured once at the beginning.
Thanks in advance.
Delays of this kind should be easy to achieve in a background thread.
Threads work well in Unity, despite common belief. The only thing you need to look out for is not to access Unity objects from the thread.
The easiest way to do this is to start a thread in MonoBehaviour.Start with the IsBackground property set to true (so you don't have to worry about it blocking your application exit) and communicate to and from it with a message queue (for example a List<Action> with locked access).
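The answer above is about C#/Unity, but the pattern it describes (a long-lived background thread draining a locked queue of actions) is language-agnostic; here is a rough Swift sketch of the same idea, with all names made up:

import Foundation

let lock = NSLock()
var pending: [() -> Void] = []     // the "List<Action> with locked access"

// Called from the game thread to hand work to the background thread.
func post(_ job: @escaping () -> Void) {
    lock.lock(); pending.append(job); lock.unlock()
}

let worker = Thread {
    while true {
        lock.lock()
        let jobs = pending
        pending.removeAll()
        lock.unlock()
        jobs.forEach { $0() }                    // run outside the lock
        Thread.sleep(forTimeInterval: 0.001)     // ~1 ms polling interval
    }
}
worker.qualityOfService = .userInteractive
worker.start()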

Create a background thread that executes a command every 4hrs

I am trying to figure out how to use a background thread to execute a command every 4 hours.
I have never created anything like this before, so I have only been reading about it so far. One of the things I have read is this:
"Threads tie up physical memory and critical system resources"
So in that case, would it be a bad idea to have a thread that checks the time and then executes my method? Or is there a better option? I have read about GCD (Grand Central Dispatch), but I am not sure if it is applicable, as I think it's more for concurrent requests, not something that repeats over and over again checking the time.
Or, finally, is there something I have completely missed where you can execute a request every 4 hours?
Any help would be greatly appreciated.
There is a maximum time background processes are allowed to run (10 minutes), which would make your approach difficult. Your next best option is to calculate the next event time and save the timestamp somewhere. Then, if the app is run at or after that time, it can carry out whatever action you want.
This might help:
http://www.audacious-software.com/2011/01/ios-background-processing-limits/
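A minimal sketch of the timestamp approach (the key name and task function are placeholders):

import Foundation

let nextRunKey = "nextRunDate"

func performFourHourlyTask() { /* the actual command goes here */ }

func scheduleNextRun() {
    UserDefaults.standard.set(Date().addingTimeInterval(4 * 60 * 60), forKey: nextRunKey)
}

// Call this whenever the app launches or becomes active.
func runTaskIfDue() {
    guard let due = UserDefaults.standard.object(forKey: nextRunKey) as? Date else {
        scheduleNextRun()
        return
    }
    if Date() >= due {
        performFourHourlyTask()
        scheduleNextRun()
    }
}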
I think it would be good to make use of a timestamp and post a notification for when the time reaches four hours from now.
Multithreading is not a good means to do this, because essentially you would be running a loop for four hours, eating clock cycles. Thanks to the magic of operating systems this would not eat up an entire core or anything silly like that, but it would be continuously computed if it were allowed to run. That would be a vast waste of resources, so it is not allowed. GCD was not really meant for this kind of thing either. It was meant to allow for concurrency, to smooth out UI interaction and to complete tasks more efficiently; a 4-hour loop would be inefficient. Think of concurrency as a tool for something like being able to interact with a table while its content is being loaded or changed. GCD blocks make this very easy when used correctly. GCD and other multithreading facilities give you tools to do calculations in the background, interact with databases, and handle requests without ever affecting the user's experience. Many people who are much smarter than me have written extensively on what multithreading/multitasking is and what it is good for.
In a way, posting a message for a time would be a method of multitasking without the nastiness of constantly executing blocks through GCD to wait for the 4-hour period. It is possible to do it with GCD: you could execute a block that monitors the time for less than the maximum length of a thread's lifetime, and when that block's execution is over, dispatch it again until the desired time is reached. But that is a bad way of doing this. Post a notification to the notification center; it's easy and will accomplish your goal without having to deal with the complexity of multithreading yourself.
You can register an observer for a time-change notification and it will be delivered, but this requires your application to be active or in the background. I cannot guarantee the OS won't kill your application, but if it is nice and quiet with a small memory footprint in the "background" state, its notification center observer will remain active and function as intended.

How is time-based programming done?

I'm not really sure what the correct term is, but how are time-based programs like games and simulations made? I've just realized that I've only written programs that wait for input, then do something, and I am amazed that I have no idea how I would write something like Pong :)
How would something like a flight simulator be coded? It obviously wouldn't run as fast as the computer could run it. I'm guessing everything is executed on some kind of cycle. But how do you handle it when a computation takes longer than the cycle?
Also, what is the correct term for this? Searching "time-based programming" doesn't really give me helpful results.
Games are split into simulation (decide what appears, disappears or moves) and rendering (show it on the screen, play sounds). Simulation is designed to be time-dependent: you can tell the simulator "50ms have elapsed" and it will compute 50ms worth of simulation. A typical game loop will render (which takes an arbitrary amount of time), then run the simulator for the duration since the last time the simulator was run.
If the code runs fast, then the simulator steps will be short (only a few ms) and the game will render the scene more often.
If the code runs slowly, the simulator steps will have longer steps and there will be proportionally fewer renders.
If the simulator runs slower than the simulation itself (it takes 100ms to compute 50ms worth of simulation) then the game cannot run. But this is an exceedingly rare situation, and games sometimes have emergency systems that drop the quality of the simulation to improve performance when this happens.
Note that time-dependent does not necessarily mean millisecond-level precision. Some systems implement simulations using time-based functions (traveled distance equals speed times elapsed time), while others run fixed-duration simulation steps.
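A bare-bones version of that loop (simulate and render are stand-ins for real code) might look like this:

import Foundation

func simulate(seconds: Double) { /* move objects by speed * seconds, run AI, etc. */ }
func render() { /* draw the current state */ }

var lastTime = Date()
while true {
    render()                                           // takes however long it takes
    let now = Date()
    simulate(seconds: now.timeIntervalSince(lastTime)) // "this much time has elapsed"
    lastTime = now
}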
I think the correct term is "Real-time application".
For the first question, I'm with spender's answer.
If you know the elapsed time between two frames, you can calculate (with physics, for example) the new position of the elements based on the previous ones.
There are two approaches to this, each with advantages and disadvantages.
You can either go frame-based, whereby a timer signals n new frames every second and you calculate movement simply by counting elapsed frames. If computation exceeds the available time, the game slows down.
...or, keeping the frame concept, you keep an absolute measure of time, and when the next frame is signalled you calculate world movement from the amount of elapsed time. This means that things happen in real time, but in the case of severe CPU starvation, gameplay will become choppy.
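In code, the difference between the two approaches boils down to something like this (numbers and names invented for illustration):

struct Ball { var x: Double = 0 }
var ball = Ball()

// Frame-based: a fixed step per frame, so a slow machine plays in slow motion.
func updateFrameBased() {
    ball.x += 2.0                      // 2 units per frame, however long a frame takes
}

// Time-based: a step proportional to elapsed time, so a slow machine drops
// frames but keeps real-time pace.
func updateTimeBased(elapsed: Double) {
    ball.x += 120.0 * elapsed          // 120 units per second
}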
There's an old saying that "the clock is an actor". Time-based programs are event-driven programs, but the clock is a constant source of events. At least, that's a fairly common and reasonably easy way of doing things. It falls down if you're doing hard realtime or very high performance things.
This is where you can learn the basics:
http://www.gamedev.net/reference/start_here/
Nearly all games are programmed with a real-time architecture, and the computer's capabilities (and the coding, of course :)) determine the frame rate.
Game programming is a really complex job, including object modelling, scripting, math calculations, fast and nice rendering algorithms, and other things like pixel shaders.
So I would recommend you check out the available engines first (just google "free game engine").
The basic logic is to create an infinite loop (while(true){}), and in the loop you should:
Listen for the callbacks - you get the keyboard, mouse and system messages here.
Do the physics based on the time passed since the previous frame and the user inputs.
Render the new frame (GDI, DirectX or OpenGL).
Have fun
Basically, there are 2 different approaches that allow you to add animation to a game:
Frame-based Animation: easier to understand and implement, but it has some serious disadvantages. Think about it this way: imagine your game runs at 60 FPS and it takes 2 seconds to draw a ball that goes from one side of the screen to the other. In other words, the game needs 120 frames to move the ball across the screen. If you run this game on a slow computer that's only able to render 30 FPS, it means that after 2 seconds the ball will only be in the middle of the screen. The problem with this approach is that rendering (drawing the objects) and simulation (updating the positions of the objects) are done by the same function.
Time-based Animation: a more sophisticated approach that separates the simulation code from the rendering code. The number of FPS the computer can render will not influence the amount of movement (animation) that has to happen in 2 seconds.
Steven Lambert wrote a fantastic article about these techniques, as well as a 3rd approach that solves a few problems with time-based animation.
Some time ago I wrote a C++/Qt application to demonstrate all these approaches and you can find a video of the prototype running here:
Source code is available on Github.
Searching for time-based movement will give you better results.
Basically, you either have a timer loop or an event triggered on a regular clock, depending on your language. If it's a loop, you check the time and only react every 1/60th of a second or so.
Some sites
http://www.cppgameprogramming.com/
Ruby game programming
PyGame
Flight Simulation is one of the more complex examples of real-time simulations. The understanding of fluid dynamics, control systems, and numerical methods can be overwhelming.
As an introduction to the subject of flight simulation, I recommend Build Your Own Flight Sim in C++. It is out of print, but seems to be available used. This book is from 1996, and is horribly dated. It assumes a DOS environment. However, it provides a good overview of the topics, covers numerical integration, basic flight mechanics and control systems. The code examples are simplistic, reasonably complete, and do not assume the more common toolsets used for graphics today. As with most things, I think it is easier to learn the subject with a more basic reference.
A more advanced text (college senior, first-year graduate school) is Principles of Flight Simulation, which provides excellent coverage of the breadth of topics involved in making a flight simulation. This book would make an excellent reference for anyone seriously interested in flight simulation as an engineering task, or for more realistic game development.

Why is a "main" game loop necessary for developing a game?

I find that most game development requires a main game loop, but I don't know why it's necessary. Couldn't we implement an event listener and respond to every user action? Animations (etc.) could then be played when an event occurs.
What is the purpose of a main game loop?
The argument that you "need a loop because otherwise what calls the event listener" does not hold water. Admittedly on any mainstream OS, you do indeed have such a loop, and event listeners do work that way, but it is entirely possible to make an interrupt driven system that works without any loops of any kind.
But you still would not want to structure a game that way.
The thing that makes a loop the most appealing solution is that your loop becomes what in real-time programming is referred to as a 'cyclic executive'. The idea is that you can make the relative execution rates of the various system activities deterministic with respect to one another. The overall rate of the loop may be controlled by a timer, and that timer may ultimately be an interrupt, but with modern OS's, you will likely see evidence of that interrupt as code that waits for a semaphore (or some other synchronization mechanism) as part of your "main loop".
So why do you want deterministic behavior? Consider the relative rates of processing of your user's inputs and the baddies AIs. If you put everything into a purely event based system, there's no guarantee that the AIs won't get more CPU time than your user, or the other way round, unless you have some control over thread priorities, and even then, you're apt to have difficulty keeping timing consistent.
Put everything in a loop, however, and you guarantee that your AIs' timelines proceed in a fixed relationship to your user's time. This is accomplished by making a call out from your loop to give the AIs a timeslice in which to decide what to do, a call out to your user input routines to poll the input devices and find out how your user wants to behave, and a call out to do your rendering.
With such a loop, you have to watch that you are not taking more time processing each pass than actually goes by in real time. If you're trying to cycle your loop at 100Hz, all your loop's processing had better finish up in under 10msec, otherwise your system is going to get jerky. In real-time programming, it's called overrunning your time frame. A good system will let you monitor how close you are to overrunning, and you can then mitigate the processing load however you see fit.
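A skeleton of such a loop, with a crude overrun check, could look like this (the 100 Hz budget and the work functions are only placeholders):

import Foundation

let frameBudget: TimeInterval = 0.010      // 10 ms per pass at 100 Hz

func runAI() {}
func pollInput() {}
func render() {}

while true {
    let start = Date()
    runAI()
    pollInput()
    render()
    let used = Date().timeIntervalSince(start)
    if used > frameBudget {
        print("overran the frame by \((used - frameBudget) * 1000) ms")  // mitigate here
    } else {
        Thread.sleep(forTimeInterval: frameBudget - used)
    }
}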
An event listener is also dependent on some invocation loop whether you see it or not. Who else is going to call the listener?
Building an explicit game loop gives you absolute control on what's going on so you won't be dependent on whatever some toolkit/event handling library does in its event loop.
A game loop (highly simplified) is as follows:
initialise
do
    input
    update
    render
loop
clean up
This will happen every frame the game is drawn. So for games that run at 60fps the above is performed sixty times every second.
This means the game runs smoothly, the game stays in sync, and the updates/draws per cycle happen frequently enough. Animation is simply a trick of the eye: objects jump between discrete positions, but when the frames are played quickly enough they appear to be travelling smoothly between them.
If you were to update only on user input, the game would only react when the user was providing input. Other game components, such as A.I. game objects, would not react on their own. A loop is therefore the easiest and best way of updating a game.
It's not true that all kinds of games require a dedicated main game loop.
Action games need such a loop due to frequent object updates and game input precision.
On the other hand, I implemented a minesweeper game and I used window messages for the notifications.
It's because current operating systems aren't fully event based. Even though things are often represented as events, you'll still have to create a loop where you wait for the next event and process it indefinitely (as an example the Windows event loop). Unix signals are probably the closest thing you get to events on an OS level, but they're not really efficient enough for things like this.
In practical terms, as other people have indicated, a loop is needed.
However, your idea is theoretically sound. You don't need a loop. You need event-based operations.
At a simple level, you can conceptualize the CPU as having several timers:
one fires on the rising edge of 60Hz and calls the blitting code.
Another might be ticking at 60kHz and be rendering the latest updates of the objects in the game world to the blitter buffer.
Another might be ticking at 10kHz and be taking input from the user. (pretty high resolution, lol)
Another might be the game 'heartbeat' and ticks at 60MHz; AI and physics might operate at heartbeat time.
Of course these timers can be tuned.
Practically, what would happen is your code would look (somewhat elided) like this:
void int_handler1();
//...
int main()
{
    //install interrupt handlers
    //configure settings
    while(1);
}
The nature of games is that they're typically simulations, and are not just reacting based on external events but on internal processes as well. You could represent these internal processes by recurring events instead of polling, but they're practically equivalent:
schedule(updateEvent, 33ms)
function updateEvent:
    for monster in game:
        monster.update()
    render()
vs:
while 1:
    for monster in game:
        monster.update()
    wait(33ms)
    render()
Interestingly, pyglet implements the event-based method instead of the more traditional loop. And while this works well a lot of the time, sometimes it causes performance problems or unpredictable behaviour caused by the clock resolution, vsync, etc. The loop is more predictable and easier to understand (unless you come from an exclusively web programming background, perhaps).
Any program that can just sit there indefinitely and respond to the user's input needs some kind of loop. Otherwise it would just reach the end of the program and exit.
The main loop calls the event listener. If you are lucky enough to have an event-driven operating system or window manager, the event loop resides there. Otherwise, you write a main loop to mediate the "impedance mismatch" between a system-call interface based on I/O, poll, or select, and a traditional event-driven application.
P.S. Since you tagged your question with functional-programming, you might want to check out Functional Reactive Programming, which does a great job connecting high-level abstractions to low-level, event-based implementations.
A game needs to run in real-time, so it works best if it is running on one CPU/core continuously. An event-driven application will typically yield the CPU to another thread when there is no event in the queue. There may be a considerable wait before the CPU switches back to your process. In a game, this would mean brief stalls and jittery animation.
Two reasons -
Even event-driven systems usually need a loop of some kind that reads events from a queue and dispatches them to a handler, so you end up with an event loop (in Windows, for example) anyway and might as well extend it.
For the purposes of animation, you'd need to handle some kind of event for every frame of the animation. You could certainly do this with a timer or some kind of idle event, but you'd probably end up creating those in some kind of loop anyway, so it's just easier to use the loop directly.
I've seen systems that do handle it all using events: they have a frame listener that listens for an event dispatched at the start of each frame. They still have a tiny game loop internally, but it does little more than handle windowing-system events and create frame events.