I need to synchronize a Unity app with a 3rd-party app where time synchronization is crucial (1-2 ms variance max).
The way this is done today (without Unity) is by giving priority in the OS scheduler to a designated app, which ensures a constant delay.
A constant delay is good enough, as it can be used in the data analysis, which is not done in real time. Today the constant delay is measured once at the beginning.
Thanks in advance.
This kind of delay should be easy to achieve in a background thread.
Threads work well in Unity, despite common belief. The only thing you need to look out for is not to access Unity objects from the thread.
The easiest way to do this is to start a thread in MonoBehaviour.Start with the IsBackground property set to true (so you don't have to worry about it blocking your application's exit) and communicate to and from it with a message queue (for example a List<Action> with locked access).
I have to draw a waveform for an audio file (CMK.mp3) in my application.
For this I have tried this Solution
This solution uses AVAssetReader, which takes too much time to display the waveform.
Can anyone please help? Is there any other way to display the waveform quickly?
Thanks
AVAssetReader is the only way to read an AVAsset, so there is no way around that. You will want to tune the code to process it without incurring unwanted overhead. I have not tried that code yet, but I intend to use it to build a sample project to share on GitHub once I have the time, hopefully soon.
My approach to tune it will be to do the following:
Eliminate all Objective-C method calls and use C only instead
Move all work to a secondary queue off the main queue and use a block to call back once finished (a rough version of this is sketched below)
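For that second point, here is a minimal sketch of the pattern in plain C with GCD (blocks are a C extension enabled by default on Apple platforms). render_waveform_async, the queue label, and the placeholder result are my own names, not something from the linked solution:

/* A rough sketch of "secondary queue + completion block". The actual
 * AVAssetReader / copyNextSampleBuffer work is only indicated by a comment. */
#include <dispatch/dispatch.h>
#include <stddef.h>

typedef void (^waveform_done_t)(float *samples, size_t count);

static void render_waveform_async(waveform_done_t done)
{
    dispatch_queue_t waveformQueue =
        dispatch_queue_create("com.example.waveform", DISPATCH_QUEUE_SERIAL);

    dispatch_async(waveformQueue, ^{
        /* Read sample buffers with copyNextSampleBuffer here, off the main
         * queue, reducing them to min/max values suitable for drawing. */
        float *samples = NULL;   /* placeholder result */
        size_t count = 0;

        dispatch_async(dispatch_get_main_queue(), ^{
            done(samples, count);   /* call back once finished, on the main queue */
        });
    });
}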
One obstacle with rendering a waveform is that you cannot have more than one AVAssetReader running at a time, at least the last time I tried (it may have changed with iOS 6). A new reader cancels the other and that interrupts playback, so you need to do your work in sequence. I do that with queues.
In an audio app that I built, it reads the CMSampleBufferRef into a CMBufferQueueRef which can hold multiple sample buffers (see copyNextSampleBuffer on AVAssetReader). You can configure the queue to provide you with enough time to process a waveform after an AVAssetReader finishes reading an asset, so that the current playback does not exhaust the contents of the CMBufferQueueRef before you start reading more buffers into it for the next track. That will be my approach when I attempt it. I just have to be careful not to make the buffer so big that it uses too much memory or causes issues with playback. I just do not know how long it will take to process the waveform, and I will test it on my older iPods and iPhone 4 before I try it on my iPhone 5 to see if they all perform well.
Be sure to stay as close to C as possible. Calls to Objective-C resources during this processing will incur potential thread switching and other run-time overhead costs which are significant enough to be noticeable. You will want to avoid that. What I may do is set up Key-Value Observing (KVO) to trigger the AVAssetReader to start the next task quickly so that I can maintain gapless playback between tracks.
Once I start my audio experiments I will put them on GitHub. I've created a repository where I will do this work. If you are interested you can "watch" that repo so you will know when I start committing updates to it.
https://github.com/brennanMKE/Audio
I am trying to use the iPhone for some sort of measurement and I need an event that fires every millisecond (or faster); this event will turn the LED on/off.
I have tried a few options to make this work without any success. If I use an NSTimer event, the maximum frequency I can reach is around 100 Hz.
If I turn the LED on/off in a simple for loop, the frequency goes up to 300 Hz, but that seems to be the maximum. Also it is not purely periodic; a context switch happens and I get control back much later.
I also experimented with the Mach API but had no luck. Do you think it is even possible to turn the LED on/off at a frequency around 1 kHz?
NSTimer has a limited resolution as you have discovered.
Apps are not designed to be real-time systems, and system processes can (and do) occur from time to time which may delay processing (memory alerts, system functions, etc.). These may not manifest in obvious ways for typical apps, but if you are attempting to run tight loops then you will observe them more frequently.
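If you want to see how large those delays actually get, here is a small diagnostic sketch in C using mach_absolute_time from <mach/mach_time.h> (a real API); toggle_led() is a placeholder for whatever call you use to drive the LED:

#include <mach/mach_time.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    mach_timebase_info_data_t tb;
    mach_timebase_info(&tb);                /* ticks-to-nanoseconds conversion */

    uint64_t prev = mach_absolute_time();
    uint64_t worst = 0;

    for (int i = 0; i < 1000000; i++) {
        /* toggle_led();  -- placeholder for the actual LED call */
        uint64_t now = mach_absolute_time();
        if (now - prev > worst)
            worst = now - prev;
        prev = now;
    }

    /* The worst-case gap shows how far from 1 kHz periodicity you really get. */
    printf("worst gap: %.3f ms\n", (double)worst * tb.numer / tb.denom / 1e6);
    return 0;
}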
I am working on a Windows game and while rendering, some computers will experience intermittent pauses ("hitches" for lack of a better term). When profiled they appear in seemingly random places in the code. Eventually I noticed that it wasn't just my process that was affected, but (seemingly) every process on the system. All of the threads in my application hitch at once. The CPU utilization drops during these hitches and it appears as if most processes make no progress.
This leads me to believe this may be an operating system or driver issue, but it only occurs while playing the game (and only on some systems). What sort of operations might the operating system be doing that would require the kernel to pause all user threads and block? Some kind of I/O? At first I thought of paging, but my impression is that would only affect a single process, no?
Some systems in use: Windows, DirectX (3D), nVidia cards (unknown if it replicates on ATI), using overlapped I/O for streaming.
If you have a lot of graphics in use, it may be paging graphics memory into the swap file.
Or perhaps the stream is getting buffered on disk?
It's worth seeing if the hitches coincide with the PC's disk activity LED.
Heavy use of memory-mapped I/O. This of course includes the system pagefile, but it can also include user applications that use mmio heavily (gcc, for one).
I find that most game development requires a main game loop, but I don't know why it's necessary. Couldn't we implement an event listener and respond to every user action? Animations (etc.) could then be played when an event occurs.
What is the purpose of a main game loop?
The argument that you "need a loop because otherwise what calls the event listener" does not hold water. Admittedly on any mainstream OS, you do indeed have such a loop, and event listeners do work that way, but it is entirely possible to make an interrupt driven system that works without any loops of any kind.
But you still would not want to structure a game that way.
The thing that makes a loop the most appealing solution is that your loop becomes what in real-time programming is referred to as a 'cyclic executive'. The idea is that you can make the relative execution rates of the various system activities deterministic with respect to one another. The overall rate of the loop may be controlled by a timer, and that timer may ultimately be an interrupt, but with modern OS's, you will likely see evidence of that interrupt as code that waits for a semaphore (or some other synchronization mechanism) as part of your "main loop".
So why do you want deterministic behavior? Consider the relative rates of processing of your user's inputs and the baddies AIs. If you put everything into a purely event based system, there's no guarantee that the AIs won't get more CPU time than your user, or the other way round, unless you have some control over thread priorities, and even then, you're apt to have difficulty keeping timing consistent.
Put everything in a loop, however, and you guarantee that your AIs' timelines are going to proceed in a fixed relationship with respect to your user's time. This is accomplished by making a call out from your loop to give the AIs a timeslice in which to decide what to do, a call out to your user input routines to poll the input devices and find out how your user wants to behave, and a call out to do your rendering.
With such a loop, you have to watch that you are not taking more time processing each pass than actually goes by in real time. If you're trying to cycle your loop at 100 Hz, all your loop's processing had better finish in under 10 ms, otherwise your system is going to get jerky. In real-time programming, this is called overrunning your time frame. A good system will let you monitor how close you are to overrunning, and you can then mitigate the processing load however you see fit.
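As a concrete illustration, here is a minimal sketch of a 100 Hz cyclic executive with an overrun check, using POSIX clock_gettime/nanosleep; read_input, update_ai and render are placeholders for your own routines:

#include <stdio.h>
#include <time.h>

#define FRAME_NS 10000000L   /* 10 ms per pass = 100 Hz */

static long elapsed_ns(struct timespec a, struct timespec b)
{
    return (b.tv_sec - a.tv_sec) * 1000000000L + (b.tv_nsec - a.tv_nsec);
}

int main(void)
{
    for (;;) {
        struct timespec start, end;
        clock_gettime(CLOCK_MONOTONIC, &start);

        /* read_input();  update_ai();  render();   -- placeholders */

        clock_gettime(CLOCK_MONOTONIC, &end);
        long used = elapsed_ns(start, end);

        if (used > FRAME_NS) {
            /* Overran the time frame: log it and shed load however you see fit. */
            fprintf(stderr, "overrun by %ld ns\n", used - FRAME_NS);
        } else {
            struct timespec rest = { 0, FRAME_NS - used };
            nanosleep(&rest, NULL);   /* sleep off the rest of this frame */
        }
    }
}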
An event listener is also dependent on some invocation loop whether you see it or not. Who else is going to call the listener?
Building an explicit game loop gives you absolute control on what's going on so you won't be dependent on whatever some toolkit/event handling library does in its event loop.
A game loop (highly simplified) is as follows:
initialise
do
    input
    update
    render
loop
clean up
This will happen every frame the game is drawn. So for games that run at 60fps the above is performed sixty times every second.
This means the game runs smoothly, the game stays in sync, and the updates/draws per cycle happen frequently enough. Animation is simply a trick of the eye: objects are drawn at a series of locations, and when these are played quickly enough they appear to be travelling between them.
If you were to update only on user input, the game would only react when the user was providing input. Other game components, such as A.I. game objects, would not react on their own. A loop is therefore the easiest and best way of updating a game.
It's not true that all kinds of games require a dedicated main game loop.
Action games need such a loop due to frequent object updates and game input precision.
On the other hand, I implemented a minesweeper game and I used window messages for the notifications.
It's because current operating systems aren't fully event based. Even though things are often represented as events, you'll still have to create a loop where you wait for the next event and process it, indefinitely (the Windows event loop, for example). Unix signals are probably the closest thing you get to events on an OS level, but they're not really efficient enough for things like this.
In practical terms, as other people have indicated, a loop is needed.
However, your idea is theoretically sound. You don't need a loop. You need event-based operations.
At a simple level, you can conceptualize the CPU as having several timers:
one fires on the rising edge of a 60 Hz clock and calls the blitting code.
Another might be ticking at 60kHz and be rendering the latest updates of the objects in the game world to the blitter buffer.
Another might be ticking at 10kHz and be taking input from the user. (pretty high resolution, lol)
Another might be the game 'heartbeat' and ticks at 60MHz; AI and physics might operate at heartbeat time.
Of course these timers can be tuned.
Practically, what would happen is that your code would look (somewhat elided) something like this:
#include <signal.h>
#include <sys/time.h>

/* int_handler1 fires on each timer tick; per-frame work would go here.
 * (A POSIX interval timer stands in for a hardware interrupt in this sketch.) */
void int_handler1(int sig) { (void)sig; }
//...
int main()
{
    signal(SIGALRM, int_handler1);          /* install interrupt handlers */
    struct itimerval tick = { {0, 16667}, {0, 16667} };
    setitimer(ITIMER_REAL, &tick, NULL);    /* configure settings: ~60 Hz */
    while(1);                               /* all real work happens in the handlers */
}
The nature of games is that they're typically simulations, and are not just reacting based on external events but on internal processes as well. You could represent these internal processes by recurring events instead of polling, but they're practically equivalent:
schedule(updateEvent, 33ms)

function updateEvent:
    for monster in game:
        monster.update()
    render()

vs:

while 1:
    for monster in game:
        monster.update()
    wait(33ms)
    render()
Interestingly, pyglet implements the event-based method instead of the more traditional loop. And while this works well a lot of the time, sometimes it causes performance problems or unpredictable behaviour caused by the clock resolution, vsync, etc. The loop is more predictable and easier to understand (unless you come from an exclusively web programming background, perhaps).
Any program that can just sit there indefinitely and respond to user's input needs some kind of loop. Otherwise it will just reach the end of program and will exit.
The main loop calls the event listener. If you are lucky enough to have an event-driven operating system or window manager, the event loop resides there. Otherwise, you write a main loop to mediate the "impedance mismatch" between a system-call interface based on I/O, poll, or select, and a traditional event-driven application.
P.S. Since you tagged your question with functional-programming, you might want to check out Functional Reactive Programming, which does a great job connecting high-level abstractions to low-level, event-based implementations.
A game needs to run in real-time, so it works best if it is running on one CPU/core continuously. An event-driven application will typically yield the CPU to another thread when there is no event in the queue. There may be a considerable wait before the CPU switches back to your process. In a game, this would mean brief stalls and jittery animation.
Two reasons -
Even event-driven systems usually need a loop of some kind that reads events from a queue of some kind and dispatches them to a handler, so you end up with an event loop (in Windows, for example) anyway and might as well extend it (a toy version is sketched after this list).
For the purposes of animation you'd need to handle some kind of event for every frame of the animation. You could certainly do this with a timer or some kind of idle event, but you'd probably end up creating those in some kind of loop anyway, so it's just easier to use the loop directly.
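To make the first point concrete, here is a toy, self-contained sketch; the EV_* names are invented, and the hard-coded array stands in for a real event queue that would block until something arrives:

#include <stdio.h>

typedef enum { EV_INPUT, EV_FRAME, EV_QUIT } EventType;

int main(void)
{
    /* In a real system this queue is fed by the OS / window manager. */
    EventType queue[] = { EV_INPUT, EV_FRAME, EV_FRAME, EV_QUIT };
    size_t head = 0;

    for (;;) {                       /* the loop you end up with anyway */
        EventType e = queue[head++]; /* a real loop would block here for the next event */
        if (e == EV_QUIT)
            break;
        else if (e == EV_INPUT)
            printf("handle user input\n");
        else if (e == EV_FRAME)
            printf("advance the animation one frame\n");
    }
    return 0;
}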
I've seen systems that do handle it all using events; they have a frame listener that listens to an event dispatched at the start of each frame. They still have a tiny game loop internally, but it does little more than handle windowing system events and create frame events.