CoreMIDI MIDI Timecode

I have set up a MIDI input port in my code and attached a callback for reading the MIDI data received. That is all working fine. I am reading MIDI Timecode and parsing it in my callback. What I have noticed is that, depending on when I start my application, I can be as much as 1 second behind the device that is transmitting the MTC. Sometimes it is only a frame behind. Regardless, it is inconsistent and frustrating. I am not doing any blocking or Obj-C calls in my readProc. I have even gone to the trouble of disconnecting my USB MIDI device after running my application to see if there is any weird IOKit stuff going on. I could really use some help, even wild-eyed theories. I feel as if MIDI timestamps are useless, as there is no objective reference to compare them to.
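For reference, here is a minimal sketch of the kind of setup being described: a CoreMIDI client, an input port with a read block, and MTC quarter-frame (0xF1) parsing. Error handling is stripped, the client and port names are arbitrary, and the print call is only for illustration, since a real read callback should hand the bytes off without doing anything that might block.

    import CoreMIDI

    var client = MIDIClientRef()
    MIDIClientCreateWithBlock("MTCClient" as CFString, &client, nil)

    var inPort = MIDIPortRef()
    MIDIInputPortCreateWithBlock(client, "MTCIn" as CFString, &inPort) { packetList, _ in
        // Simplified packet iteration; adequate for short real-time messages
        // such as MTC quarter-frames.
        var packet = packetList.pointee.packet
        for _ in 0..<packetList.pointee.numPackets {
            let bytes = withUnsafeBytes(of: packet.data) { Array($0.prefix(Int(packet.length))) }
            if bytes.first == 0xF1, bytes.count >= 2 {
                let piece = (bytes[1] & 0x70) >> 4   // which of the 8 quarter-frame pieces
                let value = bytes[1] & 0x0F          // 4 bits of timecode data
                print("MTC quarter-frame \(piece): \(value) at host time \(packet.timeStamp)")
            }
            packet = MIDIPacketNext(&packet).pointee
        }
    }

    // Connect every available source; a real app would let the user pick one.
    for i in 0..<MIDIGetNumberOfSources() {
        MIDIPortConnectSource(inPort, MIDIGetSource(i), nil)
    }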

I'm going to assume that you know what you are doing here and mean actual MIDI Timecode (MTC), not MIDI Clock, which is the more common of the two synchronization methods. Regardless, MIDI is slow, and you will need to provide an offset (probably in milliseconds) to the client so that it can react accordingly. For example, look at how Ableton Live does it:
I realize that the above screenshot is for MIDI Clock, but the same should apply to MTC as well. You may need to provide some kind of UI to determine the offset since, as you have discovered, the latency changes depending on runtime conditions.
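A sketch of how such an offset might be applied on the receiving side: convert the packet's MIDITimeStamp (host ticks) to seconds with mach_timebase_info and add the user's correction. The userOffsetMs value is hypothetical, something a preferences UI would expose; it is not an API.

    import CoreMIDI
    import Darwin

    var timebase = mach_timebase_info_data_t()
    mach_timebase_info(&timebase)

    func seconds(fromHostTime hostTime: MIDITimeStamp) -> Double {
        let nanos = Double(hostTime) * Double(timebase.numer) / Double(timebase.denom)
        return nanos / 1_000_000_000
    }

    let userOffsetMs = 40.0   // hypothetical value, tuned by the user in a settings UI

    func adjustedSeconds(for timeStamp: MIDITimeStamp) -> Double {
        return seconds(fromHostTime: timeStamp) + userOffsetMs / 1_000
    }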

Related

AudioKit automatically enabling Microphone capture

So I found out that AudioKit automatically enables microphone capture upon AudioKit.start(). This also happens in the Playgrounds and is 100% reproducible.
Turning off AKSettings.audioInputEnabled does not seem to help. Besides the fact that it looks like a security risk to any user running MicroSnitch or similar, it also causes problems with Bluetooth audio, since many connections automatically downsample when microphone input is enabled.
Is there another way to turn this off?
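For what it's worth, a minimal reproduction of the behaviour being described might look like the sketch below. It assumes the AudioKit 4 API (AKSettings, AKOscillator, AudioKit.start()); the AVAudioSession line is a plain workaround to experiment with, not an AudioKit switch, and may be overridden by AudioKit's own session management.

    import AudioKit
    import AVFoundation

    AKSettings.audioInputEnabled = false   // reportedly does not prevent mic capture

    // Possible workaround to try: force a playback-only session category
    // before starting the engine.
    try? AVAudioSession.sharedInstance().setCategory(.playback, mode: .default, options: [])

    let oscillator = AKOscillator()
    AudioKit.output = oscillator

    do {
        try AudioKit.start()   // the microphone-in-use indicator may still appear here
        oscillator.start()
    } catch {
        print("AudioKit failed to start: \(error)")
    }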

Difficulty in replicating Ableton Push MIDI CC functions using AudioKit framework

I'm using the AudioKit framework to implement MIDI in one of my hobby projects. In this project, I'm trying to make an app which has navigation buttons (left, right, up, down) and a play button, just like the Ableton Push MIDI controller has.
To make them function, I first recorded the MIDI data that comes out of the Push to map all the keys. I then used AudioKit's MIDI Utility as a starting point and sent note values from the app to Ableton Live, where they successfully triggered sounds. (I kept the channel at 0.)
Now, I'm trying to replicate the CC functionality of the arrow keys, which are CC 54, CC 55, CC 62, and CC 63, plus CC 85 for Play. When I send this CC data using the MIDI Utility, it sends the MIDI data to Ableton successfully (I can see the light feedback), but it simply does not do what the Ableton Push hardware controller would have done.
Am I missing something significant?
I also verified that when a button is pressed the value goes to 127, and on release it goes to 0. Despite replicating that, it still doesn't work.
This problem isn't really about AudioKit at all, but someone who understands how MIDI channels and sending work with the Ableton Push might be able to help me.
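For reference, the CC traffic described above could be generated with something like the sketch below. It assumes AudioKit 4's AKMIDI API (openOutput and sendControllerMessage; method names differ between AudioKit versions), and the CC numbers are the ones listed in the question.

    import AudioKit

    let midi = AKMIDI()
    midi.openOutput()   // open the available MIDI output(s)

    // Push-style arrow keys: CC 54, 55, 62, 63; Play: CC 85 (per the question).
    func press(_ cc: MIDIByte) {
        midi.sendControllerMessage(cc, value: 127, channel: 0)   // button down
        midi.sendControllerMessage(cc, value: 0, channel: 0)     // button up
    }

    press(85)   // Play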
Is Ableton identifying your controller as a Push? Ableton has special scripting to deal with various controllers (they use Python, and if you hunt around you might find examples). That's likely both the problem and the solution: the script is not recognizing your software as a Push. However, it might be possible to create a new device profile in Python that will give you the flexibility to really get in there and tweak.

Ultra precision when calling methods in iOS

I'm programming an iOS app in Xcode, and after a lot of work I found out that its functionality is really dependent on the accuracy of the method calls.
Calling them one line after the other in the code doesn't help; they are still called up to 150 ms apart.
So I need to make two methods run with the minimal time difference, hence "at the same time".
The two tasks I'm performing are actually an audio task and a video task, so I understand this may also involve in-process latency and delays. I was wondering whether you have any insight on how to sync an audio task and a video task so that they start running together, with a very tiny time gap.
I tried using dispatch queues and things like that, but they don't work.
I'll be happy to elaborate if I wasn't clear enough.
Thanks!
You can't make an interrupt-driven, multitasking OS behave like a realtime OS.
It just doesn't work that way.
You'll need to use the various multimedia APIs to set up a playback context within which the audio and video are synchronized (the details of which I don't know).
Apple has APIs, and documentation to go with them, on syncing audio and video: http://developer.apple.com
Obviously calling methods sequentially (serially) won't do what you are asking since each method call will take some finite amount of time, during which time the subsequent methods will be blocked.
To run multiple general-purpose tasks concurrently (potentially truly simultaneously on a modern multi-core device) you do need threading via "dispatch queues and stuff like that." GCD most certainly does work, so you are going to need to elaborate on what you mean by "they don't work."
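To illustrate that point, here is a sketch of dispatching two tasks as close together as GCD allows; the printed offsets show that the blocks do start almost immediately, but scheduling jitter means there is still no hard guarantee on how far apart they begin.

    import Foundation
    import QuartzCore

    let group = DispatchGroup()
    let queue = DispatchQueue.global(qos: .userInitiated)
    let reference = CACurrentMediaTime()

    queue.async(group: group) {
        print("audio task started \((CACurrentMediaTime() - reference) * 1000) ms after dispatch")
        // ... kick off audio work here ...
    }

    queue.async(group: group) {
        print("video task started \((CACurrentMediaTime() - reference) * 1000) ms after dispatch")
        // ... kick off video work here ...
    }

    group.wait()   // blocking wait, for illustration only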
But this is all probably for naught, because you aren't talking about general-purpose tasks. If you are handling audio and video tasks, you really shouldn't do all this with your own code on the CPU; you need hardware acceleration, and Apple provides frameworks to help. I'd probably start by taking a look at the AV Foundation framework, then, depending on what you are trying to do (you didn't really say in your question...) take a look at, variously, OpenAL, Core Video, and Core Audio. (There's a book out on the latter that's getting rave reviews, but I haven't seen it myself and, truth be told, I'm not an A/V developer.)
As you may know, in any multitasking environment (especially on single core devices), you'll not be able to have guarantees on timing between statements you're executing.
However, since you're mentioning playback of audio and video, Apple does provide some fundamentals that can accomplish this through its frameworks.
AVSynchronizedLayer - If some of your video or audio can play back from an AVPlayerItem (which supports a variety of formats), you can build a tree of other events that are synchronized with it, from keyframe animations to other AVPlayerItems.
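A sketch of that approach, with a placeholder asset URL and animation values: the layer's timeline follows the player item's timebase, so any Core Animation attached to it stays locked to playback.

    import AVFoundation
    import UIKit

    let item = AVPlayerItem(url: URL(fileURLWithPath: "/path/to/video.mov"))  // placeholder path
    let player = AVPlayer(playerItem: item)

    // Animations added under this layer are evaluated against the item's time,
    // not wall-clock time.
    let syncLayer = AVSynchronizedLayer(playerItem: item)

    let fade = CABasicAnimation(keyPath: "opacity")
    fade.fromValue = 1.0
    fade.toValue = 0.0
    fade.beginTime = AVCoreAnimationBeginTimeAtZero   // item time zero, not CACurrentMediaTime
    fade.duration = 2.0
    fade.isRemovedOnCompletion = false

    let overlay = CALayer()
    overlay.frame = CGRect(x: 0, y: 0, width: 100, height: 100)
    overlay.backgroundColor = UIColor.red.cgColor
    overlay.add(fade, forKey: "fade")
    syncLayer.addSublayer(overlay)

    // Add syncLayer above the view hosting the AVPlayerLayer, then play;
    // the overlay fade runs in lockstep with the video's timeline.
    player.play()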
If multiple audio tracks are what you're looking to sync in particular, there are some simpler constructs in Core Audio for that too.
If you can give a more specific example of your needs someone may be able to provide additional ideas.

How can I locally detect iPhone clock advancement by a user between app runs?

A common exploit in casual games is to artificially advance the system clock to jump ahead in gameplay. How can such user clock advancement be detected by an app on an iOS device?
- Must not involve network communication
- Must not assume the app is open (running or suspended) while the clock is advanced
- Must detect clock advancement; detecting clock rollback is not sufficient
- Ideally, the solution would be robust against reboots, but that is not a requirement
CACurrentMediaTime & mach_absolute_time
Take a look at these questions:
iOS: How to measure passed time, independent of clock and time zone changes?
Calculating number of seconds between two points in time, in Cocoa, even when system clock has changed mid-way
CACurrentMediaTime uses mach_absolute_time:
http://developer.apple.com/library/mac/#documentation/Cocoa/Reference/CoreAnimation_functions/Reference/reference.html
Here you have an example on how to use CACurrentMediaTime:
http://www.informit.com/blogs/blog.aspx?b=02b4e309-308c-468a-bab1-cebb1404be6a
Here you have a more information on mach_absolute_time:
http://developer.apple.com/library/mac/#qa/qa1398/_index.html
http://shiftedbits.org/2008/10/01/mach_absolute_time-on-the-iphone/
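One way to act on those links, sketched below under loudly stated assumptions: compare how far the wall clock moved between launches against how long the device was actually awake (ProcessInfo.systemUptime). The UserDefaults keys and the tolerance are hypothetical; systemUptime does not advance while the device sleeps, so a generous tolerance is needed to avoid false positives, and the check has to be skipped after a reboot (matching the question's note that reboot robustness is optional).

    import Foundation

    let defaults = UserDefaults.standard
    let nowWall = Date().timeIntervalSince1970
    let nowUptime = ProcessInfo.processInfo.systemUptime

    if let lastWall = defaults.object(forKey: "lastWall") as? Double,
       let lastUptime = defaults.object(forKey: "lastUptime") as? Double,
       nowUptime > lastUptime {                      // crude "same boot session" check
        let wallElapsed = nowWall - lastWall
        let uptimeElapsed = nowUptime - lastUptime
        // If the wall clock jumped far more than the device was actually awake,
        // the clock was most likely advanced by hand.
        if wallElapsed - uptimeElapsed > 6 * 60 * 60 {   // tolerance in seconds; tune generously
            print("Possible clock advancement detected")
        }
    }

    defaults.set(nowWall, forKey: "lastWall")
    defaults.set(nowUptime, forKey: "lastUptime")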
I was thinking that the CoreLocation stuff could do this if that part of the GPS data were exposed to you. However, that got me thinking.
My only (far-fetched) suggestion is to put something into background processing, which has to be for one of a small set of specific reasons, for example tracking location in the background. As a side effect of that, try to detect a clock change on a regular timer. Apple might reject it, since it may be clear that it's not using the location information and is just a pretext to exploit background processing.
Any solution not involving networking is so much harder to implement that I wonder why you're not using it.
While I don't think it's possible to reliably determine whether a user has manually turned their clock forward without network access, you can of course reliably determine whether they've travelled back in time. Since we know this to be physically impossible, it can safely be assumed they have manipulated their clock to cheat.
What I mean is: isn't the usual process to trigger some action in-app that requires a period of waiting, exit the app and turn the clock forward, re-launch the app to gain whatever they were waiting for, and then reset the clock back to the actual time?
If this is indeed the case, then, to build on the suggestion by #smgambel, you could store the physical time and current time zone on each launch and compare them with the previously stored time and time zone. If the time is behind the previously stored time and the device's time zone hasn't changed, you can safely assume the user has manipulated the clock.
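A sketch of that check, with hypothetical UserDefaults key names:

    import Foundation

    let defaults = UserDefaults.standard
    let now = Date()
    let zone = TimeZone.current.identifier

    if let lastLaunch = defaults.object(forKey: "lastLaunchDate") as? Date,
       let lastZone = defaults.string(forKey: "lastLaunchZone"),
       lastZone == zone,
       now < lastLaunch {
        // The clock has gone backwards in the same time zone: almost certainly the
        // user resetting it after having advanced it to cheat.
        print("Clock rollback detected")
    }

    defaults.set(now, forKey: "lastLaunchDate")
    defaults.set(zone, forKey: "lastLaunchZone")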

Send UITouches over a network

I am building an app which allows a user to draw on the screen. I'd like to add network capability so that user A can draw on user B's screen. My current plan is to build a system where I have my own UserOrNetworkTouch object which can be created based on either a real UITouch, or a message which comes over the network, and base all of the drawing in the app off of UserOrNetworkTouch events, rather than UITouch events.
Another thing I'll want to use this system for is recording touches, so a user will be able to press "record", and then play back their drawing at a later time.
I'd like to make sure I'm not reinventing the wheel here. Are there any libraries available which will handle some or all of this for me?
You probably wouldn't send the UITouch objects themselves over the network (although you could if you wanted). I might package the touch positions into a struct of some kind and send just that, to decrease the amount of traffic you're sending. If you need the entire UITouch object and all of its data then sure, send the object to your server.
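For example, the "struct of some kind" could be a small Codable value like the (hypothetical) TouchSample below, serialized to JSON or any compact format before it reaches the network layer:

    import UIKit

    struct TouchSample: Codable {
        let x: CGFloat
        let y: CGFloat
        let phase: Int               // raw UITouch.Phase value
        let timestamp: TimeInterval
    }

    func sample(from touch: UITouch, in view: UIView) -> TouchSample {
        let p = touch.location(in: view)
        return TouchSample(x: p.x, y: p.y,
                           phase: touch.phase.rawValue,
                           timestamp: touch.timestamp)
    }

    // Encode before handing the bytes to whatever transport you use.
    func encode(_ sample: TouchSample) throws -> Data {
        return try JSONEncoder().encode(sample)
    }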
You could use the CFNetwork framework to send data to a server from your client application. If you do, you should really try to use IPv6.
Apple has sample code here for working with CFNetwork streams.
If you want to record the touch events, just use an NSArray, or an NSDictionary if you want to store, say, each touch along with a timestamp for when it occurred.
Then just add each touch to the array or dictionary as the user makes them.
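Building on the (hypothetical) TouchSample from the earlier sketch, recording and playback could look roughly like this:

    import Foundation

    var recorded: [TouchSample] = []

    func record(_ sample: TouchSample) {
        recorded.append(sample)   // called as each touch arrives
    }

    func replay(_ samples: [TouchSample], draw: @escaping (TouchSample) -> Void) {
        guard let start = samples.first?.timestamp else { return }
        for sample in samples {
            let delay = sample.timestamp - start
            DispatchQueue.main.asyncAfter(deadline: .now() + delay) {
                draw(sample)   // re-issue the touch at its original offset
            }
        }
    }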
Update: I wouldn't waste your time with Apple's WiTap sample code. I've read through it before, and there is a LOT of code in it that is just confusing and irrelevant if you want a simple client/server app up and running quickly. It will more than likely be way too confusing if you haven't done any network programming before.
I would get the network transfers working first; then, if you like, you can refer to WiTap for the Bonjour networking part so you can do auto-discovery of the client and server. But add Bonjour support only after you have data streams working.
A good place to start would be Apple's WiTap sample. It sets up a game over Bonjour and sends taps back and forth.
Also look at GameKit which'll make some of the networking even simpler.
A SQLite DB would be a great place to record events. Search for the 'fmdb' SQLite wrapper for a nice Objective-C wrapper.