Waveform representation of any audio in iPhone

I have to draw a waveform for an audio file (CMK.mp3) in my application.
For this I have tried this solution.
That solution uses AVAssetReader, which takes too much time to display the waveform.
Can anyone please help? Is there any other way to display the waveform more quickly?
Thanks

AVAssetReader is the only way to read an AVAsset, so there is no way around that. You will want to tune the code so it processes the asset without incurring unwanted overhead. I have not tried that code yet, but I intend to use it to build a sample project to share on GitHub once I have the time, hopefully soon.
My approach to tune it will be to do the following:
Eliminate all Objective-C method calls and use C only instead
Move all work to a secondary queue off the main queue and use a block to call back once finished
One obstacle with rendering a waveform is that you cannot have more than one AVAssetReader running at a time, at least as of the last time I tried (it may have changed with iOS 6). A new reader cancels the other, which interrupts playback, so you need to do your work in sequence. I do that with queues.
In an audio app that I built, each CMSampleBufferRef is read into a CMBufferQueueRef, which can hold multiple sample buffers (see copyNextSampleBuffer on AVAssetReaderOutput). You can configure the queue to provide you with enough time to process a waveform after an AVAssetReader finishes reading an asset, so that the current playback does not exhaust the contents of the CMBufferQueueRef before you start reading more buffers into it for the next track. That will be my approach when I attempt it. I just have to be careful that I do not use too much memory by making the buffer too big, or make it so big that it causes issues with playback. I do not yet know how long it will take to process the waveform, so I will test it on my older iPods and iPhone 4 before I try it on my iPhone 5 to see if they all perform well.
Be sure to stay as close to C as possible. Calls to Objective-C resources during this processing will incur potential thread switching and other run-time overhead costs which are significant enough to be noticeable. You will want to avoid that. What I may do is set up Key-Value Observing (KVO) to trigger the AVAssetReader to start the next task quickly so that I can maintain gapless playback between tracks.
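As a rough illustration of that structure, here is a minimal sketch that reads the asset's audio track off the main queue with AVAssetReader, reduces each buffer to a peak value, and calls back on the main queue with values you can draw. It is written in Swift for readability (the advice above is to stay in C for the inner loop), and the function name and bar-count parameter are placeholders, not code from the linked solution.

import AVFoundation

// Hedged sketch: read PCM off the main queue, reduce it to peaks for drawing.
func loadWaveform(from url: URL, barCount: Int, completion: @escaping ([Float]) -> Void) {
    DispatchQueue.global(qos: .userInitiated).async {
        let asset = AVURLAsset(url: url)
        guard let track = asset.tracks(withMediaType: .audio).first,
              let reader = try? AVAssetReader(asset: asset) else {
            DispatchQueue.main.async { completion([]) }
            return
        }
        // Ask for interleaved 16-bit PCM so the buffers are trivial to scan.
        let settings: [String: Any] = [
            AVFormatIDKey: kAudioFormatLinearPCM,
            AVLinearPCMBitDepthKey: 16,
            AVLinearPCMIsFloatKey: false,
            AVLinearPCMIsBigEndianKey: false,
            AVLinearPCMIsNonInterleaved: false
        ]
        let output = AVAssetReaderTrackOutput(track: track, outputSettings: settings)
        reader.add(output)
        reader.startReading()

        var peaks = [Float]()
        while reader.status == .reading,
              let sample = output.copyNextSampleBuffer(),
              let block = CMSampleBufferGetDataBuffer(sample) {
            var length = 0
            var pointer: UnsafeMutablePointer<CChar>?
            guard CMBlockBufferGetDataPointer(block, atOffset: 0, lengthAtOffsetOut: nil,
                                              totalLengthOut: &length,
                                              dataPointerOut: &pointer) == kCMBlockBufferNoErr,
                  let base = pointer else { continue }
            let count = length / MemoryLayout<Int16>.size
            base.withMemoryRebound(to: Int16.self, capacity: count) { samples in
                var peak: Int32 = 0
                for i in 0..<count { peak = max(peak, abs(Int32(samples[i]))) }
                peaks.append(Float(peak) / Float(Int16.max))
            }
        }
        // Bucket the per-buffer peaks down to the number of bars to draw.
        let bucket = max(1, peaks.count / max(1, barCount))
        let bars = stride(from: 0, to: peaks.count, by: bucket).map { start in
            peaks[start..<min(start + bucket, peaks.count)].max() ?? 0
        }
        DispatchQueue.main.async { completion(bars) }
    }
}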
Once I start my audio experiments I will put them on GitHub. I've created a repository where I will do this work. If you are interested you can "watch" that repo so you will know when I start committing updates to it.
https://github.com/brennanMKE/Audio

Related

Understanding the role of time in an AVCaptureSession regarding CMSampleBuffers

I recently started programming in Swift as I am trying to work out an iOS camera app idea I've had. The main goal of the project is to save the prior 10 seconds of video before the record button is tapped. So the app is actually always capturing and storing frames, but also discarding the frames that are more than 10 seconds old if the app is not 'recording'.
My approach is to output video and audio data from the AVCaptureSession using AVCaptureVideoDataOutput() and AVCaptureAudioDataOutput() respectively. In captureOutput() I receive a CMSampleBuffer for both video and audio, which I store in different arrays. I would like those arrays to later serve as input for the AVAssetWriter.
This is the point where I'm not sure about the role of time and timing regarding the sample buffers and the capture session in general, because in order to present the sample buffers to the AVAssetWriter as an input (I believe) I need to make sure my video and audio data are the same length (duration wise) and synchronized.
I currently need to figure out at what rate the capture session is running, or how I can set that rate. Ideally I would have one audioSampleBuffer for each videoSampleBuffer, both representing the exact same duration. I don't know what realistic values are, but in the end my goal is to output 60 fps, so it would be perfect if each videoSampleBuffer contained 1 frame and each audioSampleBuffer represented 1/60th of a second. I could then easily append the newest sample buffers to the arrays and drop the oldest.
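As an illustration of that "append the newest, drop the oldest" step, here is a minimal sketch keyed off each buffer's own presentation timestamp rather than an assumed frame rate. The class name and 10-second window are placeholders, and in practice buffers delivered by a capture session usually need to be copied before being retained like this, or the session's buffer pool will run dry.

import AVFoundation

// Illustrative only: keep the last 10 seconds of sample buffers, judged by
// the timestamps the capture session stamps onto each CMSampleBuffer.
final class RollingSampleBuffer {
    private var buffers: [CMSampleBuffer] = []
    private let window = CMTime(seconds: 10, preferredTimescale: 600)

    func append(_ sampleBuffer: CMSampleBuffer) {
        buffers.append(sampleBuffer)
        let newest = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
        let cutoff = CMTimeSubtract(newest, window)
        // Each buffer carries its own start time and duration, so the trim does
        // not depend on video and audio arriving at the same rate.
        buffers.removeAll { CMSampleBufferGetPresentationTimeStamp($0) < cutoff }
    }
}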
I've of course done some research regarding my problem, but wasn't able to find what I was looking for.
My initial thought was that I had to let the capture session run at some sort of set timescale, but I didn't see such an option in the AVFoundation documentation. I then looked into Core Media to see if there was some way to set the clock the capture session was using, but I couldn't find a way to tell the session to use a different CMClock (with properties I know), so I gave up on this route. I still wasn't sure about the internal mechanics and timing of the capture session, so I tried to find more information about it, but without much luck. I've also stumbled on the synchronizationClock property of AVCaptureSession, but I couldn't find out how to use it or find an example.
At this point my best guess is that with every step in time (represented by a timestamp) a new sample buffer is created for both video and audio, which would be a good thing. But I have a feeling this is just wishful thinking, and I would still not know what duration the buffers represent.
Could anyone help me in the right direction, helping me to understand how time works in a capture session and how to get or set the duration of sample buffers?

SiteCatalyst streaming video tracking and additional clarifications

We're attempting to track a streaming video with SiteCatalyst. The issue is that this video obviously has no end, so the s.Media module can't know how to set the seconds or milestone segment views. This results in no tracking calls except for the starting one. Could a possible solution be the use of the s.Media.monitor custom functions? Here it is explained how to use them together with the basic Media module settings. Maybe a timed deployment of the "sendRequest()" method could help...? I'll use this occasion to also ask for a brief how-to example of the Media.monitor methods, because I've just been using the basic settings until now, as below:
s.loadModule("Media");
s.Media.autoTrack = false;
s.Media.trackMilestones = "25,50";
s.Media.segmentByMilestones = true;
...
Thanks a lot
Yeah.. I really, really dislike the Media module. Video tracking is getting more and more popular with the clients, so it has become the biggest thorn in my side, because the nature of videos over the internet is a big mess with all kinds of moving parts internally that make it extremely difficult to get truly accurate tracking beyond basic "start" and "stop". (Actually I take that back.. I think mobile/SDK tracking is quickly becoming the thing I shake my angry fist at the most, but that's a different post!)
I think Adobe has made some heroic efforts to automate video tracking and it more or less works okay if you just have a regular (not flash) object or html5 tag embedded on the page but in practice, MOST of the time, sites implement their videos through 3rd party scripts (e.g. jwplayer, vimeo, youtube api) and the Media module automation basically goes down the drain on that count.
I understand that it needs to know how long a video is to know when to autopop the events, but I swear, 99% of the time in practice, the way the Media module expects things to pop in certain orders just doesn't align with how videos work in the real world. Even if you attempt to do it the "manual" way, more often than not it's still buggy, e.g. autoplay and buffering ALWAYS seem to screw up the open+play sequence that MUST happen in that order.
Basically, the Media module desperately needs to be rewritten to better handle streaming videos, and "manual" use in general. Anyways..
Two things I have done in your situation. Overall, neither one of these options is a perfect 1:1 match for normal videos with a duration, but then, streaming videos aren't really the same, so it doesn't really make sense to treat them the same.
Option #1: Use an estimated duration for your streaming video. So you said it yourself: your streaming videos have no end. Well as I mentioned, you can't calculate percent viewed unless you have a duration, pretty basic math. So, estimate a duration.
I have clients that have streaming webinars or whatever and it's true that there's technically no duration according to the player, but in reality they don't really conduct that webinar 24/7 forever. In reality it's for a set amount of time like 30 minutes or an hour or something. So, just specify the duration as that.
Yes, this will require extra custom work on your end to store/associate an estimated duration. And yes, this does have the potential for being misleading (e.g. if a webinar ends early or runs late). This option is generally good for sites that have set windows for the stream to actually be active.
Option #2: Ditch the notion of % viewed and record time consumed instead. So the overall point of the milestones is to know how much of a video was actually watched, yes? Well, who said it has to be measured by % viewed?
How about instead you just record n seconds consumed every n seconds? You can do this with an incrementor eVar and/or a counter event. (The normal video tracking actually does include a counter event, "Video Time", a.k.a. a.media.timePlayed.)
So you'd basically just pop the events/props/eVars yourself and ignore the milestone/segment reports.
Note: This option only really works if you are using the older style video tracking that has events/props/eVars assigned for it. If you are using the newer style video tracking that does not use events/props/eVars.. well, AA does not currently offer an official way to manually pop that stuff directly. It is surely possible to unofficially do so, but I have not yet reverse engineered the latest Media module to figure out how to do that. So, in this case your only option is #1.

How can I monitor an mp3 live stream to detect corruption?

Once a month the mp3 stream messes up, and the only way to tell it has messed up is by listening to it as it streams. Is there a script or program or tool I can use to monitor the live stream at a given URL and send some kind of flag when it corrupts?
Normally it plays a song or some music, but once a month, every month, at random, the stream corrupts and starts playing random chipmunk-like trash audio. Any ideas on this? I am just getting started at this with no idea at all.
Typically, this will happen when you play a track of the wrong sample rate.
Most (all that I've seen) SHOUTcast/Icecast encoders (going straight from files) will compress for MP3 just fine, but assume a fixed sample rate of whatever they are configured for. Typically this will be 44.1kHz. If you drop in a 48kHz track, or a 22.05kHz track, they will play at different speeds while causing all sorts of random issues with the stream.
The problem is easy enough to verify. Simply create a file of a different sample rate and test it. I suspect you will reproduce the problem. If that is the case, to my knowledge there is no way to detect it, since your stream isn't actually corrupt... it just sounds incorrect. You will have to scan all of your files for sample rate. FFMPEG in a script should be able to help you with that.
Now, if the problem actually is a corrupt MP3 stream, then you have problems on your encoding side. I suspect simply swapping out whatever DLL or module you're using with a recent stable version of LAME will help.
To detect a corrupt MP3 stream, your encoder must be using CRC. If you enable it, you should be able to read through the headers of each frame to find the CRC, and then run it on the audio data. In the event you get an error (or several frames with errors), you can then trigger a warning.
You can find information on the MP3 stream header here:
http://www.mp3-tech.org/programmer/frame_header.html
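To make the header-reading idea concrete, here is a hedged Swift sketch that walks raw MP3 bytes, finds frame syncs, and reports each frame's sample rate and whether the CRC protection bit is set. It does not compute the CRC itself, and the type and function names are just placeholders; treat it as a starting point against the header layout described at the link above.

import Foundation

struct MP3FrameInfo {
    let offset: Int        // byte offset of the frame header
    let sampleRate: Int    // Hz
    let hasCRC: Bool       // true when the protection bit says a 16-bit CRC follows
}

// Walk raw MP3 bytes, find 11-bit frame syncs, and report each frame's
// sample rate and CRC flag. A real scanner would skip whole frames, not
// just the 4-byte header, and would then verify the CRC when present.
func scanMP3Frames(in data: Data) -> [MP3FrameInfo] {
    // Sample-rate tables indexed by version bits: 0 = MPEG 2.5, 2 = MPEG 2, 3 = MPEG 1.
    let sampleRates: [UInt8: [Int]] = [
        0: [11025, 12000,  8000],
        2: [22050, 24000, 16000],
        3: [44100, 48000, 32000],
    ]
    let bytes = [UInt8](data)
    var frames: [MP3FrameInfo] = []
    var i = 0
    while i + 4 <= bytes.count {
        // Frame sync: 0xFF followed by the top three bits of the next byte set.
        guard bytes[i] == 0xFF, bytes[i + 1] & 0xE0 == 0xE0 else { i += 1; continue }
        let version = (bytes[i + 1] >> 3) & 0x3
        let srIndex = Int((bytes[i + 2] >> 2) & 0x3)
        guard let table = sampleRates[version], srIndex < table.count else { i += 1; continue }
        let hasCRC = (bytes[i + 1] & 0x1) == 0   // protection bit 0 = CRC present
        frames.append(MP3FrameInfo(offset: i, sampleRate: table[srIndex], hasCRC: hasCRC))
        i += 4   // simplistic: step past the header only
    }
    return frames
}

// Example check on a captured chunk of the stream: flag sample rates that
// differ from what the encoder is configured for (44.1 kHz assumed here).
// let suspect = scanMP3Frames(in: streamChunk).filter { $0.sampleRate != 44100 }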

Core Audio - CARingBuffer to read an audio file for callback

Does anyone have a good example of using CARingBuffer to buffer a large audio file and how to read it in a callback?
Should it be reading the audio file in a secondary thread? How do I pause loading the audio file until the loaded buffers have been played (how do I pre-queue the audio file)? CAPlayThrough seems close but is only streaming audio from a microphone.
Thanks!
You can find an example that uses this ring buffer if you download the example code of the book Learning Core Audio here (under the downloads tab). Jump to the chapter 8 example in a folder called CH08_AUGraphInput.
However, if you are simply reading audio from a file, then using an (extra) ring buffer seems like overkill. A ring buffer comes in handy when you have real-time (or near real-time) input and output (read chapter 8 in that book for a more detailed explanation of when a ring buffer is necessary; note that the chapter 8 example is about playing audio immediately after recording it with a mic, which isn't what you want to do).
The reason I said an extra ring buffer is that Core Audio already has an audio queue (which can be thought of as a ring buffer, or at least in your case it replaces the need for one: you populate it with data, it plays the data, then it fires a callback informing you that the data you supplied has been played). The Apple documentation offers a good explanation of this.
In your case, since you are simply reading audio from a file, you can easily control the throughput yourself. You can pause it, for example, by blocking the thread that reads data from the file.
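For example, a tiny sketch of that "pause by blocking the reader thread" idea. The class name is hypothetical; the reader loop would call waitWhilePaused() before it reads and enqueues each chunk.

import Foundation

// Hypothetical sketch: pausing is just parking the reader thread so no new
// buffers get enqueued until resume() is called.
final class ReaderPauseGate {
    private let condition = NSCondition()
    private var paused = false

    // Called from the reader thread between chunks.
    func waitWhilePaused() {
        condition.lock()
        while paused { condition.wait() }
        condition.unlock()
    }

    func pause() {
        condition.lock(); paused = true; condition.unlock()
    }

    func resume() {
        condition.lock(); paused = false; condition.signal(); condition.unlock()
    }
}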
For a simple example of what I'm talking about, see this example I created on github. For a more advanced example, see Matt Gallagher's famous example.
Generally for audio playback anything that can block or take an unbounded amount of time (in particular file or disk IO) should be done in a secondary thread. So you want to read the audio file's data in a producer thread, and consume the data in your IOProc or RemoteIO callback.
Synchronization becomes an issue with multiple threads, but if you have only one reader and one writer generally it isn't too hard. In fact, CARingBuffer is thread safe for this case.
The general flow should look like:
From the main thread:
1. Create the producer thread
2. Tell it which file to process
From the producer thread:
1. Open the specified file
2. Fill the empty space in the ring buffer with audio data
3. Wait until signaled or a timeout happens, then go back to step 2
In your IOProc/callback:
1. Read data from the ring buffer
2. Signal the producer that more data is needed
Posting code to do this here would be much too long to read, but here are a few pointers to get you started. None of these are for the iPhone, but the principles are the same.
https://github.com/sbooth/SFBAudioEngine/blob/master/Player/AudioPlayer.cpp
http://www.snoize.com/
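Still, a very condensed sketch of just the thread structure above may help orient you. The ring itself is assumed to be CARingBuffer (or any single-reader/single-writer FIFO), and fillRingFromFile()/fetchFromRing() are hypothetical placeholders for your own file-reading and render-callback code.

import Foundation

// Condensed sketch of the producer/consumer flow described above.
final class FilePlayerSketch {
    private let needData = DispatchSemaphore(value: 1)
    private var finished = false

    // Main thread: create the producer thread and tell it which file to read.
    func start(fileURL: URL) {
        Thread.detachNewThread { [weak self] in
            self?.producerLoop(fileURL: fileURL)
        }
    }

    // Producer thread: open the file, keep the ring topped up, and sleep until
    // the render callback signals that more data is needed (or a timeout fires).
    private func producerLoop(fileURL: URL) {
        // Open the specified file here (ExtAudioFile, AVAudioFile, ...).
        while !finished {
            // fillRingFromFile() -- write as much as the ring currently has free space for.
            _ = needData.wait(timeout: .now() + 0.5)
        }
    }

    // Conceptually your IOProc / RemoteIO render callback: pull from the ring,
    // then poke the producer so it refills while playback continues.
    func renderCallback(frameCount: Int) {
        // fetchFromRing(frameCount) -- copy samples into the output buffers.
        needData.signal()
    }
}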

Saving painting app data to cope with interruptions on iPhone?

I'm making an iPhone bitmap painting app. I want to support about five 1024x768 layers (~15Mb of data). My problem is that I do not know what strategy to use for saving the user's document to cope with my app being interrupted.
My document file format at the moment is to save each layer as a .png file and then save a short text file that contains the layer metadata. My problem is that, if the app is interrupted by something like a phone call, I'm unlikely going to have enough time for my app to be able to save all the data to disk as saving all the .png files can take ~10 seconds. What options do I have?
I've considered adding an autosave feature that would be called every five minutes. In the worst case, the user will lose a few minutes of work if the app fails to save on interruption (which isn't ideal).
An idea I've considered is to keep track of which layers have changed since the last autosave and only update the layer files that need updating. This means that, when interrupted, my app might only need to save one layer in the typical case. However, the worst case is still having to save several layers.
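A minimal sketch of that dirty-layer bookkeeping might look like the following (the class and method names are illustrative, and error handling is omitted):

import UIKit

// Illustrative only: track which layers changed since the last save and
// re-encode just those to PNG on autosave or interruption.
final class LayerStore {
    private var dirtyLayers = Set<Int>()   // indices of layers edited since the last save

    func markEdited(layerIndex: Int) {
        dirtyLayers.insert(layerIndex)
    }

    func saveDirtyLayers(from layers: [UIImage], to directory: URL) {
        for index in dirtyLayers {
            let url = directory.appendingPathComponent("layer\(index).png")
            if let data = layers[index].pngData() {
                try? data.write(to: url, options: .atomic)
            }
        }
        dirtyLayers.removeAll()
        // The small metadata text file can simply be rewritten on every save.
    }
}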
I'm not sure what to do. On a practical note, I've noticed many popular iPhone painting apps with good reviews will lose the current document progress if interrupted with a phone call. I'm beginning to doubt there is a way to solve this particular problem and that I might just have to go with something less than ideal.
The iOS 4 SDK provides support for long-running background tasks, which would be the perfect place to save your layers. From the documentation:
You can use task completion to ensure that important but potentially long-running operations do not end abruptly when the user leaves the application. For example, you might use this technique to save user data to disk or finish downloading an important file from a network server.
Any time before it is suspended, an application can call the beginBackgroundTaskWithExpirationHandler: method to ask the system for extra time to complete some long-running task in the background. If the request is granted, and if the application goes into the background while the task is in progress, the system lets the application run for an additional amount of time instead of suspending it. (The backgroundTimeRemaining property of the UIApplication object contains the amount of time the application has to run.)
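For reference, a hedged sketch of wiring that up, using the modern Swift names for the same APIs (the quote above is from the iOS 4-era documentation); saveAllDirtyLayers() is a placeholder for whatever your save routine is:

import UIKit

// In your app delegate (or wherever you observe the background notification):
func applicationDidEnterBackground(_ application: UIApplication) {
    var taskID: UIBackgroundTaskIdentifier = .invalid
    taskID = application.beginBackgroundTask {
        // Expiration handler: time ran out, so end the task before the app is killed.
        application.endBackgroundTask(taskID)
        taskID = .invalid
    }

    DispatchQueue.global(qos: .utility).async {
        saveAllDirtyLayers()                    // placeholder for your PNG/metadata save
        application.endBackgroundTask(taskID)   // always end the task when the save finishes
        taskID = .invalid
    }
}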
I'm not sure if this is feasible (you don't state how the user interacts with the layers, or indeed whether this interaction is transparent from their perspective), but as a suggestion, why not simply save the "active" layer out (via a background thread) when the user switches layers? You'd then only need to save a single layer when your app is backgrounded.