Looped audio samples on the iPhone

I am trying to make a musical iPhone application and I am having some problems playing looped samples.
I have read this question:
audio-on-the-iphone
and several other posts and blog entries on the web about the "RemoteIO"/AudioUnits framework, but without success.
I have been able to build a sample application that plays a finite sound with a predefined duration (I am using a playbackCallback), but I need the sound to start playing when the user touches the screen and stop playing when the user lifts their finger.
Any ideas?
Thanks in advance.

Assuming your code is otherwise correct, you're probably making one (or more) of these mistakes:
Stopping reading/writing at the wrong times (you're likely writing a power-of-two number of samples per render call, which rarely lines up with your loop boundaries)
Not providing a fade in/fade out (start with a 10 ms fade and adjust as desired)
Not stopping the write on a zero crossing
Not resetting the read position to 0 when the user lifts their finger, so playback resumes in the middle of the sample
Samples not properly trimmed to zero crossings at the start, end, and/or loop positions
Not resetting internal effects, filters, or converters
You probably won't need to fix all of these to avoid clicks.
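To make the touch-controlled loop concrete, here is a minimal sketch of a looping RemoteIO render callback under those rules. It assumes a mono 16-bit LPCM stream format, and the globals (gSampleBuffer, gIsTouching, and so on) are hypothetical state your touch handlers would maintain, not code from any existing project:

    #import <Foundation/Foundation.h>
    #import <AudioUnit/AudioUnit.h>

    static SInt16 *gSampleBuffer;          // loop sample, assumed loaded and trimmed to zero crossings
    static UInt32  gSampleCount;           // number of frames in gSampleBuffer (must be > 0)
    static UInt32  gReadPos = 0;
    static volatile BOOL gIsTouching = NO; // set in touchesBegan/touchesEnded

    static OSStatus playbackCallback(void *inRefCon,
                                     AudioUnitRenderActionFlags *ioActionFlags,
                                     const AudioTimeStamp *inTimeStamp,
                                     UInt32 inBusNumber,
                                     UInt32 inNumberFrames,
                                     AudioBufferList *ioData)
    {
        SInt16 *out = (SInt16 *)ioData->mBuffers[0].mData;
        for (UInt32 i = 0; i < inNumberFrames; i++) {
            if (gIsTouching) {
                out[i] = gSampleBuffer[gReadPos];
                gReadPos = (gReadPos + 1) % gSampleCount; // wrapping = seamless loop
            } else {
                out[i] = 0;   // silence while the finger is up
                gReadPos = 0; // reset so the next touch starts from the top
            }
        }
        // A short (~10 ms) volume ramp around the gIsTouching transitions,
        // instead of this hard cut, removes the remaining clicks.
        return noErr;
    }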

Related

Displaying information over video

I am building an app for iOS (iPhone and iPad) where the user is able to watch video clips of therapeutic exercises. However, I want to overlay some dynamically generated information (the number of reps and sets assigned to them by their physio) either over or next to the playing video. The reps and sets will act like a counter, counting down the amount of work they have left before the next exercise starts playing.
Here is a mock-up of what I would like to achieve, if possible: a video with a dynamic information overlay.
So while the video is playing, "Hold" counts up to a specified number of seconds. When the time limit is reached, "Sets" is increased by 1 and "Hold" starts from 0 again. When the sets are all completed, "Reps" increases by 1 and "Hold" and "Sets" start back at 0, and so on.
Can the video playing and all this information be displayed simultaneously on the iPhone/iPad?
I have looked at a number of video-hosting solutions that might have this feature built in, but I couldn't find anything that suits my needs.
Is this possible at all? I have never seen anything like this done before.
Could a solution be to use an iframe to display the video and then have all the other information I want on the screen separate from it? Just a thought...
Yes, this is possible; have a look at this example project:
http://www.musicalgeometry.com/?p=1273
This is for a camera overlay, but it also works for existing videos.
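For a concrete starting point, here is a minimal sketch of the same idea using stock AVFoundation; the file name, frame values, and label text are placeholder assumptions, not part of the linked project:

    #import <UIKit/UIKit.h>
    #import <AVFoundation/AVFoundation.h>

    // Sketch: play a clip with AVPlayerLayer and float a counter label above it.
    - (void)showVideoWithOverlay {
        NSURL *url = [[NSBundle mainBundle] URLForResource:@"exercise"
                                             withExtension:@"mp4"]; // placeholder
        AVPlayer *player = [AVPlayer playerWithURL:url];
        AVPlayerLayer *playerLayer = [AVPlayerLayer playerLayerWithPlayer:player];
        playerLayer.frame = self.view.bounds;
        [self.view.layer addSublayer:playerLayer];

        // Any UIView added afterwards is composited on top of the video,
        // so the counters are ordinary labels you update from a timer.
        UILabel *counter = [[UILabel alloc] initWithFrame:CGRectMake(20, 20, 280, 40)];
        counter.textColor = [UIColor whiteColor];
        counter.backgroundColor = [UIColor clearColor];
        counter.text = @"Hold: 0   Sets: 0   Reps: 0";
        [self.view addSubview:counter];

        [player play];
    }

An NSTimer firing once per second can then update the label and roll Hold/Sets/Reps over exactly as described in the question.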

Reduced quality OpenGL ES screenshots (iPhone)

I'm currently using this method from Apple to take screenshots of my OpenGL ES iPhone game. The screenshots look great. However, taking a screenshot causes a small stutter in the gameplay (which otherwise runs smoothly at 60 fps). How can I modify the method from Apple to take lower-quality screenshots, and thereby eliminate the stutter that taking a screenshot causes?
Edit #1: The end goal is to create a video of the gameplay using AVAssetWriter. Perhaps there's a more efficient way to generate the CVPixelBuffers referenced in this SO post.
What is the purpose of the recording?
If you want to replay a sequence on the device, you can look into saving the object positions, etc., instead, and redrawing the sequence in 3D. This also makes it possible to replay sequences from other view positions.
If you want to show the gameplay on, for example, YouTube, you can look into recording the gameplay with another device/camera, or recording gameplay running in the simulator using screen-capture software such as ScreenFlow.
The Apple method uses glReadPixels(), which just pulls all the data across from the display buffer, and probably triggers sync barriers, etc., between the GPU and CPU. You can't make that part faster or lower resolution.
Are you doing this to create a one-off video, or do you want the user to be able to trigger this behavior in the production code? If the former, you could use all sorts of trickery to speed things up: render everything at a smaller size; don't present at all, and instead capture frames driven by a recording of the input data fed into the game; or go even further and run the whole simulation at half speed to capture all the frames.
I'm less helpful if you need an actual in-game function for this. Perhaps someone else will be.
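That said, if you do go the AVAssetWriter route mentioned in the edit, a rough sketch of the frame-capture step might look like this. It assumes you render (or blit) into a reduced-size framebuffer of w x h, that the writer's AVAssetWriterInputPixelBufferAdaptor is already configured for that size, and that the buffer's bytes-per-row is w*4; none of this comes from the Apple sample:

    #import <AVFoundation/AVFoundation.h>
    #import <OpenGLES/ES1/gl.h>

    - (void)captureFrameAtTime:(CMTime)time width:(int)w height:(int)h
                       adaptor:(AVAssetWriterInputPixelBufferAdaptor *)adaptor {
        CVPixelBufferRef pixelBuffer = NULL;
        CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault,
                                           adaptor.pixelBufferPool, &pixelBuffer);
        CVPixelBufferLockBaseAddress(pixelBuffer, 0);
        // Read straight into the pixel buffer, skipping an intermediate copy.
        // The encoder prefers BGRA; if the adaptor uses kCVPixelFormatType_32BGRA,
        // read with GL_BGRA_EXT (EXT_read_format_bgra) instead of GL_RGBA.
        glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE,
                     CVPixelBufferGetBaseAddress(pixelBuffer));
        CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
        [adaptor appendPixelBuffer:pixelBuffer withPresentationTime:time];
        CVPixelBufferRelease(pixelBuffer);
    }

Reading from a smaller framebuffer shrinks both the glReadPixels stall and the encode cost, which is usually where the stutter comes from.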
If all else fails, get one of these:
http://store.apple.com/us/product/MC748ZM/A
Then convert the composite video output to digital through some sort of external capture device. I did this when converting VHS movies to DVD a long time ago.

iOS frame by frame animation, by script

There are a few SO questions regarding frame-by-frame animation (such as "frame by frame animation" and other similar questions), but I feel mine is different, so here goes.
This is partially a design question from someone with very little iOS experience.
I'm not sure "frame by frame" is the correct description of what I want to do, so let me describe it. Basically, I have a "script" for an animated movie and I'd like to play this script.
This script is a JSON file which describes a set of scenes. In each scene there are a few elements, such as a background image, a list of actors with their positions, and a background sound clip. Further, for each actor and background there's an image file that represents it. (It's a bit more complex: each actor also has a "behavior," such as how it blinks and how it talks.) So my job is to follow the given script, and for every frame place each actor at its designated position, draw the correct background, and play the sound file.
The movie may be paused, or scrubbed forward or backward, similar to YouTube's movie-player functionality.
Most of the questions I've seen that refer to frame-by-frame animation have requirements different from mine (I'll list some more requirements later). They usually suggest using the animationImages property of a UIImageView. This is fine for animating a button or a checkbox, but they all assume there's a short, predefined set of images to be played.
If I were to go with animationImages, I'd have to pre-create all the images up front, and my guess is that it won't scale (think 30 fps for one minute: you get 60*30 = 1800 images; plus, the scrub and pause/play abilities seem challenging in this case).
So I'm looking for the right way to do this. My instinct, and I'm learning more as I go, is that there are probably three or four main ways to achieve this.
1. Using Core Animation, defining "keypoints" and animating the transitions between them. For example, if an actor needs to be at point A at time t1 and at point B at time t2, then all I need to do is animate what's in between. I've done something similar in ActionScript in the past, and it was nice but particularly challenging when it came to implementing the scrub action and keeping everything in sync, so I'm not a big fan of this approach. Imagine having to implement a pause in the middle of an animation, or a scrub to the middle of an animation. It's doable, but not pleasant.
2. Setting a timer for, say, 30 ticks a second, and on every tick consulting the model (the model being the script JSON file along with the descriptions of the actors and backgrounds) and drawing whatever needs to be drawn at that time, using Quartz 2D's API and drawRect (a sketch of this approach appears after this list). This is probably the simplest approach, but I don't have enough experience to tell how well it would work on different devices, CPU-wise; it all depends on the amount of calculation I need to do on each tick and the amount of effort it takes iOS to draw everything. I don't have a hunch.
3. Similar to 2, but using OpenGL to draw. I prefer 2 because the API is easier, but perhaps OpenGL is more suitable resource-wise.
4. Using a game framework such as cocos2d, which I've never used before but which seems to solve more or less similar problems. They seem to have a nice API, so I'd be happy if I could find all my requirements answered by it.
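As promised in option 2 above, here is a minimal sketch of the timer-driven approach. MovieView and Actor are hypothetical; the model is reduced to a background image plus a block that returns the actors for a given time:

    #import <UIKit/UIKit.h>
    #import <QuartzCore/QuartzCore.h>

    // Hypothetical actor object standing in for the JSON script's entries.
    @interface Actor : NSObject
    @property (nonatomic, strong) UIImage *image;
    @property (nonatomic, assign) CGPoint position;
    @end
    @implementation Actor
    @end

    @interface MovieView : UIView
    @property (nonatomic, assign) NSTimeInterval currentTime; // set this to scrub
    @property (nonatomic, strong) UIImage *backgroundImage;
    @property (nonatomic, copy) NSArray *(^actorsAtTime)(NSTimeInterval t);
    @end

    @implementation MovieView {
        CADisplayLink *_link;
    }

    - (void)play {
        _link = [CADisplayLink displayLinkWithTarget:self selector:@selector(tick:)];
        _link.frameInterval = 2; // 60 Hz / 2 = 30 ticks per second
        [_link addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSRunLoopCommonModes];
    }

    - (void)pause {
        [_link invalidate];
        _link = nil; // currentTime keeps its value, so resuming is trivial
    }

    - (void)tick:(CADisplayLink *)link {
        self.currentTime += link.duration * link.frameInterval;
        [self setNeedsDisplay]; // triggers drawRect: below
    }

    - (void)drawRect:(CGRect)rect {
        [self.backgroundImage drawInRect:self.bounds];
        if (self.actorsAtTime == nil) return;
        for (Actor *a in self.actorsAtTime(self.currentTime)) {
            [a.image drawAtPoint:a.position];
        }
    }
    @end

Because every frame is drawn purely as a function of currentTime, pause and scrub fall out for free, and the same drawing code could render frames offline for the export requirement.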
On top of the requirements I've just described (play a movie given its "script" file and a description of the actors, backgrounds, and sounds), there's another set of requirements:
The movie needs to be played in full screen mode or partial screen mode (where the rest of the screen is dedicated to other controls)
I'm starting with the iPhone, but naturally the iPad should follow.
I'd like to be able to create a thumbnail of this movie for local use on the phone (to display in a gallery in my application). The thumbnail may just be the first frame of the movie.
I want to be able to "export" the result as a movie, something that could easily be uploaded to YouTube or Facebook.
So the big question here is whether any of implementations 1-4 I have in mind (or others you might suggest) can somehow export such a movie.
If all four fail on the movie-export task, then I have an alternative in mind: use a server running ffmpeg that accepts a bundle of all the movie images (I'd have to draw them on the phone and upload them to the server in sequence), and have the server compile all the images with their soundtrack into a single movie.
Obviously, to keep things simple, I'd prefer to do this server-less, i.e. be able to export the movie from the iPhone, but if that's too much to ask then the last requirement would be to at least be able to export the set of all images (the keyframes of the movie) so I can bundle them and upload them to a server.
The movie is supposed to be one or two minutes long. I hope the question wasn't too long and that it's clear...
Thanks!
Well-written question. For your video-export needs, check out AVFoundation (available as of iOS 4). If I were going to implement this, I'd try #1 or #4. I think #1 might be the quickest to just try out, but that's probably because I don't have any experience with cocos2d. I think you will be able to pause and scrub Core Animation: check out the CAMediaTiming protocol it adopts.
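For reference, pausing and resuming a layer's animations with the CAMediaTiming properties looks roughly like this (this is the technique from Apple's Technical Q&A QA1673):

    #import <QuartzCore/QuartzCore.h>

    static void pauseLayer(CALayer *layer) {
        CFTimeInterval pausedTime = [layer convertTime:CACurrentMediaTime() fromLayer:nil];
        layer.speed = 0.0;             // freezes every animation on the layer...
        layer.timeOffset = pausedTime; // ...at the current point in its timeline
    }

    static void resumeLayer(CALayer *layer) {
        CFTimeInterval pausedTime = layer.timeOffset;
        layer.speed = 1.0;
        layer.timeOffset = 0.0;
        layer.beginTime = 0.0;
        CFTimeInterval timeSincePause =
            [layer convertTime:CACurrentMediaTime() fromLayer:nil] - pausedTime;
        layer.beginTime = timeSincePause;
    }

Scrubbing is the same trick: keep speed at 0 and drive timeOffset from your scrubber control.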
Ran, you do have a number of options. You are not going to find a complete ready-made solution, but it is possible to use existing libraries to skip a bunch of implementation and performance issues. You can of course try to build the whole thing in OpenGL, but my advice is to go with another approach. What I suggest is that you render the entire "video" frame by frame on the device based on your JSON settings. That basically comes down to setting up your scene elements and then determining the position of each element for times [0, 1, 2, ...], where each number indicates a frame at some framerate (15, 20, or 24 FPS would be more than enough).

First off, please have a look at my library for non-trivial iOS animations; in it you will find a class named AVOfflineComposition that does the "comp items and save to a file on disk" step. Obviously, this class does not do everything you need, but it is a good starting point for the basic logic of creating a comp of N elements and writing the results out to a video file. The point of creating a comp is that all of your code that reads settings and places objects at specific spots in the comp can run in an offline mode, and the result you get at the end is a video file. Compare this to all the details involved in keeping all these elements in memory and then moving forward more quickly or slowly depending on how fast everything is running.
The next step will be to create one audio file that is the length of the "movie" of all the comped frames, and have it include any sounds at specific times. This basically means mixing the audio at runtime and saving the result to an output file, so that the result is easy to play with AVAudioPlayer. You can have a look at some very simple PCM mixer code that I wrote for this type of thing. But you might want to consider a more complete audio engine, like theamazingaudioengine.
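At its core, mixing PCM is just summing samples with clipping; a minimal sketch for 16-bit buffers (not the linked mixer code itself):

    #include <stddef.h>
    #include <stdint.h>

    // Mix two 16-bit PCM buffers into out, clamping to avoid wraparound noise.
    static void mix_pcm16(const int16_t *a, const int16_t *b,
                          int16_t *out, size_t count) {
        for (size_t i = 0; i < count; i++) {
            int32_t sum = (int32_t)a[i] + (int32_t)b[i];
            if (sum >  32767) sum =  32767;
            if (sum < -32768) sum = -32768;
            out[i] = (int16_t)sum;
        }
    }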
Once you have an audio file and a movie file, they can be played together and kept in sync quite easily using the AVAnimatorMedia class. Take a look at this AVSync example for source code that shows tightly synced playback of a movie and its audio.
Your last requirement can be implemented with the AVAssetWriterConvertFromMaxvid class; it implements the logic to read a .mvid movie file and write it as an H.264-encoded video using the H.264 encoder hardware on the iPhone or iPad. With this code, you will not need to write an ffmpeg-based server module. Besides, that would not work anyway, because it would take far too long to upload all the uncompressed video to your server. You need to compress the video to H.264 before it can be uploaded or emailed from the app.

Seamless dynamic audio loop on iPhone

OK, so I am trying to seamlessly loop three sound files together, with the second file being looped against itself n times. Let's assume I can get them to loop seamlessly in another program by butting them together. However, when I use the audioPlayerDidFinishPlaying delegate method of AVAudioPlayer, there is a slight delay at the splice (even when using the prepareToPlay method). The other bit of complexity is that the middle sound needs to continue looping n times, i.e. for as long as touchesEnded has not been called, at which point the program should play the third and final clip to end the sound "playlist."
As an example, think of an audience giving applause after a show. Sound file A would contain the initial swell of applause, sound file B would contain a loopable sample of the crowd cheering at a continuous level until the user lifts off the button, at which time the program should kick into the third sound file, which would be the applause of the crowd tailing off.
So my initial attempt using AVAudioPlayer seemed not quick enough to loop back to back without a delay, and I need another, quicker way to do this. Suggestions?
For the looping, check out the numberOfLoops property of AVAudioPlayer. Works better than trying to handle the loop logic yourself using the player's delegate.
As for fading smoothly from the looping sample to a 'tail' sample, that's going to be difficult to pull off with AVAudioPlayer. Probably your best bet is to start another player with the trailing sound file and use a timer to rapidly cross-fade between the two using the volume property of each.
If that's not satisfactory (and I suspect it may not be), I don't think you'll have much choice but to drop down to the Audio Queue API to pull this off.
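Before dropping all the way down, note that AVAudioPlayer's playAtTime: (iOS 4+) can schedule the loop sample-accurately against the device clock. A sketch of the swell/loop/tail arrangement; the three player properties are hypothetical, created and prepared elsewhere:

    #import <AVFoundation/AVFoundation.h>

    - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
        self.loopPlayer.numberOfLoops = -1;  // loop until explicitly stopped
        [self.loopPlayer prepareToPlay];
        [self.swellPlayer play];             // initial swell of applause
        // Schedule the loop to begin exactly when the swell ends:
        NSTimeInterval start = self.swellPlayer.deviceCurrentTime
                             + self.swellPlayer.duration;
        [self.loopPlayer playAtTime:start];
    }

    - (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
        // In practice, ramp loopPlayer.volume down over ~50 ms with a timer
        // before stopping it, to cross-fade into the tail without a click.
        [self.loopPlayer stop];
        [self.tailPlayer play];              // crowd tails off
    }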

Motion detection using iPhone

I saw at least six apps in the App Store that take photos when they detect motion (i.e., a kind of spy tool). Does anybody know the general way to do such a thing using the iPhone SDK?
I guess these apps take a photo every X seconds and compare the current image with the previous one to determine whether there is any difference (read: "motion"). Any better ideas?
Thank you!
You could probably also use the microphone to detect noise. That's actually how many security-system motion detectors work, though they listen for ultrasonic sound waves. The success of this greatly depends on the iPhone mic's sensitivity and what sort of API access you have to the signal. If the mic isn't sensitive enough, listening for noise in the regular human-hearing range might be good enough for your needs (although this isn't "true" motion detection).
As for images: look into some sort of edit-distance algorithm, like string edit distance but for images. Take a picture every X amount of time and compare it to the previous image taken. If the images are too different (the edit distance is too big), the alarm sounds. This accounts for slow changes in daylight, and it will probably work better than taking a single reference image at the beginning of the surveillance period and comparing every later image to that.
If you combine these two methods (image and sound), it may get you what you need.
You could have the phone detect light changes using the ambient-light sensor at the top front of the phone. I just don't know how you would access that part of the phone.
I think you've about got it figured out: the app probably keeps images where the delta between image B and image A is over some predefined threshold.
You'd have to find an image library written in Objective-C in order to do the analysis.
I have built this kind of application: I wrote a library for Delphi 10 years ago, and the analysis is the same.
The point is to divide the whole screen into a matrix of cells, e.g. 25x25, and then compute an average color for each cell. After that, compare the R, G, B (or H, S, V) components of each cell's average color from one picture to the next; if the difference is more than a set threshold, you have motion.
In my application I use a fragment shader to show the motion in real time. Any questions, feel free to ask ;)
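For anyone who wants to try it, here is a minimal CPU-side sketch of that cell-averaging comparison on two same-sized RGBA bitmaps (extracting the pixel bytes from a captured UIImage is left out; the grid size and threshold are tuning knobs):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdlib.h>

    // Split both frames into a grid x grid matrix, average each cell's RGB,
    // and report motion when any cell's average shifts beyond the threshold.
    static bool framesDiffer(const uint8_t *frameA, const uint8_t *frameB,
                             int width, int height, int grid, int threshold) {
        int cellW = width / grid, cellH = height / grid;
        for (int gy = 0; gy < grid; gy++) {
            for (int gx = 0; gx < grid; gx++) {
                long sumA[3] = {0, 0, 0}, sumB[3] = {0, 0, 0};
                for (int y = gy * cellH; y < (gy + 1) * cellH; y++) {
                    for (int x = gx * cellW; x < (gx + 1) * cellW; x++) {
                        const uint8_t *pa = frameA + 4 * (y * width + x);
                        const uint8_t *pb = frameB + 4 * (y * width + x);
                        for (int c = 0; c < 3; c++) { // R, G, B channels
                            sumA[c] += pa[c];
                            sumB[c] += pb[c];
                        }
                    }
                }
                long pixels = (long)cellW * cellH;
                for (int c = 0; c < 3; c++) {
                    if (labs(sumA[c] / pixels - sumB[c] / pixels) > threshold)
                        return true; // this cell changed enough: motion
                }
            }
        }
        return false;
    }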