How do I extract a screenshot from a video in the iPhone SDK?

I'd like to be able to take a screenshot of an MPEG recorded using the iPhone camera at set intervals.
I've seen a few ways to do this; namely compiling and using FFmpeg (Using FFMPEG library with iPhone SDK for video encoding), however it seems it's quite difficult to comply with the LGPL (http://ffmpeg.org/legal.html) for commercial use.
This part of their legal FAQ pretty much makes it unusable for us:
Q: Is it perfectly alright to incorporate the whole FFmpeg core into my own commercial product?
A: You might have a problem here. There have been cases where companies have used FFmpeg in their products. These companies found out that once you start trying to make money from patented technologies, the owners of the patents will come after their licensing fees. Notably, MPEG LA is vigilant and diligent about collecting for MPEG-related technologies.
Is there any other way? - or simply by accessing the rendering layer of an MPEG am I going to be "making money from patented technologies"?
As usual - any help on this would be greatly appreciated.
Cheers!

Yes, you can do it - if I am not mistaken, since iOS 3.2 - at least for videos in your library. After loading the movie into your MPMoviePlayerController object, do this:
UIImage *aThumbnail = [player thumbnailImageAtTime:timeCode timeOption:MPMovieTimeOptionExact];
//timeCode is a time within the video's length, for example 3.12 seconds.
//player is the MPMoviePlayerController object.
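Since the original question asks for screenshots at set intervals, here is a minimal sketch that steps through the movie's duration and collects a thumbnail every few seconds. The videoURL and interval values are hypothetical placeholders, and note that duration may report 0 until the player has loaded the movie's metadata:
MPMoviePlayerController *player = [[MPMoviePlayerController alloc] initWithContentURL:videoURL];
NSTimeInterval interval = 3.0; // hypothetical spacing between screenshots
NSMutableArray *thumbnails = [NSMutableArray array];
for (NSTimeInterval t = 0; t < player.duration; t += interval) {
    // Nearest-keyframe is faster; MPMovieTimeOptionExact is slower but frame-accurate.
    UIImage *frame = [player thumbnailImageAtTime:t timeOption:MPMovieTimeOptionNearestKeyFrame];
    if (frame) [thumbnails addObject:frame];
}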

Unfortunately there is no official way to grab image frames from the camera in real time.
I encourage you to file a bug report / feature request with Apple. Many people want this, and if enough people request a specific feature they might actually put it in.

Related

CoreAudio tempo change (iOS)

I'm very new to audio programming, but I know this must be possible. (This is an iOS/iPhone related question).
How would I go about changing the tempo of a loaded audio file without changing the pitch, and then playing it back?
I think I need to delve into the CoreAudio framework, but I'm not sure where to begin.
If anyone could let me know what classes I need to look at, or the general process involved, that would help me get started and I'd really appreciate it!
Cheers!
This question is highly related: it deals with pitch shifting rather than time stretching, but I'd check out the comments and links.
Real-time Pitch Shifting on the iPhone
What you are looking for is a time-pitch modification library. Core Audio on iOS currently does not include one, but there appear to be some 3rd-party libraries available (commercially). There are also time-pitch tutorials on the web, such as the ones at DSP Dimension, which require a large amount of DSP development to get working.
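To give a feel for what such a time-stretch library does internally, here is a very crude overlap-add (OLA) sketch in plain C: it changes duration without resampling the pitch, but a real library would use WSOLA or a phase vocoder to avoid the artifacts this naive version produces. All names and constants here are made up for illustration.
#include <math.h>
#include <stdlib.h>

// Naive OLA time stretch: stretch > 1.0 slows the audio down, < 1.0 speeds it up.
// 'in' holds numIn mono samples; the caller owns the returned buffer.
float *ola_time_stretch(const float *in, int numIn, float stretch, int *numOut) {
    const int win = 1024;                        // analysis/synthesis window length
    const int synHop = win / 2;                  // 50% overlap on the output side
    const int anaHop = (int)(synHop / stretch);  // smaller input hop -> longer output

    int outLen = (int)(numIn * stretch) + win;
    float *out = calloc(outLen, sizeof(float));

    for (int anaPos = 0, synPos = 0;
         anaPos + win <= numIn && synPos + win <= outLen;
         anaPos += anaHop, synPos += synHop) {
        for (int i = 0; i < win; i++) {
            // Hann window; with 50% output overlap these windows sum to roughly 1.0.
            float w = 0.5f * (1.0f - cosf(2.0f * (float)M_PI * i / (win - 1)));
            out[synPos + i] += w * in[anaPos + i];
        }
    }
    *numOut = outLen;
    return out;
}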

How to develop an iPhone app with reverb functionality?

I am developing an iPhone application (an audio-processing app). I have to apply effects to the audio.
For a desktop app there are many options; we can find good examples and full projects like Audacity. But I want to develop for iPhone.
I found an app with a reverb option (take a look at the following link). I only watched the video; I did not test the application on my iPhone.
http://www.appstorehq.com/reverb-iphone-89870/app
My question is: how can I develop an app with reverb functionality? Is there any documentation for that? If so, please share it with us.
NOTE: We may be able to use an Audio Unit to build the reverb functionality (I am not clear on this).
EDIT: I don't want to use any third-party library.
If anybody has knowledge about this, please share it with us.
Thanks.
If you're targeting iOS 5 you can just use the kAudioUnitSubType_Reverb2 subtype of the effect audio unit.
Reverb unit:
// Describe the reverb effect and add it to an existing AUGraph (processingGraph).
AudioComponentDescription auEffectUnitDescription;
auEffectUnitDescription.componentType = kAudioUnitType_Effect;
auEffectUnitDescription.componentSubType = kAudioUnitSubType_Reverb2;
auEffectUnitDescription.componentManufacturer = kAudioUnitManufacturer_Apple;
auEffectUnitDescription.componentFlags = 0;
auEffectUnitDescription.componentFlagsMask = 0;

AUNode auEffectNode;
AUGraphAddNode(processingGraph,
               &auEffectUnitDescription,
               &auEffectNode);
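To actually hear anything you still have to open and initialize the graph and patch the reverb between your source and the output. A rough sketch of that wiring - assuming processingGraph was created earlier with NewAUGraph, and that sourceNode and ioNode are hypothetical nodes for your input/mixer and the RemoteIO output - might look like this:
AUGraphOpen(processingGraph);

// Fetch the AudioUnit behind the node so you can set reverb parameters later.
AudioUnit auEffectUnit;
AUGraphNodeInfo(processingGraph, auEffectNode, NULL, &auEffectUnit);

// Route: source/mixer -> reverb -> RemoteIO output (bus numbers are illustrative).
AUGraphConnectNodeInput(processingGraph, sourceNode, 0, auEffectNode, 0);
AUGraphConnectNodeInput(processingGraph, auEffectNode, 0, ioNode, 0);

AUGraphInitialize(processingGraph);
AUGraphStart(processingGraph);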
Failing that, you could just write your own reverb code in the RemoteIO callback. A simple delay might be easier to do and would sound similar.
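For the hand-rolled route, a feedback delay inside the render callback really is only a few lines. This is just an illustrative sketch for mono Float32 samples - the buffer length and feedback/mix amounts are made-up values, it assumes ioData already contains the dry signal (for example pulled from the input bus with AudioUnitRender earlier in the callback), and real code would carry the state through inRefCon rather than statics:
static float gDelayLine[22050];   // roughly 0.5 s at 44.1 kHz
static int   gDelayPos = 0;

static OSStatus RenderWithDelay(void *inRefCon,
                                AudioUnitRenderActionFlags *ioActionFlags,
                                const AudioTimeStamp *inTimeStamp,
                                UInt32 inBusNumber,
                                UInt32 inNumberFrames,
                                AudioBufferList *ioData)
{
    float *samples = (float *)ioData->mBuffers[0].mData;
    for (UInt32 i = 0; i < inNumberFrames; i++) {
        float dry = samples[i];
        float wet = gDelayLine[gDelayPos];
        samples[i] = dry + 0.5f * wet;            // mix the delayed signal back in
        gDelayLine[gDelayPos] = dry + 0.4f * wet; // feedback keeps the tail going
        gDelayPos = (gDelayPos + 1) % 22050;
    }
    return noErr;
}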
iOS 5.0 adds native reverb support to OpenAL, so it is now much easier - you don't have to code the algorithm yourself. It brings support for a variety of reverb spaces:
Small Room
Medium Room
Large Room (2 configurations)
Medium Hall (3 configurations)
Large Hall (2 configurations)
Plate
Medium Chamber
Large Chamber
Cathedral
I suggest that you try the ObjectAL wrapper, which already has great support for the reverb effect:
https://github.com/kstenerud/ObjectAL-for-iPhone
Grab the source from this repository, load "ObjectAL.xcodeproj" and run the ObjectALDemo target on any iOS 5.0 device (it should also work on the simulator). This will give you a good starting point and a feel for what the reverb effect is capable of.
If you still don't want to use any 3rd-party library, you can just grab the relevant pieces from ObjectAL. Look for the reverb-related code in the following source files (and their corresponding headers):
https://github.com/kstenerud/ObjectAL-for-iPhone/blob/master/ObjectAL/ObjectAL/OpenAL/ALListener.m
https://github.com/kstenerud/ObjectAL-for-iPhone/blob/master/ObjectAL/ObjectAL/OpenAL/ALSource.m
https://github.com/kstenerud/ObjectAL-for-iPhone/blob/master/ObjectAL/ObjectAL/OpenAL/ALWrapper.m
Good luck with your project!
AUs are a good place to start.
Write your own reverb AU containing a reverb implementation. There are tons of ways to implement a reverb. A medium/long convolution reverb is too much to ask of a phone, but something such as an FDN (feedback delay network) will not require a lot of memory or CPU.
Both are straightforward to implement if you're familiar with audio programming and optimization; the tough part is making one that sounds very good and performs well.
If you're unable to write optimized low-level code or you do not (presently) understand basic audio signal processing, then you'll have a few obstacles to overcome - it may be a long road in that case.
Searching the iOS documentation for "reverb" produces a link to the Core Audio Overview, which references reverb as an "effect unit." Perhaps that's worth further study?
No good - I have attempted the audio unit approach, and even though it is in the documentation it is not actually implemented yet by the Apple engineers. Each time you call the function to set the reverb property you only get a failure status code. You would have to implement your own reverb effect. Try reading a DSP book and you might find a clue.
You need to learn some DSP-level coding; the DSP cookbook is okay and there are others out there. Basically you need to be comfortable handling audio signals in the frequency domain and with things such as FFTs. Once you have that, implementing a reverb filter should be straightforward.
This is an answer I've given before, but I believe it is relevant here. I am going to agree with the others and say that you are going to have to become a bit more familiar with Core Audio if you want to do this properly.
I highly recommend this Core Audio book. It will teach you what you need to do this right and will save you a lot of frustration.
The chapter on audio effects has not been published yet, but if it is anything like the rest of the book it's worth the wait.
EDIT
You will most likely need to do this with an audio effect (which is a form of an audio unit).

iPhone streaming debugging information

I'm looking for a way (it doesn't need to be App Store safe!) to get hold of video-streaming-relevant debugging information.
What I'm trying to do is write an application that opens a video stream and displays information like:
framerate
audio / video bitrate
codec information
etc.
Basically I want to display as much information as possible for any given stream.
Thanks for any information in advance,
best regards
sam
Even though you tagged your question with MPMoviePlayerController, that class probably isn't going to help you out very much. First of all, there's a limited amount of information you can access from it at a high level, certainly nothing about codecs and audio bitrate. And even if the class does store this type of information somehow, your app would be disqualified from being in the iTunes App Store if you access non-public methods or properties.
Secondly, MPMoviePlayerController only supports a limited number of codecs itself, namely the ones that can be decoded in hardware on the iPhone/iPad (H.264 baseline and MPEG-4 videos).
Anyway, a good option could be FFmpeg for the iPhone. Getting the information you need seems to be much more straightforward this way; check out this blog post for a nice tutorial on using the libraries.
I'm not sure about the potential legal issues concerning distributing such a program in the App Store, but if you statically link it with your binary that would at least satisfy Apple... you'll have to check the FFmpeg legal site for their end.
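As a concrete starting point, the stream-level information you list (codec, bitrate, duration and so on) can be pulled out of libavformat in a handful of calls. This is only a sketch, and the exact function names vary between FFmpeg releases, so treat it as illustrative rather than definitive:
#include <stdio.h>
#include <libavformat/avformat.h>

void dump_stream_info(const char *url) {
    av_register_all();
    avformat_network_init();

    AVFormatContext *fmt = NULL;
    if (avformat_open_input(&fmt, url, NULL, NULL) != 0)
        return;

    if (avformat_find_stream_info(fmt, NULL) >= 0) {
        // Prints per-stream codec names, bitrates, frame rates, etc.
        av_dump_format(fmt, 0, url, 0);

        printf("container bitrate: %lld bps\n", (long long)fmt->bit_rate);
        printf("duration: %.2f s\n", fmt->duration / (double)AV_TIME_BASE);
        printf("number of streams: %u\n", fmt->nb_streams);
    }
    avformat_close_input(&fmt);
}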

Encode an Array of Images into a movie file? (iPhone)

My app takes time-lapse photos, and also records audio to go with it. The problem is, I have absolutely no idea how to go about turning it into a .mov/.mpeg file (I am new to this type of iPhone development). I have heard some things about FFMPEG, but apparently the license doesn't cover the public distribution of iPhone apps. Anyone have any suggestions?
You can use Theora (aka VP3). It is free to use in any application and has a pretty decent quality/bitrate ratio.
I do not know whether the parts of FFmpeg necessary to do this are GPL or not, but there are parts of FFmpeg that are LGPL-licensed.
They have a legal page that covers this in detail, so FFmpeg might be worth a closer look.
FFmpeg itself can be used in iPhone apps distributed on the App Store. See Wunderradio as an example: http://www.wunderradio.com/code.html
BUT... I am experimenting with it right now and I am kinda disappointed with the quality of the result (not to mention that encoding is slow on the iPhone). It seems to me that without the x264 library it is impossible to create MPEG-4 videos with decent quality. And x264 is GPL-licensed, so if you use it you must disclose the full source of your project. (Or did anyone figure out how to select some usable codec from the LGPL'd FFmpeg?)
What I don't understand is that the App Store now has a lot of video editing apps. How do they work? I made a pretty thorough search and couldn't find any MPEG-4 codec with a permissive enough license. Do they violate the GPL? Do they use private APIs? I really don't believe they built homebrew MPEG-4 encoders.

MPMoviePlayerController alternatives on iPhone?

I am looking for alternatives to the MPMoviePlayerController on the iPhone. As a video player its functionality is very limited. According to the class reference there is no way to get the current playback time or set a new time, for example. It's just play and stop.
Are there any middleware solutions out there for iPhone video playback that offer more functionality? CRI has something in development but it has not been released. I haven't been able to find anything else.
Thanks.
Keep in mind that even though a project is GPL, that does not mean you can't contact the authors about an LGPL option on the underlying code.
A possible roll-your-own solution would be to use OpenGL as a compositing surface for the video and use a behind-the-scenes library like FFmpeg if you need to process specific video types.
NeHe has an example of rendering AVIs to OpenGL: http://nehe.gamedev.net/data/lessons/lesson.asp?lesson=35
FFmpeg has recently been ported to the iPhone and is an LGPL-based product: http://geek.thinkunique.org/2008/03/05/ffmpeg-on-the-iphone/
(Note: There is some debate over the commercial use of LGPL on iPhone because the license references the phrase "dynamic" when referring to library linkage, which iPhone doesn't allow. I have not seen any project teams balk at their code being used on the iPhone statically, but you should contact the authors directly for clarification.)
Another (though GPL) version of an OpenGL video player is: http://code.google.com/p/glover/
What you're getting with a solution like this is basically a bypass of the iPhone/Mac/CALayer-specific technical details, leveraging an existing knowledge base of video through OpenGL, which although not extensive is still broadly supported.
If you are dealing with a specific video style, then you may want to see if a library is available for that specific video format directly from the vendor instead of using a multi-purpose tool like FFmpeg. Once you have the compositing working, the video can come from almost any library.
Barney
You could use AVPlayer. See the documentation.
You can then get the current playback time with currentTime and seek to a specified time with seekToTime:.
You have to direct the visual output of an AVPlayer instance to an AVPlayerLayer object (subclass of CALayer). See the first listing here.
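A minimal sketch of that setup (videoURL and the view controller context are placeholders; AVFoundation and CoreMedia must be linked) might look like:
AVPlayer *player = [AVPlayer playerWithURL:videoURL];

// Direct the visual output into a layer in your view hierarchy.
AVPlayerLayer *playerLayer = [AVPlayerLayer playerLayerWithPlayer:player];
playerLayer.frame = self.view.bounds;
[self.view.layer addSublayer:playerLayer];
[player play];

// Read the current playback position (CMTime -> seconds)...
Float64 seconds = CMTimeGetSeconds(player.currentTime);
// ...and seek to 30 seconds into the movie.
[player seekToTime:CMTimeMakeWithSeconds(30.0, 600)];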
VLC has been ported to iPhone but not using the official SDK.