I am trying to process videos in two ways:
Cut and merge videos.
Record the screen and make a video out of that recording.
I have searched the Internet, including Stack Overflow, and found references to an Apple code sample called AVEditDemo, but I could never find it. If anybody has that example and is willing to share it, or has any similar examples that can teach me how to do the two jobs above, that would be excellent.
I see there are some similar questions, like this one, but I would love to have the code sample. It would help me move forward quickly.
There is a WWDC 2010 video called Editing Media with AV Foundation which may be useful to you and is available through the Developer Portal.
The AVEditDemo application is included in the WWDC 2010 sample code because it accompanies that video. Unfortunately, you need to download all of the WWDC 2010 sample code (232.6 MB) to get it. You can get the entire download here: http://connect.apple.com/cgi-bin/WebObjects/MemberSite.woa/wa/getSoftware?code=y&source=x&bundleID=20645
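In the meantime, here is a minimal sketch of the cut-and-merge half of your question, written in modern Swift rather than the Objective-C of the original AVEditDemo; the function name and file URLs are placeholders:

```swift
import AVFoundation

// A sketch, not Apple's AVEditDemo: append clips end to end with
// AVMutableComposition, then export the result. `mergeClips` and the
// URLs are placeholder names.
func mergeClips(_ urls: [URL], to outputURL: URL,
                completion: @escaping (Error?) -> Void) {
    let composition = AVMutableComposition()
    var cursor = CMTime.zero
    do {
        for url in urls {
            let asset = AVURLAsset(url: url)
            // To *cut*, insert a trimmed range here instead of the full duration.
            let range = CMTimeRange(start: .zero, duration: asset.duration)
            try composition.insertTimeRange(range, of: asset, at: cursor)
            cursor = cursor + asset.duration
        }
    } catch {
        completion(error)
        return
    }

    guard let exporter = AVAssetExportSession(asset: composition,
                                              presetName: AVAssetExportPresetHighestQuality) else {
        completion(nil)
        return
    }
    exporter.outputURL = outputURL
    exporter.outputFileType = .mov
    exporter.exportAsynchronously {
        // Check exporter.status / exporter.error before using the file.
        completion(exporter.error)
    }
}
```

The export runs in the background, so only use the output file once the completion handler reports success.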
Hello, I was watching Apple's WWDC 2010 keynote and saw Steve playing Jenga... Now I am dying to see what is under the hood. Is there a code sample from Apple of that game? I was looking and can't find it...
Maybe someone can give me a link to it or something similar, perhaps a tutorial?
I am dying to know how it works. Please help.
P.S.: Here is a YouTube link ( http://www.youtube.com/watch?v=ORcu-c-qnjg ), just so you know what I am talking about.
Thanks!
Jenga was an actual application, available on the App Store. As such, you won't be able to find the source code for it. However, you may be interested in the following:
Apple gyroscope sample code
which points you to Core Motion:
http://developer.apple.com/library/ios/#documentation/CoreMotion/Reference/CoreMotion_Reference/CoreMotion_Reference.pdf
Here is another example with source code:
http://cs491f10.wordpress.com/2010/10/28/core-motion-gyroscope-example/
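For completeness, here is a minimal Core Motion sketch: raw gyroscope rotation rates, the kind of input a physics game like Jenga could feed into its engine.

```swift
import CoreMotion

// Read gyroscope data with CMMotionManager.
let motionManager = CMMotionManager()

func startGyro() {
    guard motionManager.isGyroAvailable else { return }
    motionManager.gyroUpdateInterval = 1.0 / 60.0  // 60 Hz
    motionManager.startGyroUpdates(to: .main) { data, _ in
        guard let rate = data?.rotationRate else { return }
        // Rotation rate in radians per second around each device axis.
        print("x: \(rate.x)  y: \(rate.y)  z: \(rate.z)")
    }
}
```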
I'm doing a project where we want to create a video inside an iPhone app and upload it to YouTube. I've seen that you upload the video using Google's Data API (http://code.google.com/p/gdata-objectivec-client/).
However, it seems that you need to upload the movie as an actual movie file. Has anyone got any experience making a movie in a format that YouTube will accept via the Data API, and care to give me a few pointers on what would work?
(Just a quick note, I cannot use hidden APIs for this project)
Many thanks
YouTube accepts a broad range of formats. Just try it yourself: use any free video-editing software to create a short movie and upload it to YouTube; you're almost guaranteed that YouTube will be able to process it.
As for the second part of your question, whether iOS is able to produce a movie from still frames, the answer is yes: you want to look at AVFoundation, particularly at AVAssetWriter.
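Here is a minimal sketch of that approach, assuming your frames already exist as CVPixelBuffers; the function name, size, and frame rate are placeholders:

```swift
import AVFoundation
import CoreGraphics
import Foundation

// Write still frames to an H.264 .mp4 with AVAssetWriter, a format
// YouTube accepts.
func writeMovie(from frames: [CVPixelBuffer], size: CGSize,
                to url: URL, fps: Int32 = 30) throws {
    let writer = try AVAssetWriter(outputURL: url, fileType: .mp4)
    let input = AVAssetWriterInput(mediaType: .video, outputSettings: [
        AVVideoCodecKey: AVVideoCodecType.h264,
        AVVideoWidthKey: size.width,
        AVVideoHeightKey: size.height
    ])
    let adaptor = AVAssetWriterInputPixelBufferAdaptor(
        assetWriterInput: input, sourcePixelBufferAttributes: nil)
    writer.add(input)
    writer.startWriting()
    writer.startSession(atSourceTime: .zero)

    for (i, frame) in frames.enumerated() {
        // Each frame is displayed for 1/fps seconds.
        while !input.isReadyForMoreMediaData {
            Thread.sleep(forTimeInterval: 0.01)
        }
        adaptor.append(frame,
                       withPresentationTime: CMTime(value: CMTimeValue(i),
                                                    timescale: fps))
    }
    input.markAsFinished()
    writer.finishWriting {
        // Inspect writer.status / writer.error before uploading the file.
    }
}
```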
I am developing an application in which I am trying to record video using the AVCaptureSession class. I came across a few good tutorials, like this one:
http://www.benjaminloulier.com/posts/2-ios4-and-direct-access-to-the-camera
but these tutorials only cover capturing images from the video frames, while I want to record a full-length video which I can either save on my device or upload to my server. How can I achieve this using the AVFoundation framework?
Check out the sample code from WWDC 2010 (http://developer.apple.com/videos/wwdc/2010/), for example AVCamDemo.
Log in to your iPhone developer account (the box labelled "Download WWDC 2010 Session Videos for Free"), then on the new page click "View in iTunes"; in iTunes(!) you get the link to the WWDC 2010 example source code. Or you can try the link
http://connect.apple.com/cgi-bin/WebObjects/MemberSite.woa/wa/getSoftware?code=y&source=x&bundleID=20645
but to download the source code you must also log in to your developer account.
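Until you have the sample, here is a minimal sketch in the spirit of AVCamDemo (modern Swift rather than the original Objective-C): AVCaptureMovieFileOutput records a full-length movie straight to disk, which you can then save or upload.

```swift
import AVFoundation

// Record camera and microphone to a movie file on disk.
// Permission and error handling are trimmed for brevity.
final class MovieRecorder: NSObject, AVCaptureFileOutputRecordingDelegate {
    let session = AVCaptureSession()
    let movieOutput = AVCaptureMovieFileOutput()

    func startRecording(to url: URL) throws {
        guard let camera = AVCaptureDevice.default(for: .video),
              let mic = AVCaptureDevice.default(for: .audio) else { return }
        session.addInput(try AVCaptureDeviceInput(device: camera))
        session.addInput(try AVCaptureDeviceInput(device: mic))
        session.addOutput(movieOutput)
        session.startRunning()
        movieOutput.startRecording(to: url, recordingDelegate: self)
    }

    func fileOutput(_ output: AVCaptureFileOutput,
                    didFinishRecordingTo outputFileURL: URL,
                    from connections: [AVCaptureConnection],
                    error: Error?) {
        // The finished movie file is at outputFileURL: save it to the
        // photo library or upload it to your server from here.
    }
}
```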
I have been searching for resources on displaying a video in OpenGL ES on the iPhone, but I can't seem to find any sample code for doing so. The only link I found was a blog post which speaks of it but does not have a guide on implementing it.
I would appreciate it if anyone could point out any resources they know of, or walk me through the steps of doing it.
Cheers.
There's a sample project for this called GLVideoFrame in the WWDC 2010 sample code collection.
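If you are on a later SDK than GLVideoFrame targets, AVPlayerItemVideoOutput (a newer API) is a simpler way to get the same per-frame pixel buffers for texture upload. A sketch, with the movie path as a placeholder and the GL upload itself left as a comment:

```swift
import AVFoundation
import QuartzCore

// Pull each decoded video frame as a CVPixelBuffer ready for texturing.
let videoOutput = AVPlayerItemVideoOutput(pixelBufferAttributes: [
    kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA
])
let item = AVPlayerItem(url: URL(fileURLWithPath: "movie.m4v"))  // placeholder
item.add(videoOutput)
let player = AVPlayer(playerItem: item)
player.play()

// Call once per display-link tick from your render loop.
func drawNextFrame() {
    let time = videoOutput.itemTime(forHostTime: CACurrentMediaTime())
    guard videoOutput.hasNewPixelBuffer(forItemTime: time),
          let pixelBuffer = videoOutput.copyPixelBuffer(forItemTime: time,
                                                        itemTimeForDisplay: nil)
    else { return }
    // Turn pixelBuffer into a GL texture with
    // CVOpenGLESTextureCacheCreateTextureFromImage and draw it on a quad.
    _ = pixelBuffer
}
```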
Prior to the release of the new SDK there was some buzz about Apple finally providing access to live camera data.
I've been reading through Apple's documentation but have not found any reference to this. Now that the NDA has been lifted, does anyone know where this new functionality is documented?
This is thoroughly demonstrated in the WWDC 2010 session video 409: Using the Camera with AV Foundation. If you download the WWDC sample code, you'll find three or four sample applications that show how to use the various aspects of live camera capture and processing.
As Shaji points out, all of this is done through the AV Foundation framework using the new capture classes AVCaptureSession, AVCaptureInput, AVCaptureDevice, and AVCaptureOutput.
Have a look at the AV Foundation framework, especially the AVCapture* classes.
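Here is a minimal sketch of those classes wired together: AVCaptureVideoDataOutput delivers every live camera frame to a delegate as a CVPixelBuffer (permission and error handling omitted).

```swift
import AVFoundation

// Live camera capture: session -> device input -> video data output.
final class FrameGrabber: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    let session = AVCaptureSession()

    func start() throws {
        guard let camera = AVCaptureDevice.default(for: .video) else { return }
        session.addInput(try AVCaptureDeviceInput(device: camera))

        let output = AVCaptureVideoDataOutput()
        output.setSampleBufferDelegate(self, queue: DispatchQueue(label: "camera.frames"))
        session.addOutput(output)
        session.startRunning()
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        // Called for every live frame; process or display the buffer here.
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        _ = pixelBuffer
    }
}
```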