What is the difference between AVCapture and the default camera of the iPhone?

My app uses AVCapture to capture images; this was my supervisor's idea. But I have researched on the internet and can't find any information about the difference between AVCapture and the default camera of the iPhone or iPod touch (tap to focus, camera quality, and so on). Please tell me what the advantages of the AVFoundation framework are.

With AVCaptureSession you can give your recorder a lot more functionality. You can customize nearly every aspect of the capture session, and you can even get the raw frame data straight from the camera. The code can get quite complex, however, and nothing is taken care of for you.
With the default iOS image capture controller (UIImagePickerController) you are stuck with a few presets and only get a small subset of the camera's functionality, but it is really simple to implement.
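As a rough illustration (just a sketch, not Apple's AVCam sample; the session, stillOutput and view properties are placeholder names you would define on your own view controller), a bare-bones AVCaptureSession for still images looks something like this:

#import <AVFoundation/AVFoundation.h>

// Minimal sketch: a capture session with a camera input, a still-image output
// and a preview layer. Error handling is abbreviated.
- (void)setupCaptureSession
{
    self.session = [[AVCaptureSession alloc] init];
    self.session.sessionPreset = AVCaptureSessionPresetPhoto;

    AVCaptureDevice *camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    NSError *error = nil;
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:camera error:&error];
    if (input && [self.session canAddInput:input]) {
        [self.session addInput:input];
    }

    self.stillOutput = [[AVCaptureStillImageOutput alloc] init];
    if ([self.session canAddOutput:self.stillOutput]) {
        [self.session addOutput:self.stillOutput];
    }

    // The live preview goes into whatever view you like -- this is the part
    // UIImagePickerController normally handles for you.
    AVCaptureVideoPreviewLayer *preview = [AVCaptureVideoPreviewLayer layerWithSession:self.session];
    preview.frame = self.view.bounds;
    [self.view.layer addSublayer:preview];

    [self.session startRunning];
}

With UIImagePickerController, all of the above is replaced by presenting the picker and implementing a delegate callback, which is why it is so much simpler but also so much less flexible.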
Updated with a link to Apple's sample code:
If you want to see how to use AVFoundation to do your camera recording, you will probably like this app from Apple.
Like I said, you will have to do everything manually, so be prepared for a fair amount of work.
AVCam demo app by Apple

Related

Focus on an object in a square region using the iOS camera?

Actually, I need a region over the camera overlay, like most QR code scanner apps have. When a square box comes within it, the camera should just focus and take a picture of it. Any idea how to implement this? I was using the UIImagePickerController class, but after doing some googling I found that I need to use the AVFoundation framework. Unfortunately, I am not familiar with it.
Any code or tutorial would be helpful. Please let me know how I can implement this.
One more thing: if I take the picture, can I make the picture only the region's size?
Yes, you are correct. You will need to use AV Foundation to implement this. Have a look at the 'Using the Camera with AV Foundation' video from the WWDC 2010 session videos to get an overview of the framework.
AV Foundation has no dependencies on UIKit, so you will get some nice performance increases over using UIImagePickerController. It will also give you full access to the camera.
When using AV Foundation you are in control of the device capture settings, i.e. flash as well as focus mode and exposure, including their points of interest. Have a look at the programming guide to see how to use these, otherwise the device behaviour may differ from what you expect.
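For example, focusing on the centre of your square region boils down to something like this (a sketch; device is assumed to be the AVCaptureDevice backing your session's input, and the point is in the 0..1 coordinate space AV Foundation uses, not in pixels):

// Sketch: focus on the centre of the overlay square.
CGPoint focusPoint = CGPointMake(0.5, 0.5); // placeholder: centre of the region

NSError *error = nil;
if ([device lockForConfiguration:&error]) {
    if (device.focusPointOfInterestSupported &&
        [device isFocusModeSupported:AVCaptureFocusModeAutoFocus]) {
        device.focusPointOfInterest = focusPoint;
        device.focusMode = AVCaptureFocusModeAutoFocus;
    }
    [device unlockForConfiguration];
}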
You can also download an example of an application that uses AV Foundation to implement the camera here.
Once you're up and running with that, have a look at this tutorial to get started with the overlay on the camera.
One more thing if i need to take picture can i make the picture only to the region size?
Yes, you will be able to implement this. You can also configure the AV Foundation session itself to output the lowest practical resolution.
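Something along these lines (a sketch; session is your AVCaptureSession, and whether the low preset is acceptable depends on how much quality your cropped region needs):

// Sketch: ask the session for a lower-resolution output if that is all you need.
if ([session canSetSessionPreset:AVCaptureSessionPresetLow]) {
    session.sessionPreset = AVCaptureSessionPresetLow;
}

You can then crop the captured image to the overlay rectangle yourself before saving it.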

Export CoreAnimation to video file

I wrote a basic animation framework using Core Animation on the iPhone. It has functionality to pause and resume animations, and it can also run animations at a specified time. My basic problem is that I cannot find a way to export my animations to a video file (.mov, .avi, etc.). I have read about AVAssetWriter and AVComposition but cannot understand how to make them work in my case.
By searching the internet, the closest I was able to get was reading my animations frame by frame. Even for that I could not find a way to make it work, and I could not find out whether the iPhone SDK has anything for this kind of frame-by-frame reading in my case. I also came across this question on Stack Overflow and still could not figure it out (sorry if it seems that I am a beginner in these things; I am not, I just could not understand some of it).
If anyone knows how to make this work, or even how to do something similar, please share. And if there is no way, then if you know any other way to do it, e.g. using OpenGL ES instead of Core Animation, please share that too.
Check this presentation: http://www.slideshare.net/invalidname/advanced-media-manipulation-with-av-foundation
Around page 84 he talks about adding animation to video compositions. I believe this will help you get what you need.
EDIT: Specifically, you need to look at the animationTool of your video composition. This is an AVVideoCompositionCoreAnimationTool object that allows you to add a Core Animation layer to your output video. See also this question:
Recording custom overlay on iPhone
I am sorry I do not have the time to write you a full code snippet, but basically you set this animation tool on your video composition, and then create an AVAssetExportSession and set its videoComposition to the one you made.
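The rough shape of it is below (a hedged sketch, not a complete exporter; asset, animationLayer and outputURL are placeholders you would supply, and you still need at least one AVMutableVideoCompositionInstruction covering the asset's time range):

// Sketch: attach a Core Animation layer to a video composition and export it.
AVMutableVideoComposition *composition = [AVMutableVideoComposition videoComposition];
composition.renderSize = CGSizeMake(640, 480);   // placeholder size
composition.frameDuration = CMTimeMake(1, 30);   // 30 fps

// parentLayer holds the video layer plus your animated layer on top of it.
CALayer *parentLayer = [CALayer layer];
CALayer *videoLayer  = [CALayer layer];
parentLayer.frame = CGRectMake(0, 0, 640, 480);
videoLayer.frame  = parentLayer.frame;
[parentLayer addSublayer:videoLayer];
[parentLayer addSublayer:animationLayer];        // your Core Animation content

composition.animationTool = [AVVideoCompositionCoreAnimationTool
    videoCompositionCoreAnimationToolWithPostProcessingAsVideoLayer:videoLayer
                                                            inLayer:parentLayer];

AVAssetExportSession *exporter =
    [[AVAssetExportSession alloc] initWithAsset:asset
                                     presetName:AVAssetExportPresetHighestQuality];
exporter.videoComposition = composition;
exporter.outputURL = outputURL;                  // placeholder file URL
exporter.outputFileType = AVFileTypeQuickTimeMovie;
[exporter exportAsynchronouslyWithCompletionHandler:^{
    NSLog(@"Export finished with status %ld", (long)exporter.status);
}];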

iPhone demo help: anyone know of a faster screen capture alternative to UIGetScreenImage()?

I'm working on an iPhone app that I'm going to be demo'ing to a live audience soon.
I'd really like to demo the app live over VGA to a projector, rather than show screenshots.
I bought a VGA adapter for iPhone, and have adapted Rob Terrell's TVOutManager to suit my needs. Unfortunately, the frame rate after testing on my television at home just isn't that good - even on an iPhone 4 (perhaps 4-5 frames per second, it varies).
I believe the reason for this slowness is that the main routine I'm using to capture the device's screen (which is then being displayed on an external display) is UIGetScreenImage(). This routine, which is no longer allowed to be part of shipping apps, is actually quite slow. Here's the code I'm using to capture the screen (FYI mirrorView is a UIImageView):
CGImageRef cgScreen = UIGetScreenImage();
self.mirrorView.image = [UIImage imageWithCGImage:cgScreen];
CGImageRelease(cgScreen);
Is there a faster method I can use to capture the iPhone's screen and achieve a better frame rate (shooting for 20+ fps)? It doesn't need to pass Apple's app review - this demo code won't be in the shipping app. If anyone knows of any faster private APIs, I'd really appreciate the help!
Also, the above code is being executed using a repeating NSTimer which fires every 1.0/desiredFrameRate seconds (currently every 0.1 seconds). I'm wondering if instead wrapping those calls in a block and using GCD or an NSOperationQueue might be more efficient than having the NSTimer invoke my updateTVOut obj-c method that currently contains those calls. Would appreciate some input on that too - some searching seems to indicate that obj-c message sending is somewhat slow compared to other operations.
Finally, as you can see above, the CGImageRef that UIGetScreenImage() returns is being turned into a UIImage and then that UIImage is being passed to a UIImageView, which is probably resizing the image on the fly. I'm wondering if the resizing might be slowing things down even more. Ideas of how to do this faster?
Have you looked at Apple's recommended alternatives to UIGetScreenImage()? From the "Notice regarding UIGetScreenImage()" thread (a short -renderInContext: sketch follows the quoted list):
Applications using UIGetScreenImage() to capture images from the camera should instead use AVCaptureSession and related classes in the AV Foundation Framework. For an example, see Technical Q&A QA1702, "How to capture video frames from the camera as images using AV Foundation". Note that use of AVCaptureSession is supported in iOS4 and above only.
Applications using UIGetScreenImage() to capture the contents of interface views and layers should instead use the -renderInContext: method of CALayer in the QuartzCore framework. For an example, see Technical Q&A QA1703, "Screen capture in UIKit applications".
Applications using UIGetScreenImage() to capture the contents of OpenGL ES based views and layers should instead use the glReadPixels() function to obtain pixel data. For an example, see Technical Q&A QA1704, "OpenGL ES View Snapshot".
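For the second case (UIKit view content), a minimal -renderInContext: capture looks something like this (a sketch; whether it is actually faster than UIGetScreenImage() for a mirroring loop is something you would have to measure):

#import <QuartzCore/QuartzCore.h>

// Sketch: snapshot a view's layer into a UIImage using -renderInContext:.
- (UIImage *)snapshotOfView:(UIView *)view
{
    UIGraphicsBeginImageContextWithOptions(view.bounds.size, NO, 0.0);
    [view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return snapshot;
}

You would then assign the result to mirrorView.image in place of the UIGetScreenImage() call. Note that -renderInContext: does not capture OpenGL ES content; that is what the glReadPixels() route in QA1704 is for.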
New solution: get an iPad 2 and mirror the output! :)
I don't know how fast this is, but it's worth a try ;)
CGImageRef screenshot = [[UIApplication sharedApplication] _createDefaultImageSnapshot];
[myVGAView.layer setContents:(id)screenshot];
where _createDefaultImageSnapshot is a private API. (Since it's for a demo... it's OK, I suppose.)
and myVGAView is a normal UIView.
If you get CGImageRefs, then just pass them to the contents of a layer; it's lighter and should be a little bit faster (but just a little bit ;) )
I haven't got the solution you want (simulating video mirroring), but you can move your views to the external display. This is what I do, and there is no appreciable impact on the frame rate. Obviously, since the view is no longer on the device's screen, you can no longer directly interact with it or see it. If you have something like a game controlled with the accelerometer this shouldn't be a problem; however, something touch-based will require some work.
What I do is show an alternative view on the device while the primary view is external. For me this is a 2D control view used to "command" the normal 3D view. If you have a game, you could perhaps create an alternative input view to control the game (virtual buttons/joystick etc.). It really depends on what you have as to how best to work around it.
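Putting content on the external screen is just a matter of creating a UIWindow on the second UIScreen (a sketch, assuming the VGA adapter is already attached; externalWindow is assumed to be a property you keep a reference to, and externalContentView is a placeholder for whatever you want to show):

// Sketch: show a view on the external (VGA/TV) screen if one is attached.
if ([[UIScreen screens] count] > 1) {
    UIScreen *externalScreen = [[UIScreen screens] objectAtIndex:1];
    self.externalWindow = [[UIWindow alloc] initWithFrame:externalScreen.bounds];
    self.externalWindow.screen = externalScreen;
    [self.externalWindow addSubview:externalContentView]; // your real content view
    self.externalWindow.hidden = NO;
}

You would also want to listen for UIScreenDidConnectNotification / UIScreenDidDisconnectNotification so the window can be created or torn down when the adapter is plugged in or pulled.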
Not having jailbroken myself, I can't say for sure, but I am under the impression that a jailbroken device can essentially enable video mirroring (like they use at the Apple demos...). If true, that is likely your easiest route if all you want is a demo.

Can the exposure time be manually adjusted for the iOS camera?

I want to adjust the exposure of the iPhone/iPod touch camera in fine detail. I would prefer to take a series of photos with decreasing exposure times to obtain a sequence of images (for HDR reconstruction). Is this possible?
If not, what's the next best thing? It seems you can set a point of interest in the image for the autoexposure. Perhaps I could search for a dark/light region of the image and then use this exposurePointOfInterest to adjust the exposure, but this seems like a very indirect solution that is also error-prone. If anybody has tried an alternative, such an answer is also desirable.
iOS gives you control over frame durations via videoMinFrameDuration and videoMaxFrameDuration. Since exposure time varies with the frame rate and frame duration, locking the frame rate by setting the minimum and maximum frame durations to a particular value will affect your exposure times. This is a very indirect way of controlling exposure, but it may help in your case. An example would look like this:
// conn here is the AVCaptureConnection from your capture output.
if (conn.isVideoMinFrameDurationSupported)
    conn.videoMinFrameDuration = CMTimeMake(1, CAPTURE_FRAMES_PER_SECOND);
if (conn.isVideoMaxFrameDurationSupported)
    conn.videoMaxFrameDuration = CMTimeMake(1, CAPTURE_FRAMES_PER_SECOND);
Since you would have to decrease the shutter speed of the camera, this unfortunately does not appear to be possible, and more importantly, it is against the HIG:
Changing the behavior of iPhone external hardware is a violation of
the iPhone Developer Program License Agreement. Applications must
adhere to the iPhone Human Interface Guidelines as outlined in the
iPhone Developer Program License Agreement section 3.3.7
Related article: Apple Removes Camera+ iPhone App From The App Store After Developer Reveals Hack To Enable Hidden Feature.
If it can be done programmatically, instead of with the hardware, you might have a chance, but then it's just an effect applied to an image, not a true long-exposure picture.
There are some simulated slow shutter apps that do get approved like Slow Shutter or Magic Shutter.
Related article: New iPhone Camera App “Magic Shutter” Hits The App Store.
This is supported since iOS 8:
http://developer.xamarin.com/guides/ios/platform_features/intro_to_manual_camera_controls/
Have a look at AVCaptureExposureModeCustom and CaptureDevice.LockExposure...
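In native AVFoundation terms this is -setExposureModeCustomWithDuration:ISO:completionHandler: on AVCaptureDevice (iOS 8+). A minimal sketch, with device as your camera's AVCaptureDevice and placeholder duration/ISO values clamped to what the active format reports:

#import <AVFoundation/AVFoundation.h>

// Sketch: lock a custom exposure duration and ISO (iOS 8 and later).
NSError *error = nil;
if ([device lockForConfiguration:&error]) {
    if ([device isExposureModeSupported:AVCaptureExposureModeCustom]) {
        CMTime duration = CMTimeMake(1, 125);   // 1/125 s, placeholder value
        if (CMTimeCompare(duration, device.activeFormat.minExposureDuration) < 0)
            duration = device.activeFormat.minExposureDuration;
        if (CMTimeCompare(duration, device.activeFormat.maxExposureDuration) > 0)
            duration = device.activeFormat.maxExposureDuration;

        float iso = MAX(device.activeFormat.minISO,
                        MIN(200.0f, device.activeFormat.maxISO)); // placeholder ISO

        [device setExposureModeCustomWithDuration:duration
                                              ISO:iso
                                completionHandler:^(CMTime syncTime) {
            // The new settings take effect on the frame with this timestamp,
            // which helps when bracketing a sequence of exposures for HDR.
        }];
    }
    [device unlockForConfiguration];
}

Repeating this with a series of decreasing durations gives you the exposure bracket the original question asks about.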
I tried to do this for my motion-activated camera app (Pocket Sentry) and I found that it is not possible to do this AND get approved in the App Store.
I have been trying to do this myself. I think it is only possible by using the exposure point of interest property. I am detecting the dark and bright spots and then adjusting the point accordingly.
Please refer to: Detecting bright/dark points on iPhone screen
Does anyone know a better way to do this?
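Setting the point looks roughly like this (a sketch; device is your AVCaptureDevice and darkSpot is a point from your own image analysis, expressed in AV Foundation's 0..1 coordinate space):

// Sketch: bias the auto-exposure towards a point you found in the frame.
NSError *error = nil;
if ([device lockForConfiguration:&error]) {
    if (device.exposurePointOfInterestSupported &&
        [device isExposureModeSupported:AVCaptureExposureModeContinuousAutoExposure]) {
        device.exposurePointOfInterest = darkSpot; // e.g. CGPointMake(0.3, 0.7)
        device.exposureMode = AVCaptureExposureModeContinuousAutoExposure;
    }
    [device unlockForConfiguration];
}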
I am not too sure, but you should try using the AVFoundation classes to build the camera app, following Apple's sample code:
AVCam Sample Code
And then try to leverage the exposureMode property of the class:
exposureMode Class Reference

Small video playback

From what I have gathered from the internet, the MPMoviePlayerController class doesn't support small (non-fullscreen) video playback. So, in an effort to beat a dead horse, I was wondering what kind of methods could be used to get a small video playing in a corner of the screen without interrupting the rest of the screen.
So far we've come across two solutions that may work: using a UIImageView and flipping images through it like a madman, or using a large fullscreen video with all the animations we need already on it and skipping around as needed.
Am I wrong about MPMoviePlayerController not supporting non-fullscreen video? Is there an easier solution than making UIImageView flip-books? Is cutting around a video a performance hazard?
I think you're stuck with flip books. Pretty sure the fullscreen video issue is a limitation of the hardware video decoder.
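If you do go the flip-book route, UIImageView's built-in frame animation keeps the code small (a sketch; the frame image names and timing are placeholders, and memory use grows quickly with frame count and size, so keep the clips short):

// Sketch: play a short "flip book" of pre-rendered frames in a corner view.
UIImageView *cornerPlayer =
    [[UIImageView alloc] initWithFrame:CGRectMake(10, 10, 120, 90)];

NSMutableArray *frames = [NSMutableArray array];
for (NSUInteger i = 0; i < 30; i++) {
    NSString *name = [NSString stringWithFormat:@"frame_%02lu.png", (unsigned long)i];
    UIImage *frame = [UIImage imageNamed:name]; // placeholder asset names
    if (frame) [frames addObject:frame];
}

cornerPlayer.animationImages = frames;
cornerPlayer.animationDuration = frames.count / 15.0; // roughly 15 fps
cornerPlayer.animationRepeatCount = 0;                // loop forever
[self.view addSubview:cornerPlayer];
[cornerPlayer startAnimating];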
After researching for about an hour, I didn't find anything. It appears to be impossible to play non-fullscreen video on the iPhone. I didn't check OpenGL ES, though.
Well... I have been looking for this and haven't found an alternative yet! But there are some applications that already do it; check TVUlite from TVUNetworks.
As I mentioned in another reply, this blog post http://www.nightirion.com/2010/01/scaling-a-movie-on-the-iphone/ mentions a method that will allow you to play non-fullscreen video. However, I'm not sure whether this method will be approved by the App Store verification process.