I need a region on the camera overlay, like most QR code scanner apps have. When a square box comes within that region, the camera should focus on it and take a picture of it. Any idea how to implement this? I was using the UIImagePickerController class, but after some googling I found that I need to use the AVFoundation framework, which unfortunately I am not at all familiar with.
Any code or tutorial would be helpful. Please let me know how I can implement this.
One more thing: if I need to take a picture, can I crop the picture to just the region's size?
Yes, you are correct. You will need to use AV Foundation to implement this. Have a look at the 'Using the Camera with AV Foundation' video from the WWDC 2010 session videos to get an overview of the framework.
AVFoundation has no dependencies on UIKit, so you will get some nice performance increases over using UIImagePickerController. It will also give you full access to the camera.
When using AV Foundation you are in control of the device capture settings, i.e. flash, as well as focus mode and exposure, including their points of interest. Have a look at the programming guide to see how to use these, otherwise the device's behaviour may differ from what you expect.
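As a rough illustration of those capture settings, here is a minimal sketch, assuming you already have the AVCaptureDevice backing your session's input; the lock/unlock dance and the point-of-interest coordinate space are AVFoundation's, but the method itself is just an example, not a complete implementation:

#import <AVFoundation/AVFoundation.h>

// Sketch: lock the device for configuration, then point focus and exposure
// at a region of interest. `device` is assumed to be the camera device
// feeding your AVCaptureSession.
- (void)focusAtPoint:(CGPoint)point onDevice:(AVCaptureDevice *)device
{
    NSError *error = nil;
    if ([device lockForConfiguration:&error]) {
        // The point is in a (0,0)-(1,1) coordinate space; (0.5, 0.5) is center.
        if ([device isFocusPointOfInterestSupported] &&
            [device isFocusModeSupported:AVCaptureFocusModeAutoFocus]) {
            device.focusPointOfInterest = point;
            device.focusMode = AVCaptureFocusModeAutoFocus;
        }
        if ([device isExposurePointOfInterestSupported] &&
            [device isExposureModeSupported:AVCaptureExposureModeContinuousAutoExposure]) {
            device.exposurePointOfInterest = point;
            device.exposureMode = AVCaptureExposureModeContinuousAutoExposure;
        }
        [device unlockForConfiguration];
    } else {
        NSLog(@"Could not lock device for configuration: %@", error);
    }
}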
You can also download an example of an application that uses AV Foundation to implement the camera here.
Once you're up and running with that, have a look at this tutorial to get started with the overlay on the camera.
One more thing: if I need to take a picture, can I crop the picture to just the region's size?
Yes, you will be able to implement this. You can also configure the AVFoundation session itself to output the lowest practical resolution.
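As a sketch of both points, assuming you already have a configured AVCaptureSession and a UIImage of the captured photo (the function names here are placeholders, not AVFoundation API):

#import <AVFoundation/AVFoundation.h>
#import <UIKit/UIKit.h>

// Keep the session's output small; the preset constants are AVFoundation's.
void ConfigureSessionForSmallOutput(AVCaptureSession *session)
{
    if ([session canSetSessionPreset:AVCaptureSessionPresetMedium]) {
        session.sessionPreset = AVCaptureSessionPresetMedium;
    }
}

// Crop the captured photo down to the overlay region. `regionRect` must be
// in the image's pixel coordinate space, not screen points; converting
// between the two is left out of this sketch.
UIImage *ImageCroppedToRegion(UIImage *capturedImage, CGRect regionRect)
{
    CGImageRef croppedRef = CGImageCreateWithImageInRect(capturedImage.CGImage, regionRect);
    UIImage *cropped = [UIImage imageWithCGImage:croppedRef
                                           scale:capturedImage.scale
                                     orientation:capturedImage.imageOrientation];
    CGImageRelease(croppedRef);
    return cropped;
}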
I'm wondering if there's a way to recreate the "Object" experience when viewing a .usdz file through Apple's AR Quick Look. I want an experience that showcases a 3D object without "augmenting reality".
Some options that I'm thinking of that might be able to recreate this feature:
1) Using ARKit, disabling the camera and setting my own background with a custom image. I would then set the usdz/object in the center of the device's screen while keeping all the interaction functionality for the 3D object.
2) Web AR - recreate this 3D experience elsewhere and showcase this on a webview.
Any guidance or discussion about this is much appreciated - thank you!
You can use Google's model-viewer if you are going with the web solution. Another easy and effective solution would be echoAR (full disclosure, this is where I work). You can simply upload your models there and then get a link to their model view. You can upload models in different formats (obj, fbx, glTF, glb, USDZ) and it'll automatically convert them to the format you need to view on any device.
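If you would rather stay native, note that SceneKit can load .usdz files directly on iOS 12+, which gets you an object-only experience without ARKit at all. This is a sketch of that alternative, not of the approaches named above, and `modelURL` is a placeholder for your model's local file URL:

#import <SceneKit/SceneKit.h>

// Sketch: display a .usdz in a plain SCNView with no camera feed.
SCNView *sceneView = [[SCNView alloc] initWithFrame:self.view.bounds];
sceneView.scene = [SCNScene sceneWithURL:modelURL options:nil error:NULL];
sceneView.backgroundColor = [UIColor whiteColor];   // your custom background
sceneView.allowsCameraControl = YES;                // pinch/drag/rotate interaction
sceneView.autoenablesDefaultLighting = YES;
[self.view addSubview:sceneView];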
I have hired a programmer to create an iPhone app for me. The purpose of the app is to take a photo and upload it to a server. We want to make a special-purpose screen to review the photo before uploading it. This specially developed screen will crucially have zooming functionality.
He claims that after taking a photo, it is impossible to avoid the "Use"/"Retake" screen showing up, so now we have two screens to review the photo: first the standard one from Apple, then our own with zoom. Is he right about that? It just sounds so unreasonable that Apple would impose such a restriction.
Edit: I mean taking a photo using the camera.
As per Apple's documentation:
To perform fully-customized image or movie capture, instead use the AV Foundation framework as described in "Media Capture and Access to Camera" in AV Foundation Programming Guide. To create a fully-customized image picker for browsing the photo library, use classes from the Assets Library framework. For example, you could create a custom image picker that displays larger thumbnail images, that makes use of EXIF metadata including timestamp and location information, or that integrates with other frameworks such as Map Kit. For more information, see Assets Library Framework Reference. Media browsing using the Assets Library framework is available starting in iOS 4.0.
In short: yes, it is possible. Check out this sample.
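For reference, the fully-customized AVFoundation route the documentation describes boils down to something like the following sketch, using the era-appropriate AVCaptureStillImageOutput; error handling and threading are trimmed, so treat it as an outline rather than production code:

#import <AVFoundation/AVFoundation.h>
#import <UIKit/UIKit.h>

// Sketch: a capture pipeline with no built-in UI at all, so no
// "Use"/"Retake" screen ever appears.
AVCaptureSession *session = [[AVCaptureSession alloc] init];
session.sessionPreset = AVCaptureSessionPresetPhoto;

AVCaptureDevice *camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:camera error:NULL];
[session addInput:input];

AVCaptureStillImageOutput *output = [[AVCaptureStillImageOutput alloc] init];
[session addOutput:output];
[session startRunning];

// Later, from your own shutter button:
AVCaptureConnection *connection = [output connectionWithMediaType:AVMediaTypeVideo];
[output captureStillImageAsynchronouslyFromConnection:connection
    completionHandler:^(CMSampleBufferRef buffer, NSError *error) {
        NSData *jpeg = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:buffer];
        UIImage *photo = [UIImage imageWithData:jpeg];
        // Show `photo` in your own zoomable review screen, then upload it.
    }];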
[Update]
Use the allowsEditing property on your UIImagePickerController:
imagePickerController.allowsEditing = NO;
The previous answer was a bit of a hack that took advantage of a code path which didn't show the buttons, but it wasn't great.
[Previous answer]
You can actually avoid it without going through the hassle of setting up your own image capture from AV Foundation.
Including the following will remove the need to show the "review" screen. All you have to do is put in a few of your own buttons and wire them up to the appropriate functionality.
[self.imagePickerController setShowsCameraControls:NO];
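For context, here is a sketch of how that hidden-controls approach fits together; `overlayView` stands for your own view with whatever buttons you need, and the usual delegate wiring is omitted:

// Hide the stock controls and drive the shutter yourself, so the stock
// review screen never appears.
UIImagePickerController *picker = [[UIImagePickerController alloc] init];
picker.sourceType = UIImagePickerControllerSourceTypeCamera;
picker.showsCameraControls = NO;
picker.cameraOverlayView = overlayView;
[self presentViewController:picker animated:YES completion:nil];

// From your own shutter button's action:
[picker takePicture];
// The photo arrives in imagePickerController:didFinishPickingMediaWithInfo:.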
It is a little bit too late, I know, but for future reference:
This is far simpler than the answers already provided; what you are looking for is the allowsEditing option.
imagePickerController.allowsEditing = NO;
That should be enough to avoid showing the "Retake"/"Use" screen after the user takes a picture.
My app uses AVCapture to capture images; this was my supervisor's idea. But I have researched on the internet and can't find any information about the difference between AVCapture and the default camera of the iPhone or iPod (tap to focus, camera quality, ...). Please tell me the advantages of the AVFoundation framework.
With AVCaptureSession you can give your recorder a lot more functionality. You can customize nearly every aspect of the recording session, and you can even get the raw data straight from the camera. The code can get quite complex, however, and nothing is taken care of for you.
With the iOS default image capture controller you will be stuck with a few presets, and you will only have a little bit of camera functionality. But it is really simple to implement.
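To give a sense of what "raw data straight from the camera" means in code, here is a minimal sketch of attaching a video data output to a session that already has a camera input; the method and queue names are placeholders:

#import <AVFoundation/AVFoundation.h>

// Sketch: receive every raw frame the camera produces, something
// UIImagePickerController never exposes. Assumes `self` adopts
// AVCaptureVideoDataOutputSampleBufferDelegate.
- (void)addRawFrameOutputToSession:(AVCaptureSession *)session
{
    AVCaptureVideoDataOutput *videoOutput = [[AVCaptureVideoDataOutput alloc] init];
    videoOutput.videoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey :
                                   @(kCVPixelFormatType_32BGRA) };
    [videoOutput setSampleBufferDelegate:self
                                   queue:dispatch_queue_create("camera.frames", NULL)];
    [session addOutput:videoOutput];
}

// Delegate callback, invoked once per captured frame.
- (void)captureOutput:(AVCaptureOutput *)output
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    // Inspect or modify the pixels here before display or encoding.
}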
Updated with a link to Apple sample code:
If you want to see how to use AVFoundation for your camera recording, you will probably like this app from Apple.
Like I said, you will have to do everything manually, so be prepared for a fair amount of work.
AVCam demo app by Apple
Does anyone know how to do this?
Is anyone able to provide an example? I believe this is out of NDA now, as it was available in version 4.0?
Take a look at the AVFoundation framework, especially AVCaptureSession, AVCaptureDevice, AVCaptureDeviceInput, etc. You can find some listings in the iPhone Dev Center forums: search for "avcapture".
AVFoundation is the framework you want to use to record, modify raw frames, show them, and of course add an overlay.
If you only want to do the overlay, then UIImagePickerController should be enough.
If I understand what you are trying to do correctly, you have set up video capture something like Apple suggests in this Q&A: http://developer.apple.com/iphone/library/qa/qa2010/qa1702.html
From there it shouldn't be much of a problem to use the method described in this answer: blend two uiimages based on alpha/transparency of top image. That lets you blend the preview with your overlay, provided you draw the overlay into a UIImage first. Feed the resulting images to a buffer and save them accordingly.
Though, this method most certainly won't give you a lot of frames per second.
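In code, the blend step is roughly the following sketch, assuming you have already converted each frame into a UIImage per QA1702; the function name is a placeholder:

#import <UIKit/UIKit.h>

// Draw the camera frame, then the overlay on top, honoring the overlay's
// alpha. `frame` is the UIImage built from the sample buffer; `overlay`
// is your overlay rendered into a UIImage of the same size.
UIImage *BlendFrameWithOverlay(UIImage *frame, UIImage *overlay)
{
    CGRect bounds = CGRectMake(0, 0, frame.size.width, frame.size.height);
    UIGraphicsBeginImageContextWithOptions(frame.size, YES, frame.scale);
    [frame drawInRect:bounds];
    [overlay drawInRect:bounds blendMode:kCGBlendModeNormal alpha:1.0];
    UIImage *blended = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return blended;
}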
I want to put together an application which plays fullscreen video with an interface overlaying it that basically chooses the video played underneath it (think 'Gym Babes' but nowhere near as risqué!). I don't wish to use private headers, so MPMoviePlayerController is out of the question.
I've been digging through Stack Overflow for a while and have come to the conclusion that I would need to use some sort of custom codec/video library, which I assume would be written in C.
My question is basically: has anyone had success doing this? And can anyone share any code, tutorials, etc.?
You probably want to take a look at AVPlayerLayer: http://developer.apple.com/library/ios/documentation/AVFoundation/Reference/AVPlayerLayer_Class/Reference/Reference.html It gives you basic playback abilities with no (read: fully customizable) interface…
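A minimal sketch of that approach, where `videoURL` and `overlayView` are placeholders for your content and chooser interface:

#import <AVFoundation/AVFoundation.h>

// Fullscreen playback through a layer, with your own UI in a normal
// view on top of it.
AVPlayer *player = [AVPlayer playerWithURL:videoURL];
AVPlayerLayer *playerLayer = [AVPlayerLayer playerLayerWithPlayer:player];
playerLayer.frame = self.view.bounds;
playerLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
[self.view.layer addSublayer:playerLayer];

[self.view addSubview:overlayView];  // chooser interface above the video
[player play];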