I am building an ARKit app where we want to be able to take a photo of the scene. I am finding that the image quality of the ARCamera view is not good enough for taking photos on an iPad Pro.
Standard camera image:
ARCamera image:
I have seen an Apple forum post that mentions this could be iPad Pro 10.5 specific and is related to fixed lens position (https://forums.developer.apple.com/message/262950#262950).
Is there a public way to change the setting?
Alternatively, I have tried to use AVCaptureSession to take a normal photo and apply it to sceneView.scene.background.contents, swapping the blurred image for a higher-res image at the moment the photo is taken, but I can't get AVCapturePhotoOutput to work with ARKit.
Update: Congrats to whoever filed feature requests! In iOS 11.3 (aka "ARKit 1.5"), you can control at least some of the capture settings. And you now get 1080p with autofocus enabled by default.
Check ARWorldTrackingConfiguration.supportedVideoFormats for a list of ARConfiguration.VideoFormat objects, each of which defines a resolution and frame rate. The first in the list is the default (and best) option supported on your current device, so if you just want the best resolution/framerate available you don't have to do anything. (And if you want to step down for performance reasons by setting videoFormat, it's probably better to do that based on array order rather than hardcoding sizes.)
Autofocus is on by default in iOS 11.3, so your example picture (with a subject relatively close to the camera) should come out much better. If for some reason you need to turn it off, there's a switch for that.
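For example, a minimal sketch of reading the format list and the autofocus switch (sceneView here is assumed to be your ARSCNView, and stepping down to index 1 is just illustrative):

import ARKit

let configuration = ARWorldTrackingConfiguration()

// The formats are ordered best-first, so [0] is the default/highest quality on this device.
let formats = ARWorldTrackingConfiguration.supportedVideoFormats
if formats.count > 1 {
    // Step down one entry if you want to trade resolution/frame rate for performance.
    configuration.videoFormat = formats[1]
}

// Autofocus is on by default in iOS 11.3+; this is the switch for turning it off.
configuration.isAutoFocusEnabled = false

// sceneView.session.run(configuration)   // sceneView: your ARSCNView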
There's still no API for changing the camera settings for the underlying capture session used by ARKit.
According to engineers back at WWDC, ARKit uses a limited subset of camera capture capabilities to ensure a high frame rate with minimal impact on CPU and GPU usage. There's some processing overhead to producing higher quality live video, but there's also some processing overhead to the computer vision and motion sensor integration systems that make ARKit work — increase the overhead too much, and you start adding latency. And for a technology that's supposed to show users a "live" augmented view of their world, you don't want the "augmented" part to lag camera motion by multiple frames. (Plus, on top of all that, you probably want some CPU/GPU time left over for your app to render spiffy 3D content on top of the camera view.)
The situation is the same between iPhone and iPad devices, but you notice it more on the iPad just because the screen is so much larger — 720p video doesn't look so bad on a 4-5" screen, but it looks awful stretched to fill a 10-13" screen. (Luckily you get 1080p by default in iOS 11.3, which should look better.)
The AVCapture system does provide for taking higher resolution / higher quality still photos during video capture, but ARKit doesn't expose its internal capture session in any way, so you can't use AVCapturePhotoOutput with it. (Capturing high resolution stills during a session probably remains a good feature request.)
I had to look for a while on how to set the config, so maybe it will help somebody:

config.videoFormat = ARWorldTrackingConfiguration.supportedVideoFormats[1]

This lets you pick the format with the highest resolution; you can change the sort so that it picks by highest frame rate instead, etc.:

if let videoFormat = ARWorldTrackingConfiguration.supportedVideoFormats
    .sorted(by: { ($0.imageResolution.width * $0.imageResolution.height) < ($1.imageResolution.width * $1.imageResolution.height) })
    .last {
    configuration.videoFormat = videoFormat
}
Related
I'm trying to do a live camera filter like Instagram and Path. Since I'm not skilled enough to handle OpenGL ES, I use iOS 5's CoreImage instead.
I use this callback method to intercept and filter each frame from the camera:
-(void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
The session preset I use is AVCaptureSessionPresetPhoto, since I need to take high-quality photos in the end.
If I just present the buffer to screen without any CIImage filtering, the average FPS would reach 26 or even more, which is great.
If I start to apply some CIFilters to the image data, the FPS would drop to as low as 10 or even lower. The live video would start to look bad.
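For context, the filtering step inside that callback looks roughly like this (a simplified sketch in Swift rather than Objective-C, with CISepiaTone standing in for whatever filter chain the app actually applies):

import AVFoundation
import CoreImage

// The AVCaptureVideoDataOutputSampleBufferDelegate callback, per frame.
func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer,
                   from connection: AVCaptureConnection) {
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
    let input = CIImage(cvPixelBuffer: pixelBuffer)
    // Each CIFilter applied here adds per-frame work, which is where the FPS drop comes from.
    let filter = CIFilter(name: "CISepiaTone", parameters: [
        kCIInputImageKey: input,
        kCIInputIntensityKey: 0.8
    ])
    let filtered = filter?.outputImage
    // ...render `filtered` into the preview view here.
    _ = filtered
}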
I understand that Instagram and Path would use OpenGL ES directly rather than wrapped frameworks such as CoreImage, so that they could build more efficient code for GPU rendering. At the same time, I also notice that Instagram actually lowers the sample video quality to further reduce the GPU burden. Below is the screenshot I took when my app (left) and Instagram (right) were both capturing live video. Pay attention to the letters Z and S in both pictures. You can see that Instagram's video quality is slightly lower than mine.
So right now I'm considering various ways to reduce the live video frame quality. But I really have no idea which way is better, or how I should implement it.
Try to reduce this (CMSampleBufferRef)sampleBuffer's quality before converting it into a CIImage object.
Try to find some APIs from CIImage or CIFilter or CIContext to lower the video frame quality.
Try to find some APIs from OpenGL ES to lower the video frame quality.
Again, I don't have any clues now. So any suggestions would be greatly appreciated!
AVCaptureSession has a sessionPreset property, which allows the video quality to be set to low, medium, or high.
The code below sets the quality to medium.
[self.mSession setSessionPreset:AVCaptureSessionPresetMedium];
If you're unwilling to lower the video quality (and I think AVCaptureSessionPresetPhoto is pretty low anyway), your best bet is either optimizing your filters, or lowering the frame-rate.
You may think, well, I'm already getting a lower frame rate. However, setting this in advance will optimize the way the frames are dropped. So, if you're getting 10 fps now, try setting the max frame rate to, say, 15, and you might just get that 15. 15 fps is plenty good for a preview.
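A sketch of capping the frame rate (using the AVCaptureDevice frame-duration properties; device is assumed to be your capture device, and 15 fps is just the example value from above):

import AVFoundation

func capFrameRate(of device: AVCaptureDevice, to fps: Int32) {
    do {
        try device.lockForConfiguration()
        // A minimum frame duration of 1/fps seconds caps how fast frames are delivered,
        // letting AVFoundation drop frames gracefully instead of your filter falling behind.
        let duration = CMTime(value: 1, timescale: fps)
        device.activeVideoMinFrameDuration = duration
        device.activeVideoMaxFrameDuration = duration
        device.unlockForConfiguration()
    } catch {
        print("Could not lock device for configuration: \(error)")
    }
}

// capFrameRate(of: device, to: 15)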
(I've worked on Flare, one of the most demanding apps out there: itun.es/iLQ3bR.)
I want to adjust the exposure of the iPhone/iPod touch camera with intimate detail. I would prefer to take a series of photos with decreasing exposure times to obtain a sequence of images (for HDR reconstruction). Is this possible?
If not, what's the next best thing? It seems you can set a point of interest in the image for the autoexposure. Perhaps I could search for a dark/light region of the image and then use this exposurePointOfInterest to adjust the exposure, but this seems like a very indirect solution that is also error-prone. If anybody has tried an alternative, such an answer is also desirable.
iOS gives control over frame durations through the videoMinFrameDuration and videoMaxFrameDuration properties. Since exposure times vary with the frame rate and frame duration, setting the minimum and maximum frame duration to a particular value locks the frame rate, and that in turn affects your exposure times. This is a very indirect way of controlling exposure, but maybe it helps your case.
An example would look like this:
if (conn.isVideoMinFrameDurationSupported)
conn.videoMinFrameDuration = CMTimeMake(1, CAPTURE_FRAMES_PER_SECOND);
if (conn.isVideoMaxFrameDurationSupported)
conn.videoMaxFrameDuration = CMTimeMake(1, CAPTURE_FRAMES_PER_SECOND);
Since you would have to decrease the shutter speed of the camera, this unfortunately does not appear to be possible and, more importantly, it is against the HIG:
Changing the behavior of iPhone external hardware is a violation of the iPhone Developer Program License Agreement. Applications must adhere to the iPhone Human Interface Guidelines as outlined in the iPhone Developer Program License Agreement section 3.3.7.
Related article: Apple Removes Camera+ iPhone App From The App Store After Developer Reveals Hack To Enable Hidden Feature.
If it can be done programmatically, instead of with the hardware, you might have a chance, but then it's just an effect on an image, not a true long-exposure picture.
There are some simulated slow shutter apps that do get approved like Slow Shutter or Magic Shutter.
Related article: New iPhone Camera App “Magic Shutter” Hits The App Store.
This has been supported since iOS 8:
http://developer.xamarin.com/guides/ios/platform_features/intro_to_manual_camera_controls/
Have a look at AVCaptureExposureModeCustom and CaptureDevice.LockExposure...
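In native Swift terms, a minimal sketch of the custom exposure mode (device, the 1/125 s duration, and the minimum ISO are all just example values; clamp to the activeFormat's supported exposure range in real code):

import AVFoundation

func setCustomExposure(on device: AVCaptureDevice) {
    do {
        try device.lockForConfiguration()
        if device.isExposureModeSupported(.custom) {
            let duration = CMTime(value: 1, timescale: 125)   // 1/125 s, for example
            let iso = device.activeFormat.minISO
            device.setExposureModeCustom(duration: duration, iso: iso) { _ in
                // Exposure has been applied; trigger the next still capture of the bracket here.
            }
        }
        device.unlockForConfiguration()
    } catch {
        print("Could not lock device for configuration: \(error)")
    }
}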
I tried to do this for my motion-activated camera app (Pocket Sentry) and I found that it is not possible to do this AND get approved in the App Store.
I have been trying to do this myself. I think it's possible only by using the exposure point of interest property. I am detecting the dark and bright spots and then adjusting the point accordingly.
Please refer to: Detecting bright/dark points on iPhone screen
Does anyone know a better way to do this?
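In the meantime, a minimal sketch of that point-of-interest approach (assuming you have already picked a bright or dark region and converted it into the normalized (0,0)-(1,1) camera coordinate space):

import AVFoundation

func bias(exposureOf device: AVCaptureDevice, toward normalizedPoint: CGPoint) {
    guard device.isExposurePointOfInterestSupported else { return }
    do {
        try device.lockForConfiguration()
        // The point is in a normalized space where (0,0) is the top-left of the
        // unrotated sensor image and (1,1) is the bottom-right.
        device.exposurePointOfInterest = normalizedPoint
        // Setting the mode after the point makes the new point take effect.
        device.exposureMode = .continuousAutoExposure
        device.unlockForConfiguration()
    } catch {
        print("Could not lock device for configuration: \(error)")
    }
}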
I am not too sure, but you could try using the AVFoundation classes to build the camera app, following Apple's sample code:
AVCam Sample Code
And then try to leverage the exposureMode property of the class:
exposureMode Class Reference
According to Apple, the iPhone 4 has a new and better screen resolution:
3.5-inch (diagonal) widescreen Multi-Touch display
960-by-640-pixel resolution at 326 ppi
This little detail affects our apps in a big way. Most of the demo apps on the net have one thing in common: they position views in the belief that the screen has a fixed size of 320 x 480 pixels. So what most (if not all) developers have done is design everything so that a touchable area is, for example, 50 x 50 pixels big. Just enough to tap it. Things have been positioned relative to the upper left to reach a specific position on screen - let's say the center, or somewhere at the bottom.
When we develop high-resolution apps, they probably won't work on older devices. And if they do, they will suffer a lot from images that are four times the size, which have to be scaled down in memory.
According to Supporting High-Resolution Screens In Views, from the Apple docs:
On devices with high-resolution screens, the imageNamed:, imageWithContentsOfFile:, and initWithContentsOfFile: methods automatically look for a version of the requested image with the @2x modifier in its name. If it finds one, it loads that image instead. If you do not provide a high-resolution version of a given image, the image object still loads a standard-resolution image (if one exists) and scales it during drawing.

When it loads an image, a UIImage object automatically sets the size and scale properties to appropriate values based on the suffix of the image file. For standard resolution images, it sets the scale property to 1.0 and sets the size of the image to the image's pixel dimensions. For images with the @2x suffix in the filename, it sets the scale property to 2.0 and halves the width and height values to compensate for the scale factor. These halved values correlate correctly to the point-based dimensions you need to use in the logical coordinate space to render the image.
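To illustrate the effect of the scale property, a small sketch in Swift (the image name "Rocket" is hypothetical):

import UIKit

// With both "Rocket.png" (100 x 100 pixels) and "Rocket@2x.png" (200 x 200 pixels)
// in the bundle, the same call works on both screen types:
if let image = UIImage(named: "Rocket") {
    // On a Retina device: image.scale == 2.0, image.size == 100 x 100 points
    // On an older device:  image.scale == 1.0, image.size == 100 x 100 points
    print(image.scale, image.size)
}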
This is purely speculation, but if the resolution really is 960 x 640 - that's exactly twice the resolution of the current version. It would be trivially simple for the iPhone to check the app's build target, detect a legacy version of the app, and simply scale it by 2. You'd never notice the difference.
Engadget's reporting of the keynote included the following transcript from Steve Jobs:
...It makes it so your apps run automatically on this, but it renders your text and controls in the higher resolution. Your apps look even better, but if you do a little bit of work, then they will look stunning. So we suggest that you do that.
So I infer from that, if you use existing APIs your app will get scaled up. If you take advantage of new iOS 4 APIs, you can get all groovy with the new pixels.
It sounds like the display will be OK, but I'm concerned about the logic in my game. Will touchesBegan positions return points in the new resolution? The screen bounds will be different; these kinds of things could potentially be problems for me.
Scaling to a double resolution for display purposes is straightforward, but will this scaling apply to all APIs that input/output a screen coordinate? If not, things are going to break, aren't they?
Fair enough if it's been handled extensively throughout the framework. I would imagine there are a lot of potential APIs this affects.
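For what it is worth, touch locations are reported in points rather than pixels, so existing touch handling keeps working. A small sketch (GameView is just a hypothetical UIView subclass):

import UIKit

class GameView: UIView {
    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let touch = touches.first else { return }
        // location(in:) is in points, so it stays in the 0...320 / 0...480 range
        // even on a 640 x 960 pixel display.
        let point = touch.location(in: self)
        // contentScaleFactor (or UIScreen.main.scale) tells you the pixel density.
        print(point, contentScaleFactor)
    }
}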
For people who are coming to this thread looking for a solution to a mobile web interface, check out this post on the Webkit blog: http://webkit.org/blog/55/high-dpi-web-sites/
It seems that WebKit solved this problem four years ago.
Yes, it is true.
According to WWDC, it appears that Apple has built in some form of automatic conversion so that the resolution of existing applications will not be completely off. Think of it as up-converting DVDs for HDTVs.
My guess would be that Apple knows which conventions most developers have been using and will apply those for an immediate conversion. Of course, if you program an application to take advantage of the new resolution, it will look much nicer than whatever the result of Apple's auto-conversion is.
All of your labels and system buttons will be at 326 dpi, but your images will still be pixel-doubled until you add the high-res resources. I am currently updating my apps. If you build and run on the iPhone 4 simulator, it is presented at 50%; go to Window > Scale > 100% to see the real difference! Labels are smooth, but my images look shocking!
In my application, as soon as the user opens the camera, the camera should capture an image whenever the current frame differs from the previous one, and the camera should always remain in capturing mode.
This should be done automatically, without any user interaction. Please help me out ASAP, as I couldn't find a solution.
Thanks,
ravi
I don't think the iPhone camera can do what you want.
It sounds like you're doing a type of motion detection by comparing two snapshots taken at different times and seeing if something has changed between the older and the newer image. To do that you need:
I don't think the iPhone can do what you want. The camera is not set up to automatically take photos, and I don't think the hardware can support the level of processing needed to compare two images in enough detail to detect motion.
Hmmmm, in thinking about it, you might be able to detect motion by somehow measuring the frame differentials in the compression of video. All the video codecs save space by only registering the parts of the video that change from frame-to-frame. So, a large change in the saved data would indicate a large change in the environment.
I have no idea how to go about doing that but it might give you a starting point.
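If you do want to experiment with the snapshot-comparison idea on frames you capture yourself, a very naive sketch might look like this (it assumes you already receive CVPixelBuffers from an AVCaptureVideoDataOutput, and it uses a crude average-brightness comparison rather than real motion detection):

import AVFoundation
import CoreImage

final class NaiveMotionDetector {
    private let context = CIContext()
    private var previousAverage: CGFloat?

    // Returns true when the average brightness changes by more than `threshold`.
    func frameLooksDifferent(_ pixelBuffer: CVPixelBuffer, threshold: CGFloat = 0.05) -> Bool {
        let image = CIImage(cvPixelBuffer: pixelBuffer)
        // CIAreaAverage reduces the whole frame to a single averaged pixel.
        guard let filter = CIFilter(name: "CIAreaAverage", parameters: [
            kCIInputImageKey: image,
            kCIInputExtentKey: CIVector(cgRect: image.extent)
        ]), let averaged = filter.outputImage else { return false }

        var pixel = [UInt8](repeating: 0, count: 4)
        context.render(averaged, toBitmap: &pixel, rowBytes: 4,
                       bounds: CGRect(x: 0, y: 0, width: 1, height: 1),
                       format: .RGBA8, colorSpace: nil)
        let average = (CGFloat(pixel[0]) + CGFloat(pixel[1]) + CGFloat(pixel[2])) / (3 * 255)
        defer { previousAverage = average }
        guard let previous = previousAverage else { return false }
        return abs(average - previous) > threshold
    }
}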
You could try using OpenCV for motion detection based on differences between captured frames, but I'm not sure if the iPhone API allows reading multiple frames from the camera.
Look for motempl.c in the OpenCV distribution.
You can do a screenshot to automatically capture the image, using the UIGetScreenImage function.
I am interested in doing some image-hacking apps. To get a better sense of the expected performance, can someone give me some idea of the overhead of touching each pixel at full-screen resolution?
Typical use case: the user pulls a photo out of the Photo Album, selects a visual effect, and - unlike a Photoshop filter - gestural manipulation of the device drives the effect in real time.
I'm just looking for ballpark performance numbers here. Obviously, the more compute-intensive my effect, the more lag I can expect.
Cheers,
Doug
You will need to know OpenGL well to do this. The iPhone OpenGL ES hardware has a distinct advantage over many desktop systems in that there is only one place for memory - so textures don't really need to be 'uploaded to the card'. There are ways to access the memory of a texture pretty well directly.
The 3GS has a much faster OpenGL stack than the 3G; you will need to try it on the 3GS or an equivalent iPod touch.
Also compile and run the GLImageProcessing example code.
One thing that will make a big difference is whether you're going to do this at device resolution or at the resolution of the photo itself. Typically, photos transferred from iTunes are scaled to 640x480 (four times as many pixels as the screen). Pictures from the camera roll will be larger than that - up to 3 Mpix for 3GS photos.
I've only played around with this a little bit, but doing it the obvious way - i.e. a CGImage backed by an array in your code - you could see something in the range of 5-10 FPS. If you want something more responsive than that, you'll have to come up with a more creative solution. Maybe map the image as textures on a grid of points and render with OpenGL?
Look up FaceGoo in the App Store. That's an example of an app that uses a straightforward OpenGL rendering loop to do something similar to what you're talking about.
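For a rough idea of what the "obvious way" above (a CGImage backed by an array you touch pixel by pixel on the CPU) looks like, here is a hypothetical sketch; the inversion is just a placeholder for whatever per-pixel work your effect does:

import CoreGraphics
import UIKit

// Render a UIImage into a raw RGBA buffer, touch every pixel, and rebuild a CGImage.
func invertPixels(of image: UIImage) -> UIImage? {
    guard let cgImage = image.cgImage else { return nil }
    let width = cgImage.width, height = cgImage.height
    let bytesPerRow = width * 4
    var pixels = [UInt8](repeating: 0, count: bytesPerRow * height)
    let colorSpace = CGColorSpaceCreateDeviceRGB()

    let result: CGImage? = pixels.withUnsafeMutableBytes { buffer -> CGImage? in
        guard let ctx = CGContext(data: buffer.baseAddress, width: width, height: height,
                                  bitsPerComponent: 8, bytesPerRow: bytesPerRow,
                                  space: colorSpace,
                                  bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
        else { return nil }
        ctx.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))

        // The per-pixel loop: this is the cost that dominates at full-screen resolution.
        for i in stride(from: 0, to: buffer.count, by: 4) {
            buffer[i]     = 255 - buffer[i]     // R
            buffer[i + 1] = 255 - buffer[i + 1] // G
            buffer[i + 2] = 255 - buffer[i + 2] // B
        }
        return ctx.makeImage()
    }
    return result.map { UIImage(cgImage: $0) }
}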
Not doable, not with the current APIs and a generic image filter. Currently you can only access the screen through OpenGL or higher abstractions and OpenGL is not much suited to framebuffer operations. (Certainly not the OpenGL ES implementation on iPhone.) If you change the image every frame you have to upload new textures, which is too expensive. In my opinion the only solution is to do the effects on the GPU, using OpenGL operations on the texture.
My answer is: just wait a little until they get rid of the OpenGL 1.0 devices and finally bring Core Image over to the iPhone SDK.
With fragment shaders, this is very doable on the newer devices.
I'm beginning to think the only way to pull this off is to write a suite of vertex/fragment shaders and do it all in OpenGL ES 2.0. I'd prefer not to incur the restriction of limiting the app to the iPhone 3GS, but I think that's the only viable way to go here.
I was really hoping there was some CoreGraphics approach that would work but that does not appear to be the case.
Thanks,
Doug