I am interested in doing some image-hacking apps. To get a better sense of expected performance, can someone give me some idea of the overhead of touching each pixel at full-screen resolution?
Typical use case: the user pulls a photo out of the Photo Album, selects a visual effect, and - unlike a Photoshop filter - gestural manipulation of the device drives the effect in real time.
I'm just looking for ballpark performance numbers here. Obviously, the more compute-intensive my effect, the more lag I can expect.
Cheers,
Doug
You will need to know OpenGL well to do this. The iPhone OpenGL ES hardware has a distinct advantage over many desktop systems in that there is only one place for memory - so textures don't really need to be 'uploaded to the card'. There are ways to access the memory of a texture pretty well directly.
The 3GS has a much faster OpenGL stack than the 3G, so you will need to try it on a 3GS or the equivalent iPod touch.
Also compile and run the GLImageProcessing example code.
One thing that will make a big difference is whether you're going to do this at device resolution or at the resolution of the photo itself. Typically, photos transferred from iTunes are scaled to 640x480 (twice the number of pixels on the screen). Pictures from the camera roll will be larger than that - up to 3 megapixels for 3GS photos.
I've only played around with this a little bit, but doing it the obvious way - i.e. a CGImage backed by an array in your code - you could see in the range of 5-10 FPS. If you want something more responsive than that, you'll have to come up with a more creative solution. Maybe map the image as textures on a grid of points, and render with OpenGL?
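To make that concrete, the "obvious way" looks roughly like this (just a sketch - the buffer size and pixel format here are illustrative):

size_t width = 320, height = 480;                       // screen-sized buffer
size_t bytesPerRow = width * 4;                         // 32-bit RGBA
uint8_t *pixels = malloc(bytesPerRow * height);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(pixels, width, height, 8, bytesPerRow,
                                         colorSpace, kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(colorSpace);

// ... run the per-pixel effect over `pixels` here, every frame ...

CGImageRef result = CGBitmapContextCreateImage(ctx);    // hand this to a UIImageView / CALayer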
Look up FaceGoo in the App Store. That's an example of an app that uses a straightforward OpenGL rendering loop to do something similar to what you're talking about.
Not doable, not with the current APIs and a generic image filter. Currently you can only access the screen through OpenGL or higher abstractions and OpenGL is not much suited to framebuffer operations. (Certainly not the OpenGL ES implementation on iPhone.) If you change the image every frame you have to upload new textures, which is too expensive. In my opinion the only solution is to do the effects on the GPU, using OpenGL operations on the texture.
My answer is to just wait a little until they get rid of the OpenGL ES 1.x devices and finally bring Core Image over to the iPhone SDK.
With Fragment shaders this is very doable on the newer devices.
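To give a sense of what that looks like, here is a minimal sketch of a per-pixel effect written as an OpenGL ES 2.0 fragment shader, plus the compile boilerplate (the brightness effect and the uniform names are just illustrative):

#import <OpenGLES/ES2/gl.h>

static const char *kBrightnessFragmentShader =
    "precision mediump float;                                    \n"
    "varying vec2 v_texCoord;                                    \n"
    "uniform sampler2D u_texture;    // the photo                \n"
    "uniform float u_brightness;     // driven by the gesture    \n"
    "void main() {                                               \n"
    "    vec4 color = texture2D(u_texture, v_texCoord);          \n"
    "    gl_FragColor = vec4(color.rgb * u_brightness, color.a); \n"
    "}                                                           \n";

// Compile a shader of the given type; returns 0 on failure.
static GLuint CompileShader(GLenum type, const char *source) {
    GLuint shader = glCreateShader(type);
    glShaderSource(shader, 1, &source, NULL);
    glCompileShader(shader);
    GLint status = 0;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &status);
    if (status == GL_FALSE) {
        glDeleteShader(shader);
        return 0;
    }
    return shader;
}

The GPU runs main() once per pixel, so the per-pixel work stays on the GPU and the CPU only has to update the u_brightness uniform each frame.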
I'm beginning to think the only way to pull this off is to write a suite of vertex/fragment shaders and do it all in OpenGL ES 2.0. I'd prefer not to incur the restriction of limiting the app to the iPhone 3GS and later, but I think that's the only viable way to go here.
I was really hoping there was some Core Graphics approach that would work, but that does not appear to be the case.
Thanks,
Doug
I am building an ARKit app where we want to be able to take a photo of the scene. I am finding that the image quality of the ARCamera view is not good enough to take photos with on an iPad Pro.
Standard camera image:
ARCamera image:
I have seen an Apple forum post that mentions this could be iPad Pro 10.5 specific and is related to fixed lens position (https://forums.developer.apple.com/message/262950#262950).
Is there a public way to change the setting?
Alternatively, I have tried to use AVCaptureSession to take a normal photo and apply it to sceneView.scene.background.contents, swapping out the blurred image for a higher-resolution one at the point the photo is taken, but I can't get AVCapturePhotoOutput to work with ARKit.
Update: Congrats to whoever filed feature requests! In iOS 11.3 (aka "ARKit 1.5"), you can control at least some of the capture settings. And you now get 1080p with autofocus enabled by default.
Check ARWorldTrackingConfiguration.supportedVideoFormats for a list of ARConfiguration.VideoFormat objects, each of which defines a resolution and frame rate. The first in the list is the default (and best) option supported on your current device, so if you just want the best resolution/framerate available you don't have to do anything. (And if you want to step down for performance reasons by setting videoFormat, it's probably better to do that based on array order rather than hardcoding sizes.)
Autofocus is on by default in iOS 11.3, so your example picture (with a subject relatively close to the camera) should come out much better. If for some reason you need to turn it off, there's a switch for that.
There's still no API for changing the camera settings for the underlying capture session used by ARKit.
According to engineers back at WWDC, ARKit uses a limited subset of camera capture capabilities to ensure a high frame rate with minimal impact on CPU and GPU usage. There's some processing overhead to producing higher quality live video, but there's also some processing overhead to the computer vision and motion sensor integration systems that make ARKit work — increase the overhead too much, and you start adding latency. And for a technology that's supposed to show users a "live" augmented view of their world, you don't want the "augmented" part to lag camera motion by multiple frames. (Plus, on top of all that, you probably want some CPU/GPU time left over for your app to render spiffy 3D content on top of the camera view.)
The situation is the same between iPhone and iPad devices, but you notice it more on the iPad just because the screen is so much larger — 720p video doesn't look so bad on a 4-5" screen, but it looks awful stretched to fill a 10-13" screen. (Luckily you get 1080p by default in iOS 11.3, which should look better.)
The AVCapture system does provide for taking higher resolution / higher quality still photos during video capture, but ARKit doesn't expose its internal capture session in any way, so you can't use AVCapturePhotoOutput with it. (Capturing high resolution stills during a session probably remains a good feature request.)
I had to look for a while to figure out how to set the config, so maybe this will help somebody:
config.videoFormat = ARWorldTrackingConfiguration.supportedVideoFormats[1]
This lets you pick the format with the highest resolution; you can change it so that it picks by highest frame rate instead, etc.:
if let videoFormat = ARWorldTrackingConfiguration.supportedVideoFormats
    .sorted(by: { ($0.imageResolution.width * $0.imageResolution.height) < ($1.imageResolution.width * $1.imageResolution.height) })
    .last {
    configuration.videoFormat = videoFormat
}
I'm trying to do a live camera filter like Instagram and Path. Since I'm not skilled enough to handle OpenGL ES, I use iOS 5's Core Image instead.
I use this callback method to intercept and filter each frame from the camera:
-(void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
The session preset I use is AVCaptureSessionPresetPhoto, since I need to take high-quality photos in the end.
If I just present the buffer to screen without any CIImage filtering, the average FPS would reach 26 or even more, which is great.
If I start to apply some CIFilters to the image data, the FPS would drop to as low as 10 or even lower. The live video would start to look bad.
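For reference, the per-frame work inside that callback is essentially the following (simplified - the sepia filter and the ciContext/previewLayer names stand in for my real filter chain and rendering path):

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    // Wrap the camera frame in a CIImage without copying the pixels.
    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CIImage *inputImage = [CIImage imageWithCVPixelBuffer:pixelBuffer];

    // Apply the filter (placeholder for the real filter chain).
    CIFilter *filter = [CIFilter filterWithName:@"CISepiaTone"];
    [filter setValue:inputImage forKey:kCIInputImageKey];
    [filter setValue:@0.8f forKey:kCIInputIntensityKey];
    CIImage *outputImage = filter.outputImage;

    // Render the result and push it to the preview layer on the main thread.
    CGImageRef cgImage = [self.ciContext createCGImage:outputImage
                                              fromRect:outputImage.extent];
    dispatch_async(dispatch_get_main_queue(), ^{
        self.previewLayer.contents = (__bridge id)cgImage;
        CGImageRelease(cgImage);
    });
}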
I understand that Instagram and Path would use OpenGL ES directly rather than wrapped frameworks such as Core Image, so that they can build more efficient code for GPU rendering. At the same time, I also notice that Instagram actually lowers the sample video quality to further reduce the GPU burden. Below is the screenshot I took when my app (left) and Instagram (right) were both capturing live video. Pay attention to the letters Z and S in both pictures. You can see that Instagram's video quality is slightly lower than mine.
So right now I'm considering various ways to reduce the live video frame quality, but I really have no idea which approach is better or how I should implement it:
Try to reduce the quality of the (CMSampleBufferRef)sampleBuffer before converting it into a CIImage object.
Try to find some APIs from CIImage or CIFilter or CIContext to lower the video frame quality.
Try to find some APIs from OpenGL ES to lower the video frame quality.
Again, I don't have any clues now. So any suggestions would be greatly appreciated!
AVCaptureSession has a sessionPreset property which allows the video quality to be set to low, medium, or high.
The code below sets the quality to medium.
[self.mSession setSessionPreset:AVCaptureSessionPresetMedium];
If you're unwilling to lower the video quality (and I think AVCaptureSessionPresetPhoto is pretty low anyway), your best bet is either optimizing your filters or lowering the frame rate.
You may think, well, I'm already getting a lower frame rate. However, setting it in advance will optimize the way the frames are dropped. So, if you're getting 10 fps now, try setting the max frame rate to, say, 15, and you might just get that 15. 15 fps is plenty good for a preview.
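For example, capping the capture frame rate might look something like this (a sketch using the AVCaptureDevice API from iOS 7 onward; on earlier systems the equivalent property lives on the AVCaptureConnection, and videoInput is a placeholder for your own capture input):

AVCaptureDevice *device = self.videoInput.device;
NSError *error = nil;
if ([device lockForConfiguration:&error]) {
    // Pinning both durations to 1/15 s locks capture at roughly 15 fps.
    device.activeVideoMinFrameDuration = CMTimeMake(1, 15);
    device.activeVideoMaxFrameDuration = CMTimeMake(1, 15);
    [device unlockForConfiguration];
}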
(I've worked on Flare, one of the most demanding apps out there: itun.es/iLQ3bR.)
I have a vector graphic (.svg) from which to create an image for iPhone. I know I can't use an .svg directly on the iPhone because Cocoa Touch doesn't render it (maybe with other libraries it could).
So I have to convert the image to a format I can use on the iPhone. What is the best format for the best quality on the iPhone? I will create images for Retina (2x) and for normal resolution.
Is .png the best choice? Any ideas?
PNG is definitely the best choice as it's the de-facto standard on iOS. The SDK processes PNGs at build time and they can be loaded into memory directly, so they're super fast. You need to have normal PNGs and @2x versions to support the Retina display.
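For example (the file names are just illustrative), with icon.png and icon@2x.png both in the bundle:

// UIKit automatically loads the @2x variant on Retina displays.
UIImage *icon = [UIImage imageNamed:@"icon"];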
Rendering vector-based graphics usually costs a lot of resources (just as writing code to draw with the Core Graphics API does). On a mobile device, using raster graphics balances performance and quality. I think there are fewer cases in which we want to scale the UI elements to arbitrary levels.
Creating two sets of design elements does cost you more time, but it's acceptable. With batch processing you can save some time. Be sure to check each graphic on real devices and manually refine it so it looks its best.
I'm working on an iPhone game that involves only two dimensional, translation-based animation of just one object. This object is subclassed from UIView and drawn with Quartz-2D. The translation is currently put into effect by an NSTimer that ticks each frame and tells the UIView to change its location.
However, there is some fairly complex math that goes behind determining where the UIView should move during the next frame. Testing the game on the iOS simulator works fine, but when testing on an iPhone it definitely seems to be skipping frames.
My question is this: is my method of translating the view frame by frame simply a bad method? I know OpenGL is more typically used for games, but it seems a shame to set up OpenGL for such a simple animation. Nonetheless, is it worth the hassle?
It's hard to say without knowing what kind of complex math is going on to calculate the translations. Using OpenGL for this only makes sense if the GPU is really the bottleneck. I would suspect that this is not the case, but you have to test which parts are causing the skipped frames.
Generally, UIView and CALayer are implemented on top of OpenGL, so animating the translation of a UIView already makes use of the GPU.
As an aside, using CADisplayLink instead of NSTimer would probably be better for a game loop.
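A minimal sketch of the CADisplayLink approach (updatePositionForTime: stands in for your own per-frame math):

- (void)startAnimating {
    // Fires in step with the display refresh instead of on a fixed NSTimer interval.
    CADisplayLink *link = [CADisplayLink displayLinkWithTarget:self
                                                      selector:@selector(step:)];
    [link addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSRunLoopCommonModes];
}

- (void)step:(CADisplayLink *)link {
    // link.timestamp is the time of the last displayed frame.
    [self updatePositionForTime:link.timestamp];
}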
The problem with the iPhone simulator is that it has access to the same resources as your Mac: your Mac's RAM, video card, etc. What I would suggest is opening Instruments.app, which comes with the iPhone SDK, and using the Core Animation template to have a look at how your resources are being managed. You could also look at Allocations to see if something is hogging RAM; the CPU instrument could also help.
tl;dr The iPhone simulator uses your Mac's RAM and graphics card. Try looking at the sequence in Instruments to see if there's some lag.
I'm working on an iPhone app that I'm going to be demo'ing to a live audience soon.
I'd really like to demo the app live over VGA to a projector, rather than show screenshots.
I bought a VGA adapter for iPhone, and have adapted Rob Terrell's TVOutManager to suit my needs. Unfortunately, the frame rate after testing on my television at home just isn't that good - even on an iPhone 4 (perhaps 4-5 frames per second, it varies).
I believe the reason for this slowness is that the main routine I'm using to capture the device's screen (which is then being displayed on an external display) is UIGetScreenImage(). This routine, which is no longer allowed to be part of shipping apps, is actually quite slow. Here's the code I'm using to capture the screen (FYI mirrorView is a UIImageView):
CGImageRef cgScreen = UIGetScreenImage();
self.mirrorView.image = [UIImage imageWithCGImage:cgScreen];
CGImageRelease(cgScreen);
Is there a faster method I can use to capture the iPhone's screen and achieve a better frame rate (shooting for 20+ fps)? It doesn't need to pass Apple's app review - this demo code won't be in the shipping app. If anyone knows of any faster private APIs, I'd really appreciate the help!
Also, the above code is being executed using a repeating NSTimer which fires every 1.0/desiredFrameRate seconds (currently every 0.1 seconds). I'm wondering whether instead wrapping those calls in a block and using GCD or an NSOperationQueue might be more efficient than having the NSTimer invoke my updateTVOut Obj-C method that currently contains those calls. I'd appreciate some input on that too - some searching seems to indicate that Obj-C message sending is somewhat slow compared to other operations.
Finally, as you can see above, the CGImageRef that UIGetScreenImage() returns is being turned into a UIImage and then that UIImage is being passed to a UIImageView, which is probably resizing the image on the fly. I'm wondering if the resizing might be slowing things down even more. Ideas of how to do this faster?
Have you looked at Apple's recommended alternatives to UIGetScreenImage? From the "Notice regarding UIGetScreenImage()" thread:
Applications using UIGetScreenImage() to capture images from the camera should instead use AVCaptureSession and related classes in the AV Foundation Framework. For an example, see Technical Q&A QA1702, "How to capture video frames from the camera as images using AV Foundation". Note that use of AVCaptureSession is supported in iOS4 and above only.
Applications using UIGetScreenImage() to capture the contents of interface views and layers should instead use the -renderInContext: method of CALayer in the QuartzCore framework. For an example, see Technical Q&A QA1703, "Screen capture in UIKit applications".
Applications using UIGetScreenImage() to capture the contents of OpenGL ES based views and layers should instead use the glReadPixels() function to obtain pixel data. For an example, see Technical Q&A QA1704, "OpenGL ES View Snapshot".
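For the second case (capturing the contents of interface views), a minimal sketch of the -renderInContext: approach might look like this (viewToMirror is whatever you are currently displaying):

UIGraphicsBeginImageContextWithOptions(viewToMirror.bounds.size, NO, 0.0);
[viewToMirror.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
self.mirrorView.image = snapshot;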
New solution: get an iPad 2 and mirror the output! :)
I don't know how fast this is, but it's worth a try ;)
CGImageRef screenshot = [[UIApplication sharedApplication] _createDefaultImageSnapshot];
[myVGAView.layer setContents:(id)screenshot];
where _createDefaultImageSnapshot is a private API. (Since it's for a demo... it's OK, I suppose.)
and myVGAView is a normal UIView.
If you get CGImageRefs, then just pass them to the contents of a layer; it's lighter and should be a little bit faster (but just a little bit ;) )
I haven't got the solution you want (simulating video mirroring), but you can move your views to the external display. This is what I do, and there is no appreciable impact on the frame rate. However, obviously, since the view is no longer on the device's screen you can no longer directly interact with it or see it. If you have something like a game controlled with the accelerometer this shouldn't be a problem; however, something touch-based will require some work. What I do is have an alternative view on the device when the primary view is external. For me this is a 2D control view to "command" the normal 3D view. If you have a game you could perhaps create an alternative input view to control the game with (virtual buttons/joystick etc.). It really depends on what you have as to how best to work around it.
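A sketch of what moving a view to the external screen looks like (externalWindow is a retained property and gameView stands in for whatever you want projected):

if ([[UIScreen screens] count] > 1) {
    UIScreen *external = [[UIScreen screens] objectAtIndex:1];
    external.currentMode = [external.availableModes lastObject]; // pick a resolution
    self.externalWindow = [[UIWindow alloc] initWithFrame:external.bounds];
    self.externalWindow.screen = external;
    [self.externalWindow addSubview:self.gameView];
    self.externalWindow.hidden = NO;
}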
Not having jailbroken a device myself I can't say for sure, but I am under the impression that a jailbroken device can essentially enable video mirroring (like they use at the Apple demos...). If true, that is likely your easiest route if all you want is a demo.