How to deal with the layouts of presentable images?

A presentable image starts out in VK_IMAGE_LAYOUT_UNDEFINED but will be in VK_IMAGE_LAYOUT_PRESENT_SRC_KHR after it has been presented once.
A lot of examples transition all VkImages to VK_IMAGE_LAYOUT_PRESENT_SRC_KHR immediately after creating the VkSwapchain, which allows them to use VK_IMAGE_LAYOUT_PRESENT_SRC_KHR for oldLayout later on. But doing the transition right after creating the swapchain is not allowed:
Use of a presentable image must occur only after the image is returned by vkAcquireNextImageKHR, and before it is presented by vkQueuePresentKHR. This includes transitioning the image layout and rendering commands.
What are my options to deal with the swapchain image layouts correctly?

There are three options, ordered from best to worst (IMO):
1. Simply set the initialLayout of the attachment in the render pass to VK_IMAGE_LAYOUT_UNDEFINED, or transition from VK_IMAGE_LAYOUT_UNDEFINED every time (see the sketch below). This is allowed and implies that you don't care about the data still in the image. Most often you will be clearing or fully overwriting the image anyway.
Valid Usage [of VkImageMemoryBarrier]
[...]
oldLayout must be VK_IMAGE_LAYOUT_UNDEFINED, VK_IMAGE_LAYOUT_PREINITIALIZED or the current layout of the image region affected by the barrier
2. Keep track of which images have already been through the pipeline and select the oldLayout accordingly when recording the command buffer.
3. Do the transitions after creating the swapchain, but use vkAcquireNextImageKHR and vkQueuePresentKHR to ensure the application owns each image while transitioning it. There is no guarantee in which order you get the images, so it is possible that one image never gets returned.
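To make the first option concrete, here is a minimal C sketch. It is only an illustration: swapchainFormat, swapchainImages and imageIndex stand in for whatever your own swapchain setup provides, and the barrier shows the "transition from VK_IMAGE_LAYOUT_UNDEFINED every time" variant of the same idea.

    /* Render pass attachment for option 1: the incoming contents are declared
     * undefined, so no earlier transition of the presentable image is needed. */
    VkAttachmentDescription colorAttachment = {
        .format         = swapchainFormat,
        .samples        = VK_SAMPLE_COUNT_1_BIT,
        .loadOp         = VK_ATTACHMENT_LOAD_OP_CLEAR,      /* contents get overwritten anyway */
        .storeOp        = VK_ATTACHMENT_STORE_OP_STORE,
        .stencilLoadOp  = VK_ATTACHMENT_LOAD_OP_DONT_CARE,
        .stencilStoreOp = VK_ATTACHMENT_STORE_OP_DONT_CARE,
        .initialLayout  = VK_IMAGE_LAYOUT_UNDEFINED,         /* "don't care" about old data */
        .finalLayout    = VK_IMAGE_LAYOUT_PRESENT_SRC_KHR,   /* ready for vkQueuePresentKHR */
    };

    /* Equivalent explicit barrier, recorded after vkAcquireNextImageKHR: oldLayout
     * UNDEFINED is always valid per the quoted rule and discards previous contents. */
    VkImageMemoryBarrier toColorAttachment = {
        .sType               = VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER,
        .srcAccessMask       = 0,
        .dstAccessMask       = VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT,
        .oldLayout           = VK_IMAGE_LAYOUT_UNDEFINED,
        .newLayout           = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL,
        .srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
        .dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
        .image               = swapchainImages[imageIndex],
        .subresourceRange    = { VK_IMAGE_ASPECT_COLOR_BIT, 0, 1, 0, 1 },
    };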

I've been trying a fourth option, but some input on its validity would be useful. When the swapchain is created, its images are in VK_IMAGE_LAYOUT_UNDEFINED, which to me suggests that they are all available to the application: they need to be in VK_IMAGE_LAYOUT_PRESENT_SRC_KHR for presentation, so none of them should be displayed or queued yet. I didn't find anything in the spec that guarantees this, however.
The spec says that we can acquire multiple images from the swapchain if we want:
If a swapchain has enough presentable images, applications can acquire multiple images without an intervening vkQueuePresentKHR. Applications can present images in a different order than the order in which they were acquired.
Based on the conclusion above, I simply called vkAcquireNextImageKHR to get every image of the swapchain and changed the layout of all of them at once. After that I presented all of them to get them into the system, so to speak.
It seems to work in the sense that all images are handed to me by the swapchain, but then again, I found no guarantee that all of them can actually be acquired immediately after creating the swapchain.

Related

What does suppressesIncrementalRendering do?

I'm using the newest version of Xcode and Swift.
I was googling around to make my WKWebView even faster and found the following:
webConfiguration.suppressesIncrementalRendering = true
Documentation says the following:
A Boolean value indicating whether the web view suppresses content rendering until it is fully loaded into memory.
But what does this mean? Does it mean the HTML doesn't get rendered and shown until all resources, like images and JavaScript files, are completely loaded by the WKWebView?
As stated in the documentation, it's a flag that tells the web view's engine whether to wait until everything is ready: either it scans the document (HTML plus related resources) and redraws periodically as content arrives, or it simply waits for the full content to be loaded before showing anything.
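For completeness, here is a minimal Objective-C sketch of setting the flag (the Swift one-liner from the question is equivalent). The assumption worth noting is that this lives in a view controller and that the configuration is set up before the web view is created, since WKWebView copies its configuration at initialization time.

    #import <WebKit/WebKit.h>

    // Minimal sketch: the flag must be set on the configuration *before* the
    // web view exists; changing the configuration afterwards has no effect.
    - (WKWebView *)makeWebView
    {
        WKWebViewConfiguration *config = [[WKWebViewConfiguration alloc] init];
        config.suppressesIncrementalRendering = YES;   // render nothing until fully loaded

        WKWebView *webView = [[WKWebView alloc] initWithFrame:self.view.bounds
                                                 configuration:config];
        [webView loadRequest:[NSURLRequest requestWithURL:
                                 [NSURL URLWithString:@"https://example.com"]]];
        return webView;
    }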
Web engine:
Rendering is a progressive process that depends on the assets (JS, CSS, images, ...) that compose the page. It is important to understand that turning this feature on or off simply enables or disables progressive rendering of content as it loads.
How to make my page faster?
A lot of factors play a role: the engine's rendering algorithm, how heavy your scripts are (bundle size, memory allocation, event passing and handling, etc.), the size of the images, how well structured your CSS is, and how its selectors are organised hierarchically (which affects CSS parsing and application).
The order in which the assets are loaded (included) in the page also matters.
You can always profile your page (in the devtools of a modern browser, for example) to see how things go: how much memory it allocates, the bundle size, the time spent scripting, and how the page is designed to consume device resources.
To make a long story short:
Generally speaking, there are three main phases, with a total of five steps, that your page has to go through while living in the browser:
PHASE A: MEMORY/CALCULATION (CPU)
1. Scripting
PHASE B: PROCESSING (mainly CPU)
2. Styling
3. Layout
PHASE C: GPU
4. Paint
5. Composition
When the browser decides to update, it has to go through these steps; whether it makes a full pass or only a partial pass makes a lot of difference. Consider the following example:
If you have a div and you decide to animate it from the left edge of the screen to the right edge, you will see developers take one of two approaches:
THOSE WHO JUST WRITE CODE:
change the left value of the div's style over time (simple, right?).
THOSE WHO KNOW THE STUFF:
do a transform using translateX or translate3d.
Both ways will work, but the first will eat up your CPU, while the second will run at a very high FPS.
WHY?
The first approach touches the sacred left value, which means the browser has to recalculate the new left (step 1) > check the style (step 2) > do a new layout (step 3) > do a paint (step 4) > then enter the composition phase (step 5).
That is a full five-step pass, which is totally unnecessary!
The other approach, on the other hand, requires nothing but composition (just step 5), because the matrix manipulation on the GPU (a pretty strong ability!) can handle the displacement implied by translate3d or translateX. You will see people talk about adding a translate3d property to CSS elements to push performance, and the reason is the one explained above; knowing what happens under the hood can save you.
To sum up, suppressesIncrementalRendering is about choosing between waiting for everything to load before showing anything, and handling things as they load.

performSelector causes a delay when applying an effect to a UIImage

Good morning. Let me first explain the scene behind my question.
Currently I'm working on an image processing app in which I'm applying filter effects. There are a number of examples on the market for this; I checked most of them and found that none of them cause a delay while applying an effect. Most of them reduce the delay by scaling the UIImage down.
But in my case I don't want to lose pixels, yet I still want to cut down the delay.
In my app it takes about 5 seconds to apply any filter effect.
For filtering I'm using:
[<Image_View> setImage:[<Image Name> performSelector:@selector(<effect name>)]];
where <effect name> is a method that resides in the UIImage+FiltrrCompositions file.
Any help is appreciated!
Operations on large images take time depending on the image size; that's nothing new.
You want to
keep your app responsive as well as
maintain high quality large image sizes.
If you really want to manage this challenge, I'd recommend that you structure your program logic to support
downscaled thumbnail images as well as
background threads for operations on the full-size image (see the sketch below).
This will make your code more complex, but there's no shortcut that speeds up a complex operation on complex data. It's up to you.
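As a minimal sketch of the background-thread half of that advice: the effect selector e1 below is just a placeholder for whichever method UIImage+FiltrrCompositions actually provides, and self.imageView stands in for the image view from the question.

    // Minimal sketch: run the expensive filter off the main thread, then hand the
    // result back to UIKit on the main thread.
    - (void)applyFilterToImage:(UIImage *)fullSizeImage
    {
        dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
            UIImage *filtered = [fullSizeImage performSelector:@selector(e1)];   // placeholder selector
            dispatch_async(dispatch_get_main_queue(), ^{
                self.imageView.image = filtered;   // UIKit calls stay on the main thread
            });
        });
    }

The same pattern can first be run on a downscaled thumbnail so the user sees a preview almost immediately while the full-size image is still being processed.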
For more information about multithreading and background tasks with iOS I recommend this tutorial

How Can I Record the Screen with Acceptable Performance While Keeping the UI Responsive?

I'm looking for help with a performance issue in an Objective-C based iOS app.
I have an iOS application that captures the screen's contents using CALayer's renderInContext method. It attempts to capture enough screen frames to create a video using AVFoundation. The screen recording is then combined with other elements for research purposes on usability. While the screen is being captured, the app may also be displaying the contents of a UIWebView, going out over the network to fetch data, etc... The content of the Web view is not under my control - it is arbitrary content from the Web.
This setup is working but as you might imagine, it's not buttery smooth. Since the layer must be rendered on the main thread, there's more UI contention than I'd like. What I'd like to do is to have a setup where the responsiveness of the UI is prioritized over the screen capture. For instance, if the user is scrolling the Web view, I'd rather drop frames on the recording than have a terrible scrolling experience.
I've experimented with several techniques, from dispatch_source coalescing to submitting the frame capture requests as blocks to the main queue to CADisplayLink. So far they all seem to perform about the same. The frame capture is currently being triggered in the drawRect of the screen's main view.
What I'm asking here is: given the above, what techniques would you suggest I try to achieve my goals? I realize the answer may be that there is no great answer... but I'd like to try anything, however wacky it might sound.
NOTE: whatever the technique, it needs to be App Store friendly. I can't use something like the CoreSurface hack that Display Recorder used/uses.
Thanks for your help!
"Since the layer must be rendered on the main thread" this is not true, as long as you don't touch UIKit.
Please see https://stackoverflow.com/a/12844171/136305
Maybe you can record at half resolution to speed things up, if that fits the requirements?
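Here is a minimal sketch of that idea, assuming you add a serial captureQueue dispatch queue and a captureInFlight flag to your recorder class: frames that arrive while a capture is still running are simply dropped, so the main thread never waits on the recorder, and the offscreen context is created at reduced scale.

    // Minimal sketch: render the layer into an offscreen context on a background
    // queue; drop the frame if the previous capture has not finished yet.
    - (void)captureFrameOfLayer:(CALayer *)layer size:(CGSize)size
    {
        if (self.captureInFlight) {
            return;                                    // prefer a dropped frame over a stalled UI
        }
        self.captureInFlight = YES;

        dispatch_async(self.captureQueue, ^{
            UIGraphicsBeginImageContextWithOptions(size, YES, 0.5);   // half resolution
            [layer renderInContext:UIGraphicsGetCurrentContext()];
            UIImage *frame = UIGraphicsGetImageFromCurrentImageContext();
            UIGraphicsEndImageContext();

            // ...append `frame` to the AVAssetWriter input here...

            self.captureInFlight = NO;
        });
    }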

iPad image ghosting when displaying images rapidly

I'm displaying a series of images on an iPad that are being sent over a network connection. It seems to work fine, but the images have a lot of ghosting for some reason (see image below). Is there some kind of technique for drawing that would eliminate this? I'd say it's an issue with the refresh rate of the screen, but that wouldn't explain why using the iPad's screenshot functionality captures the phenomenon.
You are probably switching between images in a way that triggers an implicit animation that cross-fades between the old image and the new one.
The documentation for layer actions explains how Core Animation decides to run that implicit animation, and how to override it.
The two easiest ways IMHO are:
When you change the image, use a CATransaction to disable actions
In your layer's delegate, implement -actionForLayer:forKey: and return [NSNull null].
(If you are using a UIView, it's already the layer's delegate.)
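A minimal sketch of both options, assuming the images go into a plain CALayer called imageLayer (imageLayer and newImage are illustrative names, not from the question):

    // Option 1: swap the image inside a transaction with implicit actions disabled,
    // so no cross-fade is triggered.
    [CATransaction begin];
    [CATransaction setDisableActions:YES];
    self.imageLayer.contents = (__bridge id)newImage.CGImage;
    [CATransaction commit];

    // Option 2: as the layer's delegate, veto every implicit action.
    - (id<CAAction>)actionForLayer:(CALayer *)layer forKey:(NSString *)event
    {
        return (id<CAAction>)[NSNull null];
    }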
This question gives a few more options -- it might even be a duplicate of your situation.

Saving a UIView for next launch (not as an image)

I am building an app with several UIViews which are generated dynamically, based on user inputs. These UIViews may contain labels, images and text. They take some time to generate, so I would like the user to be able to load them up quickly on future launches of the app without having to redraw them again. One requirement is that they need to keep their interactive state so the user can continue to edit them.
I looked into NSKeyedArchiver, but this doesn't seem to support UIImage. Also, I can't just save them as PNGs, since I would like to retain their interactive state.
Is there any way to do this?
You should consider keeping the model of your data separate from the interface; you can then use this stored model to regenerate the interface. I know you specifically said that you don't want to do this. However, any built-in method is going to have to rebuild the UIViews in exactly the same way.
If the processing of the model data is the issue, try to come up with a way to efficiently represent the state of the interface so that you don't have to start from scratch. However, that will be a lot more work.
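To illustrate the model-first approach, here is a minimal sketch with purely hypothetical names: a plain model object holds what each generated view displays, gets archived with NSKeyedArchiver, and is used to rebuild the UIView (including its editable state) on the next launch. The UIImage is stored as PNG data only inside the model; the view itself is never flattened to an image.

    #import <UIKit/UIKit.h>

    // Hypothetical model for one dynamically generated view.
    @interface NoteModel : NSObject <NSCoding>
    @property (nonatomic, copy)   NSString *text;
    @property (nonatomic, strong) NSData   *imageData;   // e.g. UIImagePNGRepresentation(image)
    @property (nonatomic, assign) CGRect    frame;
    @end

    @implementation NoteModel
    - (void)encodeWithCoder:(NSCoder *)coder {
        [coder encodeObject:self.text forKey:@"text"];
        [coder encodeObject:self.imageData forKey:@"imageData"];
        [coder encodeCGRect:self.frame forKey:@"frame"];
    }
    - (instancetype)initWithCoder:(NSCoder *)coder {
        if ((self = [super init])) {
            _text      = [coder decodeObjectForKey:@"text"];
            _imageData = [coder decodeObjectForKey:@"imageData"];
            _frame     = [coder decodeCGRectForKey:@"frame"];
        }
        return self;
    }
    @end

    // Saving:  [NSKeyedArchiver archiveRootObject:models toFile:path];
    // Loading: NSArray *models = [NSKeyedUnarchiver unarchiveObjectWithFile:path];
    // Rebuilding the UIViews from the models is then the same code path used
    // when the user first creates them, just driven by the archived data.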