I'm trying to take a screenshot of a MetalKit view (MTKView) as in the answer Take a snapshot of current screen with Metal in swift, but it requires setting the MTKView's framebufferOnly property to false, which disables some optimizations according to Apple.
Is there a way to copy the MTKView texture (e.g. view.currentDrawable.texture) so that I can read the pixels? I don't need to take screenshots often, so it would be a shame to disable the optimization for the entire lifetime of the program.
I tried using MTLTexture.newTextureViewWithPixelFormat and blit buffers, but I still get the same exception about framebufferOnly being true.
When a screenshot is requested, you could toggle framebufferOnly, do one rendering pass, and then toggle it back.
Alternatively, you can do one rendering pass targeting a texture of your own specification, blit that to the drawable's texture (so as not to visually drop a frame), and then save the contents of your own texture.
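Here is a minimal sketch of the first approach, assuming framebufferOnly has already been flipped to false for the frame you capture (and is flipped back afterwards); captureSnapshot and commandQueue are placeholder names for whatever your renderer already has:

    import Metal
    import MetalKit

    // Call this during the frame in which the snapshot was requested, while
    // view.framebufferOnly is temporarily false so the drawable texture is readable.
    func captureSnapshot(from view: MTKView, commandQueue: MTLCommandQueue) -> MTLTexture? {
        guard let drawable = view.currentDrawable,
              let device = view.device else { return nil }
        let source = drawable.texture

        // Make a CPU-readable texture with the same size and pixel format.
        let descriptor = MTLTextureDescriptor.texture2DDescriptor(
            pixelFormat: source.pixelFormat,
            width: source.width,
            height: source.height,
            mipmapped: false)
        descriptor.storageMode = .shared   // on macOS you would use .managed plus a synchronize blit

        guard let copy = device.makeTexture(descriptor: descriptor),
              let commandBuffer = commandQueue.makeCommandBuffer(),
              let blit = commandBuffer.makeBlitCommandEncoder() else { return nil }

        // Blit the drawable's contents into our own texture, then read pixels from `copy`.
        blit.copy(from: source, sourceSlice: 0, sourceLevel: 0,
                  sourceOrigin: MTLOrigin(x: 0, y: 0, z: 0),
                  sourceSize: MTLSize(width: source.width, height: source.height, depth: 1),
                  to: copy, destinationSlice: 0, destinationLevel: 0,
                  destinationOrigin: MTLOrigin(x: 0, y: 0, z: 0))
        blit.endEncoding()
        commandBuffer.commit()
        commandBuffer.waitUntilCompleted()
        return copy
    }

You can then call getBytes(_:bytesPerRow:from:mipmapLevel:) on the returned texture to pull the pixel data out, for example to build a CGImage.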
I am using Unity 2019.3.5f1 and I am using the Universal Render Pipeline (URP).
I am trying to use the post processing in URP for the foreground only (in my case, the players), and I want to leave the background (which in my case is just a quad with a texture) as is.
I have tried using the Camera Stack, but it won't work for me because the overlay camera can't have post processing effects according to the documentation.
The only solution I could come up with is to create some sort of custom renderer which would:
Render the background to buffer A.
Render the foreground to buffer B, and save the depth.
Combine the two using a shader that gets both textures, and the depth texture, and based on the depth, takes buffer A or buffer B.
The problem with this, I believe, is that I can't use it with Unity's post processing.
Any idea what I can do?
EDIT:
I tried another thing in Unity which doesn't seem to be working (might be a bug):
I created 3 cameras: a Foreground Camera, a Depth Camera (which only renders the foreground), and a Background Camera.
I set up the Depth Camera so it renders to a render texture, and indeed I now have a render texture with the proper depth I need.
From here everything went wrong; there seem to be odd things happening with Unity's new post processing (the built-in one):
The Foreground Camera is set to Tag=MainCamera, and when I enable post processing and add an effect, we can indeed see it (as expected).
The Background Camera is essentially a duplicate of the Foreground one, but with Tag=Untagged; I use the same options (post processing enabled).
Now, the expected result is that the Background Camera shows effects just like the Foreground one, but no:
When using a Volume Mask on my background layer, the post processing just turns off, with no effect at all no matter what (and I have set my background to the Background layer).
When I disable the Foreground Camera (or remove its tag) and set the Background Camera to MainCamera, still nothing changes; the post processing still won't work.
When I set the Volume Mask to Default (or Everything), the result is shown ONLY in the Scene view. I tried rendering the camera to a RenderTexture, but still, there is clearly no effect applied!
I am currently reading an iPhone OpenGL ES project that draws some 3D shapes (sphere, cone, ...). I am a little bit confused about the behavior of glDrawElements.
After binding the vertex buffer to GL_ARRAY_BUFFER and the index buffer to GL_ELEMENT_ARRAY_BUFFER, glDrawElements is called:
glDrawElements(GL_TRIANGLES, IndexCount, GL_UNSIGNED_SHORT, 0);
At first I thought this function draws the shapes on screen, but actually the shapes are later drawn on the screen using:
[m_context presentRenderbuffer:GL_RENDERBUFFER];
So what does glDrawElements do? The manual describes it as "render primitives from array data", but I don't understand the real meaning of render and its difference from draw (my native language is not English).
The DrawElements call is really what "does" the drawing, or rather, it tells the GPU to draw, and the GPU will do that eventually.
The present call is only needed because the GPU usually works double-buffered: one buffer that you don't see but draw to, and one buffer that is currently on display on the screen. Once you are done with all the drawing, you flip them.
If you did not do this, you would see flickering while drawing.
It also allows for parallel operation. You typically call DrawElements many times for one frame; only when you call present does all of that work have to be finished.
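To make the ordering concrete, here is a rough sketch of a typical per-frame loop, written in Swift rather than the Objective-C of the question; the names context and indexCount, and the assumption that the right buffers are already bound, are placeholders for what your project already has:

    import OpenGLES

    func renderFrame(context: EAGLContext, indexCount: GLsizei) {
        glClear(GLbitfield(GL_COLOR_BUFFER_BIT) | GLbitfield(GL_DEPTH_BUFFER_BIT))

        // Each glDrawElements call only queues work for the GPU; nothing is on screen yet.
        glDrawElements(GLenum(GL_TRIANGLES), indexCount, GLenum(GL_UNSIGNED_SHORT), nil)
        // ... more glDraw* calls for the other objects in the same frame ...

        // Only now is the finished frame handed over to the display.
        _ = context.presentRenderbuffer(Int(GL_RENDERBUFFER))
    }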
It's true that glDraw commands are responsible for your drawing and that you don't see any visible results until you call the presentRenderbuffer: method, but it's not about double buffering.
All iOS devices use GPUs designed around Tile-Based Deferred Rendering (TBDR). This is an algorithm that works very well in embedded environments with less compute resources, memory, and power-usage budget than a desktop GPU. On a desktop (stream-based) GPU, every draw command immediately does work that (typically) ends with some pixels being painted into a renderbuffer. (Usually, you don't see this because, usually, desktop GL apps are set up to use double or triple buffering, drawing a complete frame into an offscreen buffer and then swapping it onto the screen.)
TBDR is different. You can think about it being sort of like a laser printer: for that, you put together a bunch of PostScript commands (set state, draw this, set a different state, draw that, and so on), but the printer doesn't do any work until you've sent all the draw commands and finished with the one that says "okay, start laying down ink!" Then the printer computes the areas that need to be black and prints them all in one pass (instead of running up and down the page printing over areas it's already printed).
The advantage of TBDR is that the GPU knows about the whole scene before it starts painting -- this lets it use tricks like hidden surface removal to avoid doing work that won't result in visible pixels being painted.
So, on an iOS device, glDraw still "does" the drawing, but that work doesn't happen until the GPU needs it to happen. In the simple case, the GPU doesn't need to start working until you call presentRenderbuffer: (or, if you're using GLKView, until you return from its drawRect: or its delegate's glkView:drawInRect: method, which implicitly presents the renderbuffer). If you're using more advanced tricks, like rendering into textures and then using those textures to render to the screen, the GPU starts working for one render target as soon as you switch to another (using glBindFramebuffer or similar).
There's a great explanation of how TBDR works in the Advances in OpenGL ES talk from WWDC 2013.
I would like to make an app where you can paint like in the GLPaint sample code, but also zoom in to paint in more detail within your painting.
But I get the feeling that OpenGL ES 1.0, which is used in the GLPaint app, is pretty difficult to learn and could be a bit of overkill for my needs.
If I change the main view's frame with the setFrame method to zoom via a gesture recognizer, the already painted lines get erased with every change of the frame's size.
So I tried to realize it with another idea: in the touchesMoved method I add UIImageViews with an image of the brush at "many" positions. It is slower than the GLPaint app and a bit of a memory-management mess, but I don't see another way to go.
Any suggestions? Should I learn OpenGL ES 1.0 or 2.0, or try to realize the last idea?
You can certainly achieve what you are attempting; however, it will require some effort.
Usually zooming is quite straightforward, as most OpenGL scenes typically do not rely on the accumulation buffer the way the GLPaint sample code does.
If you just try to zoom the view in GLPaint, your new painting will be drawn at some adjusted scale over your original drawing, which is almost certainly not what you want.
A workaround: instead of drawing directly to your presenting screen buffer, first render to a texture, then render that texture on a quad (or equivalent). That way the quad scene can be cleared and re-rendered on every frame refresh (at any scale you choose) while your paint buffer retains its accumulated contents.
This has been tested and works.
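As a rough sketch of that workaround (using ES 2.0-style names and Swift; error checking omitted, and the sizes are just examples), you create a texture, attach it to a framebuffer object, paint into that FBO, and then draw the texture on a quad at whatever scale you like:

    import OpenGLES

    // Creates a texture-backed FBO to accumulate the painting into.
    func makePaintTarget(width: GLsizei, height: GLsizei) -> (fbo: GLuint, texture: GLuint) {
        var texture: GLuint = 0
        glGenTextures(1, &texture)
        glBindTexture(GLenum(GL_TEXTURE_2D), texture)
        glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MIN_FILTER), GL_LINEAR)
        glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MAG_FILTER), GL_LINEAR)
        glTexImage2D(GLenum(GL_TEXTURE_2D), 0, GL_RGBA, width, height, 0,
                     GLenum(GL_RGBA), GLenum(GL_UNSIGNED_BYTE), nil)

        var fbo: GLuint = 0
        glGenFramebuffers(1, &fbo)
        glBindFramebuffer(GLenum(GL_FRAMEBUFFER), fbo)
        glFramebufferTexture2D(GLenum(GL_FRAMEBUFFER), GLenum(GL_COLOR_ATTACHMENT0),
                               GLenum(GL_TEXTURE_2D), texture, 0)
        return (fbo, texture)
    }

Bind the returned fbo while applying brush strokes, then bind your screen framebuffer and draw a quad textured with the returned texture; that quad can be scaled and translated freely for zooming without touching the accumulated paint.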
I am quite sure the image view method will be overkill after drawing for a few minutes... You can do all the zooming quite nicely with OpenGL, and I suggest you do that. The best practice would be to create a canvas as large as possible, so that when you zoom in you will not lose any resolution.
About zooming: do not try to resize the GL frame, or any frame for that matter, because even if you manage to do that successfully you will lose resolution. You should use standard matrices to translate and scale the scene, or just play around with glOrtho (set its values to the rect you are currently seeing). Once you get that part working, there are sadly two more things to do that require a bit of math: first, you will have to compute the new touch positions in the OpenGL scene, as locationInView will not know about your zooming and translating; second, you probably need to scale the brush as well (make it smaller when the scene is bigger so you can draw details). A sketch of the touch mapping follows below.
About the canvas: I do suggest you draw to an FBO rather than your main render buffer and present the texture to your main render scene. Note that the FBO will have an attached texture whose size is a power of 2 (create 2048x2048, or 4096x4096 for newer devices), but you will probably only use some part of it to keep the same ratio as the screen (glViewport should do the job), so you will have to compute the texture coordinates. Overall the drawing mechanism doesn't change much.
So to sum this up: imagine you have a canvas (FBO) to which you apply a brush of a certain size and position on touch events, then you use that canvas as a texture and draw it on your main GL view.
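For the touch-position part mentioned above, a small sketch of the mapping, assuming you track the current zoom scale and pan offset yourself (all names here are hypothetical):

    import CoreGraphics

    // Maps a touch location in the view back into canvas coordinates.
    func canvasPoint(forTouch p: CGPoint, scale: CGFloat, offset: CGPoint) -> CGPoint {
        // Undo the pan, then undo the zoom.
        return CGPoint(x: (p.x - offset.x) / scale,
                       y: (p.y - offset.y) / scale)
    }

The same scale factor can be used to shrink the brush size as the scene is zoomed in.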
I am developing an iPhone app that uses OpenGL ES to draw the graphics onscreen. With each new frame I draw to the entire screen, which brings me to my question: is it necessary to call glClear to clear the color buffer each frame, or can I just overwrite it with the new data? The glClear call is currently 9.9% of the frame time, so if I could get rid of it, that would provide a nice boost.
Forgive me if this is a very naive question, as I am something of a noob on the topic. Thanks in advance.
Theoretically you can leave it out.
But if you don't clear the color buffer, the previously drawn data will stay on screen.
That limits this approach to applications whose content does not change from frame to frame.
If you have a static scene that doesn't need re-drawing, then you could skip the call to glClear() for the color buffer.
From the Khronos OpenGL ES docs:
glClear sets the bitplane area of the window to values previously selected by glClearColor.
glClear takes a single argument indicating which buffer is to be cleared.
GL_COLOR_BUFFER_BIT Indicates the buffers currently enabled for color writing.
Typically (when making 3D games) you don't clear just the color buffer but the depth buffer as well, and you use the depth test to make sure that the geometry closest to the camera is drawn. Therefore it is important to clear the depth buffer; otherwise only pixels with a lower Z value than in the previous frame would be redrawn. Not clearing the color buffer would also be problematic when drawing transparent things.
If you're making a 2D application and manage the drawing order yourself (first draw the things furthest from the camera, then what is closer, a.k.a. the painter's algorithm), then you can leave out the clear call. One thing to note is that with such an algorithm you have another problem: overdraw, i.e. drawing the same pixels more often than necessary.
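For the 3D case described above, a sketch of how a frame typically starts, clearing both buffers in one call (the clear color and the Swift form are just an example):

    import OpenGLES

    func beginFrame() {
        glClearColor(0.0, 0.0, 0.0, 1.0)
        glClear(GLbitfield(GL_COLOR_BUFFER_BIT) | GLbitfield(GL_DEPTH_BUFFER_BIT))
        glEnable(GLenum(GL_DEPTH_TEST))
    }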
Greetings.
My goal is to implement a pipeline for processing video frames that has a "current" and "previous" frame. It needs to copy a subregion from the "previous" frame into the "current" frame on some occasions. This is a kind of decoder which is part of a larger application.
This is what I have running so far, on the iPhone using OpenGL-ES 1.1
                           glCopyTexSubImage2D
                        .------------------------.
   glTexSubImage2D      V       glDrawArray      |
image -------------> Texture -------------> FBO/Texture -------> render buffer
The Texture gets updated with each new frame, or partial frame, as usual.
The Texture is drawn into the frame buffer object and eventually rendered.
These parts perform very nicely.
The problem is glCopyTexSubImage2D, which, profiled using Instruments, takes about 50% of the CPU; it looks like it's doing the copy on the CPU. Yuck.
Before I post the code (and I will happily do that), I wanted to ask if this architecture is sound?
The next phase of the project is to share the final FBO/Texture with another GL context to render to an external screen. I've read other posts here about the delicate nature of shared resources.
Thanks for any guidance.
Cheers, Chris
P.S. I had some trouble getting the diagram to look right. The back-flow line should go from the FBO/Texture node to the Texture node.
glCopyTexImage*D and glGetTexImage*D are known to be slow as hell, no matter what platform you're on.
To replace glCopyTexSubImage2D you could just add another FBO/Texture and render the texture you want to copy into it; then you can use the texture attached to that FBO. I'm not sure it'll be faster, but it should be. A sketch follows after the diagram below.
                                 Render to FBO
                          .--------------------------.
   glTexSubImage2D        V        glDrawArray       |
image -------------> FBO/Texture -------------> FBO/Texture -------> render buffer
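A sketch of what that replacement might look like (ES 2.0-style names, written in Swift; drawQuad stands in for whatever code you already use to draw a textured quad, and the scissor rectangle limits the draw to the subregion being copied):

    import OpenGLES

    // Instead of glCopyTexSubImage2D: bind the "current" FBO and re-render the
    // "previous" frame's texture into just the region that needs to be copied.
    func copyRegionByRendering(previousTexture: GLuint, currentFBO: GLuint,
                               x: GLint, y: GLint, width: GLsizei, height: GLsizei,
                               drawQuad: () -> Void) {
        glBindFramebuffer(GLenum(GL_FRAMEBUFFER), currentFBO)

        glEnable(GLenum(GL_SCISSOR_TEST))
        glScissor(x, y, width, height)

        glBindTexture(GLenum(GL_TEXTURE_2D), previousTexture)
        drawQuad()   // issues the draw call for a full-screen textured quad

        glDisable(GLenum(GL_SCISSOR_TEST))
    }

This keeps the copy on the GPU, which is the whole point of avoiding glCopyTexSubImage2D here.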