MTKView updating frame buffer without clearing previous contents - swift

I am working on a painting program where I draw interactive strokes via an MTKView. If I set the renderPassDescriptor loadAction to 'clear':
renderPassDescriptor?.colorAttachments[0].loadAction = .clear
The frame buffer, as expected, shows the latest contents of renderCommandEncoder?.drawPrimitives, which in this case is the leading edge of the brushstroke.
If I set loadAction to 'load':
renderPassDescriptor?.colorAttachments[0].loadAction = .load
The frame buffer flashes like crazy and shows a patchy trail of what I've just drawn. I now understand that the flashing is likely caused by MTKView's default triple buffering. Thus, each time I write to the currentDrawable, I'm likely writing to one of three cycling buffers. Please correct me if I'm wrong.
My question is, what do I need to do to draw a clean brushstroke without the frame buffer flashing as it does now? In other words, is there a way to have a master buffer that gets updated with the latest contents of commandEncoder?

You can use a texture of your own as the color attachment of a render pass. You don't have to use the texture of a drawable. That way, you can use the .load action without getting garbage or flashing, and you have full control over which texture you're rendering to and what its contents are.
After rendering to that texture for a render pass, you then need to blit that to the drawable's texture for display.
The main complication here is that you won't have the benefits of double- or triple-buffering. You'll lose a certain amount of performance, since everything will have to be synced to that one texture's state. I suspect, though, that you don't need that much performance, since this is interactive and only has to keep up with the speed of a human.
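For illustration, here is a minimal sketch of that approach, assuming a device, commandQueue and pipelineState are already set up; strokeTexture, w and h are placeholder names, and the MTKView must have framebufferOnly set to false so its drawable can be a blit destination:

import MetalKit

// Created once, at setup: a persistent texture that accumulates the strokes.
// w and h are placeholders for the drawable size in pixels.
let desc = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .bgra8Unorm,
                                                    width: w, height: h,
                                                    mipmapped: false)
desc.usage = [.renderTarget, .shaderRead]
let strokeTexture = device.makeTexture(descriptor: desc)!

func draw(in view: MTKView) {
    guard let drawable = view.currentDrawable,
          let commandBuffer = commandQueue.makeCommandBuffer() else { return }

    // Render the new stroke segment on top of the texture's old contents.
    let pass = MTLRenderPassDescriptor()
    pass.colorAttachments[0].texture = strokeTexture
    pass.colorAttachments[0].loadAction = .load    // keep what's already there
    pass.colorAttachments[0].storeAction = .store

    if let encoder = commandBuffer.makeRenderCommandEncoder(descriptor: pass) {
        encoder.setRenderPipelineState(pipelineState)
        // ... setVertexBuffer / drawPrimitives for the leading edge ...
        encoder.endEncoding()
    }

    // Blit the accumulated image to the drawable for display.
    if let blit = commandBuffer.makeBlitCommandEncoder() {
        blit.copy(from: strokeTexture, sourceSlice: 0, sourceLevel: 0,
                  sourceOrigin: MTLOrigin(x: 0, y: 0, z: 0),
                  sourceSize: MTLSize(width: strokeTexture.width,
                                      height: strokeTexture.height, depth: 1),
                  to: drawable.texture, destinationSlice: 0, destinationLevel: 0,
                  destinationOrigin: MTLOrigin(x: 0, y: 0, z: 0))
        blit.endEncoding()
    }

    commandBuffer.present(drawable)
    commandBuffer.commit()
}

Note that the blit is only valid if the two textures share a pixel format and the copied region fits inside both.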

Related

Check if object is all painted

I have a brown sprite, which contains a hole with triangular shape.
I've added a trail renderer (and set its order in layer to appear behind the sprite), so the user can paint the sprite's hole without painting the sprite itself.
My question is: how can I detect when the hole is completely painted?
I thought about using a shader to check if there are any black pixels on the screen, but I don't know if that's possible, because a shader runs per pixel and has no way of knowing what percentage of the image has been covered.
One way would be to take a screenshot with the ScreenCapture.CaptureScreenshotAsTexture method and then loop through an array of pixel colors from Texture2D.GetPixels32. You could then check if the array contains 'black' pixels.
I would do it in a coroutine for better performance, as doing it every frame may slow down your application. Also important when it comes to CaptureScreenshotAsTexture, according to the Unity docs:
To get a reliable output from this method you must make sure it is called once the frame rendering has ended, and not during the rendering process. A simple way of ensuring this is to call it from a coroutine that yields on WaitForEndOfFrame. If you call this method during the rendering process you will get unpredictable and undefined results.

Metal/OpenGL: How to set vertex buffer only once?

I have gone through https://www.raywenderlich.com/146414/metal-tutorial-swift-3-part-1-getting-started. For every frame
renderEncoder.setVertexBuffer(vertexBuffer, offset: 0, at: 0)
renderEncoder.setFragmentTexture(texture, at: 0)
are done. But the vertex and texture data never change; only the uniform matrices change. My object being rendered contains 8*4*4*4*4 triangles (yep, it's a sphere). I could only get 4 FPS. I am skeptical about setting the vertex buffer every frame.
It's done similarly in OpenGL tutorials: http://www.opengl-tutorial.org/beginners-tutorials/tutorial-5-a-textured-cube/
In OpenGL I could pull the vertex/texture buffer binding out of the render loop. But in Metal, an MTLRenderCommandEncoder needs a CAMetalDrawable, which is fetched anew for every frame.
You would typically use a new render command encoder for each frame. Anything you did with the previous render command encoder, like setting vertex buffers or fragment textures, is "lost" when that encoder is ended and you drop any references to it. So, yes, you need to set buffers and textures again.
However, that should not be expensive. Both of those methods just put a reference to the buffer or texture into a table. It's cheap. If you haven't modified their contents on the CPU, no data has to be copied. It shouldn't cause any state compilation, either. (Apple has said a design goal of Metal is to avoid any implicit state compilation. It's all explicit, such as when creating a render pipeline state object from a render pipeline descriptor.)
You need to profile your app to figure out what's limiting your frame rate, rather than guessing.
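To make that concrete, here is a hedged sketch of a typical per-frame draw, assuming the buffer, texture and pipeline state were created once at setup; the names are placeholders, and index: is the current spelling of the Swift 3 at: parameter used in the tutorial:

import MetalKit

func draw(in view: MTKView) {
    guard let drawable = view.currentDrawable,
          let passDescriptor = view.currentRenderPassDescriptor,
          let commandBuffer = commandQueue.makeCommandBuffer(),
          let encoder = commandBuffer.makeRenderCommandEncoder(descriptor: passDescriptor)
    else { return }

    encoder.setRenderPipelineState(pipelineState)
    // These calls just drop references into the encoder's argument table;
    // no vertex or texture data is copied.
    encoder.setVertexBuffer(vertexBuffer, offset: 0, index: 0)
    encoder.setFragmentTexture(texture, index: 0)
    // Only the uniforms actually change from frame to frame.
    encoder.setVertexBytes(&uniforms, length: MemoryLayout.stride(ofValue: uniforms), index: 1)
    encoder.drawPrimitives(type: .triangle, vertexStart: 0, vertexCount: vertexCount)
    encoder.endEncoding()

    commandBuffer.present(drawable)
    commandBuffer.commit()
}

If the frame rate is still 4 FPS with this structure, the bottleneck is almost certainly elsewhere (shader cost, buffer re-creation, CPU-side geometry work), which is exactly what a profiler run will show.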

pyglet: synchronise event with frame drawing

The default method of animation is to have an independent timer set to execute at the same frequency as the frame rate. This is not what I'm looking for because it provides no guarantee the event is actually executed at the right time. The lack of synchronisation with the actual frame drawing leads to occasional animation jitters. The obvious solution is to have a function run once for every frame, but I can't find a way to do that. on_draw() only runs when I press a key on the keyboard. Can I get on_draw() to run once per frame, synchronised with the frame drawing?
The way to do this is to use pyglet.clock.schedule(update), making sure vsync is enabled (which it is by default). This ensures on_draw() is run once per frame. The update function passed to pyglet.clock.schedule() doesn't even need to do anything; it can be empty. Since on_draw() is now executed once per frame draw, there's no point in having two separate functions that each run once per frame. It would have been a lot nicer if there were simply an option in the Window class to run on_draw() once per frame, but at least it's working now.
I'm still getting bad tearing, but that's a bug in pyglet on my platform or system. There wasn't supposed to be any tearing.

Program received signal EXC_BAD_ACCESS accessing array

I am using the routine for getting pixel colour (this one: http://www.markj.net/iphone-uiimage-pixel-color/ ) and am faced with frequent app crashes when using it. The relevant portion of the code:
unsigned char *data = CGBitmapContextGetData(cgctx);
if (data != NULL) {
    int offset = (some calculations here);
    int alpha = data[offset]; // <<<< crashes here
}
This code is wired up to run on touchesBegan, touchesEnded and touchesMoved. It appears that the crashes occur only during touchesEnded and touchesMoved events, particularly when I start the touch on the target image but move it off the boundaries of the image in the process.
Is there any way to check the size of the buffer pointed to by data? What could be going wrong there?
Edit:
The calculation of offset:
int offset = 4*((w*round(point.y)*x)+round(point.x)*x);
Where point is the point where the touch occurs, w is the width of the image, and x is the scale of the image (for hi-res images on Retina displays).
I don't see anything wrong with the cgctx either. Basically, I am running the code from the link above almost unmodified, the particular code snippet I have problems with is in the function (UIColor*) getPixelColorAtLocation:(CGPoint)point so if you want the details of what the code does, just read the source there.
Edit: another thing is that this never happens in the simulator, but often happens when testing on a device.
Edit: Ideally I'd want to do nothing if the finger is currently not over the image, but I have trouble figuring out when that happens. It looks like the relevant methods in the SDK only show which view the touch originated in, not where it is now. How can I figure that out?
You didn't show all your work. Your offset calculation is likely returning either a negative number or a number well beyond the end of the buffer. Since CG* APIs often allocate rather large chunks of memory, often memory mapped, it is quite likely that the addresses before and after the allocation are unallocated/unmapped and, thus, access outside of the buffer leads to an immediate crash (as opposed to returning garbage).
Which is good. Easier to debug.
You did provide a clue:
"move it off the boundaries of the image in the process"
I'd guess you modified the offset calculation to take the location of the touch. And that location has moved beyond the bounds of the image and, thus, leads to a nonsense offset and a subsequent crash.
Fix that, and your app will stop crashing here.
Does your image exactly occupy the entire bounds of the item being touched? I.e. does the thing handling the touches*: events have a bounds whose width and height are exactly the same as the image?
If not, you need to make sure you are correctly translating the coordinates from whatever is handling the touches to coordinates within the image. Also note that the layout of bytes in an image is heavily dependent on exactly how the image was created and what internal color model it is using.
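As a sketch of the fix (in Swift for brevity, though the original routine is C), the key is to reject out-of-range coordinates before indexing the buffer; imageView, scale and data are placeholder names:

import UIKit

func alphaValue(at point: CGPoint, in imageView: UIImageView,
                scale: CGFloat, data: UnsafePointer<UInt8>) -> Int? {
    let width  = Int(imageView.bounds.width  * scale)   // buffer width in pixels
    let height = Int(imageView.bounds.height * scale)
    let x = Int((point.x * scale).rounded())
    let y = Int((point.y * scale).rounded())

    // A touch that has drifted off the image produces an out-of-range
    // coordinate; bail out instead of indexing outside the buffer.
    guard x >= 0, x < width, y >= 0, y < height else { return nil }

    let offset = 4 * (y * width + x)   // 4 bytes per pixel (RGBA/BGRA)
    return Int(data[offset])
}

As for knowing where the finger is now: in touchesMoved and touchesEnded, [touch locationInView:imageView] (touch.location(in:) in Swift) returns the touch's current position in the image view's coordinate space, so you can test it against the view's bounds before sampling.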

Undo in painting apps like Penultimate and iDraft

In apps like iDraft and Penultimate, they perform undos and redos very well without any delay.
I tried many approaches. Currently, my testing app writes raw pixel data directly to a file after each undo using [NSData writeToFile:atomically:], but I am getting a 0.6 s delay.
Can anyone give some hints on it?
I don't know iDraft or Penultimate, but chances are they have a simpler drawing model than you do. When writing a drawing app you can choose between two essential drawing representations: either you track raw pixels, or you track drawing objects like lines, circles and so on. (In other words, you choose between a pixel and a vector representation.)
When you draw using vectors, you don't track the individual pixels. Instead you know there should be a line between points X and Y of a given width, color and other parameters. When you need to draw such a representation, you call Quartz to stroke the line. In this case the model (the drawing representation) consists of a few numbers, takes little memory, and therefore you can keep many versions of a single drawing in memory, allowing for quick and convenient undo and redo.
Keep your undo stack in memory. Don't write to disk for every operation. Whether you keep around bitmaps or vectors, your file ops shouldn't be on the critical path for every paint operation you do.
If your data model is full bitmaps, keep just the changed rect for undo/redo.
As previously said, you probably don't need to write the data to disk for every operation. Even in the pixel-based case, unless you are trying to undo a full-screen filter, all you need to keep is the data contained within the bounding rectangle of the brush stroke the user performed.
You can double-buffer your drawing, i.e. keep a copy of the image before the draw, draw into the copy, determine the bounding rect of the user operation, and copy and retain the appropriate data from the original (with size and location information). On undo, you take that copy and paste it over the modified area.
This method extends to redo: on undo, store the area that you are about to overwrite.
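To make that concrete, here is a hedged sketch of such a changed-rect undo stack in Swift; UndoPatch and the method names are illustrative, and screen-scale bookkeeping (points vs. pixels) is omitted for brevity:

import UIKit

struct UndoPatch {
    let rect: CGRect     // region the stroke modified, in image coordinates
    let pixels: UIImage  // copy of that region taken before the stroke
}

final class PaintUndoStack {
    private var undoStack: [UndoPatch] = []
    private var redoStack: [UndoPatch] = []

    // Call just before applying a stroke, with the stroke's bounding rect.
    func willApplyStroke(to canvas: UIImage, in bounds: CGRect) {
        if let patch = snapshot(of: canvas, in: bounds) {
            undoStack.append(patch)
            redoStack.removeAll()   // a new stroke invalidates redo history
        }
    }

    func undo(on canvas: UIImage) -> UIImage? {
        guard let patch = undoStack.popLast() else { return nil }
        // Keep the pixels we are about to overwrite so the undo can be redone.
        if let redoPatch = snapshot(of: canvas, in: patch.rect) {
            redoStack.append(redoPatch)
        }
        return paste(patch, over: canvas)
    }

    private func snapshot(of canvas: UIImage, in rect: CGRect) -> UndoPatch? {
        guard let cg = canvas.cgImage?.cropping(to: rect) else { return nil }
        return UndoPatch(rect: rect, pixels: UIImage(cgImage: cg))
    }

    private func paste(_ patch: UndoPatch, over canvas: UIImage) -> UIImage {
        let renderer = UIGraphicsImageRenderer(size: canvas.size)
        return renderer.image { _ in
            canvas.draw(at: .zero)
            patch.pixels.draw(in: patch.rect)
        }
    }
}

Because each patch covers only the stroke's bounding rect, the whole stack can stay in memory, which removes the per-operation file write (and its 0.6 s delay) from the critical path entirely.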