I have a Metal-based rendering pipeline that renders a few overlapping squares. It loops over all the squares and draws them to the screen, starting from the one furthest to the back and ending with the one closest to the front (painter's algorithm).
However, I need some squares in the middle to partially hide the contents of the squares underneath them. I'm rendering all of the squares in the same render pass, so the fragment shader has read/sample access to the color that was put down by the squares drawn before it. But a later square does not have write access to the color that was already drawn to the screen, and an earlier draw call does not have access to the pixels that will be drawn by future draw calls.
Is there a way for a later draw call to override a color that was placed using an earlier draw call (i.e. tile shading, multiple render passes, etc.)?
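For context, the back-to-front ordering described in the question can be sketched as follows; `Square`, its `z` field, and the per-square draw step are hypothetical stand-ins for the app's own types and Metal encoding calls:

```python
# Painter's algorithm sketch: draw squares back-to-front so nearer
# squares are composited over farther ones. `Square` and the draw
# step are hypothetical stand-ins for the app's real types.
from dataclasses import dataclass

@dataclass
class Square:
    z: float        # depth; larger = further from the camera
    color: tuple    # RGBA

def draw_order(squares):
    # Sort so the furthest square is drawn first.
    return sorted(squares, key=lambda s: s.z, reverse=True)

squares = [Square(z=1.0, color=(1, 0, 0, 1)),
           Square(z=3.0, color=(0, 1, 0, 1)),
           Square(z=2.0, color=(0, 0, 1, 1))]
for s in draw_order(squares):
    pass  # encode one draw call per square, back to front
```

The question is whether a later iteration of this loop can overwrite what an earlier iteration already wrote to the framebuffer.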
I have a requirement to perform a redaction in iText7. We have several rectangles which have been selected by the user, some of which have been rotated. I have not found the ability to rotate rectangles in iText7. Usually, the way we draw "rotated" rectangles is to perform some mathematical operations on a "fake" rectangle we build in code, and then draw it either as a series of lines, like so:
if (rect.mRotation > 0)
{
    r.Rotate(DegreeToRadian(rect.mRotation));
}
c.MoveTo(r.TopLeft.X, r.TopLeft.Y);
c.LineTo(r.TopRight.X, r.TopRight.Y);
c.LineTo(r.BottomRight.X, r.BottomRight.Y);
c.LineTo(r.BottomLeft.X, r.BottomLeft.Y);
c.LineTo(r.TopLeft.X, r.TopLeft.Y);
c.Stroke();
In the case of images, or something similar, we are unable to do the above. In this case we use an AffineTransform to simulate the movement, which is applied to the image before it is added to the document. Both of the previous methods work perfectly.
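For illustration, the corner math behind such a "fake" rotated rectangle can be sketched as below (a minimal sketch; rotating about the rectangle's center and the corner ordering are assumptions based on the snippet above, not iText7 API):

```python
import math

def rotated_corners(llx, lly, urx, ury, degrees):
    """Rotate the four corners of an axis-aligned rectangle about its
    center; returns [(x, y), ...] for TL, TR, BR, BL, counter-clockwise
    rotation by `degrees`."""
    cx, cy = (llx + urx) / 2, (lly + ury) / 2
    theta = math.radians(degrees)
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    corners = [(llx, ury), (urx, ury), (urx, lly), (llx, lly)]
    out = []
    for x, y in corners:
        dx, dy = x - cx, y - cy
        out.append((cx + dx * cos_t - dy * sin_t,
                    cy + dx * sin_t + dy * cos_t))
    return out
```

The resulting corners can be stroked as lines, but they no longer fit in a single axis-aligned llx/lly/urx/ury rectangle, which is the crux of the pdfSweep problem below.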
Unfortunately for us, the pdfSweep tool only accepts (iText.Kernel.Geom) rectangles. We are looking for a way to still pass an iText.Kernel.Geom.Rectangle which has had transforms applied (i.e. a rectangle which has been rotated). We have tried setting the llx/urx values manually using the setBBox method, but this won't affect the rotation.
Does anyone know how we can go about redacting items over a given rectangular area which has been rotated?
Thanks
In Unity, when we write a custom shader with multiple passes, are they executed:
For each triangle do:
    For each pass do:
        Draw the pass
Or:
For each Pass do:
    For each triangle do:
        Draw the pass
And if I have multiple materials in the mesh, are the faces drawn grouped by material?
Question 1
Second variant:
In Unity, when we write a custom shader with multiple passes, are they executed:
For each Pass do:
    For each triangle do:
        Draw the pass
Question 2
And if I have multiple materials in the mesh, are the faces drawn grouped by material?
Yes.
Explanation
Basically, all rendering (in both OpenGL and Direct3D) is done the following way:
Set up rendering parameters (shaders, matrices, buffers, textures, etc.);
Draw a group of primitives (this is called a draw call);
Repeat these steps for all the graphics that need to be rendered.
And the following heuristic rule applies: the fewer draw calls in a scene, the better (for performance). Thus, in order to follow this rule (without reducing the number of primitives you draw), you want a smaller number of draw calls with a bigger number of primitives in each. Which in turn makes it obvious why Unity goes with the second variant in your first question, and also explains why Unity groups faces by material (because it minimizes the number of draw calls).
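As a rough illustration of that grouping, here is a sketch of batching faces by material so each material costs one draw call (the material ids and the face representation are made up for the example; this is not Unity's actual internal code):

```python
from collections import defaultdict

def batch_by_material(faces):
    """Group faces by material so each material needs only one draw call.
    `faces` is a list of (material_id, triangle) pairs, a simplified
    stand-in for a mesh with submeshes."""
    batches = defaultdict(list)
    for material, tri in faces:
        batches[material].append(tri)
    return dict(batches)

faces = [("wood", 0), ("metal", 1), ("wood", 2), ("metal", 3)]
batches = batch_by_material(faces)
# Two draw calls instead of four: one per material.
```

Each batch is then submitted as a single draw call with all the primitives that share the material's render state.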
I've made an iOS program that allows the user to draw a line on the screen with their finger. I used the touchesBegan, touchesMoved, and touchesEnded methods, along with creating a CGContext and drawing my line that way. I want the line to look as though it is beveled into the screen, almost as if it was carved. How would this be possible?
You can achieve a simple bevel by stroking your lines three times:
first, with a color brighter than the background at points p(x-1, y-1) relative to the actual line
then, your line color at the actual line position, points p(x, y)
then, brighter than the line color, but darker than the background at p(x+1, y+1)
You can think of this as a light shining onto your lines from above and to the left, making the lower coordinates brighter, passing over the bevel and having a little shadow cast on the higher coordinates.
Once you get the hang of thinking through the pseudo-3D geometry this way, you can create prettier bevels, including details inside the line. Those will take more strokes.
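A minimal sketch of the three-pass offsets described above (the color values are hypothetical; actual stroking would go through Core Graphics or whatever drawing API is at hand):

```python
def bevel_passes(points, highlight, line_color, shadow):
    """Return the three stroke passes for a simple bevel: a highlight
    offset up-left by one pixel, the line itself, and a shadow offset
    down-right by one pixel. Colors are caller-supplied RGB tuples."""
    return [
        ([(x - 1, y - 1) for x, y in points], highlight),
        (list(points), line_color),
        ([(x + 1, y + 1) for x, y in points], shadow),
    ]

passes = bevel_passes([(10, 10), (40, 30)],
                      highlight=(0.9, 0.9, 0.9),
                      line_color=(0.3, 0.3, 0.3),
                      shadow=(0.5, 0.5, 0.5))
# Stroke the three passes in order; the last pass drawn wins where
# they overlap, leaving the offset fringes as highlight and shadow.
```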
I am successfully rendering a bezier curve in real-time as the user draws with a finger (I modified GLPaint). I can adjust the width of the line just prior to drawing; this results in the whole line drawing at the new width, but remaining constant at that width over the course of the line. But I want a smooth variance of width across the course of this one line. I can also adjust the brush width dynamically as the user draws; however, this results in a blotchy line, for the following reasons.
The curve is rendered as points using glDrawArrays(). As the user draws, for about every few touch points my bezier function calculates potentially hundreds of points to render, at which point it sends these points into glDrawArrays() to be rendered. The problem is that the width variance really needs to be plotted along these points dynamically, and it must be possible to change the brush width over the course of drawing these passed points. But because they are sent into the function as a whole group to be drawn at once via glDrawArrays, achieving smooth width variance across the overall line has proven elusive thus far.
Do you know of a way to achieve a varying brush width in real time, across one bezier curve drawn with points, ideally drawn with glDrawArrays(), and without resorting to using triangles, etc?
AFAIK the only way to achieve this is to create a filled polygon, where the skeleton is determined by your original path, and the width is varied along the length by displacing the vertices of each side perpendicular to the path.
So you end up constructing a closed path around your bézier curve, thus:
The width at each control point is varied by the distance between each side, shown in green.
I hope this rough diagram clarifies the description above!
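A minimal sketch of constructing such a closed outline, assuming the path is given as a list of points with a desired width at each (the tessellation into triangles for rendering is omitted):

```python
import math

def ribbon_outline(points, widths):
    """Build a closed outline around a polyline by displacing each point
    perpendicular to the local path direction by half the width there.
    Returns the left-side points followed by the reversed right-side
    points, forming one closed path around the curve."""
    left, right = [], []
    n = len(points)
    for i, ((x, y), w) in enumerate(zip(points, widths)):
        # Tangent estimated from neighboring points (one-sided at ends).
        x0, y0 = points[max(i - 1, 0)]
        x1, y1 = points[min(i + 1, n - 1)]
        tx, ty = x1 - x0, y1 - y0
        length = math.hypot(tx, ty) or 1.0
        # Unit normal, perpendicular to the tangent.
        nx, ny = -ty / length, tx / length
        h = w / 2
        left.append((x + nx * h, y + ny * h))
        right.append((x - nx * h, y - ny * h))
    return left + right[::-1]

outline = ribbon_outline([(0, 0), (1, 0), (2, 0)], widths=[2, 2, 2])
```

Varying the entries of `widths` along the path is what produces the smooth thick-to-thin effect; the resulting closed outline is then filled instead of stroking the centerline.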
Does anyone know why this may happen:
I draw to a 2D screen using glDrawArrays with GL_POINTS (interleaved array). If I flip the buffers (presentRenderbuffer) after every call to glDrawArrays -- so after each "tile" is drawn -- everything works fine.
This is, of course, inefficient, so if I move the presentRenderbuffer outside of the draw loop, I get the errors. Basically, parts of the screen just don't draw, and it's always in the same place (the middle of the screen, horizontally).
I'm using retained backing (as I update only tiles that changed), so I need to rely on the frame buffer staying the same between draws so I can draw over it.
Any ideas why presentRenderbuffer after each tile works fine, while one final presentRenderbuffer after all of the draws wouldn't?
EDIT: Also, adding glFlush() in the tile draw loop, and moving presentRenderBuffer outside the loop produces the correct image as well.