In my iPhone application, many lines are drawn. Alas, some of them are neither vertical nor horizontal, meaning that they have jagged edges.
Enabling GL_LINE_SMOOTH seems to have no effect. Multisample antialiasing as described in Apple's documentation works, but I can't imagine that it's impossible to make nicer-looking lines. With multisampling the line becomes smooth, but the color is distorted along sections of the line.
Using OpenGL, how can you draw a line that looks like one drawn with Quartz 2D: smooth edges, consistent color, and variable width?
(To be clear, glLineWidth works perfectly, so variable width is not currently an issue.)
This is a common problem in 3D graphics. Graphics cards are usually more concerned with fill rate and speed than with visual quality in their basic geometry rendering. They rasterize lines and polygon edges with Bresenham-style algorithms, which do no anti-aliasing.
Opting for speed rather than visual quality is actually quite reasonable for a 3D API, since most of the visual detail eventually arises from the use of textures (which are interpolated) rather than from the geometry.
My suggestion would be to activate multisampling, to do FSAA by hand using the accumulation buffer (if one is available under OpenGL ES), or to draw each line several times with some transparency, jittered around the intended coordinates.
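For the jittering idea, a minimal GL ES 1.1 fixed-function sketch; the offsets, pass count, and the drawLines() helper are assumptions, not a tuned kernel:

// Sub-pixel jitter offsets (illustrative values)
static const GLfloat jitter[4][2] = {
    { 0.25f,  0.25f}, {-0.25f, -0.25f},
    { 0.25f, -0.25f}, {-0.25f,  0.25f},
};
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
// Four 25%-alpha passes approximate (but don't exactly reach) full opacity
glColor4f(r, g, b, 0.25f);
for (int i = 0; i < 4; ++i) {
    glPushMatrix();
    // pixelW/pixelH: size of one pixel in your coordinate system (assumption)
    glTranslatef(jitter[i][0] * pixelW, jitter[i][1] * pixelH, 0.0f);
    drawLines();   // hypothetical helper: submits your line geometry
    glPopMatrix();
}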
In addition to what Fabian said, it may be that you just have to enable alpha blending for GL_LINE_SMOOTH to have a visible effect, by calling
// Standard alpha blending, so the smoothed edge fragments fade out correctly
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glEnable(GL_BLEND);
In Unity 2018.3.8f1, I'm using a standard LineRenderer and noticed that when a line is not horizontal or vertical, its edges become jagged. They appear this way both in play mode and in a build at Ultra quality.
I've tried enabling/maxing anti-aliasing in the quality settings but it has zero effect here. I've tried playing with width, vertex counts, adding points, etc.
Even when adding curves for a cubic bezier, the edges are still jagged.
I'm just not sure what else to do. The lines look awful.
I'm trying to draw semi-transparent primitives (lines, circles) in OpenGL ES using Cocos2d but can't avoid the visible overlap regions. Does anyone know how to solve this?
This is a problem you come across pretty often, even in 3D.
I'm not too familiar with Cocos2D, but one way to solve this in generic OpenGL is to fill the framebuffer's alpha channel with your desired alpha value, switch the blending mode to glBlendFunc(GL_ONE_MINUS_DST_ALPHA, GL_DST_ALPHA), and then draw the primitives. The idea is that each primitive is drawn with a transparency taken from the framebuffer, and in the process the area you've drawn to is masked, so subsequent overlapping primitives are suppressed there.
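A rough GL ES 1.1 sketch of that idea; the 0.5 opacity and the drawing helpers are assumptions, and without separate alpha blending the masking is only approximate:

// Step 1: write the desired background weight into the alpha channel only
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_TRUE);
glDisable(GL_BLEND);
glColor4f(0.0f, 0.0f, 0.0f, 0.5f);   // 0.5 = weight the background keeps
drawFullscreenQuad();                // hypothetical helper
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);

// Step 2: draw primitives with full source alpha; the framebuffer's alpha
// both supplies the transparency and masks already-covered areas
glEnable(GL_BLEND);
glBlendFunc(GL_ONE_MINUS_DST_ALPHA, GL_DST_ALPHA);
glColor4f(1.0f, 0.0f, 0.0f, 1.0f);
drawCircle();                        // hypothetical helper
drawLine();                          // hypothetical helper; overlap is damped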
Another approach is to render the whole thing to a texture or assemble the shape using polygons that don't overlap.
I'm not sure whether Cocos2D supports any of these…
I do not know what capabilities Cocos2D specifically provides, but I can see two options:
One, do not overlap like that; instead construct more complex geometry such that each pixel is covered only once.
Two, use the stencil buffer to build a mask as you draw, and reject any pixels that are already masked (a sketch of this follows below).
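A minimal sketch of the stencil approach in GL ES 1.1; the helper names are hypothetical, and the framebuffer must be configured with a stencil buffer:

glClearStencil(0);
glClear(GL_STENCIL_BUFFER_BIT);
glEnable(GL_STENCIL_TEST);
// Only draw where the stencil value is still 0 ...
glStencilFunc(GL_EQUAL, 0, 0xFF);
// ... and increment it wherever a fragment passes, so overlaps are rejected
glStencilOp(GL_KEEP, GL_KEEP, GL_INCR);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glColor4f(1.0f, 0.0f, 0.0f, 0.5f);   // semi-transparent primitives
drawCircle();                        // hypothetical helper
drawLine();                          // hypothetical helper
glDisable(GL_STENCIL_TEST);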
I have to draw the following shape in a rectangle. What is the best way to do it? The blue areas are the background color. The black is a border and the red is the interior color. I want to paint the black and red only.
Thanks
It totally depends on how you will use the shapes: whether they will move, how many of them will be displayed, whether they will be scaled while displayed, etc.
In general, OpenGL ES is considered to be the fastest way of drawing on iOS devices. However, if you have only a small number of those shapes (say, fewer than 10–100) and the rest of the application does not have a lot of fast animations, Quartz 2D is usually enough to achieve, say, a 30/60 Hz drawing rate.
How you use Quartz 2D still matters a lot. If you need to update the shapes frequently, you should draw each shape into a CALayer and then, rather than redrawing the shapes, move and transform the layers.
Comparing drawing as a bitmap versus a vector shape, I believe both would work fine for this kind of shape, especially because you would not redraw the shape very often but only work with the layer on which the image is already drawn. But if your shapes are scaled frequently, consider vector drawing for image quality.
To summarize, learn (if you don't already know) how to draw into a graphics context first (see Drawing and Printing Guide for iOS). You should be able to draw a simple vector shape or a bitmap image by overriding drawRect or similar methods inside a UIView object. Then if you need to animate those shapes, learn how to create a CALayer and draw on the layer (see Core Animation Programming Guide). Finally, if you need to create many duplicates of the shape on the screen, learn how to use CGLayer to replicate an image (see Quartz 2D Programming Guide).
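As an illustration, a minimal Quartz 2D sketch using the C-level CGContext API; the rectangle path here is only a placeholder for the actual shape:

#include <CoreGraphics/CoreGraphics.h>

// Hypothetical: draws a red interior with a 2 pt black border into a Quartz
// context; call it from drawRect with UIGraphicsGetCurrentContext()
static void DrawShape(CGContextRef ctx, CGRect bounds)
{
    CGContextSaveGState(ctx);
    CGContextSetRGBFillColor(ctx, 1.0, 0.0, 0.0, 1.0);    // red interior
    CGContextFillRect(ctx, CGRectInset(bounds, 2.0, 2.0));
    CGContextSetRGBStrokeColor(ctx, 0.0, 0.0, 0.0, 1.0);  // black border
    CGContextSetLineWidth(ctx, 2.0);
    CGContextStrokeRect(ctx, CGRectInset(bounds, 1.0, 1.0));
    CGContextRestoreGState(ctx);
}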
I'm running into the traditional tile/mipmap problem on the iPad using OpenGL ES. Basically, if you have a large texture (larger than 1k X 1k), you can break it up into pieces and map those pieces onto individual polygons. You can clamp the texture coordinates to the edges and it mostly works, but you get artifacts along the boundaries.
Now I know why you get this and know what the traditional solution is. To wit, you make a border around the outside of each littler texture (say 6 pixels). You copy from the big texture into each little one so that the little texture's unique content occupies only the inside pixels (say 256 − 2*6). Then you smear the valid pixels out into the border area. Lastly, you map your texture coordinates to use only those valid inside pixels. Works okay.
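As a sketch, using the numbers above (256-pixel tiles with a 6-pixel smeared border), the texture coordinates get inset like this; the variable names are illustrative:

const float tileSize = 256.0f;
const float border   = 6.0f;
// Valid content spans [border, tileSize - border] within each tile, so map
// the quad's 0..1 coordinates into that inner range:
float t0 = border / tileSize;               // 6/256   = 0.0234375
float t1 = (tileSize - border) / tileSize;  // 250/256 = 0.9765625
GLfloat texCoords[] = {
    t0, t0,
    t1, t0,
    t0, t1,
    t1, t1,
};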
If you're not nodding along at this point, don't try to answer. :-)
Anyway, OpenGL introduced clamping modes way back in the day to solve this. I don't see those modes in OpenGL ES (at least on this hardware), and I see other references to this problem. What I'm wondering is whether I'm missing something. Is there a newer way to solve the tile/edge problem that I'm not aware of?
[Update]
A screen shot of the result is attached here. The visible line is at the end of one texture and the start of another. This is using CLAMP_TO_EDGE.
GLES supplies GL_CLAMP_TO_EDGE but not GL_CLAMP, which clamps to the centres of the outermost pixels in a texture rather than to the extreme edges. So out-of-bounds (border or wraparound) accesses are completely prevented with CLAMP_TO_EDGE but not with CLAMP.
CLAMP_TO_EDGE is a part of the GL ES specification (as per here for 1.1 and here for 2.0), so if your hardware doesn't support it then it's not technically GL ES compliant. It's also available in full OpenGL, but I think only as of version 1.2. It's implied that CLAMP_TO_EDGE made the leap to ES but CLAMP didn't because the former is considered to be a fixed version of the latter.
It sounds to me like CLAMP_TO_EDGE should be suitable for what you're doing — have I misunderstood?
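For reference, selecting that mode is a per-texture parameter (textureId here is a placeholder for your texture name):

glBindTexture(GL_TEXTURE_2D, textureId);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);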
In the end the problem was related to texture compression. The lines were due to the compression method assuming the texture wrapped around.
I solved the problem by building slightly larger textures than needed, compressing and then using only an area within each texture, thus leaving a border.
I need antialiasing on the iPhone 3G (OpenGL ES 1.1), NOT the iPhone 3GS with OpenGL ES 2.0.
I've drawn a 3D model, and the pixels on the edges of the model look like teeth.
I've tried setting various filters for the texture, but those filters only make the INSIDE of the texture look better.
How can I get good antialiasing?
Maybe I should use some kind of smoothing when drawing the triangles? If so, how is that possible in OpenGL ES 1.1?
Thanks.
As of iOS 4.0, full-screen anti-aliasing is directly supported via an Apple extension to OpenGL. The basic concept is similar to epatel's suggestion: render the scene onto a larger framebuffer, then copy that down to a screen-sized framebuffer, then copy that buffer to the screen. The difference is, instead of creating a texture and rendering it onto a quad, the copy/sample operation is performed by a single function call (specifically, glResolveMultisampleFramebufferAPPLE()).
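Roughly, the setup looks like this in GL ES 1.1; the buffer variable names, the 4x sample count, and the dimensions are placeholders:

GLuint sampleFbo, sampleColorRb;
glGenFramebuffersOES(1, &sampleFbo);
glBindFramebufferOES(GL_FRAMEBUFFER_OES, sampleFbo);
glGenRenderbuffersOES(1, &sampleColorRb);
glBindRenderbufferOES(GL_RENDERBUFFER_OES, sampleColorRb);
// 4x multisampled color storage at screen size
glRenderbufferStorageMultisampleAPPLE(GL_RENDERBUFFER_OES, 4, GL_RGBA8_OES,
                                      width, height);
glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES,
                             GL_RENDERBUFFER_OES, sampleColorRb);

// Each frame: render into sampleFbo, then resolve into the screen-sized FBO
glBindFramebufferOES(GL_READ_FRAMEBUFFER_APPLE, sampleFbo);
glBindFramebufferOES(GL_DRAW_FRAMEBUFFER_APPLE, screenFbo);  // your normal FBO
glResolveMultisampleFramebufferAPPLE();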
For details on how to set up the buffers and modify your drawing code, you can read a tutorial on the Gando Games blog which is written for OpenGL ES 1.1; there is also a note on Apple's Developer Forums explaining the same thing.
Thanks to Bersaelor for pointing this out in another SO question.
You can render into a larger FBO and then use that as a texture on a square.
Have a look at this article for an explanation.
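A rough sketch of that approach under the OES_framebuffer_object extension in GL ES 1.1; the sizes, the screenFbo handle, and the draw helpers are placeholders (note ES 1.1 may also require power-of-two texture sizes):

GLuint fbo, colorTex;
glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
// Allocate a texture at twice the screen resolution
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 2 * screenW, 2 * screenH, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glGenFramebuffersOES(1, &fbo);
glBindFramebufferOES(GL_FRAMEBUFFER_OES, fbo);
glFramebufferTexture2DOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES,
                          GL_TEXTURE_2D, colorTex, 0);

// Render the scene at 2x, then let linear filtering downsample it
glViewport(0, 0, 2 * screenW, 2 * screenH);
drawScene();                                  // hypothetical helper
glBindFramebufferOES(GL_FRAMEBUFFER_OES, screenFbo);  // back to the screen
glViewport(0, 0, screenW, screenH);
drawFullscreenQuad(colorTex);                 // hypothetical helper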
Check out the EGL_SAMPLE_BUFFERS and EGL_SAMPLES parameters to eglChooseConfig(), as well as glEnable(GL_MULTISAMPLE).
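On platforms that expose EGL directly (iOS uses EAGL instead), the request looks roughly like this; the attribute values are illustrative:

EGLint attribs[] = {
    EGL_RED_SIZE, 8, EGL_GREEN_SIZE, 8, EGL_BLUE_SIZE, 8,
    EGL_SAMPLE_BUFFERS, 1,   // ask for a multisampled config ...
    EGL_SAMPLES, 4,          // ... with 4 samples per pixel
    EGL_NONE
};
EGLConfig config;
EGLint numConfigs;
eglChooseConfig(display, attribs, &config, 1, &numConfigs);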
EDIT: Hrm, apparently you're out of luck, at least as far as standardized approaches go. As mentioned in that thread you can render to a large off-screen texture and scale to a smaller on-screen quad or jitter the view matrix several times.
We found another way to achieve this. If you edit your textures and add, for example, a 2-pixel frame of transparent pixels, the colored pixels in the texture are blended with the transparent pixels where necessary, giving a basic anti-aliasing effect. You can read the full article here in our blog.
The advantage of this approach is that you are not rendering a bigger image, copying a buffer, or, even worse, making a texture from a buffer, so there is no impact on performance.