How can I draw 3D model outlines on the iPhone? (OpenGL ES)

I've got a pretty simple situation that calls for something I don't know how to do without a stencil buffer (which is not supported on the iPhone).
Basically, I've got a 3D model that gets drawn behind an image. I want an outline of that model to be drawn on top of it at all times. So when the model is behind the image, you can see its outline, and when it's not behind the image you can see the model with an outline.
One simple way to get an outline would be to draw a wireframe of the model with thick lines and a z offset, then draw the regular model on top of it. The obvious problem with this is that the outline needs to be drawn after the model.
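For illustration, a minimal sketch of that two-pass idea in OpenGL ES 1.1-style C (drawModelWireframe() and drawModelTextured() are hypothetical helpers that issue the model's vertex arrays as lines and textured triangles; the translate is a crude stand-in for a depth offset):

    void drawOutlinedModel(void)
    {
        /* Pass 1: thick dark wireframe, nudged slightly away from the eye
         * so the fill pass wins the depth test everywhere but the rim. */
        glDisable(GL_TEXTURE_2D);
        glColor4f(0.0f, 0.0f, 0.0f, 1.0f);
        glLineWidth(4.0f);
        glPushMatrix();
        glTranslatef(0.0f, 0.0f, -0.01f);   /* crude z offset along the view axis */
        drawModelWireframe();
        glPopMatrix();

        /* Pass 2: the normal textured model on top. */
        glEnable(GL_TEXTURE_2D);
        glColor4f(1.0f, 1.0f, 1.0f, 1.0f);
        drawModelTextured();
    }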
This method needs to be fast, as I'm already pushing a lot of polygons around - full-on drawing of the model again in one way or another is not really desired.
Also, is there any way to find out whether my model is visible at the moment? That is, whether the image on top has an opaque section or a transparent section at the model's position. If I can figure this out (again, very quickly), then I can draw either a wireframe or a textured model, depending on whether it's visible.
Any ideas here? Thanks.

Most of the time you can re-create stencil effects using the alpha channel and render-to-texture, if you think about it.
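For what it's worth, a rough sketch of how that can look with destination alpha in plain OpenGL ES 1.1 C (drawMaskShape() and drawMaskedContent() are made-up placeholders, and the context must have been created with an alpha channel):

    /* Write a mask into the framebuffer's alpha channel, then use it to
     * clip later drawing -- a stand-in for a stencil test. */
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_TRUE);   /* write alpha only */
    glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
    glClear(GL_COLOR_BUFFER_BIT);
    drawMaskShape();                  /* hypothetical: fills the mask area with alpha = 1 */
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_FALSE);     /* back to colour only */

    glEnable(GL_BLEND);
    glBlendFunc(GL_DST_ALPHA, GL_ONE_MINUS_DST_ALPHA);    /* keep pixels only inside the mask */
    drawMaskedContent();              /* hypothetical: shows through the mask only */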

http://research.microsoft.com/en-us/um/people/hoppe/proj/silmap/ is a technical paper on the matter. Hopefully there's an easier way for you to accomplish this ;)

Here is a general approach that might produce the effect you want (I have experience with OpenGL, but not the iPhone):
Method 1
Render the object to a texture as pure white, separate from the scene. This will produce a white mask where the object would be rendered.
Either draw this texture directly to the screen with an alpha fade for a "full object" effect, or, if you're intent on outlines, render this texture to another texture slightly enlarged, then render the original "full object" over the enlarged copy as pure black. This creates an outline texture that you can render over the top of the scene.
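A rough sketch of that compositing step, assuming the white silhouette has already been rendered offscreen into maskTex (drawSilhouetteQuad() is a made-up helper that draws maskTex as a screen-aligned quad at a given scale and colour; here it is composed straight into the scene rather than via a second texture):

    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

    /* Slightly enlarged white silhouette... */
    drawSilhouetteQuad(maskTex, 1.05f, 1.0f, 1.0f, 1.0f);

    /* ...then the original-size silhouette in black, leaving only a white rim. */
    drawSilhouetteQuad(maskTex, 1.00f, 0.0f, 0.0f, 0.0f);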
Method 2
Edited out. I just read the "no stencil buffer" stipulation.
Does that help?

Related

Unity - Guidelines for color isolation effect

I would like to highlight two objects in Unity so that they stand out. But instead of actually highlighting them, which I already know how to do, I would like to have some kind of color isolation effect, like the one in the picture below:
However, I really have no idea how I could achieve this!
Could I use some post-processing effect to remove the saturation, except for a set of objects?
Should I instead desaturate all the materials of all the objects in the scene and also desaturate the sun color?
Should I apply a shader that only renders grayscale colors to all the other objects in the scene?
Could you point me in the right direction? Thank you.
One approach would be:
- Add a desaturate post-process to your main camera and set its culling mask to Everything (but turn off the effect for now)
- Create a second camera, make it a child of the first one (so it keeps the same rotation and position) and set its culling mask to something else (a layer where you will place your highlighted objects)
- When an object needs to be highlighted, add it to the highlights layer and enable the main camera's desaturation. The object will stay colored because it is rendered by the camera that does not have the desaturation effect.
You'll have to play with the "Clear Flags" option on both cameras to get this to work correctly.
If you're still using the LWRP Post Processing stack, I'd add a Color Grading effect and use that to 'tune' your unwanted colours to grey.

Unity3D - Make texture edges not stretch

I've been searching around for this one for a bit, and unfortunately I can't seem to find any good, consistent results. So, in the Unity UI system, buttons can stretch without becoming pixelated or distorted. This is because the texture is split up into 9 parts - the corners, middle, and sides.
This works because the button's middle and sides are stretched, but not the corners. The button then appears undistorted at any size.
So, the question is as follows: How can I do the same thing for a transparent, unlit texture in 3D space? I have a speech bubble texture on a flat plane that I know how to re-scale to fit the text in the speech bubble.
I've set the texture type to Multiple Sprite, and divided it up into 9 parts. However, I cannot seem to find where I can set the texture to act like the UI button does, and I'm not sure that this is even possible in this way in 3D space.
Is there a way, or should I just make the different parts of the texture different objects, and move them together? That would seem very inefficient and ugly compared to this.
To accomplish what you are asking, you would need to create tiles for this speech bubble and then write a script that procedurally builds a speech bubble based on the plane's scale value. You could also try just changing the texture's Filter Mode to Point.
However, I really don't think you should be using textures for this anyway. Why not just use a Unity Canvas and set the Render Mode to World Space? Then you can make your text box a sprite rather than a texture, and set its filter mode to Point. This would also make things a lot easier when you want text in the speech bubble later on.

OpenGL, primitives with opacity without visible overlap

I'm trying to draw semi-transparent primitives (lines, circles) in OpenGL ES using Cocos2d but can't avoid the visible overlap regions. Does anyone know how to solve this?
This is a problem you usually come across pretty often, even in 3D.
I'm not too familiar with Cocos2D, but one way to solve this in generic OpenGL is to fill the framebuffer's alpha channel with your desired opacity, switch the blending mode to glBlendFunc(GL_ONE_MINUS_DST_ALPHA, GL_DST_ALPHA), and draw your primitives. The idea is that each primitive is drawn with the transparency taken from the framebuffer, and in the process the area you've drawn to is masked, so subsequent overlapping primitives are rejected there.
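A rough sketch of that sequence in plain OpenGL ES C (drawPrimitives() is a placeholder, and the exact alpha handling may need tuning on your target):

    /* Seed the alpha channel with the desired opacity without disturbing
     * the scene's colour, then let destination alpha mask out pixels that
     * have already been covered. */
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_TRUE);
    glClearColor(0.0f, 0.0f, 0.0f, 0.5f);   /* 0.5 = desired opacity */
    glClear(GL_COLOR_BUFFER_BIT);
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);

    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE_MINUS_DST_ALPHA, GL_DST_ALPHA);
    drawPrimitives();                       /* overlapping regions no longer darken */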
Another approach is to render the whole thing to a texture or assemble the shape using polygons that don't overlap.
I'm not sure whether Cocos2D supports any of these…
I do not know what capabilities Cocos2D specifically provides, but I can see two options.
One: do not overlap like that; instead, construct more complex geometry such that each pixel is only covered once.
Two: use the stencil buffer to create a mask as you draw, and reject any pixels that are already masked.
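If a stencil buffer is available, the second option can be as simple as this sketch (drawPrimitives() is a placeholder):

    glClear(GL_STENCIL_BUFFER_BIT);
    glEnable(GL_STENCIL_TEST);
    glStencilFunc(GL_EQUAL, 0, 0xFF);        /* pass only where nothing is drawn yet */
    glStencilOp(GL_KEEP, GL_KEEP, GL_INCR);  /* mark covered pixels on pass */
    drawPrimitives();                        /* overlapping fragments are rejected */
    glDisable(GL_STENCIL_TEST);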

How to draw a light effect over a texture on iPhone using UIKit/Quartz

I have a scene with a background image (a lit room), and a black image (shadow) over that. I need to be able to move my finger over the background and reveal some parts of the scene, simulating a dim light source in a dark room.
My current approach was to generate a mask depending on the position of the touch, and then apply that mask to the shadow image. The problem is that I'm generating a new mask and applying it every time I receive a touch event. It's a large image (800x600), and this drags performance down and drives memory usage way up, eventually crashing the game. (I don't think I have any memory leaks, but that's not guaranteed... in any case, the performance alone isn't acceptable.)
Can anyone think of a better approach to do this (one that doesn't involve OpenGL ES -- that's not an option in this project)?
Maybe, to get around the different shadow levels, you could have a grid of views (squares) between the image and the shadow view. Each grid square has a different alpha opacity, and when the spot is over a grid square, that square's alpha opacity changes to 0. When the spot moves off the grid square, its alpha opacity changes back to its default.
Without more information it is a little difficult to know whether this approach will work in your case, but what you could do is generate a single mask image, say a radial alpha gradient, and then apply an affine transform to it to shape it according to the touches. This can be used to simulate a torch/flashlight beam.
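A sketch of building that one-off mask with Quartz's C API (makeLightMask() is a hypothetical helper):

    #include <CoreGraphics/CoreGraphics.h>

    /* Build a square grayscale image that fades from white in the centre
     * to black at the edge -- the "torch beam". Created once and reused. */
    CGImageRef makeLightMask(size_t dim)
    {
        CGColorSpaceRef gray = CGColorSpaceCreateDeviceGray();
        CGContextRef ctx = CGBitmapContextCreate(NULL, dim, dim, 8, 0,
                                                 gray, kCGImageAlphaNone);
        CGFloat components[4] = { 1.0, 1.0,   /* white, opaque centre */
                                  0.0, 1.0 }; /* black edge */
        CGGradientRef g = CGGradientCreateWithColorComponents(gray, components,
                                                              NULL, 2);
        CGPoint c = CGPointMake(dim / 2.0, dim / 2.0);
        CGContextDrawRadialGradient(ctx, g, c, 0.0, c, dim / 2.0, 0);
        CGImageRef mask = CGBitmapContextCreateImage(ctx);
        CGGradientRelease(g);
        CGContextRelease(ctx);
        CGColorSpaceRelease(gray);
        return mask;
    }

On each touch you would then translate and scale the CTM (CGContextTranslateCTM, CGContextScaleCTM) and redraw this cached image, rather than regenerating the mask bitmap.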
I would try this: use one view with a custom drawRect implementation. First draw the shadow image (in grayscale), then a bright spot image in white with alpha, and finally the background image in 'multiply' blend mode.
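Sketched with the CoreGraphics C calls (the images and rects are assumed to come from the surrounding view code):

    /* Hypothetical drawRect body: shadow, then light spot, then the lit
     * room multiplied over the top. */
    void drawLitScene(CGContextRef ctx, CGRect bounds, CGRect spotRect,
                      CGImageRef shadow, CGImageRef spot, CGImageRef background)
    {
        CGContextDrawImage(ctx, bounds, shadow);        /* grayscale darkness */
        CGContextDrawImage(ctx, spotRect, spot);        /* white spot with alpha */
        CGContextSetBlendMode(ctx, kCGBlendModeMultiply);
        CGContextDrawImage(ctx, bounds, background);    /* lit room, multiplied */
    }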
Just a thought: does the shadow have to be an image? Perhaps you could simply fill the shadow layer with a color and mask it instead? That way memory usage should be lower and the effect should be nearly identical (if not exactly the same).
There is no reason to generate a new mask on every touch move. Instead, initialize the mask once and manipulate it (reset its frame) as needed upon touch events.

OpenGL ES for iPhone blending not working

I'm a beginner to 3D graphics in general and I'm trying to make a 3D game for the iPhone, more specifically, one that uses textures containing transparency. I am able to load a texture (an 8-bit .png file) into OpenGL and map it to a square (made from a triangle strip), but the transparent parts of the image are not transparent when I run the app in the simulator: they take on the background colour, whatever it is set to, yet they still obscure images that are further away. I am unable to post a screenshot as I am a new user, so my apologies for that. I will try to upload and link it some other way.
Even more annoying is that when I load the image into Apple's GLSprite example code, it works exactly as I want it to. I have copied the code from GLSprite's setupView into my project and it still doesn't work properly.
I am using the blend function:
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
I was under the impression that this is correct for what I want to do.
Is there something very basic I am missing here? Any help would be much appreciated as I am submitting this as a coursework project in a few weeks and would very much like it to work.
Let me break this down:
First of all, your transparent object is drawn.
At this point two things happen:
The pixels are drawn correctly to the back buffer
The corresponding depth values are set in the depth buffer. Note that depth values are written all across your object; transparency does not affect them.
You then draw other objects behind the transparent object.
But none of these objects' pixels will be drawn, because their depth values are greater (farther away) than the values already stored, so they fail the depth test.
The solution to this problem is to draw your scene back-to-front (start with the objects farthest away).
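In code, the usual pattern looks something like this (ES 1.1 style; the draw helpers are placeholders):

    glEnable(GL_DEPTH_TEST);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    drawOpaqueObjects();                          /* opaque geometry first, any order */

    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);  /* premultiplied alpha, as in the question */
    glDepthMask(GL_FALSE);                        /* still test depth, but don't write it */
    drawTransparentObjectsBackToFront();          /* sorted farthest to nearest */
    glDepthMask(GL_TRUE);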
Hope that helps.
Edit: I'm assuming you are using the depth buffer here. If that's not the case, I'll consider writing another answer.