In OpenGL, which would give better performance: changing the culled face or rotating my object?
The scenario is the following:
I compute the matrix to feed into my shaders; this draws texture A with a certain culling setting (front faces kept). When I see the object from in front I can see it, but from behind I can't, which is my desired behavior. Now I would like to add "something" behind it, let's say texture B, so that when the object is seen from behind, this other texture appears in the same position and orientation as texture A, but now showing texture B.
I thought that, rather than building a cube with two sides, I could simply "redraw on top of" my previous object. If I rotate the object, I suppose I can assume that OpenGL will simply not overwrite texture A, since the rotated geometry will not pass the face culling test. This requires one extra matrix multiplication to mirror my object about its own axis. However, what if I simply change the culling property and draw it again? Wouldn't that have the same effect of failing the test from in front but passing it from behind?
Which of these two options is better?
(P.S.: I am doing this on the iPhone, so every single bit of performance is quite relevant for older devices.)
Thank you.
I'm not sure if I understood you correctly, but it sounds like you are trying to draw a billboard here. In that case, you should disable backface culling altogether and check gl_FrontFacing in your fragment shader. Depending on that, you can flip your texture coordinates and switch the texture (just bind both textures and pass them to the shader). This way you have only one draw call and don't have to do any strange tricks or change the OpenGL state.
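A minimal fragment shader sketch of that idea (GLSL ES; the uniform and varying names are placeholders, and you would also call glDisable(GL_CULL_FACE) before drawing):

```glsl
// Sketch only: u_TextureA / u_TextureB / v_TexCoord are illustrative names.
precision mediump float;

uniform sampler2D u_TextureA;   // texture shown when viewed from the front
uniform sampler2D u_TextureB;   // texture shown when viewed from behind
varying vec2 v_TexCoord;

void main()
{
    if (gl_FrontFacing) {
        gl_FragColor = texture2D(u_TextureA, v_TexCoord);
    } else {
        // Mirror the U coordinate so the back texture is not shown mirrored.
        gl_FragColor = texture2D(u_TextureB, vec2(1.0 - v_TexCoord.x, v_TexCoord.y));
    }
}
```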
I've been doing research into how to create a pixelated effect in Unity for 3D objects. I came across the following tutorials, which were useful to me:
https://www.youtube.com/watch?v=dpNhymnBDQw
https://www.youtube.com/watch?v=Z8xB7i3W4CE
For the most part, it is what I am looking for, where I can pixelate certain objects in my scene (I don't want to render all objects as pixelated).
The effect, however, works based on the screen resolution, so when you are closer to objects they become less pixelated, and vice versa:
Notice how the cube on the right consists of fewer pixels because it is further away.
Do you perhaps have an idea how to keep the resolution of a pixelated object consistent regardless of the distance to it, as below? (This effect was mocked up in Photoshop; I don't know how to actually implement it.)
I'm not sure if this is even possible with the approach most pixel-art tutorials use.
I was thinking that maybe a per-object shader could render the pixelated object, and some shader math could then keep the resolution consistent per object; however, I have no idea how you would even render a pixel effect with just a shader. (The only method I can think of is the one described in the videos, where you render the objects at a smaller resolution via a render texture and then upscale to screen resolution, which you can't really do with a shader assigned to a material.)
Another thought I had was to render each object I want pixelated with its own separate camera, set that camera a fixed distance away from the object, and blit the renders together onto the main camera. Since each pixelated object is rendered individually by its own camera at a fixed distance, it retains a fixed pixel resolution regardless of its distance from the main camera. (Essentially, you can think of it as converting each object into a sprite and rendering that sprite in the scene, which keeps the resolution of each object consistent despite distance; a rough sketch of this idea follows.) But this obviously has its own set of problems, from performance to handling different orientations, etc.
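The per-object camera part might look something like this (a rough sketch only; the class name, pixelResolution, and fixedDistance fields are placeholders I made up, and the compositing/blit step is omitted):

```csharp
using UnityEngine;

// Sketch: renders one target object into a fixed low-resolution texture
// from a camera kept at a constant distance, so the pixel size never changes.
[RequireComponent(typeof(Camera))]
public class PixelatedObjectCamera : MonoBehaviour
{
    public Transform target;          // the object to pixelate
    public int pixelResolution = 64;  // fixed resolution, independent of main-camera distance
    public float fixedDistance = 5f;

    Camera cam;
    RenderTexture lowResTexture;

    void Start()
    {
        cam = GetComponent<Camera>();
        lowResTexture = new RenderTexture(pixelResolution, pixelResolution, 16);
        lowResTexture.filterMode = FilterMode.Point; // hard pixel edges when upscaled
        cam.targetTexture = lowResTexture;
    }

    void LateUpdate()
    {
        // Keep a constant distance so the object always covers roughly
        // the same number of pixels in the low-res texture.
        transform.position = target.position - transform.forward * fixedDistance;
        transform.LookAt(target);
    }
}
```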
Any ideas?
Ideally, I would be able to specify the resolution that a specific 3D object in my scene should be pixelated to, and it would retain that resolution at any distance. That way I have the flexibility to render different objects at different resolutions.
I should mention that I am currently using the Universal Render Pipeline with a custom render feature to achieve my present pixelated effect, by downscaling to a render texture and upscaling to screen resolution; I can change the resolution of the downscaled texture.
I've mocked up what I am trying to accomplish in the image below: I want to pinch the pixels in towards the center of an AR marker so that when I overlay AR content, the marker is less noticeable.
I am looking for some examples or tutorials I can reference to start learning how to create a shader that distorts the texture, but I am coming up with nothing.
What's the best way to accomplish this?
This can be achieved using GrabPass.
From the manual:
GrabPass is a special pass type - it grabs the contents of the screen where the object is about to be drawn into a texture. This texture can be used in subsequent passes to do advanced image based effects.
The way distortion effects work is basically that you render the contents of the GrabPass texture on top of your mesh, except with its UVs distorted. A common way of doing this (for effects such as heat distortion or shockwaves) is to render a billboarded plane with a normal map on it, where the normal map controls how much the UVs for the background sample are distorted. This works by transforming the normal from world space to screen space, multiplying it with a strength value, and applying it to the UV. There is a good example of such a shader here. You can also technically use any mesh and use its vertex normal for the displacement in a similar way.
Apart from normal mapped planes, another way of achieving this effect would be to pass in the screen-space position of the tracker into the shader using Shader.SetGlobalVector. Then, inside your shader, you can calculate the vector between your fragment and the object and use that to offset the UV, possibly using some remap function (like squaring the distance). For instance, you can use float2 uv_displace = normalize(delta) * saturate(1 - length(delta)).
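On the C# side, passing the tracker position over could look roughly like this (a sketch; the global property name "_TrackerScreenPos" and the field names are placeholders):

```csharp
using UnityEngine;

// Sketch: publishes the tracker's viewport-space position as a global shader vector.
public class TrackerScreenPosition : MonoBehaviour
{
    public Transform tracker;   // the AR marker / tracked object
    Camera mainCamera;

    void Start()
    {
        mainCamera = Camera.main;
    }

    void LateUpdate()
    {
        // Viewport space is 0..1, which roughly matches the grab-pass UV space
        // used in the shader (a y flip may be needed on some platforms).
        Vector3 viewportPos = mainCamera.WorldToViewportPoint(tracker.position);
        Shader.SetGlobalVector("_TrackerScreenPos", viewportPos);
    }
}
```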
If you want to control exactly how and when this effect is applied, make sure the shader has ZTest and ZWrite set to Off, and then set the render queue to be after the background but before your tracker.
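Putting the pieces together, a simplified version of such a GrabPass shader could look like this (a sketch for the built-in render pipeline; the shader name, _TrackerScreenPos, and _Strength are placeholders, and the queue value would need adjusting so it sits between your background and your tracker):

```shaderlab
Shader "Custom/MarkerPinchDistortion"
{
    Properties
    {
        _Strength ("Distortion Strength", Float) = 0.1
    }
    SubShader
    {
        // Adjust the queue so this draws after the background but before the tracker content.
        Tags { "Queue" = "Transparent" }
        ZWrite Off
        ZTest Always   // effectively "ZTest off"

        // Grabs whatever is already on screen into _BackgroundTexture.
        GrabPass { "_BackgroundTexture" }

        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            sampler2D _BackgroundTexture;
            float _Strength;
            // Set from script with Shader.SetGlobalVector("_TrackerScreenPos", ...).
            float4 _TrackerScreenPos;

            struct v2f
            {
                float4 pos     : SV_POSITION;
                float4 grabPos : TEXCOORD0;
            };

            v2f vert (appdata_base v)
            {
                v2f o;
                o.pos = UnityObjectToClipPos(v.vertex);
                o.grabPos = ComputeGrabScreenPos(o.pos);
                return o;
            }

            fixed4 frag (v2f i) : SV_Target
            {
                float2 uv = i.grabPos.xy / i.grabPos.w;
                float2 delta = uv - _TrackerScreenPos.xy;
                // Pinch the background towards the tracker, fading out with distance.
                float2 displace = normalize(delta) * saturate(1 - length(delta)) * _Strength;
                return tex2D(_BackgroundTexture, uv - displace);
            }
            ENDCG
        }
    }
}
```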
For AR apps, it is likely possible to avoid the performance overhead of GrabPass by using the camera background texture instead of a GrabPass texture. You can look inside your camera background script to see how it passes the camera texture to the shader and try to replicate that.
Here are two videos demonstrating how GrabPass works:
https://www.youtube.com/watch?v=OgsdGhY-TWM
https://www.youtube.com/watch?v=aX7wIp-r48c
I'm looking for ways to clip an entire Unity scene to a set of four planes. This is for an AR game, where I want to be able to zoom into a terrain yet still have it only take up a given amount of space on a table (i.e. not extend over the edges of the table).
Thus far I've got clipping working as I want for the terrain and a water effect:
The above shows a much larger terrain being clipped to the size of the table. The other scene objects aren't clipped, since they use unmodified standard shaders.
Here's a pic showing the terrain clipping in the editor.
You can see the clipping planes around the visible part of the terrain, and that other objects (trees etc) are not clipped and appear off the edge of the table.
The way I've done it involves adding parameters to each shader to define the clipping planes. This means customizing every shader I want to clip, which was fine when I was considering just terrain.
While this works, I'm not sure it's a great approach for hundreds of scene objects. I would need to modify whatever shaders I'm using, and then I'd have to set additional shader parameters every update for every object.
Not being an expert in Unity, I'm wondering whether there are other approaches, not based on per-shader changes, that I might investigate.
The end goal is to render a scene within the bounds of some plane.
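For reference, the per-shader clipping described above amounts to something like the following (a simplified sketch, not the actual shaders in use; the _ClipPlanes array can be set per material with Material.SetVectorArray or once per frame for all shaders with Shader.SetGlobalVectorArray):

```shaderlab
Shader "Custom/ClippedUnlit"
{
    Properties
    {
        _MainTex ("Texture", 2D) = "white" {}
    }
    SubShader
    {
        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            sampler2D _MainTex;
            // Four world-space clip planes: xyz = inward-facing normal, w = distance.
            float4 _ClipPlanes[4];

            struct v2f
            {
                float4 pos      : SV_POSITION;
                float2 uv       : TEXCOORD0;
                float3 worldPos : TEXCOORD1;
            };

            v2f vert (appdata_base v)
            {
                v2f o;
                o.pos = UnityObjectToClipPos(v.vertex);
                o.uv = v.texcoord.xy;
                o.worldPos = mul(unity_ObjectToWorld, v.vertex).xyz;
                return o;
            }

            fixed4 frag (v2f i) : SV_Target
            {
                // Discard any fragment that lies on the outside of one of the planes.
                for (int p = 0; p < 4; p++)
                {
                    clip(dot(i.worldPos, _ClipPlanes[p].xyz) + _ClipPlanes[p].w);
                }
                return tex2D(_MainTex, i.uv);
            }
            ENDCG
        }
    }
}
```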
One easy way would be to use Box Colliders as triggers on each side of your plane. You could then turn off the Renderers of objects entering or staying in a trigger with OnTriggerEnter/OnTriggerStay, and turn them back on with OnTriggerExit.
You can also use Bounds.Contains.
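A minimal sketch of the Bounds.Contains approach (the field names here are just placeholders):

```csharp
using UnityEngine;

// Sketch: hides renderers whose objects sit outside the table volume.
public class TableBoundsCuller : MonoBehaviour
{
    public Bounds tableBounds = new Bounds(Vector3.zero, Vector3.one);
    public Renderer[] renderersToCull;

    void LateUpdate()
    {
        foreach (Renderer r in renderersToCull)
        {
            // Only the object's pivot position is tested here; for large objects
            // you might test r.bounds against the table bounds instead.
            r.enabled = tableBounds.Contains(r.transform.position);
        }
    }
}
```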
How would I go about creating, as a background for a 3D scene, a plane with a texture that stretches into the horizon? I have tried a skybox, but I think a skybox would also be needed "behind" the infinite plane.
It depends on whether you need actual geometry that will be seen from close up; if not, you can bake it into the skybox.
In some cases (e.g. when the user is wearing a stereoscopic head-mounted display) you will need actual geometry.
It's not exactly clear from your question whether you want to create a 'floor' or a 'wall', but in both cases I would link it to the player position somehow. A floor could follow the player's X and Z (see the sketch below), while a 'wall' could be made a child of the camera, so that it never leaves the viewport.
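The floor-follow idea could look roughly like this (assuming a Unity-style setup; the field names and the tile size are placeholders):

```csharp
using UnityEngine;

// Sketch: keeps a finite ground plane centred under the player so it reads as infinite.
public class FollowPlayerFloor : MonoBehaviour
{
    public Transform player;
    public float floorHeight = 0f;
    public float textureTileSize = 10f; // assumed world-space size of one texture tile

    void LateUpdate()
    {
        // Snap to whole tiles so the texture does not appear to slide with the player.
        float x = Mathf.Round(player.position.x / textureTileSize) * textureTileSize;
        float z = Mathf.Round(player.position.z / textureTileSize) * textureTileSize;
        transform.position = new Vector3(x, floorHeight, z);
    }
}
```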
A skybox would still be the cheapest option by a significant margin. We can give more advice if you provide some additional information, e.g. what you are trying to achieve.
First, I just want to introduce my problem, because it is really complex and you need this context to understand it properly.
I am trying to do something with SceneKit and Swift: I want to reproduce what we can see in the TV show Doctor Who, where the Doctor's spaceship is bigger on the inside, as you can see in this video.
Of course the SceneKit framework doesn't support that kind of impossible geometry, so we need some sort of hackery to achieve it.
Now let's talk about my idea in plain English.
In fact, what we want to do is display two completely different dimensions in the same place, so I was thinking of using:
A first dimension for the inside of the spaceship.
A second dimension for the outside of the spaceship.
Now, let's say that you are outside the ship: you would be in the outside dimension, and my goal would be to display a portion of the inside dimension at the level of the door, to give the effect where the camera is outside but we can clearly see that the inside is bigger:
We would use an equivalent principle from the inside.
Now let's talk about the game logic:
I think that a good way to represent these dimensions would be to use two scenes.
We will call outsideScene the scene for the outside, and insideScene the scene for the inside.
So if we take the picture again, this is what it would look like at the scene level:
To make it look realistic, the view of the inside needs to follow the movements of the outside camera; that's why I think all the properties of these two cameras should be identical:
On the left is the outsideScene and on the right, the insideScene. I represent the camera field of view in orange.
If the outsideScene camera moves right, the insideScene camera will do exactly the same thing; if the outsideScene camera rotates, the insideScene camera will rotate in the same way... you get the principle.
So, my question is the following: what can I use to mask a certain portion of one scene (in this case the yellow zone in the outsideView) with what the camera of another view (the insideView) "sees"?
First, I thought that I could simply get an NSImage from the insideScene and use it as the texture of a surface in the outsideScene, but the problem is that SceneKit would compute its own perspective, lighting, etc., so it would just look like we were displaying something on a screen, and that's not what I want.
There is no super easy way to achieve this in SceneKit.
If your "inside scene" is static and can be baked into a cube map texture, you can use shader modifiers and a technique called interior mapping (you can easily find examples on the web).
If you need a live, interactive "inside scene", you can use the same technique but will have to render your scene to a texture first (or render your inside scene and outer scene one after the other with stencils). This can be done by leveraging SCNTechnique (new in Yosemite and iOS 8). On older versions you will have to write some OpenGL code in SCNSceneRenderer delegate methods.
I don't know if it's 'difficult'. As is so often the case on iOS, a lot of the time the simplest answer... is the simplest answer.
Maybe consider this:
Map a texture onto a cylinder sector prescribed by the geometry of the Tardis cube shape. Make sure the cylinder radius is equal to the focal distance of the camera, and make sure you track the camera to the focal point.
The texture will be distorted because it is a cylinder mapping onto a cube. The actors' nodes in the Tardis will react properly to the camera, but there should be two groups of light sources... one set for the Tardis and one outside the Tardis.