How would I go about creating, as a background for a 3D scene, a plane with a texture that stretches into the horizon? I have tried a skybox, but I think a skybox will also be needed "behind" the infinite plane.
It depends on whether you need actual geometry that will be seen from close up; if not, you can bake it into the skybox.
In some cases (e.g. when the user is wearing a stereoscopic display) you will need actual geometry.
It's not exactly clear from your question whether you want to create a 'floor' or a 'wall', but in both cases I would link it to the player's position somehow. A floor could follow the player's X and Z, while a 'wall' could be made a child of the camera, so it never leaves the viewport.
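A minimal sketch of the "floor follows the player" idea (the script name and the `player` field are placeholders; assign whatever transform you actually track):

```csharp
using UnityEngine;

// Attach to the large ground plane. It copies the player's X/Z every frame,
// so the plane's edges never come into view and the texture appears endless.
public class FollowPlayerXZ : MonoBehaviour
{
    public Transform player;    // the player or camera, assigned in the Inspector

    void LateUpdate()
    {
        Vector3 p = transform.position;
        p.x = player.position.x;
        p.z = player.position.z;
        transform.position = p; // keep the plane's own height unchanged
    }
}
```

If the ground texture should not appear to slide along with the plane, you can additionally offset `material.mainTextureOffset` by the distance moved, scaled by the texture tiling.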
A skybox would still be the cheapest by a significant margin. We can give more advice if you provide some additional information, e.g. what you are trying to achieve.
I'm looking for ways to clip an entire Unity scene to a set of 4 planes. This is for an AR game, where I want to be able to zoom into a terrain, yet still have it only take up a given amount of space on a table (i.e. not extend over the edges of the table).
Thus far I've got clipping working as I want for the terrain and a water effect:
The above shows a much larger terrain being clipped to the size of the table. The other scene objects aren't clipped, since they use unmodified standard shaders.
Here's a pic showing the terrain clipping in the editor.
You can see the clipping planes around the visible part of the terrain, and that other objects (trees etc) are not clipped and appear off the edge of the table.
The way I've done it involves adding parameters to each shader to define the clipping planes. This means customizing every shader I want to clip, which was fine when I was considering just terrain.
While this works, I'm not sure it's a great approach for hundreds of scene objects. I would need to modify whatever shaders I'm using, and then I'd have to set additional shader parameters every update for every object.
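For reference, the per-object plumbing looks roughly like this; `_ClipPlanes` is an illustrative property name, and the four plane equations are assumed to come from the table bounds:

```csharp
using UnityEngine;

// One copy of this per clipped object: pushes the clip-plane equations to the
// object's material every frame, which is the per-object overhead described above.
public class PushClipPlanes : MonoBehaviour
{
    public Vector4[] planes = new Vector4[4];   // xyz = plane normal, w = distance
    Material material;

    void Start()
    {
        material = GetComponent<Renderer>().material;
    }

    void Update()
    {
        material.SetVectorArray("_ClipPlanes", planes);
    }
}
```

(If the planes are the same for every object, Shader.SetGlobalVectorArray would avoid the per-object scripts, though each shader still has to be modified to read the property and discard clipped fragments.)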
Not being an expert in Unity, I'm wondering if there are other approaches that are not "per shader" based that I might investigate?
The end goal is to render a scene within the bounds of some plane.
One easy way would be to use Box Colliders as triggers on each side of your plane. You could then turn off Renderers on objects staying in the trigger with OnTriggerEnter/OnTriggerStay and turn them on with OnTriggerExit.
You can also use Bounds.Contains.
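A rough sketch of that trigger approach, assuming one trigger Box Collider covering the space beyond each edge of the table (the script name is made up):

```csharp
using UnityEngine;

// Place on each Box Collider (marked "Is Trigger") that covers the region
// beyond one edge of the table. Renderers are hidden while an object is
// inside the trigger and restored when it leaves.
public class HideOutsideTable : MonoBehaviour
{
    void OnTriggerEnter(Collider other) { SetVisible(other, false); }
    void OnTriggerExit(Collider other)  { SetVisible(other, true); }

    static void SetVisible(Collider c, bool visible)
    {
        foreach (Renderer r in c.GetComponentsInChildren<Renderer>())
            r.enabled = visible;
    }
}
```

Note that trigger callbacks only fire if at least one of the two objects involved has a Rigidbody (a kinematic one is enough). The Bounds.Contains variant would instead test something like `tableBounds.Contains(obj.transform.position)` each frame and toggle the renderers from that.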
In current Unity, for sprites used in Unity.UI as conventional UI: any texture imported as "Sprite (2D and UI)" always defaults to having "Generate Mip Maps" turned ON. Every time you drop an image in, you have to turn that off and apply.
As noted in the comments, these days you can actually use world space UI canvases, and advanced users may indeed have (say) "buttons that float over the head of Zelda, in the far distance". However, if you're an everyday Unity user adding a button, just turn it off :)
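If flipping that switch by hand gets tedious, one way to automate it (a hedged sketch; the class name is arbitrary) is an editor-side AssetPostprocessor:

```csharp
using UnityEditor;

// Editor-only: place in an "Editor" folder. Any texture imported as a Sprite
// gets "Generate Mip Maps" switched off automatically on import.
public class DisableSpriteMipMaps : AssetPostprocessor
{
    void OnPreprocessTexture()
    {
        TextureImporter importer = (TextureImporter)assetImporter;
        if (importer.textureType == TextureImporterType.Sprite)
            importer.mipmapEnabled = false;
    }
}
```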
In Unity, "sprites" can still be positioned in 3D space. For example, on a world space canvas. Furthermore, mipmaps are used when the sprite is scaled. This is because the mipmap sampling is determined by the texel size rather than the distance.
If a sprite is flat and perfectly scaled then there is no reason to use mipmaps. This would likely apply to your icon example.
I suspect that it is enabled by default for 2D games, where sprites will often not be perfectly scaled. To clarify, a sprite does not need to be on a canvas; sprites can exist as their own GameObject with a Sprite Renderer (not on a canvas). When this is the case, scaling the camera view will change the sprite's size on screen, resulting in mipmapping because the texel size changes. This makes it challenging to keep the sprite always perfectly scaled without a canvas.
I have a need for setting up clipping planes that aren't perpendicular to the camera. Doing that for the far plane was easy: I just added a shader that clears the background.
I just can't figure out how to do the same for the near clipping plane. I've considered solutions involving multiple shaders and planes, a special cutting shader, multiple cameras, or somehow storing the view as a texture, but those ideas are mostly imperfect even where they are implementable. What I basically need is a shader that says "don't render anything that's in front of me". Is that possible? Can I, for example, make a shader that marks the passed pixels as "final"?
First, I want to introduce my problem, because it is quite complex and you need the context to understand it properly.
I am trying to do something with Scene Kit and Swift: I want to reproduce what we can see in the TV show Doctor Who, where the Doctor's spaceship is bigger on the inside, as you can see in this video.
Of course the Scene Kit framework doesn't support those kinds of unreal dimensions, so we need some sort of hackery to achieve that.
Now let's talk about my idea in plain English.
In fact, what we want to do is to display two completely different dimensions in the same place; so I was thinking of:
A first dimension for the inside of the spaceship.
A second dimension for the outside of the spaceship.
Now, let's say that you are outside the ship: you would be in the outside dimension, and in this outside dimension my goal would be to display a portion of the inside dimension at the level of the door, to give the effect where the camera is outside but we can clearly see that the inside is bigger:
We would use an equivalent principle from the inside.
Now let's talk about the game logic:
I think that a good way to represent these dimensions would be to use two scenes.
We will call outsideScene the scene for the outside, and insideScene the scene for the inside.
So if we take the picture again, this is what it would give at the scene level:
To make it look realistic, the view of the inside needs to follow the movements of the outside camera; that's why I think all the properties of these two cameras should be identical:
On the left is the outsideScene and on the right, the insideScene. I represent the camera field of view in orange.
If the outsideScene camera moves right, the insideScene camera does exactly the same thing; if the outsideScene camera rotates, the insideScene camera rotates in the same way... you get the idea.
So, my question is the following: what can I use to mask a certain portion of a certain scene (in this case the yellow zone in the outsideView) with what the camera of another view (the insideView) "sees"?
First, I thought that I could simply get an NSImage from the insideScene and then put it as the texture of a surface in the outsideScene, but the problem would be that Scene Kit would compute its perspective, lighting, etc., so it would just look like we were displaying something on a screen, and that's not what I want.
There is no super easy way to achieve this in SceneKit.
If your "inside scene" is static and can be baked into a cube map texture you can use shader modifiers and a technique called interior mapping (you can easily find examples on the web).
If you need a live, interactive "inside scene" you can use the same technique, but you will have to render your scene to a texture first (or render your inside scene and outer scene one after the other with stencils). This can be done by leveraging SCNTechnique (new in Yosemite and iOS 8). On older versions you will have to write some OpenGL code in SCNSceneRenderer delegate methods.
I don't know if it's "difficult". As so often in iOS, a lot of the time the simplest answer is the simplest answer.
Maybe consider this:
Map a texture onto a cylinder sector prescribed by the geometry of the Tardis cube shape. Make sure the cylinder radius is equal to the focal distance of the camera, and make sure you track the camera to the focal point.
The texture will be distorted because it is a cylinder mapping onto a cube. The actors' nodes in the Tardis will react properly to the camera, but there should be two groups of light sources: one set for the Tardis and one outside the Tardis.
In OpenGL, which would result in better performance: changing the culled face, or rotating my object?
The scenario is the following:
I compute the matrix to feed into my shaders; this draws texture A with a certain culling orientation (front). When I look at the object from in front I can see it, but from behind I can't, which is my desired behavior. Now I would like to add "something" behind it, let's say texture B, so that when the object is seen from behind, this other texture appears in the same position and orientation as texture A, but now showing texture B.
I thought that, other than building a cube with two sides, I could simply "redraw on top" of my previous object. If I were to rotate the object, I suppose I can assume that OpenGL will simply not overwrite texture A, since it will not pass the face culling test; this requires one extra matrix multiplication to mirror my object on its own axis. However, what if I simply change the culling property and try to draw it? Wouldn't it have the same effect of failing the test from in front but passing it from behind?
Which of these 2 options is better?
(P.S.: I am doing this on the iPhone, so every single bit of performance is quite relevant for older devices.)
Thank you.
I'm not sure if I understood you correctly, but it sounds like you are trying to draw a billboard here? In that case, you should disable backface culling altogether and in your fragment shader check gl_FrontFacing. Depending on this, you can flip your texture coordinates and change your texture (just bind both textures and send them to the shader). This way you have only one draw call and don't have to do any strange tricks or change the OpenGL state.