For research purposes, I would like to create a Unity VR 3D application that (more or less) simulates the foveal field of view of a person. This means, in particular, that I would like to render the whole environment of the application in the full field of view, but render certain objects of interest only in the foveal area.
For the purpose of explaining the problem, I created a simple 2D picture. Please assume it's 3D. In the picture, the green area is the peripheral field of view, and the yellow area is the foveal field of view. The whole environment, like walls, sky, etc., should get rendered in both the green and the yellow areas. Particular objects of interest, here the flowers, however, should only get rendered in the yellow area and, importantly, these objects should get cut off when reaching the green area. With this approach, I want to force people to move their heads instead of just moving their eyes.
Any idea how to achieve this? Is it possible to use a kind of mask or filter? Or do I need a stencil shader? I looked around but could not find the correct approach.
In Unity, is there a way to give slight color variations to a scene (a tinge of purple here, some yellow blur there) without adjusting every single texture? And can that work in VR stereo images too (ideally in a semi-consistent way as one moves around, and perhaps also without having to use compute-heavy colored lights)? Many thanks!
A simple way to achieve this, if your color effect is fixed, would be to add a canvas that renders a half-transparent image over the whole screen. But I suppose you might prefer a more dynamic effect.
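A minimal sketch of that overlay, built at runtime (the class name and tint color are placeholders; note that in VR, Screen Space - Overlay canvases are generally not drawn to the headset, so this uses Screen Space - Camera):

using UnityEngine;
using UnityEngine.UI;

// Creates a screen-space canvas with a single stretched, semi-transparent
// Image, tinting everything rendered behind it. Attach to any GameObject.
public class ScreenTintOverlay : MonoBehaviour
{
    // Placeholder default: a faint purple wash.
    public Color tint = new Color(0.5f, 0.2f, 0.8f, 0.15f);

    void Start()
    {
        var canvasGO = new GameObject("TintCanvas");
        var canvas = canvasGO.AddComponent<Canvas>();
        canvas.renderMode = RenderMode.ScreenSpaceCamera;
        canvas.worldCamera = Camera.main;  // overlay canvases are not drawn in VR headsets
        canvas.planeDistance = 0.5f;       // keep the canvas just in front of the near plane
        canvas.sortingOrder = 1000;        // draw on top of other UI

        var imageGO = new GameObject("TintImage");
        imageGO.transform.SetParent(canvasGO.transform, false);
        var image = imageGO.AddComponent<Image>();
        image.color = tint;
        image.raycastTarget = false;       // don't block clicks/gaze input

        // Stretch the image over the whole canvas.
        var rt = image.rectTransform;
        rt.anchorMin = Vector2.zero;
        rt.anchorMax = Vector2.one;
        rt.offsetMin = Vector2.zero;
        rt.offsetMax = Vector2.zero;
    }
}

You can then change or animate the tint color from another script if a fixed wash turns out to be too static.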
To achieve this, look at Unity's Post Processing Stack. It lets you add many post-process effects, such as chromatic aberration and color grading, which might allow you to do what you want.
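If you go that route, here is a rough sketch of driving it from code, assuming the Post Processing Stack v2 package is installed and the camera has a PostProcessLayer whose volume layer mask includes the layer used below (all parameter values are placeholders):

using UnityEngine;
using UnityEngine.Rendering.PostProcessing;

// Creates a global post-process volume at runtime with a color filter and a
// bit of chromatic aberration.
public class SceneTintEffect : MonoBehaviour
{
    PostProcessVolume volume;

    void Start()
    {
        var grading = ScriptableObject.CreateInstance<ColorGrading>();
        grading.enabled.Override(true);
        grading.colorFilter.Override(new Color(0.85f, 0.8f, 1f)); // slight purple tint

        var aberration = ScriptableObject.CreateInstance<ChromaticAberration>();
        aberration.enabled.Override(true);
        aberration.intensity.Override(0.3f);

        // QuickVolume creates a temporary global volume on the given layer and priority.
        volume = PostProcessManager.instance.QuickVolume(gameObject.layer, 100f, grading, aberration);
    }

    void OnDestroy()
    {
        if (volume != null)
            RuntimeUtilities.DestroyVolume(volume, true, true);
    }
}

Because these effects run as a full-screen pass after rendering, they apply to both stereo eyes consistently, and you can animate the parameters at runtime for a dynamic effect.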
I am trying to draw a box that can help someone understand the dimensions of an item, but I keep running into the issue that, since I first need to detect a plane and then place my physical item on top of it, my box gets drawn in front of the item.
Is it possible to somehow overcome this?
@John Scalo is right: your problem is not that you have to detect a plane first; it's that your render engine doesn't know that part of your green box frame is occluded (hidden) by a real-world object.
"…to somehow overcome this"
Yes, and by doing so you might also be "solving" your original problem: helping someone understand the dimensions of an item.
(Depending on your choice of render engine, e.g. SceneKit) You can add an invisible 3D object that has the same dimensions as the real-world object; the render engine will then "know" that some parts of your box frame are behind this (invisible to the user) 3D object. Therefore, you can tell it not to draw those parts of your box frame, which will give the illusion (borrowing from Apple here) that your soda can has the box around it.
These workarounds are inaccurate, but maybe their accuracy is enough for the level of realism you are trying to achieve:
Option 1: 1. After detecting the desk surface, place a semi-transparent 3D object over the soda can and resize it (gestures or buttons, your choice) until it roughly matches the dimensions of the soda can. 2. Confirm that you're done, then stop drawing any texture on it at all; just let it occlude the green box frame.
Option 2: Hold your device near the edges of the soda can and add "enough" ARAnchors to be able to create a "bounding shape" that (again) can be used to capture the real-world object and occlude the box frame behind it.
Option 3: (intense, and perhaps the least accurate) Use your finger to "brush" over the object from various angles, and on each touch perform a hit test (hopefully the top/nearest hit is a part of your soda can) and build up a "bounding shape" that way.
Option X: any combination of 1 - 2 - 3.
Good luck, there are lots of people trying to work around this device/ARKit limitation at the moment, so keep your eyes open for good ideas.
The problem you're dealing with is called occlusion, and ARKit doesn't (currently?) include occlusion support. Maybe some day soon iPhones and iPads will begin to ship with LIDAR (or similar), in which case ARKit will be able to detect objects in the scene, making occlusion much easier.
I've been searching around for this one for a bit, and unfortunately I can't seem to find any good, consistent results. So, in the Unity UI system, buttons can stretch without becoming pixelated or distorted. This is because the texture is split up into 9 parts - the corners, middle, and sides.
This works because the button's middle and sides are stretched, but not the corners, so the button doesn't look pixelated at any size.
So, the question is as follows: How can I do the same thing for a transparent, unlit texture in 3D space? I have a speech bubble texture on a flat plane that I know how to re-scale to fit the text in the speech bubble.
I've set the texture type to Multiple Sprite, and divided it up into 9 parts. However, I cannot seem to find where I can set the texture to act like the UI button does, and I'm not sure that this is even possible in this way in 3D space.
Is there a way, or should I just make the different parts of the texture different objects, and move them together? That would seem very inefficient and ugly compared to this.
To accomplish what you are asking, you would need to create tiles for this speech bubble and then write a script that procedurally builds a speech bubble based on the plane's scale value. You could also try just changing the texture's Filter Mode to Point.
However, I really don't think you should be using textures for this anyway. Why not just use a Unity Canvas and set its Render Mode to World Space? Then you can make your text box a sprite instead of a texture and set its filter mode to Point. This would also make it a lot easier when you want there to be text in the speech bubble later on.
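A minimal sketch of that setup, assuming the speech bubble sprite was imported with a border defined in the Sprite Editor (the names, sizes, and font below are placeholders):

using UnityEngine;
using UnityEngine.UI;

// Builds a world-space canvas holding a 9-sliced speech bubble image and a
// text child. The sprite must have its border set in the Sprite Editor for
// Image.Type.Sliced to work.
public class SpeechBubble : MonoBehaviour
{
    public Sprite bubbleSprite;    // sliced sprite with a border defined
    public string message = "Hello!";

    void Start()
    {
        var canvasGO = new GameObject("BubbleCanvas");
        canvasGO.transform.SetParent(transform, false);
        var canvas = canvasGO.AddComponent<Canvas>();
        canvas.renderMode = RenderMode.WorldSpace;

        // Size the canvas in pixels, then scale it down to world units.
        var canvasRT = canvas.GetComponent<RectTransform>();
        canvasRT.sizeDelta = new Vector2(200f, 100f);
        canvasRT.localScale = Vector3.one * 0.01f;

        var imageGO = new GameObject("Bubble");
        imageGO.transform.SetParent(canvasGO.transform, false);
        var image = imageGO.AddComponent<Image>();
        image.sprite = bubbleSprite;
        image.type = Image.Type.Sliced;   // 9-slice: corners stay crisp, middle stretches
        image.rectTransform.anchorMin = Vector2.zero;
        image.rectTransform.anchorMax = Vector2.one;
        image.rectTransform.offsetMin = Vector2.zero;
        image.rectTransform.offsetMax = Vector2.zero;

        var textGO = new GameObject("Text");
        textGO.transform.SetParent(imageGO.transform, false);
        var text = textGO.AddComponent<Text>();
        text.text = message;
        // Built-in font in older Unity versions; newer versions ship "LegacyRuntime.ttf" instead.
        text.font = Resources.GetBuiltinResource<Font>("Arial.ttf");
        text.alignment = TextAnchor.MiddleCenter;
        text.color = Color.black;
        text.rectTransform.anchorMin = Vector2.zero;
        text.rectTransform.anchorMax = Vector2.one;
        text.rectTransform.offsetMin = new Vector2(10f, 10f);
        text.rectTransform.offsetMax = new Vector2(-10f, -10f);
    }
}

Because the Image uses Image.Type.Sliced, only the middle and edges stretch while the corners keep their size, exactly like a UI button, and the Text child resizes with the bubble.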
I have developed a scratch card effect. I am stuck on the logic: how can I tell that the object hidden behind the scratch card image has become visible, so that I can show the reward screen?
PS: with modifications to the code in this link, I was able to get this scratch card effect working in uGUI.
There are many ways you could go about this. Assuming you know the dimensions of the red "target image" that the user is trying to uncover, you could take a fixed number of samples from the area that the target is under. Once, say, 80% of those samples are transparent (i.e. the target is visible at those positions), you can consider the object visible and show the reward screen.
You can use GetPixel to get the individual samples from the scratch texture.
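A rough sketch of that sampling idea, assuming the scratch overlay is a readable Texture2D (Read/Write enabled in the import settings) and that you know the target image's rectangle in the overlay's pixel coordinates; the class and field names are just illustrative:

using UnityEngine;

// Samples a grid of points inside the target's area of the scratch texture.
// When enough of them have been scratched away (alpha near zero), the hidden
// object is considered revealed.
public class ScratchRevealChecker : MonoBehaviour
{
    public Texture2D scratchTexture;     // must be Read/Write enabled
    public RectInt targetPixelRect;      // target area in the texture's pixel coordinates
    public int samplesPerAxis = 10;
    [Range(0f, 1f)] public float revealThreshold = 0.8f;

    public bool IsRevealed()
    {
        int transparentSamples = 0;
        int totalSamples = samplesPerAxis * samplesPerAxis;

        for (int y = 0; y < samplesPerAxis; y++)
        {
            for (int x = 0; x < samplesPerAxis; x++)
            {
                int px = targetPixelRect.x + targetPixelRect.width  * x / samplesPerAxis;
                int py = targetPixelRect.y + targetPixelRect.height * y / samplesPerAxis;
                if (scratchTexture.GetPixel(px, py).a < 0.1f)  // scratched through at this point
                    transparentSamples++;
            }
        }

        // e.g. 80% of the sampled points are transparent
        return (float)transparentSamples / totalSamples >= revealThreshold;
    }
}

You could call IsRevealed() after each scratch stroke and show the reward screen the first time it returns true.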
I want to show a transparent building in our project. I did that by setting the material of the mesh to "transparent/diffuse". However, there is a visibility problem with the building's mesh. From some positions I can only see two or three sides of the cuboid (the transparent block, i.e. the building); if I adjust my character's position, I can see the whole cuboid. I googled for similar questions, and someone mentioned the camera's view frustum: it seems the character has to be inside the camera's view frustum for the user to see the whole mesh of the cuboid.
Can anyone give me some suggestions? I feel like it might be something about the way I build the mesh for the building, but from some positions I can see the whole cuboid.
I've solved this problem. It is just about the way you construct the mesh. Basically, for the cuboid, I reconstructed each face's triangles in this way:
// topleft, topright, bottomleft and bottomright are the indices of the face's
// four corner vertices in the mesh's vertex array; both triangles are wound
// clockwise as seen from outside, which Unity treats as the visible side.
triangles[0] = topleft;     // first triangle
triangles[1] = topright;
triangles[2] = bottomright;
triangles[3] = bottomright; // second triangle
triangles[4] = bottomleft;
triangles[5] = topleft;
Note: This is just the front side; the other sides should be constructed in the same way.
Besides, in order to show the mesh when the user enters the block, you also have to construct the inside faces of the block in the same way, with the winding reversed so they face inward.
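A minimal sketch of that idea for a single face, building both an outward-facing and an inward-facing copy of the quad so it stays visible from either side (the vertex positions are placeholders for one wall of the cuboid; UVs are omitted):

using UnityEngine;

// Builds one face of the cuboid as a double-sided quad: the first two triangles
// are wound clockwise as seen from the front (visible from outside), the second
// two reuse duplicated vertices with the winding reversed (visible from inside).
[RequireComponent(typeof(MeshFilter), typeof(MeshRenderer))]
public class DoubleSidedFace : MonoBehaviour
{
    void Start()
    {
        var vertices = new Vector3[]
        {
            // outward-facing copy
            new Vector3(-0.5f,  0.5f, 0f), // 0: top left
            new Vector3( 0.5f,  0.5f, 0f), // 1: top right
            new Vector3(-0.5f, -0.5f, 0f), // 2: bottom left
            new Vector3( 0.5f, -0.5f, 0f), // 3: bottom right
            // inward-facing copy (same positions, duplicated so normals stay correct)
            new Vector3(-0.5f,  0.5f, 0f), // 4
            new Vector3( 0.5f,  0.5f, 0f), // 5
            new Vector3(-0.5f, -0.5f, 0f), // 6
            new Vector3( 0.5f, -0.5f, 0f), // 7
        };

        var triangles = new int[]
        {
            // front side, wound as in the answer above
            0, 1, 3,
            3, 2, 0,
            // back side: the duplicated vertices with the winding reversed
            7, 5, 4,
            4, 6, 7,
        };

        var mesh = new Mesh();
        mesh.vertices = vertices;
        mesh.triangles = triangles;
        mesh.RecalculateNormals();
        GetComponent<MeshFilter>().mesh = mesh;
    }
}

Doing this for all six faces of the block (and assigning the transparent material to the MeshRenderer) keeps every side visible from any camera position, including from inside the building.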