What is the proper way to light a scene for the HoloLens?

We have built an app for the HoloLens that has one or two 3D characters in the scene at any given time. What is the best practice for adding lighting to an AR scene for headsets like the HoloLens? Should the scene be lit at all?

So here's the thing with the HoloLens:
Black is Transparent
This means that any shadows on objects will make those objects fade into transparency when viewed on the real device (the emulator does not show this effect). As such, scenes should be brightly lit from a source that is a child of the main camera (you may still use a directional light angled down from above), and objects should not cast shadows (it will appear that those shadows are punching holes in objects).
This also means that you will want brightly colored textures as well.
Brightly lit (real-world) backgrounds will exacerbate the transparency effect, as the HoloLens can't reduce incoming light.
You'll likely have to experiment to find something that works best for your project.
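As a minimal sketch of the camera-attached light advice above, assuming the built-in render pipeline (the script name, angle, and intensity are example values, not a recommended preset):

using UnityEngine;
using UnityEngine.Rendering;

// Sketch: parent a bright directional light to the main camera and disable
// shadow casting everywhere, so dark areas don't read as holes on device.
public class HeadLight : MonoBehaviour
{
    void Start()
    {
        var lightObject = new GameObject("HeadLight");
        lightObject.transform.SetParent(Camera.main.transform, false);
        lightObject.transform.localRotation = Quaternion.Euler(50f, 0f, 0f); // angled down

        var headLight = lightObject.AddComponent<Light>();
        headLight.type = LightType.Directional;
        headLight.intensity = 1.2f;            // err on the bright side
        headLight.shadows = LightShadows.None; // shadows would punch holes in objects

        // Keep scene objects from casting shadows as well.
        foreach (var rend in FindObjectsOfType<Renderer>())
            rend.shadowCastingMode = ShadowCastingMode.Off;
    }
}

Because the light is parented to the camera, it follows the user's gaze, so holograms stay lit from the viewing direction.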

Related

Unity: Alternative for camera stacking/layering?

Unity 2021.3.16f1/URP 12.1.8
I started with Unity just a few weeks ago and am still getting to grips with how everything works. So please don't assume I know everything there is to know about Unity. Treat me as a n00b. 😉
I'm building a VR game for the Quest/Quest 2. I have a scene with a keypad on a wall. When the player "clicks" it, I want the scene to go dark and a large version of the (3D) keypad to appear, with which he can then interact (enter numbers). This keypad must always stay in the middle of his view.
What I did was create a canvas, and added a black plane with 50% transparency and the large version of the keypad. I've set up the canvas as follows:
This works somewhat. It has two major disadvantages: 1) the keypad is receiving lighting from the scene while I want it to be fully lit all the time, and 2) the canvas and all its children clip through walls and objects while I always want it to be rendered in front of everything else (yes I know this will mess with your depth perception in VR, but I already have a solution for that).
So the next thing I tried was stacking cameras. I created a second camera and set it as an overlay camera. I also set its Culling Mask to UI:
Additionally, I added the new camera to my Main Camera as a stacked camera. I changed the Culling Mask of the Main Camera to everything but UI:
This works the way I want, but at a cost: performance takes a huge hit. My frame rate actually halved. I've read everywhere that this is a known problem on mobile devices (which the Quest really is).
Another solution I read about is using RenderObjects. But I can't really find out how to use this. I'm not even sure it really is a solution to what I'm trying to achieve.
So can anyone tell me how I should go about doing this? Thanks in advance!
The solution to the lighting problem is to set the layer of your keypad to something like "Keyboard",
and then uncheck the Keyboard layer on the Directional Light.
The solution to the second problem is to change the camera's culling mask at runtime, like this:
~(1 << LayerMask.NameToLayer("Keyboard"))
renders everything except the Keyboard layer.
1 << LayerMask.NameToLayer("Keyboard")
renders only the Keyboard layer.
NOTE: You can set up your own layer and check/uncheck what you want; this is just an example.
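Putting those two mask values together, a small sketch of switching at runtime (the "Keyboard" layer and the method names are examples, not from the original project):

using UnityEngine;

// Sketch: swap the main camera's culling mask when the keypad opens/closes,
// using the bit masks shown above. "Keyboard" is an example layer name.
public class KeypadView : MonoBehaviour
{
    [SerializeField] private Camera mainCamera;
    private int keyboardOnly;
    private int everythingElse;

    void Awake()
    {
        keyboardOnly = 1 << LayerMask.NameToLayer("Keyboard");
        everythingElse = ~keyboardOnly;
    }

    public void ShowKeypad(bool visible)
    {
        // While the keypad is up, render only its layer; otherwise render the rest.
        mainCamera.cullingMask = visible ? keyboardOnly : everythingElse;
    }
}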

Custom 2D Lighting System in Unity

Ok so I've been trying to make a custom 2D lighting system in Unity, and I'm at that annoying stage where I know what I want to do but I'm not sure how to do it.
Here's the plan:
There will be dedicated light objects with their own meshes. These meshes determine the shape of the light.
Before the camera renders the whole scene, it does an extra render of just the light meshes with a black background to create a lightmap.
Then the camera renders the scene as normal (does NOT render the light meshes this time). Every object has a shader that will access the lightmap and shade itself appropriately depending on the color of the lightmap at that point.
That's the idea anyway. I sorta threw together a botched form of this. I used a separate camera to render the lightmap into a render texture with a culling mask so that it only rendered the light meshes, which are on their own layer. I then manually passed that texture to the shaders which use their screen uvs to sample from it.
This works sorta OK, but the Scene view is completely messed up, since it tries to light things as if you were looking at them from the perspective of the lighting camera. I feel like this would make the system hard to use, so I want to try to make something that feels a bit more cohesive.
Here are some screenshots to explain:
The tan-ish box is my "light," which gets rendered to the light cam, visible in scene. This next shot is what renders to the lightmap:
The black background is not from the big black box, the clear flag is just set to Black.
Now, according to this lightmap, the middle of the screen should be lit up, and that's exactly what happens:
Notice that in the game view, since the light camera is set up with the same position/rotation/perspective settings as the game camera, it looks fine:
The main problem is figuring out that extra render. Is there any way to create an extra pass for the main camera, before the scene render, that only renders the light meshes? I could probably figure out the rest from there. It would also be nice if I could make the lightmap a global shader variable, so that I don't have to pass it to each individual material, but one thing at a time, right?
Thanks so much to anyone who can shed some light on this subject. I'm still pretty new to shaders and rendering, so any help is much appreciated.
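(For reference, a minimal sketch of the render-texture setup described in the question, including the global-shader-variable idea, assuming the built-in pipeline; the "Lights" layer and "_CustomLightMap" names are examples:)

using UnityEngine;

// Sketch: a second camera renders only the light-mesh layer into a
// RenderTexture, which is then exposed to every shader as a global texture.
[RequireComponent(typeof(Camera))]
public class LightmapCamera : MonoBehaviour
{
    private Camera lightCam;
    private RenderTexture lightMap;

    void Start()
    {
        lightCam = GetComponent<Camera>();
        lightMap = new RenderTexture(Screen.width, Screen.height, 16);

        lightCam.cullingMask = 1 << LayerMask.NameToLayer("Lights");
        lightCam.clearFlags = CameraClearFlags.SolidColor;
        lightCam.backgroundColor = Color.black;
        lightCam.targetTexture = lightMap;

        // Globals need no per-material assignment: any shader that declares
        // a _CustomLightMap sampler picks this texture up automatically.
        Shader.SetGlobalTexture("_CustomLightMap", lightMap);
    }
}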
If I understand correctly, your problem is the appearance of your lights in the Scene view, right?
For that, you can create custom gizmos for them and hide the original objects. There's a tutorial:
https://learn.unity.com/tutorial/creating-custom-gizmos-for-development-2019-2#5fa30655edbc2a002192105c

How can I use baked lighting on sprites? / How to light up a large area in 2D?

I'm having trouble figuring out how to light up large areas of sprites in Unity 2D. My previous knowledge of Unity's lighting is zero.
I first tried using a large number of point lights with the "Sprites/Diffuse" material, but only about five would actually render at a time, so I guess there's a limit on that.
Then I tried putting in an area light. That didn't do anything, so that's when I started doing research about baked lighting on sprites (and baked lighting in general). I found stuff like this, but I couldn't get it to work, either because it's outdated or because I don't know what I'm doing. Other answers I've come across seem to assume the reader already knows something about lighting in Unity, which, to be honest, I don't. Unity's documentation website had some information on it, but no tutorials that go into how to set up baked lighting.
I've tried a bunch of different combinations of materials (like using the "Standard" shader for the sprites instead of "Sprites/Diffuse", emission, etc.) and I enabled "Baked Global Illumination" under Lighting > Settings.
If baked lighting isn't possible on sprites (or isn't worth the trouble), what are the alternatives?
Edit: I made sure not to have the lights pointing the wrong direction, and I do realise that Unity2D is just like painting onto a piece of paper in Unity3D. I was able to get point lights to work, but only a few at a time. I don't need to do the entire screen at once, I need to do a large specific area at once.
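(A side note on the "only about five point lights" symptom: in the built-in forward renderer, that is usually the per-pixel light limit from the Quality settings, which can also be raised from code; a small sketch, with 16 as an arbitrary example value:)

using UnityEngine;

// Sketch: raise the forward renderer's per-pixel light cap, which limits how
// many point lights can affect a "Sprites/Diffuse" sprite at once.
public class RaisePixelLightLimit : MonoBehaviour
{
    void Awake()
    {
        QualitySettings.pixelLightCount = 16; // quality presets often default to 4 or fewer
    }
}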
Some tips...
Working with sprites, you're in 2D. When you add a light, switch to 3D mode and rotate to make sure your light is pointed at your objects and oriented so as not to be on the same plane, or level with them, as this will cast all the light behind them.
If you're trying to light up everything on the screen (in camera), attach an area light to the camera at the camera's position, point it where the camera points, and then in the Inspector you can change its variables: intensity, range, width, height, etc.
Emissive Texture:
https://www.youtube.com/watch?v=oa6kW5HhRd4
For some reason, I never even thought about going into the asset store. I found this for free, and it looks like it will work: Light2D.

How to make this lighting effect in HaxeFlixel or Unity?

How do I create this lighting effect in HaxeFlixel or Unity?
I will tell you how it was created in this specific case. This question is very broad, and there are many ways to create lighting effects in both Unity and HaxeFlixel.
The image is from the game Beneath the City by Deepnight, accessible on his website. The game uses Haxe, although not with HaxeFlixel; it's Deepnight's personal engine, which works with the Flash target. The source code is available here. The class where lighting takes place is src/Level.hx, more specifically the renderLights method. From what I gather, a light layer is layered above the sprites of the level. This layer (or bitmap data) has lights drawn onto it as rectangles. The layer is then blurred, so that the lights don't appear as solid rectangles but as faded blurs of spreading light; this is done with Flash blur filters. Blend modes are used so the lights add luminosity. A dark mask is then layered above the blur layer, presumably to block light in certain locations, such as in the game's fog (?). This all takes place between lines 208 and 248.
This game truly does have gorgeous visuals, but the lighting goes beyond the initial blurred lights. Particles float around in the game that really add to the lighting's aesthetic.
This is all how he does it, though. How you do it is up to you. For HaxeFlixel, I would first consider alternatives such as this geometric lighting, or this method of applying lighting to scenes, which looks closer to the screenshot, or even a very simple circle-based lighting alternative. Searching for Unity 2D lighting brings up plenty of options.
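For the Unity route, a rough sketch of the layered-light idea described above, substituting a pre-blurred radial sprite for the runtime blur filters (the shader path is Unity's built-in legacy additive shader; the sprite field is an example):

using UnityEngine;

// Sketch: draw a light blob as an additively blended sprite above the level,
// so overlapping lights add luminosity, much like the blend modes in
// Deepnight's engine.
public class LightBlob : MonoBehaviour
{
    [SerializeField] private Sprite blobSprite; // soft, pre-blurred radial gradient

    void Start()
    {
        var sr = gameObject.AddComponent<SpriteRenderer>();
        sr.sprite = blobSprite;
        sr.color = new Color(1f, 0.9f, 0.6f); // warm light tint
        sr.sortingOrder = 100;                // render above the level sprites
        sr.material = new Material(Shader.Find("Legacy Shaders/Particles/Additive"));
    }
}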
You've got plenty of options on how to approach the issue. I didn't answer this with a direct tutorial because the question isn't at the code level.

How to display a part of a scene in another scene (Scene Kit + Swift)

First, I just want to introduce my problem to you guys, because it is really complex, and you need this to understand it properly.
I am trying to do something with Scene Kit and Swift : I want to reproduce what we can see in the TV Show Doctor Who where the Doctor's spaceship is bigger on the inside, as you can see in this video.
Of course the Scene Kit framework doesn't support those kinds of unreal dimensions, so we need some sort of hackery to achieve that.
Now let's talk about my idea in plain English.
In fact, what we want to do is display two completely different dimensions in the same place; so I was thinking of:
A first dimension for the inside of the spaceship.
A second dimension for the outside of the spaceship.
Now, let's say that you are outside of the ship: you would be in the outside dimension, and in this outside dimension my goal would be to display a portion of the inside dimension at the level of the door, to give this effect where the camera is outside but we can clearly see that the inside is bigger:
We would use an equivalent principle from the inside.
Now let's talk about the game logic:
I think that a good way to represent these dimensions would be to use two scenes.
We will call outsideScene the scene for the outside, and insideScene the scene for the inside.
So if we take the picture again, this is what we would get at the scene level:
To make it look realistic, the view of the inside needs to follow the movements of the outside camera, which is why I think all the properties of these two cameras should be identical:
On the left is the outsideScene and on the right, the insideScene. I represent the camera field of view in orange.
If the outsideScene camera moves right, the insideScene camera will do exactly the same thing, if the outsideScene camera rotates, the insideScene camera will rotate in the same way... you get the principle.
So, my question is the following: what can I use to mask a certain portion of a certain scene (in this case the yellow zone in the outsideView) with what the camera of another view (the insideView) "sees"?
First, I thought that I could simply get an NSImage from the insideScene and put it as the texture of a surface in the outsideScene, but the problem is that Scene Kit would compute its perspective, lighting, etc., so it would just look like we were displaying something on a screen, and that's not what I want.
There is no super easy way to achieve this in SceneKit.
If your "inside scene" is static and can be baked into a cube map texture, you can use shader modifiers and a technique called interior mapping (you can easily find examples on the web).
If you need a live, interactive "inside scene", you can use the same technique but will have to render your scene into a texture first (or render your inside scene and outer scene one after the other with stencils). This can be done by leveraging SCNTechnique (new in Yosemite and iOS 8). On older versions you will have to write some OpenGL code in the SCNSceneRenderer delegate methods.
I don't know if it's 'difficult'. As we often find in iOS, a lot of the time the simplest answer... is the simplest answer.
Maybe consider this:
Map a texture onto a cylinder sector prescribed by the geometry of the Tardis cube shape. Make sure the cylinder radius is equal to the focal distance of the camera, and make sure you track the camera to the focal point.
The texture will be distorted because it is a cylinder mapping onto a cube. The actors' nodes in the Tardis will react properly to the camera, but there should be two groups of light sources: one set for the Tardis and one outside the Tardis.