How can I display a Unity game on multiple monitors?

I want to display my Unity game on multiple monitors. How can I do this?
I have already tried the solution from the Unity documentation, but it doesn't work.
How can I split one scene across several displays and then play it on multiple monitors?

The solution from the documentation works (even if it's a bit quirky). But in order to have the same scene separated into multiple views, you need to either:
1) Set up separate cameras for each view. Doing this correctly will require you to tweak the projection matrices, but for some cases this might not be necessary.
2) Set your camera to render to a RenderTexture, display this texture on a world-space canvas, and set up cameras to view only parts of that render texture. This way is a bit easier to set up but (depending on the resolution) might get slow.
Both ways will require additional work if you want anything to be clickable.
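Whichever option you pick, the extra monitors must first be activated and each camera routed to its display. Here is a minimal sketch using Unity's multi-display API; `displayCameras` is a hypothetical array you would fill in the Inspector, one camera per monitor (note that additional displays only activate in standalone player builds, not in the Editor):

```csharp
using UnityEngine;

// Minimal multi-display setup sketch.
public class MultiDisplaySetup : MonoBehaviour
{
    // Hypothetical field: assign one camera per physical monitor in the Inspector.
    [SerializeField] private Camera[] displayCameras;

    void Start()
    {
        // Display.displays[0] (the primary monitor) is always active;
        // the others must be activated explicitly.
        for (int i = 1; i < Display.displays.Length; i++)
        {
            Display.displays[i].Activate();
        }

        // Route each camera to its own display.
        for (int i = 0; i < displayCameras.Length && i < Display.displays.Length; i++)
        {
            displayCameras[i].targetDisplay = i;
        }
    }
}
```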

Related

Unity AR Foundation: how to influence the render order

In my AR application I want to render one model on top of another. Is there any method or variable to influence this?
Yes, there are at least two options:
Z-test set to Always
This has to be set in the shader of the material your model is using. Here's the documentation from Unity, but there are also some guides online on how to write a shader with a custom Z-test.
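For illustration, a hedged script-level sketch: it assumes your model's shader exposes a `_ZTest` property (e.g. declared as `[Enum(UnityEngine.Rendering.CompareFunction)] _ZTest ("ZTest", Float) = 4` in a custom shader); the built-in Standard shader does not expose one, in which case you would set `ZTest Always` directly in the shader source.

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Sketch: force the depth test to always pass on every renderer under
// this object, so the model draws over previously rendered geometry.
// Assumes the shader exposes a "_ZTest" property (not true of all shaders).
public class AlwaysOnTop : MonoBehaviour
{
    void Start()
    {
        foreach (Renderer r in GetComponentsInChildren<Renderer>())
        {
            r.material.SetInt("_ZTest", (int)CompareFunction.Always);
        }
    }
}
```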
Two cameras
This is done by having two cameras: one is your ARCamera, and a second one that is always at the same pose as the ARCamera. This could, for example, be done by setting the second camera as a child of the ARCamera with an identity pose. Then you can create a dedicated layer, for example "Always In Front", and assign the model to it. Then set the culling masks for both cameras accordingly, so that the second camera renders only the model and the ARCamera renders everything else. There might be some overhead when rendering two cameras with this solution.
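A minimal sketch of that setup, assuming the built-in render pipeline (with URP you would use camera stacking instead) and a layer named "AlwaysInFront" that the overlay model is assigned to; `arCamera` and `overlayCamera` are hypothetical Inspector references:

```csharp
using UnityEngine;

// Sketch: configure culling masks so the overlay camera draws only the
// "AlwaysInFront" layer on top of everything the AR camera rendered.
public class OverlayCameraSetup : MonoBehaviour
{
    [SerializeField] private Camera arCamera;      // the ARCamera
    [SerializeField] private Camera overlayCamera; // child of the ARCamera, identity pose

    void Start()
    {
        int overlayMask = 1 << LayerMask.NameToLayer("AlwaysInFront");

        // AR camera renders everything except the overlay layer.
        arCamera.cullingMask &= ~overlayMask;

        // Overlay camera renders only the overlay layer, after the AR camera,
        // clearing depth so the model is never occluded by the rest of the scene.
        overlayCamera.cullingMask = overlayMask;
        overlayCamera.clearFlags = CameraClearFlags.Depth;
        overlayCamera.depth = arCamera.depth + 1;
    }
}
```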
Your problem has a similar solution to preventing weapon clipping in FPS games, as seen in this blog post.
MainCamera
SecondCamera
The two pictures show the properties I set in the Inspector for each camera. I also tried it in two different ways, but neither made a difference:
I: SecondCamera is a child of the MainCamera.
II: SecondCamera and MainCamera are both on the same hierarchy level in the ARSessionOrigin.

Scale Player Size according to the screen size in Unity

I have an issue with different resolutions.
Everything works perfectly at 1920x1080; however, when I set it to a tablet-like 10:10 aspect ratio, the 'Player' isn't resizing.
My platforms are created under the Canvas for scaling and correct positioning, but my player is created outside of the canvas.
Should I create my character under the canvas, or should I create my platforms outside of it? Currently I am not sure how to solve the issue.
Since the player is created outside the canvas, there's no way for the canvas to affect it (the player is also probably using a SpriteRenderer, not an Image component).
One way would be to put the player as an Image inside the canvas, but to be honest, the canvas was created for UI, not gameplay. Putting all gameplay into UI might (and probably will) create a lot of issues. I'm already surprised that the player and platforms interact well in your game, as they use different systems.
What you probably want to do is put all gameplay elements (character, platforms, projectiles, etc.) outside the canvas as sprite renderers and leave the canvas for what it's meant for (UI, maybe backgrounds).
Then you might come across a problem where, on different resolutions, you have a smaller or larger gameplay area. Your options are to: live with it, create a system that restricts gameplay and fills the empty space with background or black bars, or something in between (e.g. keep the horizontal area the same while letting the vertical vary; a sketch of that option follows the link below).
Here's an idea of how you could achieve it:
https://forum.unity.com/threads/maintain-the-game-content-area-on-different-types-of-screen-sizes.905384/
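As a minimal sketch of the fixed-horizontal-width option, assuming an orthographic 2D camera: recompute the orthographic size from a target width so every aspect ratio shows the same horizontal slice of the world (`targetWidth` is a hypothetical value you would tune to your level):

```csharp
using UnityEngine;

// Sketch: lock the horizontal gameplay width on any aspect ratio.
[RequireComponent(typeof(Camera))]
public class FixedWidthCamera : MonoBehaviour
{
    [SerializeField] private float targetWidth = 10f; // world units, hypothetical

    void Start()
    {
        Camera cam = GetComponent<Camera>();
        // orthographicSize is half the visible height, so derive it from
        // the desired half-width divided by the aspect ratio (width/height).
        cam.orthographicSize = (targetWidth * 0.5f) / cam.aspect;
    }
}
```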

Google Cardboard "Game Over" screen with a designed image

I am trying to implement a game-over screen for my Unity 3D based game for Google Cardboard. I have tried using the Image UI component on a Canvas component. The image appears just fine, but the issue is that it seems to show up over the reticles. I would like to have the image perfectly fit the inside of the reticle. I am considering somehow finding out the size of the reticle and exporting my image as a PNG with transparency around its graphics.
My question:
Is there a better way to do this? If not, what dimensions can I use for the duplicate images (for the two reticles) to perfectly fit the Google Cardboard reticle openings?
When using Google Cardboard (now GVR), I think it is best to use a Quad GameObject for game-over screens, to which you can add images and child objects like 3D Text and collections of child quads.
The best thing about using quads and 3D Text is the extent of scripting that can be done to make them dynamic, as well as styling. And in your case, Physics Raycasters work well for the reticle to interact with those GameObjects.
You can add materials and textures to the quad and whatnot! I personally love quads for game-over screens, as well as a lot of other interactable things.
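A minimal sketch of that approach, with illustrative placeholder values for the texture, distance and scale: it spawns a textured quad in front of the camera, and because a quad primitive comes with a MeshCollider, a Physics Raycaster driven by the gaze reticle can interact with it.

```csharp
using UnityEngine;

// Sketch: show a "Game Over" quad in front of the (stereo) camera.
public class GameOverQuad : MonoBehaviour
{
    [SerializeField] private Texture gameOverTexture; // your game-over image
    [SerializeField] private float distance = 2f;     // meters in front of the camera

    public void ShowGameOver()
    {
        GameObject quad = GameObject.CreatePrimitive(PrimitiveType.Quad);
        Transform cam = Camera.main.transform;

        // Place the quad in front of the camera, facing back at it.
        quad.transform.position = cam.position + cam.forward * distance;
        quad.transform.rotation = Quaternion.LookRotation(cam.forward);
        quad.transform.localScale = new Vector3(1.6f, 0.9f, 1f);

        // The quad primitive already has a MeshCollider, so the reticle's
        // Physics Raycaster can hit it; just apply the texture.
        quad.GetComponent<Renderer>().material.mainTexture = gameOverTexture;
    }
}
```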

Develop 2D game Inside Canvas Scaler

I'm new to Unity and I've realized that it's difficult to make a multi-resolution 2D game in Unity without the paid third-party plugins available on the Asset Store.
I've made some tests and I'm able to do multi-resolution support this way:
1- Put everything from the UI (buttons etc.) inside a Canvas object in Render Mode Screen Space - Overlay with a 16:9 reference resolution and fixed width.
2- Put the rest of the game objects inside a GameObject called GameManager with a Canvas Scaler component in Render Mode Screen Space - Camera, with a 16:9 reference resolution, fixed width, and the Main Camera attached. After that, all game objects like the player, platforms etc. inside GameManager need to have a RectTransform component, a CanvasRenderer component, and an Image component, for example.
Can I continue developing the game this way, or is this the wrong way to do things?
Regards
Also, don't forget GUI and Graphics. It's a common misconception that GUI is deprecated and slow. No, it's not. The GameObject helpers for GUI were bad and are deprecated, but the API used inside OnGUI works great when all you need is to draw a texture or some text on the screen. They're called legacy, but there are no plans to remove them, as the whole Unity UI is made out of them anyway.
I have made a few games on these alone, using Unity as a very over-engineered multi-platform API for drawing quads.
There is also GL if you want something more.
Just remember: there will be no built-in physics, particle effects, pathfinding or anything else, just a simple way to draw stuff on the screen. You will have total control over what is drawn, and this is both a good and a bad thing, depending on what you want to do.
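As a minimal sketch of drawing straight from OnGUI, with `background` as a placeholder texture you would assign in the Inspector:

```csharp
using UnityEngine;

// Sketch: legacy immediate-mode GUI, no Canvas needed.
public class SimpleGuiDemo : MonoBehaviour
{
    [SerializeField] private Texture background; // placeholder texture

    void OnGUI()
    {
        // Full-screen texture, then a centered label on top of it.
        GUI.DrawTexture(new Rect(0, 0, Screen.width, Screen.height), background);
        GUI.Label(new Rect(Screen.width / 2 - 50, Screen.height / 2 - 10, 100, 20), "Game Over");
    }
}
```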
I would not recommend using the Canvas Scaler for developing a complete game. The intended purpose of the Canvas Scaler is to create menus, and you should use it for menus only.
2D games created without the Canvas Scaler don't cause many problems (mostly they don't cause any) on multiple resolutions.
So your step 1 is correct, but for step 2 you don't need a Canvas Scaler component attached.
Do remember to set your Scene view to 2D (not necessary) and your camera to orthographic (necessary) while developing 2D games.

How to display a part of a scene in another scene (Scene Kit + Swift)

First, I just want to introduce my problem to you guys, because it is really complex, so you need this background to understand it properly.
I am trying to do something with Scene Kit and Swift: I want to reproduce what we can see in the TV show Doctor Who, where the Doctor's spaceship is bigger on the inside, as you can see in this video.
Of course, the Scene Kit framework doesn't support those kinds of unreal dimensions, so we need to do some sort of hackery to achieve that.
Now let's talk about my idea in plain English.
In fact, what we want to do is display two completely different dimensions at the same place; so I was thinking of:
A first dimension for the inside of the spaceship.
A second dimension for the outside of the spaceship.
Now, let's say that you are outside of the ship. You would be in the outside dimension, and in this outside dimension, my goal would be to display a portion of the inside dimension at the level of the door, to give this effect where the camera is outside but where we can clearly see that the inside is bigger:
We would use an equivalent principle from the inside.
Now let's talk about the game logic:
I think that a good way to represent these dimensions would be to use two scenes.
We will call the scene for the outside outsideScene, and the scene for the inside insideScene.
So if we take the picture again, this would give this at the scene level:
To make it look realistic, the view of the inside needs to follow the movements of the outside camera; that's why I think that all the properties of these two cameras should be identical:
On the left is the outsideScene and on the right, the insideScene. I represent the camera's field of view in orange.
If the outsideScene camera moves right, the insideScene camera will do exactly the same thing; if the outsideScene camera rotates, the insideScene camera will rotate in the same way... you get the principle.
So, my question is the following: what can I use to mask a certain portion of a certain scene (in this case the yellow zone in the outsideView) with what the camera of another view (the insideView) "sees"?
First, I thought that I could simply get an NSImage from the insideScene and then use it as the texture of a surface in the outsideScene, but the problem would be that Scene Kit would compute its perspective, lighting, etc., so it would just look like we were displaying something on a screen, and that's not what I want.
There is no super-easy way to achieve this in SceneKit.
If your "inside scene" is static and can be baked into a cube map texture, you can use shader modifiers and a technique called interior mapping (you can easily find examples on the web).
If you need a live, interactive "inside scene", you can use the same technique but will have to render your scene to a texture first (or render your inside scene and outer scene one after the other with stencils). This can be done by leveraging SCNTechnique (new in Yosemite and iOS 8). On older versions you will have to write some OpenGL code in SCNSceneRenderer delegate methods.
I don't know if it's "difficult". As with a lot of things in iOS, a lot of the time the simplest answer is the simplest answer.
Maybe consider this:
Map a texture onto a cylinder sector prescribed by the geometry of the Tardis's cube shape. Make sure the cylinder radius is equal to the focal distance of the camera, and make sure you track the camera to the focal point.
The texture will be distorted because it is a cylinder mapping onto a cube. The actors' nodes in the Tardis will react properly to the camera, but there should be two groups of light sources: one set for inside the Tardis and one for outside the Tardis.