I'm new to Unity and I've realized that it's difficult to do a multi-resolution 2D game in Unity without paid third-party plugins from the Asset Store.
I've made some tests and I'm able to do multi-resolution support this way:
1- Put everything UI-related (buttons, etc.) inside a Canvas object set to Render Mode: Screen Space - Overlay, with a Canvas Scaler using a 16:9 reference resolution matched to width.
2- Put the rest of the game objects inside a GameObject called GameManager that has a Canvas set to Render Mode: Screen Space - Camera (with the Main Camera attached) and a Canvas Scaler using a 16:9 reference resolution matched to width. After that, all game objects inside GameManager, like the player, platforms, etc., need to have a RectTransform component, a CanvasRenderer component and, for example, an Image component.
Can I continue developing the game this way, or is this the wrong way to do things?
Regards
Also, don't forget GUI and Graphics. It's a common misconception that GUI is deprecated and slow. It's not. The GameObject helpers for GUI were bad and are deprecated, but the immediate-mode API you call from OnGUI works great when all you need is to draw a texture or some text on the screen. They're called legacy, but there are no plans to remove them, as the whole Unity UI is built on top of it anyway.
I have made a few games on just these, using Unity as a very over-engineered multiplatform API for drawing quads.
There is also GL if you want something more.
Just remember - there will be no built-in physics, particle effects, path finding or anything - just a simple way to draw stuff on the screen. You will have total control over what will be drawn - and this is both a good and bad thing, depending on what you want to do.
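For illustration, here's a minimal sketch of the kind of OnGUI drawing described above (the splashTexture field name is just illustrative):

```csharp
using UnityEngine;

// Minimal legacy-IMGUI example: draw a texture and a text label straight to the screen.
public class SimpleHud : MonoBehaviour
{
    public Texture splashTexture; // assign any texture in the Inspector

    void OnGUI()
    {
        // Draw a texture in the top-left corner of the screen.
        if (splashTexture != null)
            GUI.DrawTexture(new Rect(10, 10, 128, 128), splashTexture);

        // Draw a simple text label next to it.
        GUI.Label(new Rect(150, 10, 200, 30), "Score: 42");
    }
}
```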
I would not recommend using the Canvas Scaler for developing a complete game. The intended purpose of the Canvas Scaler is to build menus, and you should use it for menus only.
2D games created without the Canvas Scaler rarely cause problems on multiple resolutions (mostly they don't cause any at all).
So, your step 1 is correct, but for step 2 you don't need a Canvas Scaler component attached.
Do remember to switch the Scene view to 2D mode (not strictly necessary) and set your camera to orthographic (necessary) while developing 2D games.
I am using Unity version 2021.3.15f
I want to show the particles on the UI canvas.
I'd like to know how to show particles on the canvas in both cases: when the canvas render mode is Screen Space - Overlay and when it is Screen Space - Camera.
Do I need to convert the particle system's Transform into a RectTransform?
Or should I use methods like Camera.ScreenToWorldPoint?
You could always place the particles at a position computed with Camera.ScreenToWorldPoint and it will work, but keep in mind this is just a band-aid fix and won't be robust or maintainable. Usually anything UI-related should be compatible with Unity's UI rendering pipeline.
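If you do go the band-aid route, here's a minimal sketch of what that positioning might look like, assuming a Screen Space - Overlay canvas and a hypothetical helper component (all field names are illustrative):

```csharp
using UnityEngine;

// Hypothetical helper: keeps a world-space ParticleSystem lined up with a UI element
// by converting the element's position to screen space and back into the world.
public class ParticleFollowScreenPoint : MonoBehaviour
{
    public Camera worldCamera;             // camera that renders the particles
    public RectTransform uiTarget;         // UI element the particles should follow
    public float distanceFromCamera = 10f; // how far in front of the camera to place them

    void LateUpdate()
    {
        // For a Screen Space - Overlay canvas pass null; for Screen Space - Camera,
        // pass the canvas camera instead.
        Vector3 screenPos = RectTransformUtility.WorldToScreenPoint(null, uiTarget.position);
        screenPos.z = distanceFromCamera;
        transform.position = worldCamera.ScreenToWorldPoint(screenPos);
    }
}
```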
There is a great resource for adding particle effects to UGUI in a GitHub repository.
Use: https://github.com/mob-sakai/ParticleEffectForUGUI
Take a look at the sample scenes; it has everything you need.
I have an issue with different resolutions.
Everything works perfectly at 1920x1080; however, when I set it to a tablet size, like a 10:10 aspect ratio, the 'Player' isn't resizing.
My platforms are created under the Canvas for scaling and correct positioning. However, my player is created outside of the canvas.
Should I create my character under the canvas, or should I create my platforms outside of it? Currently I am not sure how to solve this.
Since the player is created outside the canvas, there's no way for the canvas to affect it (the player is probably also using a SpriteRenderer, not an Image component).
One way would be to put the player as an Image inside the canvas, but to be honest, the canvas was created for UI, not gameplay. Putting all the gameplay into the UI might (and probably will) create a lot of issues. I'm actually surprised that the player and platforms interact well in your game, as they use different systems.
What you probably want to do is put all gameplay elements (character, platforms, projectiles, etc.) outside the canvas as sprite renderers and leave the canvas for what it's meant for (UI, maybe backgrounds).
Then you might come across a problem where, on different resolutions, you have a smaller or larger gameplay area. Your options will be to: live with that; create a system that restricts the gameplay area and fills the empty space with background or black bars; or something in between (e.g. let the vertical gameplay area differ but keep the horizontal one the same).
Here's an idea of how you could achieve it:
https://forum.unity.com/threads/maintain-the-game-content-area-on-different-types-of-screen-sizes.905384/
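As an example of the last option, here's a minimal sketch that keeps a fixed horizontal gameplay width and lets the visible height vary with the aspect ratio, assuming an orthographic main camera (worldUnitsWide is an illustrative field name):

```csharp
using UnityEngine;

// Keeps the horizontal gameplay area constant across aspect ratios by
// recomputing the orthographic size from a desired world-space width.
[RequireComponent(typeof(Camera))]
public class FixedWidthCamera : MonoBehaviour
{
    public float worldUnitsWide = 20f; // the gameplay width you designed for

    void Start()
    {
        Camera cam = GetComponent<Camera>();
        // orthographicSize is half the visible height in world units,
        // so derive it from the desired width and the current aspect ratio.
        cam.orthographicSize = worldUnitsWide / (2f * cam.aspect);
    }
}
```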
I need to build an app in Unity which doesn't use a traditional camera to generate the graphics. I'll build them using some custom shaders and a few cameras whose results get stuffed into RenderTextures and then frobbed. (Think http://www.purplefrog.com/~thoth/art/kaleidescope/kaleid1.html but even weirder.)
I'm not sure what objects I would put in the scene to accomplish this. In any normal app you just add a camera, point it at the right spot, and Unity gets the pixels into the window, but that is just not how this thing will work.
I'm not sure if I should be using a UI Canvas or what APIs would be used to copy various render textures into the proper locations.
If you are not targeting WebGL, you can create a RenderTexture of the proper size (maybe using RenderTexture.GetTemporary) and use Graphics.CopyTexture or other techniques to assemble the image you want displayed in the game window.
Once you have the pixels you want in the RenderTexture, you can call Graphics.Blit(src, (RenderTexture)null), which will copy the pixels into the game window. These pixels will be stretched if the game window is not the same size as the RenderTexture.
This technique worked for me in the editor's Game window, but when I compile to WebGL, all I get is a mostly grey screen with a really big black rectangle in the bottom left.
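Here's a rough, hedged sketch of that approach for the built-in render pipeline; proceduralMaterial stands in for whatever custom shader material assembles your image:

```csharp
using System.Collections;
using UnityEngine;

// Assemble pixels into a RenderTexture each frame and blit them straight to the game window.
public class BlitToScreen : MonoBehaviour
{
    public Material proceduralMaterial; // placeholder for your custom shader material
    RenderTexture rt;

    IEnumerator Start()
    {
        rt = RenderTexture.GetTemporary(Screen.width, Screen.height, 0);
        var wait = new WaitForEndOfFrame();
        while (true)
        {
            yield return wait;                           // after everything else has rendered
            Graphics.Blit(null, rt, proceduralMaterial); // fill the RT with the custom shader
            Graphics.Blit(rt, (RenderTexture)null);      // stretch-copy the RT into the game window
        }
    }

    void OnDestroy()
    {
        if (rt != null) RenderTexture.ReleaseTemporary(rt);
    }
}
```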
I want to implement something like the image below in a 2D game. I can find plenty of tutorials for 3D, like the minimap concept, but for 2D I couldn't find anything. In my game I want to show a secondary camera as in the image below, and I also need to zoom the content shown through it. I developed one version, but it only works with sprites or with a Canvas in World Space mode, so they won't resize or be positioned according to the screen resolution. If you have any idea how to do this, it would be very helpful. I also tried a depth mask shader. Thanks in advance.
Use a camera with a target texture.
Follow my tutorial here: Particles with Dynamic Text, but disregard the parts about the particle system; they are irrelevant for you.
Once you've made a material, stop following the tutorial, and instead:
Create a sprite renderer on a canvas set to Screen Space - Overlay and set its material to the one you created.
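As an alternative sketch of the same "camera with a target texture" idea, you can also display the secondary camera through a RawImage on an overlay canvas; the field names below are illustrative:

```csharp
using UnityEngine;
using UnityEngine.UI;

// Render a second, zoomed-in camera into a RenderTexture and show it through a RawImage.
public class MiniViewSetup : MonoBehaviour
{
    public Camera secondaryCamera; // orthographic camera framing the zoomed content
    public RawImage targetImage;   // RawImage placed on a Screen Space - Overlay canvas

    void Start()
    {
        var rt = new RenderTexture(256, 256, 16);
        secondaryCamera.targetTexture = rt; // the camera now renders into the texture
        targetImage.texture = rt;           // the UI element displays that texture
    }
}
```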
I am trying to implement a game over screen for my Unity 3D game for Google Cardboard. I have tried using an Image UI component on a Canvas. The image appears just fine, but the issue is that it shows up over the reticles. I would like the image to perfectly fit the inside of the reticle. I am considering somehow finding out the size of the reticle and exporting my image as a PNG with transparency around its graphics.
My Question:
Is there a better way to do this? If not, what dimensions can I use for the duplicate images (one for each reticle) so they perfectly fit the Google Cardboard reticle openings?
When using Google Cardboard (now GVR), I think it is best to use a Quad GameObject for game over screens, to which you can add images and child objects like 3D Text and a collection of child quads.
The best thing about using quads and 3D Text is the amount of scripting and styling you can do to make them dynamic. And in your case, Physics Raycasters work well for letting the reticle interact with those GameObjects.
You can add materials and textures to the quad and whatnot! I personally love quads for game over screens as well as for a lot of other interactable things.
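For what it's worth, here's a small, hedged sketch of making such a quad dynamic: positioning a game over quad a fixed distance in front of the VR camera when the game ends (field names are illustrative):

```csharp
using UnityEngine;

// Show a world-space "game over" quad a fixed distance in front of the VR camera.
public class GameOverQuad : MonoBehaviour
{
    public Transform vrCamera;      // the Cardboard/GVR camera transform
    public GameObject gameOverQuad; // a Quad with your game over material, initially inactive
    public float distance = 2f;     // metres in front of the camera

    public void ShowGameOver()
    {
        gameOverQuad.transform.position = vrCamera.position + vrCamera.forward * distance;
        // Unity's built-in Quad faces along its local -Z, so aligning +Z with the
        // camera's forward direction makes the visible side face the camera.
        gameOverQuad.transform.rotation = Quaternion.LookRotation(vrCamera.forward);
        gameOverQuad.SetActive(true);
    }
}
```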