About runtime rendering - unity3d

I want to improve the graphic quality of a dynamically, randomly generated house object.
Does Unity provide a real-time bake function? If that's not possible, is it possible to render just the scene from the camera, the way you get images out of a Blender render?
Before-and-after example of the rendering I have in mind

You need to switch to the High Definition Render Pipeline (HDRP). Then you will need to set up lights and all the other settings. You can use the Unity sample projects to get started.
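For the second part of the question (getting a still image out of a camera at runtime, similar to saving a Blender render), here is a minimal sketch. It assumes a camera reference assigned in the Inspector; the class name, resolution and file name are placeholders, not an existing Unity feature.

```csharp
using System.IO;
using UnityEngine;

// Renders one frame from a camera into a RenderTexture and saves it as a PNG.
// Rough sketch: captureCamera, the resolution and the file name are assumptions.
public class CameraCapture : MonoBehaviour
{
    public Camera captureCamera;
    public int width = 1920;
    public int height = 1080;

    public void Capture()
    {
        var rt = new RenderTexture(width, height, 24);
        captureCamera.targetTexture = rt;
        captureCamera.Render();                       // render a single frame into the RenderTexture

        RenderTexture.active = rt;
        var tex = new Texture2D(width, height, TextureFormat.RGB24, false);
        tex.ReadPixels(new Rect(0, 0, width, height), 0, 0);  // copy the GPU result to a CPU texture
        tex.Apply();

        // Clean up and restore state before writing the file.
        captureCamera.targetTexture = null;
        RenderTexture.active = null;
        Destroy(rt);

        File.WriteAllBytes(Path.Combine(Application.persistentDataPath, "capture.png"),
                           tex.EncodeToPNG());
    }
}
```

Calling Capture() writes capture.png into Application.persistentDataPath; the visual quality of the image is still whatever the render pipeline and lighting produce.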

Related

Show the Unity particle on the Canvas

I am using Unity version 2021.3.15f.
I want to show particles on a UI canvas.
I'd like to know two things: how to show particles on the canvas when the canvas render mode is Screen Space - Overlay, and when it is Screen Space - Camera.
Do I need to convert the particle system's Transform into a RectTransform?
Or should I use methods like Camera.ScreenToWorldPoint?
You could always position the particles with Camera.ScreenToWorldPoint and it will work, but keep in mind this is just a band-aid fix and won't be robust or maintainable. Usually anything UI-related should stay compatible with Unity's UI rendering.
There is a great resource for adding particle effects to UGUI in this GitHub repository:
https://github.com/mob-sakai/ParticleEffectForUGUI
Take a look at the sample scenes; they have everything you need.
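If you do go the Screen Space - Camera route instead of the package above, the conversion hinted at in the question is Camera.ScreenToWorldPoint. A minimal sketch, assuming a canvas camera, a particle system and a depth value (all names here are placeholders):

```csharp
using UnityEngine;

// Places a world-space ParticleSystem at a screen position so it appears over the UI.
// Only works with a Screen Space - Camera canvas; the fields below are assumptions.
public class ParticleOverUI : MonoBehaviour
{
    public Camera uiCamera;              // the camera assigned to the canvas
    public ParticleSystem particles;
    public float depthFromCamera = 5f;   // must be inside the camera's near/far planes

    public void MoveToScreenPosition(Vector2 screenPos)
    {
        Vector3 world = uiCamera.ScreenToWorldPoint(
            new Vector3(screenPos.x, screenPos.y, depthFromCamera));
        particles.transform.position = world;
    }
}
```

This cannot work with Screen Space - Overlay, since that mode draws on top of everything with no camera involved, which is why the UGUI package above is the more robust option.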

Is there a method or way to mask one ui image from multiple images?

Currently, I am working on a game project as a hobby and have been trying to build some game mechanics I haven't really made before. I am working on an inventory system for the player that displays the player's weapon collection as well as weapon parts.
Right now I am building the weapon inventory, and my current problem is rendering the weapon models as icons, one per inventory slot.
The method I am planning to use for rendering any 3D asset is a camera with a render texture, assigning that texture to a RawImage component. I was planning to do the same for the inventory slots. However, I realised that I would need a camera per slot, each with its own render texture, which would be time-consuming to set up and would hurt performance (since there would be a lot of cameras).
I have searched for a way to mask a texture from multiple objects so that I can skip creating 100 different cameras and instead use one. So far nothing has come up in my findings that could remotely help me.
So, as stated in the title: is there a method or way to mask one UI image from multiple images? If not, is there an alternative solution to my problem?
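No masking trick is confirmed here, but one common workaround, sketched below under assumptions, is to keep a single icon camera and render each weapon one at a time into its own small RenderTexture, then assign that texture to the slot's RawImage. The names iconCamera, spawnPoint and iconSize are hypothetical:

```csharp
using UnityEngine;
using UnityEngine.UI;

// Reuses one camera to generate an icon texture per weapon prefab.
// iconCamera should only see the icon layer; all names here are assumptions.
public class IconRenderer : MonoBehaviour
{
    public Camera iconCamera;        // disabled camera that only renders the icon layer
    public Transform spawnPoint;     // somewhere far away from the playable scene
    public int iconSize = 256;

    public Texture RenderIcon(GameObject weaponPrefab)
    {
        var rt = new RenderTexture(iconSize, iconSize, 16);
        var instance = Instantiate(weaponPrefab, spawnPoint.position, spawnPoint.rotation);

        iconCamera.targetTexture = rt;
        iconCamera.Render();                 // manually render just this weapon into the texture
        iconCamera.targetTexture = null;

        Destroy(instance);                   // the texture keeps the result, the model can go
        return rt;
    }

    public void FillSlot(RawImage slotImage, GameObject weaponPrefab)
    {
        slotImage.texture = RenderIcon(weaponPrefab);
    }
}
```

Each slot still gets its own small texture, but only one camera exists, and it only does work when an icon actually needs to be (re)rendered.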

Unity apply same texture on boxes of different sizes

I'm working with Unity and C# on a platform game. I mostly used cubes/boxes of different scales to build the level, and now I have to apply the texture. I played with the tiling, but the texture obviously stretches differently on objects of different sizes, and making a separate material for each object is too much. I heard that I should use shaders, but I've never worked with them. Can anyone help me write a shader that modifies the tiling based on the size of the object? Thanks to everyone.
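If a shader feels like overkill, a minimal C# alternative (a sketch, not the only way) is to adjust the material tiling per object from its scale. Note that renderer.material creates a material instance per object at runtime, which avoids authoring many materials by hand but is a known trade-off; the choice of X/Z axes below is an assumption about how the boxes are mapped:

```csharp
using UnityEngine;

// Matches texture tiling to the object's scale so one material looks uniform on
// boxes of different sizes. Assumes the texture is mapped along the X/Z faces.
[RequireComponent(typeof(Renderer))]
public class ScaleTiling : MonoBehaviour
{
    public float tilesPerUnit = 1f;

    void Start()
    {
        Vector3 s = transform.localScale;
        // renderer.material instantiates a copy, so other objects keep their own tiling.
        GetComponent<Renderer>().material.mainTextureScale =
            new Vector2(s.x * tilesPerUnit, s.z * tilesPerUnit);
    }
}
```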

Import render Unity3d for VR

I am working on a VR app with Unity3D and I am rendering the scene with Unity, but I cannot achieve good image quality. Is it possible to import a fully baked render from Maya or 3ds Max (for example an .fbx) with all the lighting and shading, so that in Unity I only have to work on the interactions?
I need the highest image quality possible so that it looks as realistic as possible.
Shaders will always be different, but if your lighting conditions don't change, you can fully bake the result into the albedo and use an unlit shader to just pass the result through. This will not work as expected if you want to move or rotate the resulting objects, but it might work for environments.
Keep in mind that unlike 3ds Max/Maya, Unity renders in real time, in milliseconds per frame rather than minutes per frame, so certain trade-offs are made to ensure speed; the results will never be identical (just as they are not identical between different offline renderers).
It's probably best to learn to use Unity's shaders to the fullest, just as you had to learn 3ds Max/Maya.
You can bake lighting in Unity too, which can give decent results.
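As a rough sketch of the "bake lighting in Unity" suggestion: baking is normally started from the Lighting window, but it can also be triggered from an editor script via the UnityEditor.Lightmapping API (the menu path below is just an example):

```csharp
using UnityEditor;
using UnityEngine;

// Editor-only helper that kicks off a lightmap bake from a menu item.
// Place this in an Editor folder; the menu name is just an example.
public static class BakeMenu
{
    [MenuItem("Tools/Bake Lightmaps")]
    public static void Bake()
    {
        if (Lightmapping.BakeAsync())
            Debug.Log("Lightmap bake started.");
        else
            Debug.LogWarning("Could not start lightmap bake.");
    }
}
```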

Develop 2D game Inside Canvas Scaler

I'm new to Unity and I've realized that it's difficult to make a multi-resolution 2D game in Unity without the paid third-party plugins available on the Asset Store.
I've made some tests and I'm able to get multi-resolution support this way:
1- Put everything from the UI (buttons etc.) inside a Canvas object in Render Mode Screen Space - Overlay, with a 16:9 reference resolution and fixed width.
2- Put the rest of the game objects inside a GameObject called GameManager with a Canvas Scaler component in Render Mode Screen Space - Camera, with a 16:9 reference resolution, fixed width and the Main Camera attached. After that, all game objects like the player, platforms etc. inside GameManager need to have a RectTransform component, a CanvasRenderer component and an Image component, for example.
Can I continue developing the game this way, or is this the wrong way to do things?
Regards
Also don't forget GUI and Graphics. It's a common misconception that GUI is deprecated and slow. No, it's not. The GameObject helpers for GUI were bad and are deprecated, but the API you call inside OnGUI works great when all you need is to draw a texture or some text on the screen. They're called legacy, but there are no plans to remove them, as Unity's own editor UI is largely built on them anyway.
I have made a few games just on these, using Unity as a very over-engineered multi-platform API for drawing quads.
There is also GL if you want something more.
Just remember: there will be no built-in physics, particle effects, pathfinding or anything else, just a simple way to draw stuff on the screen. You will have total control over what is drawn, and this is both a good and a bad thing, depending on what you want to do.
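A minimal sketch of what "just draw a texture or some text" looks like with the immediate-mode API (the class name, rectangles and label text are arbitrary):

```csharp
using UnityEngine;

// Immediate-mode drawing: no Canvas, no GameObject hierarchy for the UI.
public class SimpleHud : MonoBehaviour
{
    public Texture2D playerSprite;   // assign any texture in the Inspector

    void OnGUI()
    {
        // Draw a texture stretched into a rectangle given in screen coordinates.
        GUI.DrawTexture(new Rect(20, 20, 128, 128), playerSprite);

        // Draw a line of text next to it.
        GUI.Label(new Rect(160, 20, 200, 30), "Score: 42");
    }
}
```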
I would not recommend using the Canvas Scaler to develop a complete game. The intended purpose of the Canvas Scaler is to build menus, and you should use it for menus only.
2D games created without the Canvas Scaler don't cause many problems (mostly they don't cause any) across multiple resolutions.
So your step 1 is correct, but for step 2 you don't need a Canvas Scaler component attached.
Do remember to switch the Scene view to 2D (not strictly necessary) and set your camera to orthographic (necessary) while developing 2D games.
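For the orthographic part, here is a small sketch of how a 2D camera is often set up so the visible world height stays constant across resolutions and only the width changes with the aspect ratio (the height value is an assumption):

```csharp
using UnityEngine;

// Keeps a fixed visible world height regardless of screen resolution;
// the visible width then grows or shrinks with the aspect ratio.
[RequireComponent(typeof(Camera))]
public class Ortho2DCamera : MonoBehaviour
{
    public float worldHeightInUnits = 10f;   // example value

    void Awake()
    {
        var cam = GetComponent<Camera>();
        cam.orthographic = true;
        cam.orthographicSize = worldHeightInUnits * 0.5f;  // orthographicSize is half the visible height
    }
}
```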