I am using Unity version 2021.3.15f
I want to show the particles on the UI canvas.
I'd like to know two things: how to show particles on the canvas when the canvas Render Mode is Screen Space - Overlay, and when it is Screen Space - Camera.
Do I need to convert the particle's transform into a RectTransform?
Or should I use methods like Camera.ScreenToWorldPoint?
You could always place things with Camera.ScreenToWorldPoint and it will work, but keep in mind this is just a band-aid fix and won't be robust or maintainable. Usually anything UI-related should be compatible with Unity's UI rendering pipeline.
There is a great resource on GitHub for adding particle effects to UGUI.
Use: https://github.com/mob-sakai/ParticleEffectForUGUI
Take a look at the sample scenes; they cover everything you need.
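If it helps, here is a rough sketch of wiring it up. The UIParticle component and the Coffee.UIExtensions namespace come from that repository's README, so treat them as assumptions to verify against the version you import:

using UnityEngine;
using Coffee.UIExtensions;

public class AttachUIParticle : MonoBehaviour
{
    // Drag in the ParticleSystem that already lives under your Canvas.
    public ParticleSystem effect;

    void Start()
    {
        // UIParticle renders the ParticleSystem through the UI system, so it follows
        // canvas sorting and masking and works in Screen Space - Overlay as well.
        effect.gameObject.AddComponent<UIParticle>();
    }
}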
Is it possible to use the new UI Builder and UI Toolkit for World Space UI in Virtual Reality?
I have seen ways of doing it with Render Textures, but not only does it not seem to be the same as a World Space Canvas (which I did expect, but it's not even close), I also can't find a way of interacting with it using the VR raycast method.
This isn't officially supported yet but is certainly possible for those willing to implement it themselves.
As the OP mentioned, it is straightforward enough to render the UI: the main idea is to set panelSettings.targetTexture to some RenderTexture, which you can then apply to a quad like any other texture.
Note that if you have multiple UIs you will need multiple instances of PanelSettings.
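For example, a minimal sketch of the render-texture side (component and field names are placeholders, and everything can also be assigned in the Inspector):

using UnityEngine;
using UnityEngine.UIElements;

public class WorldSpacePanel : MonoBehaviour
{
    public PanelSettings panelSettings;   // unique instance for this panel
    public RenderTexture panelTexture;    // texture the panel renders into
    public Renderer worldQuad;            // quad (or any mesh) that displays the UI

    void Start()
    {
        panelSettings.targetTexture = panelTexture;     // UI Toolkit renders into this texture
        worldQuad.material.mainTexture = panelTexture;  // the quad just shows that texture
    }
}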
For raycasting there is a method, panelSettings.SetScreenToPanelSpaceFunction, which can be used to translate 2D screen coordinates into panel coordinates; here is an Official Unity Sample demonstrating how it can be implemented in camera space. The function is called every update, so it can be hijacked to use a raycast from a controller instead of screen coordinates, although I've had mixed results with this approach.
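A rough sketch of that hijack, assuming the quad has a MeshCollider (required for textureCoord) and that rayOrigin is the controller transform; the names are placeholders, not taken from the official sample:

using UnityEngine;
using UnityEngine.UIElements;

public class ControllerPanelRaycaster : MonoBehaviour
{
    public PanelSettings panelSettings;
    public RenderTexture panelTexture;   // the texture the panel renders into
    public Transform rayOrigin;          // XR controller transform

    void OnEnable()
    {
        // The function normally maps a screen position to panel space; here we ignore the
        // screen position entirely and use a physics ray from the controller instead.
        panelSettings.SetScreenToPanelSpaceFunction(_ =>
        {
            var noHit = new Vector2(float.NaN, float.NaN); // NaN tells the panel "no pointer"
            if (!Physics.Raycast(rayOrigin.position, rayOrigin.forward, out RaycastHit hit, 10f))
                return noHit;

            Vector2 uv = hit.textureCoord; // requires a MeshCollider on the quad
            return new Vector2(uv.x * panelTexture.width, (1f - uv.y) * panelTexture.height);
        });
    }
}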
Check out this repo for an example implementation in XR; it is a more sophisticated solution that makes extended use of the Input System.
Before I explain my situation, it's important to mention that I'm using an older version of Unity, 2017.42f2 (for PS Vita support). For this reason, I'm also using the old Post-Processing Stack.
I'm making a game in which I want a lot of bloom on the UI, but not as much for the rest of the game. I used one camera to render the UI and gave it post-processing (Screen Space - Camera on the Canvas), and another camera to render the rest of the game. Each is given a different profile so they use different effects.
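Roughly, in code, the split looks like this (the layer name and depth values are just placeholders for what I set in the Inspector):

using UnityEngine;

public class CameraSplitSetup : MonoBehaviour
{
    public Camera gameCamera;
    public Camera uiCamera;

    void Start()
    {
        gameCamera.depth = 0;                                // renders first
        gameCamera.cullingMask = ~LayerMask.GetMask("UI");   // everything except the UI layer

        uiCamera.depth = 1;                                  // renders on top of the game camera
        uiCamera.clearFlags = CameraClearFlags.Depth;        // keep the game camera's image
        uiCamera.cullingMask = LayerMask.GetMask("UI");      // only the UI layer
    }
}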
My expectation was that the UI renderer camera would only apply its effects to the Canvas. Instead, it also applies them to the camera beneath it: the game renderer camera.
As you can see, I used the Don't Clear clear flag. I also tried Depth Only, to see if it would make a difference.
I'm lost as to what I could do. Grain and bloom get applied to everything, yet the profile for those effects is only assigned to the UI renderer camera's Post-Processing Behaviour script.
Does anyone have any suggestions or ideas? I'm lost.
I want to render VR from a native plugin. Is this possible?
So far, what I have is this:
void OnRenderObject()
{
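    // Ask Unity's render thread to invoke the native plugin's render callback (event id 1).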
    GL.IssuePluginEvent(getRenderCallback(), 1);
}
This is calling my native plugin and rendering something, but it makes this mess here.
It seems that I am rendering to the wrong texture. How do I get the correct texture/framebuffer (or whatever trick is necessary) to render VR from the native plugin's OpenGL? For example, I need information about the eye being rendered (left or right) in order to render properly.
Any hints? I am trying with both Gear VR and Google VR. A hint about either of them is welcome.
My interpretation is that, instead of Unity's default rendering from the MeshRenderer, you want to override it with your custom OpenGL calls.
Without knowing what you are rendering and what should be the correct behavior, I can't help more.
I can imagine that if you set up your own state, such as turning particular depth tests or blends on or off, it would overwrite whatever state Unity set up for you prior to your custom function. And if you don't restore that state, Unity is not aware of the change and produces incorrect results.
As a side note, you may want to check out
https://github.com/Samsung/GearVRf/
It's Samsung's open-source framework engine for Gear VR. Being similar to Unity but open source, it may let you do something similar to what you posted while knowing what happens underneath.
I've read a few different posts on how to display the particle system on the canvas in Unity but I don't seem to be understanding it.
I'm trying to use the Particle Ribbon asset by Moonflower in my UI but can't get it to display there. I tried adding another Canvas as suggested in other posts, with Render Mode set to Screen Space - Camera, but no luck.
At one point I saw the particle system but it was very, very small and wouldn't change size regardless of scaling.
You can set the sorting order explicitly:
ParticleSystemRenderer.sortingOrder / sortingLayerID (on the particle system)
Canvas.overrideSorting / sortingOrder / sortingLayerID (on the canvas)
so that the particle system sorts above the canvas, e.g. with a hierarchy like:
Canvas
    Particle System
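A minimal sketch, assuming a Screen Space - Camera (or World Space) canvas so the particle system and the canvas sort against the same camera:

using UnityEngine;

public class ParticleOverCanvas : MonoBehaviour
{
    public Canvas canvas;
    public ParticleSystem particles;

    void Start()
    {
        canvas.overrideSorting = true;  // relevant for nested canvases; a root canvas uses sortingOrder directly
        canvas.sortingOrder = 0;

        var psRenderer = particles.GetComponent<ParticleSystemRenderer>();
        psRenderer.sortingLayerID = canvas.sortingLayerID;  // same sorting layer as the canvas
        psRenderer.sortingOrder = 1;                        // drawn on top of the canvas
    }
}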
I would recommend trying the UIParticleSystem script found here.
Generally speaking, this Unity UI Extensions repository is full of amazing things created (and often updated) by the community: I'd advise you to bookmark it :)
I'm new to Unity and I've realized that it's difficult to make a multi-resolution 2D game in Unity without paid third-party plugins from the Asset Store.
I've made some tests and I'm able to get multi-resolution support this way:
1- Put everything from the UI (buttons, etc.) inside a Canvas object in Render Mode Screen Space - Overlay, with a 16:9 reference resolution and fixed width.
2- Put the rest of the game objects inside a GameObject called GameManager with a Canvas Scaler component, in Render Mode Screen Space - Camera with a 16:9 reference resolution, fixed width, and the Main Camera attached. After that, all game objects like the player, platforms, etc. inside GameManager need to have a RectTransform component, a CanvasRenderer component, and an Image component, for example.
Can I continue developing the game this way, or is this the wrong way to do things?
Regards
Also don't forget GUI and Graphics. It's a common misconception that GUI is deprecated and slow. It's not. The GameObject helpers for GUI were bad and are deprecated, but the API for drawing in OnGUI works great when all you need is to draw a texture or some text on the screen. They're called legacy, but there are no plans to remove them, as the whole Unity UI is built out of it anyway.
I have made a few games on just these, using Unity as a very over-engineered multiplatform API for drawing quads.
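For example, a bare-bones OnGUI component that draws one texture and one line of text (the icon field is just a placeholder asset):

using UnityEngine;

public class SimpleHud : MonoBehaviour
{
    public Texture icon;

    void OnGUI()
    {
        GUI.DrawTexture(new Rect(10, 10, 64, 64), icon);     // texture at the top-left corner
        GUI.Label(new Rect(10, 80, 200, 25), "Score: 1234"); // a line of text below it
    }
}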
There is also GL if you want something more.
Just remember - there will be no built-in physics, particle effects, path finding or anything - just a simple way to draw stuff on the screen. You will have total control over what will be drawn - and this is both a good and bad thing, depending on what you want to do.
I would not recommend using the Canvas Scaler for developing a complete game. The intended purpose of the Canvas Scaler is to create menus, and you should use it for menus only.
2D games created without the Canvas Scaler don't cause many problems (mostly they don't cause any problems) on multiple resolutions.
So your step 1 is correct, but for step 2 you don't need a Canvas Scaler component attached.
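For the menu canvas from step 1, the Canvas Scaler setup would look roughly like this in code (it is usually configured in the Inspector; 1920x1080 is just an assumed 16:9 reference):

using UnityEngine;
using UnityEngine.UI;

public class MenuCanvasSetup : MonoBehaviour
{
    void Awake()
    {
        var scaler = GetComponent<CanvasScaler>();
        scaler.uiScaleMode = CanvasScaler.ScaleMode.ScaleWithScreenSize;
        scaler.referenceResolution = new Vector2(1920, 1080);                    // 16:9 reference
        scaler.screenMatchMode = CanvasScaler.ScreenMatchMode.MatchWidthOrHeight;
        scaler.matchWidthOrHeight = 0f;                                          // match width
    }
}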
Do remember to mark your scene as 2D (not necessary) and set your camera to orthographic (necessary) while developing 2D games.
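And a minimal orthographic camera setup for the gameplay objects (the 100 pixels-per-unit and 1080 px reference height are assumptions; orthographicSize is half the visible height in world units):

using UnityEngine;

public class OrthoCameraSetup : MonoBehaviour
{
    const float PixelsPerUnit = 100f;
    const float ReferenceHeight = 1080f;

    void Awake()
    {
        var cam = GetComponent<Camera>();
        cam.orthographic = true;
        cam.orthographicSize = ReferenceHeight / (2f * PixelsPerUnit); // 5.4 world units from center to top
    }
}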