I'm making a 2D game about launching objects towards each other, and it's almost complete. However, when I run it on different devices, the offsets of certain GameObjects are wrong because of the different screen sizes. In the Unity editor I use the Free Aspect view, and I've positioned my GameObjects so that with a camera size of 80 they all align perfectly.
I think the problem is the screen resolution: because I position my GameObjects with a fixed number of Unity units, they end up in the wrong places in the standalone build. I've written a pixel-perfect camera script, but it doesn't help. The camera becomes pixel perfect, but to compensate the camera size is turned into something extremely small or extremely large. I just want the same look across all devices and screen resolutions. A main problem here is that I want my GUI elements to display next to where the player is standing. My script is here.
using UnityEngine;
using UnityEngine.UI;
using System.Collections;

public class HoldTimeMultiplierPlayerFollowController : MonoBehaviour {

    public float textYOffset = 50f;
    public Text holdTimeMultiplierDisplay;

    void Update() {
        // WorldToScreenPoint returns raw pixel coordinates, so this fixed
        // pixel offset looks different at every resolution.
        Vector3 position = Camera.main.WorldToScreenPoint(transform.position);
        holdTimeMultiplierDisplay.gameObject.transform.position =
            new Vector3(position.x, position.y + textYOffset, position.z);
    }
}
Anyone got any ideas?
BTW, all my art is 32x32, and the pixels-to-units ratio is always 1:1. The camera size in the editor is 81.92 when I'm using a Free Aspect screen size of 686x269; at those measurements, everything is displayed perfectly.
Any help is appreciated. Maybe a script, or a suggestion on how to implement it?
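Not part of the original scripts, but for comparison, a minimal sketch of one resolution-independent approach: instead of setting a raw pixel position, convert the screen point into the canvas' local space, so a CanvasScaler set to "Scale With Screen Size" keeps the offset consistent. The class and field names here are hypothetical:

```csharp
using UnityEngine;

// Sketch: keep a UI label next to a world-space player regardless of
// resolution. Assumes a Screen Space - Overlay canvas with a CanvasScaler
// in "Scale With Screen Size" mode; the offset is then expressed in
// reference-resolution units rather than device pixels.
public class FollowPlayerLabel : MonoBehaviour {
    public RectTransform label;       // the Text's RectTransform
    public RectTransform canvasRect;  // the root Canvas' RectTransform
    public Vector2 offset = new Vector2(0f, 50f); // in canvas units

    void LateUpdate() {
        Vector2 screenPoint = Camera.main.WorldToScreenPoint(transform.position);
        Vector2 localPoint;
        // Converts through the canvas' scale, unlike assigning
        // transform.position in raw pixels.
        if (RectTransformUtility.ScreenPointToLocalPointInRectangle(
                canvasRect, screenPoint, null, out localPoint)) {
            label.anchoredPosition = localPoint + offset;
        }
    }
}
```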
Other scripts (If you see any issues that need to be resolved or improvements that could be added, please tell me):
using UnityEngine;
using UnityEngine.UI;
using System.Collections;

public class PlayerMovementController : MonoBehaviour {

    private int holdTimeMultiplier = 0;
    public int maxHoldTimeMultiplier = 215;
    public Text holdTimeMultiplierDisplayText;
    private Rigidbody2D rbody;

    void Awake() {
        rbody = GetComponent<Rigidbody2D>();
        if (rbody == null) {
            Debug.LogError("No Rigidbody2D detected on player.");
        }
    }

    void Update() {
        // Convert the cursor to world space once instead of twice per frame.
        Vector3 mouseWorld = Camera.main.ScreenToWorldPoint(Input.mousePosition);
        Vector2 mousePosition = new Vector2(mouseWorld.x, mouseWorld.y);
        Vector2 mouseDirectionFromPlayer = mousePosition - (Vector2)transform.position;

        if (Input.GetMouseButton(0)) {
            if (holdTimeMultiplier < maxHoldTimeMultiplier) {
                holdTimeMultiplier++;
            } else {
                holdTimeMultiplier = 0;
            }
        } else {
            if (holdTimeMultiplier != 0) {
                // Launch force scales with how long the button was held.
                rbody.AddForce(mouseDirectionFromPlayer * holdTimeMultiplier * 200);
                holdTimeMultiplier = 0;
            }
        }
        holdTimeMultiplierDisplayText.text = holdTimeMultiplier.ToString();
    }
}
...
using UnityEngine;
using System.Collections;

public class SpawnNewObject : MonoBehaviour {

    public GameObject[] UseableObjects;
    public Transform SpawnPoint;

    void Update() {
        if (Input.GetKeyDown(KeyCode.N)) {
            // Spawn a random object from the pool at the spawn point.
            var randomObject = Random.Range(0, UseableObjects.Length);
            GameObject UseableObject = Instantiate(UseableObjects[randomObject],
                SpawnPoint.position, SpawnPoint.rotation) as GameObject;
        }
    }
}
Related
I am making a small 2D game with a YouTube tutorial, and now I am trying to add floating text. When I show my text, its scale jumps from 1 to 540, so the text is so big the player cannot see it properly. An example image is below:
As shown, the text is huge; its scale is somehow 540.
I set the UI Canvas' render mode to "Screen Space - Camera" and dragged the main camera onto the canvas. Then I set the "UI Scale Mode" to "Scale With Screen Size". My reference resolution is 1920x1080, and the reference pixels per unit is 1.
If I drop the text prefab into the scene, the text shows at its normal size. But if I spawn it through a collision (when my character touches a chest, the text appears), the text is huge.
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.UI;

public class FloatingTextManager : MonoBehaviour
{
    public GameObject textContainer;
    public GameObject textPrefab;

    private List<FloatingText> floatingTexts = new List<FloatingText>();

    private void Update() {
        foreach (FloatingText txt in floatingTexts)
        {
            txt.UpdateFloationgText();
        }
    }

    public void Show(string msg, int fontSize, Color color, Vector3 position, Vector3 motion, float duration) {
        FloatingText floatingText = GetFloatingText();
        floatingText.txt.text = msg;
        floatingText.txt.fontSize = fontSize;
        floatingText.txt.color = color;
        floatingText.go.transform.position = position;
        floatingText.motion = motion;
        floatingText.duration = duration;
        floatingText.Show();
    }

    private FloatingText GetFloatingText() {
        // Reuse an inactive instance if one exists; otherwise create a new one.
        FloatingText txt = floatingTexts.Find(t => !t.active);
        if (txt == null) {
            txt = new FloatingText();
            txt.go = Instantiate(textPrefab);
            txt.go.transform.SetParent(textContainer.transform);
            txt.txt = txt.go.GetComponent<Text>();
            floatingTexts.Add(txt);
        }
        return txt;
    }
}
That (above) is the floating text manager script.
In my hierarchy and project, the "FloatingText" game object is created under the "FloatingTextManager".
The code below is the chest's.
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class Chest : Collectable // inherits everything from Collectable,
                                 // including MonoBehaviour.
{
    public Sprite emptyChest;
    public int pesosAmount;

    protected override void OnCollect() {
        if (!collected) {
            collected = true;
            pesosAmount = Random.Range(5, 10);
            GetComponent<SpriteRenderer>().sprite = emptyChest;
            GameManager.instance.ShowText("+" + pesosAmount + " pesos!", 45, Color.yellow,
                gameObject.transform.position, Vector3.up * 0, 10);
        }
    }
}
I've solved it by adding a new "scale" variable to my floating text.
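For reference, a common cause of this kind of scale blow-up (my assumption, not confirmed by the asker): `Transform.SetParent(parent)` defaults `worldPositionStays` to `true`, so Unity rescales the instantiated text to cancel out the scaled canvas. A minimal sketch of that alternative fix, inside the creation branch of `GetFloatingText()`:

```csharp
// Hypothetical variant of GetFloatingText(): passing worldPositionStays =
// false keeps the prefab's local scale (1,1,1) instead of letting Unity
// compensate for the canvas' scale after reparenting.
txt.go = Instantiate(textPrefab);
txt.go.transform.SetParent(textContainer.transform, false);
// Or reset explicitly after reparenting:
// txt.go.transform.localScale = Vector3.one;
```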
Hello, I've been looking for a solution to my problem, but it looks like there is absolutely nothing about it.
I'm working on a scene where I have a 3D object rendered on a ground plane, and my goal is to start an animation on that 3D object by tapping it. I'm using the latest version of Vuforia (10.4) with Unity 2020.3.9f1. I have a script for instantiating the 3D model and hiding the plane finder as long as it doesn't lose tracking:
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class sceneManager : MonoBehaviour
{
    private string level = "snake";

    public GameObject[] renderredPrefab;
    public GameObject ground;
    public GameObject groundFinder;

    private int levelChecker(string levelName)
    {
        if (levelName == "snake")
            return 0;
        else
            return 1;
    }

    public void spawnObject(int i)
    {
        Instantiate(renderredPrefab[levelChecker(level)], new Vector3(0, 0, 0),
            Quaternion.identity, ground.transform);
    }

    public void groundFinderOff()
    {
        groundFinder.SetActive(false);
    }

    public void groundFinderOn()
    {
        groundFinder.SetActive(true);
    }
}
And another one to trigger the animation based on the game object's tag:
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class animationTriggerManager : MonoBehaviour
{
    private Animator m_Animator;
    private string objectName;
    // The array must be allocated before Start() indexes it; three entries,
    // assuming the scene tags are myEolienne1..myEolienne3.
    private GameObject[] eos = new GameObject[3];
    private GameObject snake;

    [SerializeField]
    private Camera ARCamera;

    // Start is called before the first frame update
    void Start()
    {
        // Get the different eos present in the scene
        // (tags start at 1, so offset the loop index).
        for (int i = 0; i < eos.Length; i++)
        {
            eos[i] = GameObject.FindWithTag("myEolienne" + (i + 1));
        }
        // Get the snake game object in the scene.
        snake = GameObject.FindWithTag("Snake");
    }

    // Update is called once per frame
    void Update()
    {
        if (Input.touchCount > 0 && Input.touches[0].phase == TouchPhase.Began)
        {
            Ray ray = ARCamera.ScreenPointToRay(Input.GetTouch(0).position);
            if (Physics.Raycast(ray, out RaycastHit hit))
            {
                objectName = hit.collider.name;
                Debug.Log("raycast touched " + hit.transform.name);
                // Get the Animator depending on which gameObject was tapped.
                switch (objectName)
                {
                    case "myEolienne1":
                        m_Animator = eos[0].GetComponent<Animator>();
                        // Launch the animation on the tapped gameObject.
                        m_Animator.SetTrigger("Helice_Rotate");
                        Debug.Log("rotate launched");
                        break;
                    case "myEolienne2":
                        m_Animator = eos[1].GetComponent<Animator>();
                        m_Animator.SetTrigger("Helice_Rotate");
                        Debug.Log("rotate launched");
                        break;
                    case "myEolienne3":
                        m_Animator = eos[2].GetComponent<Animator>();
                        m_Animator.SetTrigger("Helice_Rotate");
                        Debug.Log("rotate launched");
                        break;
                    case "Snake":
                        m_Animator = snake.GetComponent<Animator>();
                        m_Animator.SetTrigger("snakeMoving");
                        break;
                }
            }
        }
    }
}
Note that each 3D model has different parts grouped under one parent, and the mesh collider is on the parent only.
The rendering works perfectly, but I can't figure out what's wrong with my raycasting script. Note that I first tried with the 3D model on an image target and it worked fine.
Thanks in advance!
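One hedged observation (my assumption, not confirmed in the question): since the mesh collider sits on the parent only, `hit.collider.name` is the parent's name, which may never equal the exact strings the `switch` compares against. A sketch of a more forgiving, tag-based lookup:

```csharp
// Sketch: resolve the Animator from whatever collider the ray hits instead
// of matching an exact name. Assumes the parent carries both the collider
// and the Animator, and is tagged "Snake" or "myEolienneN".
if (Physics.Raycast(ray, out RaycastHit hit))
{
    Animator anim = hit.collider.GetComponentInParent<Animator>();
    if (anim != null)
    {
        anim.SetTrigger(hit.collider.CompareTag("Snake") ? "snakeMoving"
                                                         : "Helice_Rotate");
    }
}
```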
I bring objects that I scan with the photogrammetry method into Unity at their exact real-world size. For example, if I create a cube and stretch it to the extent I want to measure, its z value gives the length in metres. What I want is to measure the distance between two points I click on an object, by drawing something between them and reading the value off it. How can I make this happen?
I've tried the code below but nothing draws.
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.UI;

public class olcumYap : MonoBehaviour
{
    private LineRenderer lineRend;
    private Vector2 mousePos;
    private Vector2 startMousePos;

    [SerializeField]
    private Text distanceText;
    private float distance;

    // Start is called before the first frame update
    void Start()
    {
        lineRend = GetComponent<LineRenderer>();
        lineRend.positionCount = 2;
    }

    // Update is called once per frame
    void Update()
    {
        if (Input.GetMouseButtonDown(0))
        {
            startMousePos = Camera.main.ScreenToWorldPoint(Input.mousePosition);
        }
        if (Input.GetMouseButton(0))
        {
            mousePos = Camera.main.ScreenToWorldPoint(Input.mousePosition);
            lineRend.SetPosition(0, new Vector3(startMousePos.x, startMousePos.y, 0f));
            lineRend.SetPosition(1, new Vector3(mousePos.x, mousePos.y, 0f));
            distance = (mousePos - startMousePos).magnitude;
            distanceText.text = distance.ToString("F2") + " metre";
        }
    }
}
You could try debugging the line positions to see whether the problem is the line renderer or the positions:
Debug.Log(<position>);
print(<position>);
It is generally good practice to log something when code doesn't work, to narrow down the issue.
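A possible cause worth checking first (an assumption on my part, not verified against the asker's scene): with a perspective camera, `ScreenToWorldPoint` treats the z component of the screen point as the distance from the camera, and `Input.mousePosition` has z = 0, so every converted point lands at the camera itself. A sketch that supplies an explicit depth:

```csharp
// Sketch: give ScreenToWorldPoint the distance from the camera to the
// plane being drawn on (here assumed to be the z = 0 plane).
Vector3 screen = Input.mousePosition;
screen.z = Mathf.Abs(Camera.main.transform.position.z); // assumed draw plane
Vector3 world = Camera.main.ScreenToWorldPoint(screen);
Debug.Log(world); // verify the positions before blaming the LineRenderer
```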
Hey guys, I want to make a multiplayer game with Unity, but I cannot sync the players.
I use Photon Transform View and Photon View. I have attached the Photon Transform View to the Photon View, but it still doesn't work.
This is my player movement code:
using System.Collections;
using System.Collections.Generic;
using UnityEngine.UI;
using Photon.Realtime;
using UnityEngine;

public class Movement : Photon.MonoBehaviour
{
    joystick griliVAl;
    Animator animasyon;
    int Idle = Animator.StringToHash("Idle");

    // Use this for initialization
    void Start () {
        animasyon = GetComponent<Animator>();
        griliVAl = GameObject.FindGameObjectWithTag("Joystick").GetComponent<joystick>();
    }

    public void OnButton()
    {
        animasyon.Play("attack_01");
    }

    // Update is called once per frame
    void Update () {
        float x = griliVAl.griliv.x;
        float y = griliVAl.griliv.y;
        animasyon.SetFloat("Valx", x);
        animasyon.SetFloat("Valy", y);
        Quaternion targetRotation = Quaternion.LookRotation(griliVAl.griliv * 5);
        transform.rotation = Quaternion.Lerp(transform.rotation, targetRotation, Time.deltaTime);
        transform.position += griliVAl.griliv * 5 * Time.deltaTime;
    }
}
It will be a mobile game, so the griliVAl value comes from the joystick circle.
Can someone please help me solve this issue?
If it still doesn't work by whatever means, try using OnPhotonSerializeView(). Here is what you can put:
public void OnPhotonSerializeView(PhotonStream stream, PhotonMessageInfo info)
{
    if (stream.isWriting)
    {
        stream.SendNext(transform.position);
    }
    else if (stream.isReading)
    {
        transform.position = (Vector3)stream.ReceiveNext();
    }
}
It is really similar to using Photon Transform View, but you synchronize the player's position manually.
Check that a PhotonView is attached to the same GameObject the script is on; otherwise OnPhotonSerializeView will not be called.
Don't forget to add IPunObservable to your class declaration:
public class Movement : Photon.MonoBehaviour, IPunObservable
I'm trying to draw into a texture with the mouse. The mouse coords are passed to a shader which outputs to a render texture (I've tried both a regular RenderTexture and a CustomRenderTexture driven directly by a material), and it doesn't seem to work.
I can tell from the material that the mouse input is received, but nothing is visible in the render texture.
I'm starting to suspect that render textures aren't fully working in HDRP? Hoping someone can point me in the direction of the real issue.
I'm on Unity 2019.3.0f3; the shader is an HDRP Unlit Graph.
This is my script:
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class DrawWithMouse : MonoBehaviour
{
    public Camera _camera;
    public CustomRenderTexture _splatmap;
    public Material _drawMaterial;

    private RaycastHit _hit;

    [Range(1, 100)]
    public float _brushSize = 1f;
    [Range(0, 10)]
    public float _brushStrength = 1f;

    private readonly float m_GUIsize = 256;
    private readonly int m_RenderTexSize = 1024;

    void Start()
    {
        _splatmap = new CustomRenderTexture(m_RenderTexSize, m_RenderTexSize,
            RenderTextureFormat.ARGBFloat, RenderTextureReadWrite.Linear)
        {
            name = "splatmap_CRT_generated",
            initializationColor = Color.black,
            initializationSource = CustomRenderTextureInitializationSource.Material,
            initializationMaterial = _drawMaterial,
            material = _drawMaterial,
            doubleBuffered = true,
            updateMode = CustomRenderTextureUpdateMode.OnDemand
        };
        _splatmap.Initialize();
        _drawMaterial.SetVector("_DrawColor", Color.red);
        _drawMaterial.SetTexture("_SplatMap", _splatmap);
    }

    void Update()
    {
        if (Input.GetKey(KeyCode.Mouse0))
        {
            if (Physics.Raycast(_camera.ScreenPointToRay(Input.mousePosition), out _hit, 100f))
            {
                _drawMaterial.SetVector("_InputPoint",
                    new Vector4(_hit.textureCoord.x, _hit.textureCoord.y, 0, 0));
                _drawMaterial.SetFloat("_BrushStrength", _brushStrength);
                _drawMaterial.SetFloat("_BrushSize", _brushSize);
                // With updateMode = OnDemand, the CRT never renders unless
                // Update() is called explicitly.
                _splatmap.Update(1);
            }
        }
    }

    private void OnGUI()
    {
        GUI.DrawTexture(new Rect(0, 0, m_GUIsize, m_GUIsize), _splatmap,
            ScaleMode.StretchToFill, false, 1);
    }
}
CustomRenderTextures and URP do not seem to play nicely together either, at least not in builds, as of 2020.1.2f1. I eventually had to give up and hack a solution together with a dedicated camera, layer and quad: nothing I tried (and I tried a lot) would persuade the CustomRenderTexture to even initialise in a PC build, despite it working perfectly in the editor.