I am working on a multiplayer FPS game. The code below throws a MissingReferenceException at the line indicated. The error doesn't show up when there is only one player in the room, but it starts appearing as soon as another player joins.
private IEnumerator Shoot()
{
    ammoLeft -= 1;
    animator.SetBool("fire", true);

    RaycastHit hit;
    if (Physics.Raycast(playerCam.position, playerCam.forward, out hit, range))
    {
        // MissingReferenceException: "The transform you are accessing is destroyed"
        // The code works in singleplayer
        if (hit.transform != null && hit.transform.tag == "Player")
        {
            HealthManager HealthScript = hit.transform.gameObject.GetComponent<HealthManager>();
            if (HealthScript != null)
            {
                HealthScript.TakeDamage(damage);
            }
        }
    }

    yield return new WaitForSeconds(0.3f);
    animator.SetBool("fire", false);

    if (ammoLeft <= 0f)
    {
        StartCoroutine("Reload");
    }
}
The error occurs on this line:
Physics.Raycast(playerCam.position, playerCam.forward, out hit, range)
I have checked whether my camera is null, but that doesn't fix the problem, so I am clueless.
private void Awake()
{
    if (!pv.IsMine)
    {
        Destroy(ui);
    }

    if (playerCam == null)
    {
        playerCam = FindObjectOfType<Camera>().transform;
    }

    ammoLeft = magazineSize;
    animator = GetComponent<Animator>();
}
Inspector Window
https://i.stack.imgur.com/Tc69B.png
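One likely source of this kind of error (an assumption, not something confirmed by the screenshot): FindObjectOfType<Camera>() can return the camera of another player's prefab, and that camera is destroyed on remote clients, leaving playerCam pointing at a destroyed transform. A minimal sketch of an ownership guard along those lines, assuming PUN's PhotonView (pv) and a camera that is a child of the player prefab:

private void Awake()
{
    // Sketch only: each client wires up its own camera instead of whichever
    // camera FindObjectOfType happens to return first.
    if (!pv.IsMine)
    {
        Destroy(ui);
        enabled = false; // remote copies never run Shoot()
        return;
    }

    if (playerCam == null)
    {
        playerCam = GetComponentInChildren<Camera>().transform;
    }

    ammoLeft = magazineSize;
    animator = GetComponent<Animator>();
}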
Related
I want to select an object in the scene by touching it on the screen. I have written this code, and it works perfectly in the Unity player and when I build the application for Windows. When I build for WebGL I get strange behaviour (tested on Firefox/Chrome).
The problem is that if I keep my finger pressed on the object, I get multiple continuous clicks instead of a single one, even though I'm using TouchPhase.Began. Does anyone know how to fix this? Is it a known issue?
Here's my code
using UnityEngine;
using System.Collections.Generic;
using System.Xml.XPath;
using UnityEngine.UI;

public class RaycastObjHit : MonoBehaviour
{
    private GameObject working_object;
    private GameObject touchedObject;

    void Update()
    {
        if (Input.touchCount == 1 && Input.GetTouch(0).phase == TouchPhase.Began)
        {
            Ray ray = Camera.current.ScreenPointToRay(Input.GetTouch(0).position);
            RaycastHit hit;
            if (Physics.Raycast(ray, out hit))
            {
                string ObjHitName = hit.transform.name;
                Debug.Log(hit.transform.name);
                if (hit.collider != null)
                {
                    touchedObject = hit.transform.gameObject;
                    if (touchedObject.GetComponent<Renderer>().material.color != Color.red)
                    {
                        changeColor(touchedObject.transform.name, Color.red);
                    }
                    else
                    {
                        changeColor(touchedObject.transform.name, Color.green);
                    }
                }
                Debug.Log("Touched " + touchedObject.transform.name);
            }
        }
    }

    public void changeColor(string objId, Color color)
    {
        working_object = GameObject.Find(objId);
        working_object.GetComponent<Renderer>().material.color = color;
    }
}
If anyone wants to know how I "solved" the problem, here's my code.
It is a simple workaround that uses a boolean to check whether I'm still pressing the screen during the same event.
I repeat, it is only a workaround for this "incompatibility" with WebGL.
using UnityEngine;
using System.Collections.Generic;
using System.Xml.XPath;
using UnityEngine.UI;

public class RaycastObjHit : MonoBehaviour
{
    private GameObject working_object;
    private GameObject touchedObject;
    private bool touchBegan = false;

    void Update()
    {
        if (Input.touchCount == 1)
        {
            if (Input.GetTouch(0).phase == TouchPhase.Began && !touchBegan)
            {
                touchBegan = true;
                Ray ray = Camera.current.ScreenPointToRay(Input.GetTouch(0).position);
                RaycastHit hit;
                if (Physics.Raycast(ray, out hit))
                {
                    string ObjHitName = hit.transform.name;
                    Debug.Log(hit.transform.name);
                    if (hit.collider != null)
                    {
                        touchedObject = hit.transform.gameObject;
                        if (touchedObject.GetComponent<Renderer>().material.color != Color.red)
                        {
                            changeColor(touchedObject.transform.name, Color.red);
                        }
                        else
                        {
                            changeColor(touchedObject.transform.name, Color.green);
                        }
                    }
                    Debug.Log("Touched " + touchedObject.transform.name);
                }
            }
            else if (Input.GetTouch(0).phase == TouchPhase.Ended && touchBegan)
            {
                touchBegan = false;
            }
        }
    }

    public void changeColor(string objId, Color color)
    {
        working_object = GameObject.Find(objId);
        working_object.GetComponent<Renderer>().material.color = color;
    }
}
That's not the way you do this in 2018 (the year, not the Unity version). Put a PhysicsRaycaster on your camera, add an EventSystem, then implement IPointer***Handler (Down, Up, Enter, Exit, Click, you name it).
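A minimal sketch of that event-system approach, assuming a PhysicsRaycaster on the camera, an EventSystem in the scene, and a Collider on the target object (the class name is just an example):

using UnityEngine;
using UnityEngine.EventSystems;

// Attach to the object that has the Collider.
public class ClickableObject : MonoBehaviour, IPointerClickHandler
{
    public void OnPointerClick(PointerEventData eventData)
    {
        // Fires once per click/tap, so no manual TouchPhase bookkeeping is needed.
        Renderer rend = GetComponent<Renderer>();
        rend.material.color = rend.material.color != Color.red ? Color.red : Color.green;
    }
}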
I have an Enumerator for the effects played when you shoot a gun. It is:
private IEnumerator ShotEffect()
{
    gunAudio.Play();
    laserLine.enabled = true;
    yield return shotDuration;
    laserLine.enabled = false;
}
When I take the gunAudio.Play() out, the code works fine. However, when I add it back in, the sound does not play and the line doesn't disable. I assume there is a problem with shotDuration?
The variable gunAudio is assigned the audio component in Start, and shotDuration is a WaitForSeconds.
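For reference, that setup presumably looks roughly like this (a sketch; the field values are assumptions):

private AudioSource gunAudio;
private WaitForSeconds shotDuration = new WaitForSeconds(0.07f); // the actual duration is a guess

void Start()
{
    gunAudio = GetComponent<AudioSource>(); // audio component assigned in Start, as described above
}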
EDIT:
Also, if I set it to play on awake, it plays. Therefore I think there is a problem with the code, not the component.
Here is where I call the Enumerator:
if (Input.GetKey(KeyCode.Mouse0) && Time.time > nextFire)
{
    torsoDamage = (int)Random.Range(50f, 70f);
    legDamage = (int)Random.Range(20f, 30f);
    handDamage = (int)Random.Range(5f, 15f);
    nextFire = Time.time + fireRate;
    StartCoroutine(ShotEffect());

    Vector3 rayOrigin = fpsCam.ViewportToWorldPoint(new Vector3(0.5f, 0.5f, 0.0f));
    RaycastHit hit;
    laserLine.SetPosition(0, gunEnd.position);

    if (Physics.Raycast(rayOrigin, fpsCam.transform.forward, out hit, weaponRange))
    {
        laserLine.SetPosition(1, hit.point);
        if (enemyAi.enemy_currentHealth != null)
        {
            if (hit.collider.gameObject.tag == "head")
            {
                enemyAi.Damage(100);
            }
            if (hit.collider.gameObject.tag == "torso")
            {
                enemyAi.Damage(torsoDamage);
            }
            if (hit.collider.gameObject.tag == "leg" || hit.collider.gameObject.tag == "arm")
            {
                enemyAi.Damage(legDamage);
            }
            if (hit.collider.gameObject.tag == "hand" || hit.collider.gameObject.tag == "foot")
            {
                enemyAi.Damage(handDamage);
            }
        }
        if (hit.rigidbody != null)
        {
            hit.rigidbody.AddForce(-hit.normal * hitForce);
        }
    }
    else
    {
        laserLine.SetPosition(1, gunEnd.position + (fpsCam.transform.forward * weaponRange));
    }
}
I fixed it - there was some weird problem with another variable that was being called from another script.
The code works fine now.
I want to show a line in my app from where the model is placed, so that the user knows the position where the model sits in the real world. When the user turns the device camera away from the model, the line turns on to show where the model is; similarly, it turns off when the model is detected. I have attached images from a similar app: the white dotted lines show the path. Notice how the lines disappear when the model is detected.
LineRenderer lins;
public GameObject Lineprefab;
private GameObject newline;
public Transform startpoint;
public Renderer m_rend1;

bool HitTestWithResultType(ARPoint point, ARHitTestResultType resultTypes)
{
    List<ARHitTestResult> hitResults = UnityARSessionNativeInterface.GetARSessionNativeInterface().HitTest(point, resultTypes);
    if (hitResults.Count > 0 && check == true)
    {
        foreach (var hitResult in hitResults)
        {
            Debug.Log("Got hit!");
            //obj.Hideplane();
            Genplanes.SetActive(false);

            if (Select == 0)
            {
                Debug.Log("hit-zero!");
                Instantiate(Instaobj[0], ForSelect);
                check = false;
            }
            if (Select == 1)
            {
                Debug.Log("hit-one!");
                Instantiate(Instaobj[1], ForSelect);
                check = false;
            }
            if (Select == 2)
            {
                Debug.Log("hit-two!");
                Instantiate(Instaobj[2], ForSelect);
                check = false;
            }

            m_HitTransform.position = UnityARMatrixOps.GetPosition(hitResult.worldTransform);
            m_HitTransform.rotation = UnityARMatrixOps.GetRotation(hitResult.worldTransform);
            Debug.Log(string.Format("x:{0:0.######} y:{1:0.######} z:{2:0.######}", m_HitTransform.position.x, m_HitTransform.position.y, m_HitTransform.position.z));
            obj.StopPlaneTracking();
        }
    }
    return false;
}
private void Start()
{
    spawngenerator();
    newline.SetActive(false);
    m_rend1 = GetComponent<MeshRenderer>();
}

void spawngenerator()
{
    // Assign the existing field rather than a new local variable,
    // otherwise newline stays null when Start() and LateUpdate() use it.
    newline = Instantiate(Lineprefab);
    lins = newline.GetComponent<LineRenderer>();
}
private void LateUpdate()
{
    lins.SetPosition(0, startpoint.position);
    lins.SetPosition(1, m_HitTransform.position);

    if (m_rend1.isVisible)
    {
        Debug.Log("Render is Visible");
        newline.SetActive(false);
    }
    else
    {
        newline.SetActive(true);
        Debug.Log("It is InVisible");
        Debug.Log("Render is InVisible");
    }
}
void Update()
{
#if UNITY_EDITOR
    // We only use this branch in the editor, though there is nothing that would prevent it from working on device.
    if (Input.GetMouseButtonDown(0))
    {
        Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
        RaycastHit hit;
        // Try to hit one of the plane collider game objects generated by the plugin;
        // effectively similar to calling HitTest with ARHitTestResultType.ARHitTestResultTypeExistingPlaneUsingExtent.
        if (Physics.Raycast(ray, out hit, maxRayDistance, collisionLayer))
        {
            // Take the position from the contact point
            m_HitTransform.position = hit.point;
            Debug.Log(string.Format("x:{0:0.######} y:{1:0.######} z:{2:0.######}", m_HitTransform.position.x, m_HitTransform.position.y, m_HitTransform.position.z));
            // ...and the rotation from the transform of the plane collider
            m_HitTransform.rotation = hit.transform.rotation;
        }
    }
#else
    if (Input.touchCount > 0 && m_HitTransform != null)
    {
        var touch = Input.GetTouch(0);
        if ((touch.phase == TouchPhase.Began || touch.phase == TouchPhase.Moved) && !EventSystem.current.IsPointerOverGameObject(touch.fingerId))
        {
            var screenPosition = Camera.main.ScreenToViewportPoint(touch.position);
            ARPoint point = new ARPoint
            {
                x = screenPosition.x,
                y = screenPosition.y
            };

            // Prioritized result types
            ARHitTestResultType[] resultTypes = {
                //ARHitTestResultType.ARHitTestResultTypeExistingPlaneUsingGeometry,
                ARHitTestResultType.ARHitTestResultTypeExistingPlaneUsingExtent,
                // if you want to use infinite planes use this:
                //ARHitTestResultType.ARHitTestResultTypeExistingPlane,
                //ARHitTestResultType.ARHitTestResultTypeEstimatedHorizontalPlane,
                //ARHitTestResultType.ARHitTestResultTypeEstimatedVerticalPlane,
                //ARHitTestResultType.ARHitTestResultTypeFeaturePoint
            };

            foreach (ARHitTestResultType resultType in resultTypes)
            {
                if (HitTestWithResultType(point, resultType))
                {
                    return;
                }
            }
        }
    }
#endif
}
First, I'd start by checking whether the model is visible to the camera: https://docs.unity3d.com/ScriptReference/Renderer-isVisible.html
If the object is not visible (isVisible == false), create a line renderer from the object's position to wherever it should end.
The end point could be a child of the camera placed just in front of it, so the line looks like it runs from the user to the object.
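A minimal sketch of that idea, assuming lineToModel is a LineRenderer already in the scene, endPoint is a child of the camera placed just in front of it, and modelRenderer is the placed model's Renderer (all of these names are placeholders):

using UnityEngine;

public class ShowLineWhenModelOffscreen : MonoBehaviour
{
    public Renderer modelRenderer;   // renderer of the placed model
    public LineRenderer lineToModel; // line drawn from the user towards the model
    public Transform endPoint;       // child of the camera, just in front of it

    void LateUpdate()
    {
        // Renderer.isVisible is true while any camera is rendering the model.
        bool modelVisible = modelRenderer != null && modelRenderer.isVisible;
        lineToModel.enabled = !modelVisible;

        if (!modelVisible)
        {
            lineToModel.SetPosition(0, endPoint.position);
            lineToModel.SetPosition(1, modelRenderer.transform.position);
        }
    }
}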
I am new to Unity3D and started with the Space Shooter tutorial. Now I am unable to create a simple life system for the spaceship. It's probably a silly mistake, but I've been on it for a few hours already, searching for the solution.
The OnTriggerEnter code is:
void OnTriggerEnter(Collider other)
{
    if (other.CompareTag("Boundary") || other.CompareTag("Enemy"))
    {
        return;
    }

    if (explosion != null) // hit object explosion
    {
        Instantiate(explosion, transform.position, transform.rotation);
    }

    if (other.tag == "Player" && playerHealth >= 1)
    {
        playerHealth--;
        gameController.SubLive(playerHealth);
    }

    if (other.tag == "Player" && playerHealth <= 0)
    {
        Instantiate(playerExplosion, other.transform.position, other.transform.rotation);
        Destroy(other.gameObject);
        gameController.GameOver();
    }

    Destroy(gameObject); // destroy hit object
    gameController.AddScore(scoreValue);
}
I've found that the solution is to decrement the player's health every time a collision happens; however, it only works the first time, when the player ship collides with an asteroid or enemy ship. The triggers are the player ship, the enemy ship, and the bolts (shot from the ships). All of the objects have a Rigidbody. Could you please suggest what I am doing wrong?
Thanks in advance!
You can find the non-edited Space Shooter script here.
Ugh! After getting some rest and coming back with a fresh mind, I managed to solve it myself! (It was easier than I thought.)
void OnTriggerEnter(Collider other)
{
    if (other.CompareTag("Boundary") || other.CompareTag("Enemy"))
    {
        return;
    }

    if (explosion != null)
    {
        Instantiate(explosion, transform.position, transform.rotation);
    }

    if (other.tag == "Bolt")
    {
        Destroy(other.gameObject);
    }

    if (other.tag == "Player")
    {
        gameController.SubLive(); // if player ship collides with asteroid or enemy ship, reduces 1 health
        if (gameController.isDead == true) // explodes ship once playerHealth is 0
        {
            Instantiate(playerExplosion, other.transform.position, other.transform.rotation);
            gameController.GameOver();
            Destroy(other.gameObject);
        }
    }

    gameController.AddScore(scoreValue);
    Destroy(gameObject);
}
Script piece in gameController:
<...>
private int playerHealth;
public bool isDead;
<...>

void Start()
{
    playerHealth = 3;
    isDead = false;
}

<...>
public void SubLive()
{
    playerHealth--;
    UpdateLives();
    if (playerHealth <= 0)
    {
        isDead = true;
    }
}

void UpdateLives()
{
    livesText.text = "Lives: " + playerHealth;
}
Okay, I am making a simple game mechanic where you are a ball rolling along a small panel. The panel has 8 child objects on its edges: 4 of them are triggers on the edges of the panel, and 4 of them are empty game objects placed 1 unit away from each edge, marking where the next panel prefab should spawn. The ball has a trigger that detects the empty game objects to tell the panel prefab where to spawn. When the ball enters one of the panel's triggers, it is supposed to instantiate a panel prefab at the location assigned for that trigger. Here is my code:
public GameObject panelPrefab;

Transform frontSpawn;
Transform backSpawn;
Transform leftSpawn;
Transform rightSpawn;

private bool allowSpawn;

void Awake()
{
    allowSpawn = true;
}

void OnTriggerStay(Collider spawn)
{
    if (spawn.gameObject.tag == "FrontSpawn")
    {
        frontSpawn = spawn.transform;
    }
    else if (spawn.gameObject.tag == "BackSpawn")
    {
        backSpawn = spawn.transform;
    }
    else if (spawn.gameObject.tag == "LeftSpawn")
    {
        leftSpawn = spawn.transform;
    }
    else if (spawn.gameObject.tag == "RightSpawn")
    {
        rightSpawn = spawn.transform;
    }
}

void OnTriggerEnter(Collider other)
{
    if (other.gameObject.tag == "Front" && allowSpawn == true)
    {
        Instantiate(panelPrefab, frontSpawn.transform.position, Quaternion.identity);
        allowSpawn = false;
    }
    else if (other.gameObject.tag == "Back" && allowSpawn == true)
    {
        Instantiate(panelPrefab, backSpawn.transform.position, Quaternion.identity);
        allowSpawn = false;
    }
    else if (other.gameObject.tag == "Left" && allowSpawn == true)
    {
        Instantiate(panelPrefab, leftSpawn.transform.position, Quaternion.identity);
        allowSpawn = false;
    }
    else if (other.gameObject.tag == "Right" && allowSpawn == true)
    {
        Instantiate(panelPrefab, rightSpawn.transform.position, Quaternion.identity);
        allowSpawn = false;
    }
}

void OnTriggerExit(Collider other)
{
    allowSpawn = true;
}
My issue is that on each of the Instantiate calls I am getting a NullReferenceException. I have panelPrefab assigned in the Unity editor, and I don't know what could be causing this! If anyone can help me it would be GREATLY appreciated, so thank you in advance!
OnTriggerEnter is called before OnTriggerStay, so the error is not due to the panelPrefab object. It is likely that your frontSpawn, backSpawn, leftSpawn, and rightSpawn references are still null, and you cannot access the transform property of a null reference.
Before instantiating, verify whether the relevant spawn reference is null, and only then access its position.
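A minimal sketch of that guard, reusing the fields from the question (only the Front case is shown; the other tags follow the same pattern):

void OnTriggerEnter(Collider other)
{
    if (other.gameObject.tag == "Front" && allowSpawn)
    {
        // Only spawn once the spawn point has actually been cached.
        if (frontSpawn != null)
        {
            Instantiate(panelPrefab, frontSpawn.position, Quaternion.identity);
            allowSpawn = false;
        }
        else
        {
            Debug.LogWarning("FrontSpawn transform not cached yet; skipping spawn.");
        }
    }
    // ... repeat the same null check for the Back, Left and Right cases ...
}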