Getting started with pure ECS - unity3d

I am trying to get started with pure ECS in Unity 2018.3.6f1. I am starting simple by just having a sphere prefab move in one direction, but it seems no entities are being created.
I have an empty game object prefab with a RenderMesh that has a sphere mesh and a simple material, and this script attached to the prefab as well:
using System;
using Unity.Entities;
using UnityEngine;

[Serializable]
public struct Position : IComponentData
{
    public Vector3 Value;
}

public class BoidPositionComponent : ComponentDataProxy<Position> { }
Then I have this SteeringSystem:
using System.Collections;
using System.Collections.Generic;
using Unity.Entities;
using Unity.Jobs;
using Unity.Burst;
using Unity.Mathematics;
using Unity.Transforms;
using UnityEngine;

public class SteeringSystem : JobComponentSystem
{
    [BurstCompile]
    struct SteeringJob : IJobProcessComponentData<Position>
    {
        public float deltaTime;

        public void Execute(ref Position position)
        {
            Vector3 value = position.Value;
            value = new Vector3(value.x + deltaTime + 1.0f, value.y, value.z);
            position.Value = value;
        }
    }

    protected override JobHandle OnUpdate(JobHandle inputDeps)
    {
        SteeringJob steeringJob = new SteeringJob
        {
            deltaTime = Time.deltaTime
        };
        JobHandle jobHandle = steeringJob.Schedule(this, inputDeps);
        return jobHandle;
    }
}
And lastly I have an empty game object in my scene with this script on it:
using Unity.Entities;
using Unity.Rendering;
using Unity.Collections;
using UnityEngine;

public class ECSWorld : MonoBehaviour
{
    public GameObject boidPrefab;
    private static EntityManager entityManager;
    private static RenderMesh renderMesh;
    private static EntityArchetype entityArchetype;

    // Start is called before the first frame update
    void Start()
    {
        entityManager = World.Active.GetOrCreateManager<EntityManager>();
        entityArchetype = entityManager.CreateArchetype(typeof(Position));
        AddBoids();
    }

    void AddBoids()
    {
        int amount = 200;
        NativeArray<Entity> entities = new NativeArray<Entity>(amount, Allocator.Temp);
        entityManager.Instantiate(boidPrefab, entities);
        for (int i = 0; i < amount; i++)
        {
            // Do stuff, like setting data...
            entityManager.SetComponentData(entities[i], new Position { Value = Vector3.zero });
        }
        entities.Dispose();
    }
}
But I am not seeing anything when I run the game. Should it not instantiate 200 of my prefab and have them move on the screen? What am I missing here?
Thank you
Søren

You will need a renderer that actually renders your boids. You have created a custom Position component, but there is no system that does any rendering based on it. So all you are doing is creating entities and modifying your Position component in memory (you should see this in the Entity Debugger), but since you have no renderer, nothing will appear on screen.
For now I would suggest using the "Hybrid Renderer" package that is available in the package manager. It uses its own set of components:
Translation for the position in 3D space
Scale for the scale in world space
Rotation for the rotation in world space
RenderMesh for the mesh to be rendered (you are already using this)
With the current ECS version you can actually just convert a classic game object into an entity by adding a "Convert To Entity" MonoBehaviour to it. This makes editor integration a lot easier, as you don't need all those proxy components. The auto-conversion process will automatically add the Translation, Scale, Rotation and RenderMesh components to your ECS entity.
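As an illustration, the steering system from the question could be rewritten against the built-in Translation component instead of the custom Position proxy. This is only a sketch, assuming the Unity.Transforms / Hybrid Renderer packages of that ECS version; the TranslationSteeringSystem name is mine:

```csharp
using Unity.Entities;
using Unity.Jobs;
using Unity.Burst;
using Unity.Mathematics;
using Unity.Transforms;

public class TranslationSteeringSystem : JobComponentSystem
{
    [BurstCompile]
    struct SteeringJob : IJobProcessComponentData<Translation>
    {
        public float deltaTime;

        public void Execute(ref Translation translation)
        {
            // Move one unit per second along x; the Hybrid Renderer
            // picks the new Translation up automatically.
            translation.Value = new float3(translation.Value.x + deltaTime,
                                           translation.Value.y,
                                           translation.Value.z);
        }
    }

    protected override JobHandle OnUpdate(JobHandle inputDeps)
    {
        return new SteeringJob { deltaTime = UnityEngine.Time.deltaTime }
            .Schedule(this, inputDeps);
    }
}
```

With "Convert To Entity" on the sphere prefab instance, no manual archetype or proxy setup is needed for this to move the spheres on screen.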

Related

Position of particlesystem isn't attached to object

I added a particle system to my Unity project, but I can't make it attach to a certain game object or tag. How can I make a particle system sit at the center of a cube and play once when the player triggers the cube?
Code used for the trigger to work:
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class Collision_Player_grower : MonoBehaviour
{
    public GameObject Player_grower;
    public ParticleSystem CollisionGrower;
    private int times_player_grower; // declaration added; referenced below

    // Start is called before the first frame update
    void Start()
    {
        times_player_grower = 0;
        CollisionGrower.transform.position = Player_grower.transform.position;
    }

    void OnTriggerEnter(Collider collision)
    {
        if (collision.gameObject.tag == "Player")
        {
            print("we hit a playergrower");
            CollisionGrower.Play();
            Destroy(Player_grower);
        }
    }
}
Note: It does work if I manually place the particle system at the center of the cube, but I assume this can be done in an easier way.
From what I understand, you want the position of the particle system to be in the center of another object.
1)
You could do that simply by attaching a particle system game object as a child of the cube you were talking about.
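The same parenting can also be done at runtime. This is a minimal sketch; the field names are placeholders you would assign in the inspector:

```csharp
using UnityEngine;

public class AttachParticles : MonoBehaviour
{
    public Transform cube;           // the cube the particles should sit on
    public ParticleSystem particles; // the particle system to attach

    void Start()
    {
        // Parent the particle system to the cube, then snap it to the
        // cube's center; it will follow the cube from now on.
        particles.transform.SetParent(cube, worldPositionStays: false);
        particles.transform.localPosition = Vector3.zero;
    }
}
```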
2)
You could do this with code instead. In case you want the particle system to do something extra, you could just adjust the code.
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class Collision_Player_grower : MonoBehaviour
{
    public GameObject ParticleSystemObject;
    public GameObject Cube;
    public GameObject Player_grower;
    public ParticleSystem CollisionGrower;
    private int times_player_grower; // declaration added; referenced below

    // Start is called before the first frame update
    void Start()
    {
        times_player_grower = 0;
        CollisionGrower.transform.position = Player_grower.transform.position;
    }

    // Update is called once per frame
    void Update()
    {
        ParticleSystemObject.transform.position = Cube.transform.position;
    }

    void OnTriggerEnter(Collider collision)
    {
        if (collision.gameObject.tag == "Player")
        {
            print("we hit a playergrower");
            CollisionGrower.Play();
            Destroy(Player_grower);
        }
    }
}
I don't fully understand what the rest of the variables are for, so I may have done some unneeded things.
First, I added two GameObject variables. One of them is the cube object you want the particle system to go to, and the other is the particle system as a game object. I set the position of the particle system to the position of the cube.
Important: if you have this script attached to the cube, remove the Cube variable.
Instead of using
... = Cube.transform.position;
Use
... = transform.position;

Making measurements on the object

I import objects that I scan with photogrammetry into Unity at real-world scale. For example, when I create a cube and stretch it to the extent I want to measure, its z value gives the length in meters. What I want to do is click two points on the object, draw something between them, and read the distance from that. How can I make this happen?
I've tried the code below but nothing draws.
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.UI;

public class olcumYap : MonoBehaviour
{
    private LineRenderer lineRend;
    private Vector2 mousePos;
    private Vector2 startMousePos;

    [SerializeField]
    private Text distanceText;
    private float distance;

    // Start is called before the first frame update
    void Start()
    {
        lineRend = GetComponent<LineRenderer>();
        lineRend.positionCount = 2;
    }

    // Update is called once per frame
    void Update()
    {
        if (Input.GetMouseButtonDown(0))
        {
            startMousePos = Camera.main.ScreenToWorldPoint(Input.mousePosition);
        }
        if (Input.GetMouseButton(0))
        {
            mousePos = Camera.main.ScreenToWorldPoint(Input.mousePosition);
            lineRend.SetPosition(0, new Vector3(startMousePos.x, startMousePos.y, 0f));
            lineRend.SetPosition(1, new Vector3(mousePos.x, mousePos.y, 0f));
            distance = (mousePos - startMousePos).magnitude;
            distanceText.text = distance.ToString("F2") + "metre";
        }
    }
}
You could try debugging the line positions to see whether the problem is the line renderer or the positions themselves:
Debug.Log(<position>);
print(<position>);
It is generally good practice to log values when something doesn't work, to narrow down the issue.
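One likely culprit worth checking while logging: ScreenToWorldPoint uses the z component of its argument as the distance in front of the camera. With a perspective camera and z left at 0, it returns points at the camera's own position, so the line is there but invisible. A sketch of the fix (the depth value of 10 is an arbitrary assumption for your scene):

```csharp
// Give the screen point an explicit depth before converting to world space.
Vector3 screenPos = Input.mousePosition;
screenPos.z = 10f; // hypothetical distance from the camera to the object
Vector3 worldPos = Camera.main.ScreenToWorldPoint(screenPos);
Debug.Log(worldPos);
```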

Unity - How to calculate new position for an object from pitch of another object in 3D

I would like to calculate a new position based on the pitch of a mesh, in order to make an object follow the top of my object, which is rotated:
And result in:
I cannot make the square object represented above a child (in the Unity object hierarchy) of the line object, because the rotated object can have its scale changed at any time.
Can a mathematical solution be used in this case?
Hotspots
If you'd like to place something at a particular location on a generic object which can be scaled or transformed anywhere, then a "hotspot" can be particularly useful.
What's a hotspot?
Edit the target gameobject (the line in this case) and add an empty gameobject to it. Give it some appropriate name - "cross arms hotspot" for example, and then move it to the location where you'd like your other gameobject to target. Essentially, a hotspot is just an empty gameobject - a placement marker of sorts.
How do I use it?
All you need is a reference to the hotspot gameobject. You could do this by adding a little script to the pole gameobject which tracks it for you:
public class PowerPole : MonoBehaviour {
    public GameObject CrossArmsHotspot; // Set this in the inspector
}
Then you can get that hotspot reference from any power pole instance like this:
var targetHotspot = aPowerPoleGameObject.GetComponent<PowerPole>().CrossArmsHotspot;
Then it's just a case of getting your target object to place itself where that hotspot is, using whichever technique you prefer. If you want it to just "stick" there, then:
void Start(){
    targetHotspot = aPowerPoleGameObject.GetComponent<PowerPole>().CrossArmsHotspot;
}

void Update(){
    transform.position = targetHotspot.transform.position;
}
would be a (simplified) example.
A more advanced example using lerp to move towards the hotspot:
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class CrossArmsMover : MonoBehaviour
{
    public GameObject PowerPole;
    private GameObject targetHotspot;
    public GameObject CrossArms;
    public float TimeToTake = 5f;
    private float timeSoFar;
    private Vector3 startPosition;
    private Quaternion startRotation;

    // Start is called before the first frame update
    void Start()
    {
        startPosition = CrossArms.transform.position;
        startRotation = CrossArms.transform.rotation;
        targetHotspot = PowerPole.GetComponent<PowerPole>().CrossArmsHotspot;
    }

    // Update is called once per frame
    void Update()
    {
        timeSoFar += Time.deltaTime;
        var progress = timeSoFar / TimeToTake;
        // Clamp it so it doesn't go above 1.
        if (progress > 1f) {
            progress = 1f;
        }
        // Target position / rotation is..
        var targetPosition = targetHotspot.transform.position;
        var targetRotation = targetHotspot.transform.rotation;
        // Lerp towards that target transform:
        CrossArms.transform.position = Vector3.Lerp(startPosition, targetPosition, progress);
        CrossArms.transform.rotation = Quaternion.Lerp(startRotation, targetRotation, progress);
    }
}
You would need to put a script on the follower gameobject, in which you would put:
public GameObject pitcher;       // reference to the gameobject with the pitch
public float DISTANCE_ON_LINE;   // distance between the 2 objects

void Update() {
    transform.position = pitcher.transform.position + pitcher.transform.forward * DISTANCE_ON_LINE;
}

Rigidbody being accessed from different scripts at the same time

I am using two mixed reality controllers at the same time. The right and left triggers each do different things in my game. The problem is that only one of them can access the rigidbody at a time. If only the left controller has the script attached, it works correctly; likewise if only the right controller has it. But I cannot get both of them to work at the same time. I attached the code below; this script is on both controllers.
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using Valve.VR;

public class ControllerRight : MonoBehaviour
{
    public GameObject ballObj;
    Rigidbody rb;
    public SteamVR_Behaviour_Pose pose;
    public SteamVR_Input_Sources handType;
    public SteamVR_Action_Single clickTrigger;

    // Start is called before the first frame update
    void Start()
    {
        rb = ballObj.GetComponent<Rigidbody>();
    }

    // Update is called once per frame
    void FixedUpdate()
    {
        Debug.Log("Trigger Status: " + clickTrigger.GetAxis(handType));
        if (clickTrigger.GetAxis(handType) > 0.5)
        {
            rb.velocity = Vector3.forward * clickTrigger.GetAxis(handType) * 10;
        }
        else if (clickTrigger.GetAxis(handType) == 0)
        {
            rb.velocity = new Vector3(0, 0, 0);
        }
    }
}
Instead of setting the rigidbody's velocity, try applying a force in the direction of the input. That way the two controllers will not override each other but will combine into one resulting movement.
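A minimal sketch of that change, assuming the same fields as the script above (rb, clickTrigger, handType); the force scale is a placeholder to tune:

```csharp
void FixedUpdate()
{
    float trigger = clickTrigger.GetAxis(handType);
    if (trigger > 0.5f)
    {
        // AddForce accumulates contributions within a physics step, so
        // forces from both controllers combine instead of overwriting
        // each other the way assigning rb.velocity does.
        rb.AddForce(Vector3.forward * trigger * 10f);
    }
}
```

Note that the "stop on release" branch would also need rethinking, since zeroing the velocity from either script cancels the other controller's input as well.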

Unity - Ortho Camera Size VS Screen Resolution

I'm making a 2D game about launching objects towards each other. It is almost complete. However, when I run it on different devices, the offsets of certain gameobjects are messed up due to the different screen sizes. In the Unity editor I'm using the free aspect view, and I've created my gameobjects so that with a camera size of 80 they all align perfectly.
I think the problem is the screen resolution: because I'm using a fixed number of Unity units to position my gameobjects, they are displayed wrongly when I run the standalone build. I've written a pixel perfect camera script, but it doesn't seem to help. The camera is pixel perfect, but to compensate, the camera size becomes either extremely small or extremely large. I just want the same look across all devices and screen resolutions. A main problem is that I want my GUI elements to display next to where the player is standing. My script is here.
using UnityEngine;
using UnityEngine.UI;
using System.Collections;

public class HoldTimeMultiplierPlayerFollowController : MonoBehaviour {
    public float textYOffset = 50f;
    public Text holdTimeMultiplierDisplay;

    void Update() {
        Vector3 position = Camera.main.WorldToScreenPoint(transform.position);
        holdTimeMultiplierDisplay.gameObject.transform.position = new Vector3(position.x, position.y + textYOffset, position.z);
    }
}
Anyone got any ideas?
By the way, all my art is 32x32 and the pixels-to-units ratio is always 1:1. The camera size in the editor is 81.92 when I'm using a free aspect screen size of 686x269; at those measurements, everything is displayed perfectly.
Any help is appreciated. Maybe a script, or a suggestion on how to implement it?
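One standard relationship that may help here: for an orthographic camera, orthographicSize is half the vertical view height in world units, so deriving it from the actual screen height keeps one texture pixel per screen pixel. A sketch assuming the 1:1 pixels-to-units ratio mentioned above (the class name and field are mine):

```csharp
using UnityEngine;

public class PixelPerfectOrtho : MonoBehaviour
{
    public float pixelsPerUnit = 1f; // matches the 1:1 ratio described above

    void Start()
    {
        // size = screenHeight / (2 * pixelsPerUnit) shows the same number of
        // world units vertically regardless of the device's resolution in pixels,
        // at the cost of showing more or less of the world on taller screens.
        Camera.main.orthographicSize = Screen.height / (2f * pixelsPerUnit);
    }
}
```

For UI that follows the player, anchoring a world-space canvas to the player (or computing the offset in viewport coordinates rather than raw pixels) avoids resolution-dependent offsets.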
Other scripts (If you see any issues that need to be resolved or improvements that could be added, please tell me):
using UnityEngine;
using UnityEngine.UI;
using System.Collections;

public class PlayerMovementController : MonoBehaviour {
    private int holdTimeMultiplier = 0;
    public int maxHoldTimeMultiplier = 215;
    public Text holdTimeMultiplierDisplayText;
    private Rigidbody2D rbody;

    void Awake() {
        rbody = GetComponent<Rigidbody2D>();
        if (rbody == null) {
            Debug.LogError("No Rigidbody2D detected on player.");
        }
    }

    void Update() {
        Vector2 mousePosition = new Vector2(Camera.main.ScreenToWorldPoint(Input.mousePosition).x,
                                            Camera.main.ScreenToWorldPoint(Input.mousePosition).y);
        Vector2 mouseDirectionFromPlayer = mousePosition - new Vector2(transform.position.x, transform.position.y);

        if (Input.GetMouseButton(0)) {
            if (holdTimeMultiplier < maxHoldTimeMultiplier) {
                holdTimeMultiplier++;
            } else {
                holdTimeMultiplier = 0;
            }
        } else {
            if (holdTimeMultiplier != 0) {
                rbody.AddForce(mouseDirectionFromPlayer * holdTimeMultiplier * 200);
                holdTimeMultiplier = 0;
            }
        }
        holdTimeMultiplierDisplayText.text = holdTimeMultiplier.ToString();
    }
}
...
using UnityEngine;
using System.Collections;

public class SpawnNewObject : MonoBehaviour {
    public GameObject[] UseableObjects;
    public Transform SpawnPoint;

    void Update() {
        if (Input.GetKeyDown(KeyCode.N)) {
            var randomObject = Random.Range(0, UseableObjects.Length);
            GameObject UseableObject = Instantiate(UseableObjects[randomObject], SpawnPoint.position, SpawnPoint.rotation) as GameObject;
        }
    }
}