I searched the whole internet but couldn't find anything useful. I want to "do" a touch input at the position where a ray hits an object, and only if it hits something. I hope you understand what I mean, and I know that it's not common to do it with a raycast, but I need it this way for the HTC Vive. Thanks in advance.
Or, put more simply: how can I trigger a touch on the screen from my script, i.e. without really touching the screen?
You can use Camera.ScreenToWorldPoint(new Vector3(...)) with the screen position you want to check for clicking.
// You can change the start position in your code.
var startPos = camera.ScreenToWorldPoint(new Vector3(2, 2, 10));
Then you can use this function to detect a click without a real click:
public void CalculateEndPositions(Vector3 start)
{
    // Cast a ray upward from the start position (change the direction to suit your setup).
    var rayForward = new Ray(start, Vector3.up);
    Debug.DrawRay(start, Vector3.up, Color.green);
    RaycastHit rayForwardInfo;
    if (Physics.Raycast(rayForward, out rayForwardInfo))
    {
        Debug.Log("Collider Name = " + rayForwardInfo.collider.name);
    }
}
I hope this code is useful for you.
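If the goal is not just to detect the hit but to make the hit object react as if it had been touched, one option (a sketch under my own assumptions, not part of the original answer) is to forward the hit to Unity's event system, assuming an EventSystem exists in the scene and the hit object (or a parent) implements IPointerClickHandler:

using UnityEngine;
using UnityEngine.EventSystems;

public class RayClicker : MonoBehaviour
{
    // Hypothetical helper: cast the given ray and, if it hits something,
    // send that object a pointer-click event as if it had been touched.
    public void ClickAlongRay(Ray ray)
    {
        RaycastHit hitInfo;
        if (Physics.Raycast(ray, out hitInfo))
        {
            var pointerData = new PointerEventData(EventSystem.current);
            ExecuteEvents.ExecuteHierarchy(hitInfo.collider.gameObject, pointerData,
                                           ExecuteEvents.pointerClickHandler);
        }
    }
}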
I'm creating a top down 2D game, where the player has to break down trees. I made it so the player casts a ray toward the mouse, and when the ray hits a tree, it should lower the tree's health. I don't get any errors when I run the game or click, but it seems like the tree isn't detecting the hits.
void Update()
{
...
if (Input.GetMouseButtonDown(0))
{
RaycastHit2D hit = Physics2D.Raycast(playerRb.transform.position, mousePosition - playerRb.transform.position, 2.0f);
if (hit.collider != null)
{
if (hit.collider == GameObject.FindWithTag("Tree"))
{
hit.collider.GetComponent<TreeScript>().treeHealth--;
}
}
}
}
Still pretty new to coding and I'm teaching myself, so please make your answer easy to understand to help me learn.
Input.mousePosition is the pixel coordinate your mouse is on. That is very different from the location your mouse is pointing at in the scene. Think about it: if the camera were facing up, the mouse position would stay the same, but the point being clicked in the world would be different.
Instead of using Input.mousePosition directly, pass it into Camera.ScreenPointToRay(), which returns a Ray. You then use this new ray to do the raycast.
ANOTHER EXTREMELY IMPORTANT THING 1: Do not call Camera.main in Update(), as it performs a lookup every time it is called, which can hurt performance. Store a reference to it in your script and use that.
Extremely important thing 2: I notice you are using GetComponent to change the tree's health. This is fine here, but avoid calling GetComponent more often than you have to; cache the result when you can.
Like this:
Camera cam;

void Start()
{
    // It is fine to use Camera.main here, because Start() is only called once.
    cam = Camera.main;
}

void Update()
{
    ...
    if (Input.GetMouseButtonDown(0))
    {
        Ray ray = cam.ScreenPointToRay(Input.mousePosition);
        // Physics2D.Raycast has no Ray overload; GetRayIntersection casts the 3D ray against 2D colliders.
        RaycastHit2D hit = Physics2D.GetRayIntersection(ray);
        ...
    }
}
You need to convert your mouse position from screen space to world space, with the same Z value as your other 2D objects.
Vector3 worldPos = Camera.main.ScreenToWorldPoint(mousePos);
Also use Debug.DrawRay to check the raycast:
Debug.DrawRay(ray.origin, ray.direction * 10000f, Color.red);
I am able to avoid a collision between my player and my entire platform by using contactFilter2D.SetLayerMask() together with rigidBody2D.Cast(Vector2, contactFilter, ...);
But I can't find a way to avoid the collision only when my player tries to reach the platform from below it (with a vertical jump).
I'm pretty sure I should use contactFilter2D.SetNormalAngle() (after specifying the minAngle and maxAngle), but no matter the size of my angles, I can't pass through it.
This is how I initialize my contactFilter2D.
protected ContactFilter2D cf;
void Start () {
cf.useTriggers = false;
cf.minNormalAngle = 0;
cf.maxNormalAngle = 180;
cf.SetNormalAngle(cf.minNormalAngle, cf.maxNormalAngle);
cf.useNormalAngle = true;
}
void Update () {
}
I use it with
count = rb.Cast(move, contactFilter, hitBuffer, distance + shellRadius);
Any ideas? If you want more code, tell me, but I don't think it will be needed to understand the problem.
Unity actually has a ready-made component for this: a physics component called "Platform Effector 2D". If you drag and drop it onto your platform, it will immediately work the way you want, and it has adjustable settings for tweaking the parameters. Hope this helps!
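For reference, the same one-way behaviour can also be set up from code instead of the Inspector. A minimal sketch, assuming the script sits on the platform object; the class name and the arc value are illustrative, not taken from the answer:

using UnityEngine;

public class OneWayPlatformSetup : MonoBehaviour
{
    void Start()
    {
        // Assumes the platform already has a Collider2D attached.
        var platformCollider = GetComponent<Collider2D>();
        var effector = gameObject.AddComponent<PlatformEffector2D>();

        platformCollider.usedByEffector = true; // let the effector drive collisions for this collider
        effector.useOneWay = true;              // only collide when approached from above
        effector.surfaceArc = 170f;             // angular range (degrees) treated as the walkable top
    }
}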
I'm playing around with an FPS where I want my player(s) to be able to build/construct their own buildings from scratch. I've searched around for existing solutions/theories, but have so far been unable to find anything suitable to my needs. Please point me in the right direction if I've missed anything.
Where I am right now is that I have three prefabs: floor, wall, and wall with a door opening. First I want to instantiate floor tiles which I can then put walls on, and hopefully have the walls snap to the edges/corners of the floor tiles.
Can anyone please point me in the right direction for how to do this? Also, does my desired workflow make sense at all? Any pitfalls in there?
Thanks in advance!
UPDATE: Here's what I have with regard to instantiating prefabs, and while this works (except it's like I'm shooting walls), I would like the wall to snap to the corners/edges of the nearest floor (which has already been instantiated in the same fashion).
[RequireComponent (typeof (CharacterController))]
public class PlayerController : MonoBehaviour {

    // Declare prefabs and cached components here
    GameObject wallPrefab;
    CharacterController cc;

    // Initialise variables before the game starts
    void Awake () {
        wallPrefab = (GameObject)Resources.Load( "WoodWall" );
        cc = GetComponent<CharacterController>();
    }

    // This happens every frame
    void Update () {
        if ( Input.GetButtonDown("Fire1") ) {
            // Instantiate a new wall just in front of, and slightly above, the player
            Instantiate( wallPrefab, cc.transform.position + cc.transform.forward + Vector3.up * 1.0f, wallPrefab.transform.rotation );
        }
    }
}
hmm... well, one solution I can think of is to have the wall raycast downwards to find a floor, then move to a predetermined position relative to that floor (if it found one). Stick this in a script on the wall prefab:
void Start()
{
    var down = transform.TransformDirection(Vector3.down); // "down" might not actually be the down direction of your object, check to make sure
    RaycastHit hit;
    if (Physics.Raycast(transform.position, down, out hit) && hit.collider.gameObject.name == "myFloorName") // Maybe use tags here instead of the name
    {
        Vector3 floorPos = hit.collider.gameObject.transform.position;
        Vector3 floorSize = hit.collider.gameObject.transform.localScale;
        this.transform.position = new Vector3(floorPos.x - floorSize.x / 2, floorPos.y - this.transform.localScale.y / 2, floorPos.z); // These might need fiddling with to get right
    }
}

void Update()
{
}
Vector3.down may not correspond to the down direction for the wall, since this can depend on the 3d model too, so you might need to fiddle with that. The position might also need fiddling with (this assumes that y corresponds to height, which might not be the case), but hopefully this gives you a rough idea of how it can be done. Also, if you don't know what the name of the floor object is, you could probably check by tags, which is probably easier.
If anything else needs clarifying, leave a comment and I'll get back to you
I have an issue trying to get an object (character) to walk around a cube (all sides) within Unity. I've attached an image and video showing what I am trying to achieve; I'm trying to show you visually rather than trying to explain. As the character drops over the edge, it rotates 90 degrees and then stands up, as if gravity has switched. Then the character can jump, walk, etc.
This is an example from someone else who posted a video showing exactly what I'm trying to achieve.
I have looked through the forums and can't find what I'm after. I have tried to attach a diagram but the site won't let me. Any advice would be greatly appreciated!
Regards
Nick
You have a couple of options that I can think of.
One is to trigger the gravity change when the character exits one face of the cube to go to another. To achieve this you would have trigger zones on each edge and face and use a rule like [Bob went from Face A to Edge ANorth -> switch gravity to go in the X direction].
This would work for situations where the gravity switch must affect other objects too, which can be advantageous to your player (walking off the side makes an enemy fall off and die, for example).
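A minimal sketch of what such a trigger zone could look like, assuming the whole scene shares one gravity direction; the class name, tag, and field value are illustrative assumptions, not part of the original answer:

using UnityEngine;

// Hypothetical sketch: a trigger volume placed along a cube edge that flips
// the global gravity direction when the player crosses it.
public class GravitySwitchZone : MonoBehaviour
{
    // Direction gravity should pull once this edge is crossed (illustrative value).
    public Vector3 newGravityDirection = Vector3.left;

    void OnTriggerEnter(Collider other)
    {
        if (other.CompareTag("Player"))
        {
            // Changes gravity for every rigidbody in the scene.
            Physics.gravity = newGravityDirection.normalized * 9.81f;
        }
    }
}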
However, if you want all entities to stick to their relative sides, then we need custom gravity! This is easier than you might think, since gravity is simply a downward acceleration of 9.8 m/s². So turn off the engine's native gravity and create a "personal gravity" component:
private Vector3 surfaceNormal;
private Vector3 personalNormal;
private float personalGravity = 9.8f;
private float normalSwitchRange = 5.0f;
private Rigidbody rb;

public void Start()
{
    rb = GetComponent<Rigidbody>();
    rb.useGravity = false;          // turn off the engine's native gravity for this body
    rb.freezeRotation = true;       // turn off physics rotation
    personalNormal = transform.up;  // start with the "normal" normal
}

public void FixedUpdate()
{
    // Apply force along the character's own normal as personal gravity:
    rb.AddForce(-personalGravity * rb.mass * personalNormal);
}
Rotating your character and changing its normal is then up to you, to suit your situation or game mechanic: you might raycast against the surface you're standing on to detect when to change it, or only change gravity when you hit a powerup or similar. Experiment and see what works. Comment more if you have questions!
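One possible way to handle the rotation part (a sketch under my own assumptions, not from the original answer) is to realign the character's up axis with the new normal, for example in Update():

// Hypothetical sketch: smoothly rotate so the character's up axis matches personalNormal.
Quaternion target = Quaternion.FromToRotation(transform.up, personalNormal) * transform.rotation;
transform.rotation = Quaternion.Slerp(transform.rotation, target, 10f * Time.deltaTime);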
EDIT:
As an addition regarding the video you linked: you can keep a state variable for the jump state and raycast in each axis direction to check which face is nearest in the case of just rolling off.
private bool jumping; // set this from your jump logic

public void Update()
{
    // We don't update the personal normal while jumping
    if (!jumping)
    {
        UpdatePersonalNormal();
    }
}

public void UpdatePersonalNormal()
{
    RaycastHit hit; // hit register
    // List of valid directions to check (all 6 axes)
    Vector3[] directions =
    {
        Vector3.up, Vector3.down,
        Vector3.left, Vector3.right,
        Vector3.forward, Vector3.back
    };

    // For each valid direction...
    foreach (Vector3 direction in directions)
    {
        // ...check if we are near a cube face
        if (Physics.Raycast(transform.position, direction, out hit, normalSwitchRange))
        {
            personalNormal = hit.normal; // set the personal normal...
            return;                      // ...and return, as we are done
        }
    }
}
Please keep in mind that the above was written by hand and not tested, but play with it; this rough starting point should give you a good idea of what to do.
I have set up Unity navigation meshes (four planes), a navigation agent (sphere), and both automatic and manual off-mesh links. It should now jump between meshes. It does jump between meshes, but it does so in straight lines.
In other words, when the agent comes to an edge, instead of actually jumping up (the way the off-mesh link is drawn), it just moves in a straight line, only a bit faster. I tried moving one plane higher than the others, but the sphere still jumped in a straight line.
Is it supposed to be like this? Is it possible to set up the navigation to jump along some curve? Or should I try to implement that myself?
I came across this question and had to dig through the Unity sample. I just hope to make it easier for people by extracting the important bits.
To apply your own animation/transition across a navmesh link, you need to tell Unity that you will handle all offmesh link traversal, then add code that regularly checks to see if the agent is on an offmesh link. Finally, when the transition is complete, you need to tell Unity you've moved the agent, and resume normal navmesh behaviour.
The way you handle link logic is up to you. You can just go in a straight line, have a spinning wormhole, whatever. For the jump, Unity traverses the link using the animation's progress as the lerp argument, which works pretty nicely. (If you're doing looping or more complex animations, this doesn't work so well.)
The important Unity bits are:
_navAgent.autoTraverseOffMeshLink = false; //in Start()
_navAgent.currentOffMeshLinkData; //the link data - this contains start and end points, etc
_navAgent.CompleteOffMeshLink(); //Tell unity we have traversed the link (do this when you've moved the transform to the end point)
_navAgent.Resume(); //Resume normal navmesh behaviour
Now a simple jump sample...
using UnityEngine;
using UnityEngine.AI; // NavMeshAgent lives in UnityEngine.AI in newer Unity versions

[RequireComponent(typeof(NavMeshAgent))]
public class NavMeshAnimator : MonoBehaviour
{
    private NavMeshAgent _navAgent;
    private Animation _animation;
    private bool _traversingLink;
    private OffMeshLinkData _currLink;

    void Start()
    {
        // Cache the nav agent and animation, and tell Unity we will handle link traversal
        _navAgent = GetComponent<NavMeshAgent>();
        _animation = GetComponent<Animation>();
        _navAgent.autoTraverseOffMeshLink = false;
    }

    void Update()
    {
        // Don't do anything if the nav agent is disabled
        if (!_navAgent.enabled) return;

        if (_navAgent.isOnOffMeshLink)
        {
            if (!_traversingLink)
            {
                // This is done only once. The animation's progress will determine link traversal.
                _animation.CrossFade("Jump", 0.1f, PlayMode.StopAll);
                // Cache the current link
                _currLink = _navAgent.currentOffMeshLinkData;
                // Start traversing
                _traversingLink = true;
            }

            // Lerp from link start to link end in time with the animation
            var tlerp = _animation["Jump"].normalizedTime;
            // Straight line from start link to end link
            var newPos = Vector3.Lerp(_currLink.startPos, _currLink.endPos, tlerp);
            // Add the 'hop'
            newPos.y += 2f * Mathf.Sin(Mathf.PI * tlerp);
            // Update the transform position
            transform.position = newPos;

            // When the animation has stopped, we've reached the other side.
            // Don't use looping animations with this control setup.
            if (!_animation.isPlaying)
            {
                // Make sure the player is right on the end link
                transform.position = _currLink.endPos;
                // Internal logic reset
                _traversingLink = false;
                // Tell Unity we have traversed the link
                _navAgent.CompleteOffMeshLink();
                // Resume normal navmesh behaviour (on newer Unity versions, set _navAgent.isStopped = false instead)
                _navAgent.Resume();
            }
        }
        else
        {
            // ...update walk/idle animations appropriately... etc.
        }
    }
}
It's recommended to solve your problem via animation. Just create a Jump animation for your object and play it at the correct time.
The position is relative, so if you increase the Y position in your animation, it will look like the object is jumping.
This is also how the Unity sample works, with the soldiers running around.
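For completeness, a minimal sketch of that approach, assuming a legacy Animation component with a clip named "Jump" that animates the local Y position (the clip name and cached references are illustrative, not taken from the answer):

using UnityEngine;
using UnityEngine.AI;

public class JumpOnLink : MonoBehaviour
{
    NavMeshAgent agent;
    Animation anim;

    void Start()
    {
        agent = GetComponent<NavMeshAgent>();
        anim = GetComponent<Animation>();
    }

    void Update()
    {
        // Play the jump clip whenever the agent starts crossing an off-mesh link.
        if (agent.isOnOffMeshLink && !anim.IsPlaying("Jump"))
        {
            anim.CrossFade("Jump");
        }
    }
}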
Not sure what version of Unity you are using, but you could also try this; I know it works just fine in Unity 4:
string linkType = GetComponent<NavMeshAgent>().currentOffMeshLinkData.linkType.ToString();
if(linkType == "LinkTypeJumpAcross"){
Debug.Log ("Yeah im in the jump already ;)");
}
Also, just some extra bumf for you: it's best to use a proxy and have it follow the NavMeshAgent game object.
Something like:
AIMan = this.transform.position;
AI_Proxy.transform.position = AIMan;
And also be sure to use:
AI_Proxy.animation["ProxyJump"].blendMode = AnimationBlendMode.Additive;
If you are using the built-in Unity animation system!
K, that's my good deed for this week.
Fix the position in Update():
if (agent.isOnOffMeshLink)
{
transform.position = new Vector3(transform.position.x, 0f, transform.position.z);
}