Getting the skybox to change on collision in Unity with UnityScript?

I'm quite new to UnityScript, as my area of interest is Python, but a friend and I are planning to create a small indie game and I need the skybox to change on collision in Unity. I would prefer this to be done in JavaScript if at all possible. Please take a look at the script below and let me know what's wrong with it, because when it runs it makes no difference to the scene.
#pragma strict

// Material to swap in as the new skybox, assigned in the Inspector.
var mat : Material;

function OnTriggerEnter(trigger : Collider) {
    RenderSettings.skybox = mat;
}
That is the entire script. Thank you for any help given.

Make sure your colliding objects are set up correctly: typically one must have a static collider (e.g. a floor or wall) and the other a collider plus a Rigidbody to hit it (e.g. the player or a car). Also check your physics settings to make sure the layers the objects are on can hit each other.
Make sure you have the script on the right object (collider or collidee?). Try putting a script on both objects with a print in each.
Check here for more info
http://docs.unity3d.com/Documentation/Manual/Physics.html
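As a quick version of the "print in each" diagnostic suggested above, something like this C# sketch (the class name is just a placeholder) on both objects will show which callbacks actually fire:

using UnityEngine;

// Diagnostic helper: attach to both objects involved in the collision.
// OnTriggerEnter fires only when one of the colliders involved has "Is Trigger" checked,
// and either way at least one of the two objects needs a Rigidbody.
public class CollisionProbe : MonoBehaviour
{
    void OnTriggerEnter(Collider other)
    {
        Debug.Log(name + ": OnTriggerEnter with " + other.name);
    }

    void OnCollisionEnter(Collision collision)
    {
        Debug.Log(name + ": OnCollisionEnter with " + collision.collider.name);
    }
}

If neither message appears in the Console, the problem is the collider/Rigidbody setup or the layer matrix rather than the skybox line itself.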

What exactly do you mean by changing on collision? Skyboxes cannot be collided with. If you mean objects colliding with something other than the skybox, make sure that at least one of the colliders has "Is Trigger" checked in the Inspector.

Related

Child Components with rigidbody

Context
I have two elements in my project: a player (just a cube) and some eyewear.
When the glasses aren't attached to the player, I want them to have the properties of a Rigidbody. But when the eyewear is attached to the player, I want it to be a static object, so the player can handle the collisions and physics.
I have tried:
Calling DestroyElement(rigidbody) when the player picks up the eyewear; when he drops it, I recreate the Rigidbody with AddComponent.
It worked nicely, but in the future other elements will be attached and they will not share the same Rigidbody properties. I thought maybe I could save the Rigidbody instance, so when the player drops the glasses I could assign it back to them. I couldn't:
AddComponent doesn't accept arguments.
Then I tried setting kinematic mode while my player wears the eyewear. It didn't go well: my player can't jump anymore and sometimes he glitches into the floor.
How can I resolve this?
GameObject.AddComponent does take an argument, or (preferably) a generic type argument:
go.AddComponent<Rigidbody>();
This also works, but the generic version is preferred, since you lose compile-time type safety with the Type overload:
go.AddComponent(typeof(Rigidbody));
However, a Rigidbody is not really meant to be added and removed repeatedly, and in your case I would say that kinematic mode is the way to go... but I can't tell why you're experiencing weird results with it.
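For completeness, a minimal sketch of the kinematic route (the method names and the detectCollisions detail are my own assumptions, not from the question): make the eyewear's Rigidbody kinematic and parent it to the player on pickup, then undo that on drop.

using UnityEngine;

// Hypothetical pickup/drop helper on the eyewear object.
public class Wearable : MonoBehaviour
{
    Rigidbody rb;

    void Awake()
    {
        rb = GetComponent<Rigidbody>();
    }

    public void PickUp(Transform player)
    {
        rb.isKinematic = true;          // stop simulating the eyewear while it is worn
        rb.detectCollisions = false;    // so its collider can't push against the player's collider
        transform.SetParent(player);    // follow the player around
    }

    public void Drop()
    {
        transform.SetParent(null);
        rb.detectCollisions = true;
        rb.isKinematic = false;         // hand it back to the physics simulation
    }
}

Turning off detectCollisions while the item is worn is one way to keep the worn collider from interfering with the player's own jumping and ground contact.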
Thanks to R Astra I checked the eyewear collisions again and found the problem: I had enabled Convex on the mesh collider.
The collider was causing trouble because the back meshes ended up inside the player.
So I quickly copied that eyewear and generated a new mesh. It worked!
Thank you very much!

Using Unity physics with SteamVR

I want to make a plank game in VR with Unity, so that when the player walks off the plank, he falls. Right now the only way I can make it work is by using VRTK, which is another physics system and makes a lot of things complicated.
I've put a Rigidbody on the CameraRig and unchecked "Is Kinematic". The player falls, but the colliders on other objects no longer work...
Is there a way to use Unity's physics with SteamVR and without VRTK?
Thank you!
Firstly, I would read up on Rigidbodies and Colliders/Trigger Colliders; here's a link.
That page also has a useful table of how the different collider and Rigidbody combinations interact.
You will need that to understand why the player is falling. Is the CameraRig actually colliding with the ground? Is its collider a Trigger Collider (which fires a callback but doesn't cause any physical collision)? There are many possibilities.
I wrote a script where you can drag in two objects and see whether they collide. You could use that if it helps.
The tricky part in VR with the Vive is that determining where someone walks can be difficult, as we are only tracking their head and their hands. If you have a Vive Tracker available and it fits your use case, you could use it to track someone's foot.
What I have done in the past is take the Camera (eye) GameObject inside the CameraRig and use its transform.position.x and transform.position.z values to determine whether it has gone outside the boundaries of the object the user is standing on.
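A rough sketch of that head-position check (the field names, the plank collider reference, and re-enabling gravity via the Rigidbody are my assumptions, not part of the answer above):

using UnityEngine;

// Drag the Camera (eye) transform from the [CameraRig] into "head",
// the plank's collider into "plank", and the CameraRig's Rigidbody into "playerBody".
public class PlankFallCheck : MonoBehaviour
{
    public Transform head;          // tracked HMD position
    public Collider plank;          // the walkable surface
    public Rigidbody playerBody;    // Rigidbody on the CameraRig, kept kinematic while on the plank

    void Update()
    {
        Bounds b = plank.bounds;
        Vector3 p = head.position;
        // Compare only the horizontal (x/z) head position against the plank's footprint.
        bool onPlank = p.x >= b.min.x && p.x <= b.max.x &&
                       p.z >= b.min.z && p.z <= b.max.z;
        if (!onPlank && playerBody.isKinematic)
        {
            playerBody.isKinematic = false;   // let gravity take the whole rig down
        }
    }
}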
Hope this helps,
Liam

How do I stop two GameObjects from going through each other in Unity3D?

I have tried everything from unchecking the collision matrix checkbox to removing the sphere collider. The problem I am having is that the GameObjects won't stop going through each other. I don't know what is going on.
First of all, don't remove the colliders, and make sure they are not set as Is Trigger (that makes the collider penetrable). Also, you are using a sphere collider for quite a complex mesh, so I'd recommend a MeshCollider, which generates the collider from the mesh itself.
Secondly, recheck how you move the objects. If given too much speed, an object can punch through another collider and get stuck inside it (imagine breaking through a barrier and then not having enough speed inside it to break back out). That might happen if you use AddForce() rather than setting Rigidbody.velocity; for fast-moving objects, setting the Rigidbody's Collision Detection mode to Continuous also helps prevent tunnelling.
Thirdly, what controls these GameObjects, the player or a NavMeshAgent? Because, I think, if they are controlled by AI (NavMeshAgent), they should avoid each other in their paths and shouldn't collide. However, I might be wrong on this one.
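If the objects are moved from script, here is a sketch of the usual pattern (the class and field names are illustrative): drive the Rigidbody from FixedUpdate through the physics engine rather than writing to transform.position, and switch fast movers to continuous collision detection.

using UnityEngine;

// Illustrative mover: assumes a non-kinematic Rigidbody and a collider on the object.
[RequireComponent(typeof(Rigidbody))]
public class PhysicsMover : MonoBehaviour
{
    public float speed = 5f;
    Rigidbody rb;

    void Awake()
    {
        rb = GetComponent<Rigidbody>();
        // Continuous collision detection stops fast bodies from tunnelling through thin colliders.
        rb.collisionDetectionMode = CollisionDetectionMode.Continuous;
    }

    void FixedUpdate()
    {
        Vector3 input = new Vector3(Input.GetAxis("Horizontal"), 0f, Input.GetAxis("Vertical"));
        // Setting the velocity lets the physics engine resolve collisions,
        // instead of teleporting the transform through other colliders.
        rb.velocity = new Vector3(input.x * speed, rb.velocity.y, input.z * speed);
    }
}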

Determine on which collider the collision has taken place

I have a GameObject with two sphere colliders attached. One has Is Trigger checked and the other does not.
I want to execute a different set of statements when a collision occurs on each collider. For example, I want to play a different sound for each of the two collisions. Is there any way to achieve this?
I tried OnTriggerEnter(), but unfortunately it is called for both types of collisions, since the other colliding objects have trigger colliders too. I figured that if we could somehow find out which collider of the GameObject the collision happened on, we would be able to achieve it.
So is there any way to get through with this?
I have been using Unity for years and faced tons of problems like this, related to bad software design. I hope the Unity team will handle physics more carefully in future releases.
In the meantime, you can use Physics.OverlapSphere and Physics.CheckSphere to manually check whether something overlaps your object. Remove the collider that you are using as a trigger and use these methods instead of OnTriggerEnter. This is a bit hacky, but it will do the job, I think.
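A sketch of that OverlapSphere idea (the radius, layer mask, and class name are assumptions you would tune for your object):

using UnityEngine;

// Hypothetical replacement for the trigger collider: poll the area around the object
// every physics step and react to whatever overlaps it.
public class TriggerZone : MonoBehaviour
{
    public float radius = 0.5f;
    public LayerMask mask = ~0;   // check against everything by default

    void FixedUpdate()
    {
        Collider[] hits = Physics.OverlapSphere(transform.position, radius, mask);
        foreach (Collider hit in hits)
        {
            if (hit.transform.IsChildOf(transform)) continue;   // ignore our own colliders
            Debug.Log("Overlap with " + hit.name);
            // ...play the "trigger" sound here, separately from the OnCollisionEnter sound
        }
    }
}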
Make your colliders visible in the Inspector (make them public or add [SerializeField] before them) and then tie the colliders into your code that way.
Then, in your collision callbacks, compare the colliding objects against the variables holding your colliders to keep them separate.
To detect the source trigger in OnTriggerEnter, you must use a workaround: multiple child GameObjects, each hosting one trigger and a satellite script.
Allow me to link to my answer on the gamedev Stack Exchange:
https://gamedev.stackexchange.com/a/185095/54452
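Roughly what that workaround looks like (the class and method names here are illustrative, not taken from the linked answer): each trigger collider lives on its own child GameObject with a small satellite script that forwards the event, plus an identifier, to a script on the parent.

using UnityEngine;

// Satellite script: one per child GameObject that holds one of the trigger colliders.
public class TriggerRelay : MonoBehaviour
{
    public string zoneName;        // e.g. "InnerSphere" or "OuterSphere"
    public ZoneSounds owner;       // the parent script that decides which sound to play

    void OnTriggerEnter(Collider other)
    {
        owner.ZoneEntered(zoneName, other);   // report which of the parent's colliders was entered
    }
}

// On the parent GameObject (in its own file in a real project):
public class ZoneSounds : MonoBehaviour
{
    public void ZoneEntered(string zoneName, Collider other)
    {
        Debug.Log(other.name + " entered " + zoneName);
        // choose and play the appropriate AudioClip based on zoneName
    }
}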

Can I move dynamic physics bodies using SKAction when only contact detection is needed?

I am looking at a tutorial where things are defined like this:
planes are sprites with dynamic physics bodies
plane movement is done with actions, by following a path
there is contact detection between the bullet and the plane
the bullet is a sprite and its physics body is set to static (which is a little unusual in my opinion)
Here is the link to the tutorial for more information.
Questions:
When we use actions to move physics bodies, does it matter how we set the body's dynamic property? The bullet is static, yet it still moves without any problem.
In a situation like this, where we need only contact detection (not collision detection) and we can't move the sprites (enemies) by applying forces or impulses, what is a good approach? Is this one correct?
I think this is a nice way to do it, but I would like to be fully aware of what is really happening when we use this method, and whether there are any drawbacks or possible problems.
There are posts on SO suggesting that we shouldn't use actions to move dynamic physics bodies. I am aware that this approach won't work in every case. For example, it would not work for a moving platform with another object on it, because there would be no correct physics simulation between the body on the platform and the platform moved by the action. But in cases like this tutorial, where we only need contact detection for an object that can only be moved by actions (along a path), I suppose it's not a problem?
Static means that the body isn't affected by physics. That doesn't mean it can't be manually repositioned or moved. So if something is set to static, it participates in the physics simulation, but it isn't affected by it. Think of a plane hitting a mountain: the plane is dynamic, while the mountain is just going to sit there even though it's participating in the physics. But you could still move that mountain around manually, using its position or an action.
I'm not totally sure I understand your question, but I'll give it a shot.
You can move physics bodies manually (using the position property or actions), but you need to ask yourself why you're doing that. You typically don't want to, because you're bypassing the physics simulation. If you're forcing it to move around, what's the point of using a physics body in the first place... right?
But if your physics body is something like a power-up that is totally static, and you just want it to move around in a circle using an action, then that's fine. You probably just want contact detection for the bullet, power-up, etc. without actually moving them around using physics. Nothing wrong with that.