Problem accessing a node from a class [GDScript]

I encountered some difficulties while studying Godot.
I'm trying to add a shotgun class (Shotgun.gd). The class has a function that creates several RayCasts, a function that checks what they hit, and a function that calls these two in order. If I call these functions from the _process() function of the class where they are declared, everything works fine, but if I call them from another script, such as Player.gd, the RayCasts detect nothing.
P.S. Sorry for the sloppy translation, English is not my native language.
Part of the code from the shotgun class (Shotgun.gd):
# Generate the RayCasts
func gen_raycasts(player):
	# Clear the list
	raycasts.clear()
	var temp
	for number in PELLETS_COUNT:
		# Create a RayCast
		temp = RayCast.new()
		# Configure it
		temp.set_cast_to(Vector3(LENGHT, 0, 0))
		temp.set_enabled(true)
		# Add the RayCast to the scene
		player.add_child(temp)
		# Set its position
		temp.transform.basis = Basis(Vector3(-0, 0, 0.1), Vector3(-0, 0, 0.1), Vector3(0.1, 0, -0))
		temp.global_translate(Vector3(0, 1.65, 0))
		temp.set_scale(Vector3(0.1, 0.1, -0.1))
		# Random shotgun spread
		randomize()
		temp.rotate_y(deg2rad((-50 + randi() % 100) / 10))
		temp.rotate_x(deg2rad((-50 + randi() % 100) / 10))
		raycasts.append(temp)
	#print(raycasts)

# Check the RayCasts
func chech_raycasts():
	in_raycast.clear()
	for number in PELLETS_COUNT:
		in_raycast.append(raycasts[number].get_collider())
		#print(raycasts[number].is_enabled())
	print(in_raycast)
[Screenshot: Godot scene tree]
[Screenshot: output when calling from func _process()]
[Screenshot: output when calling from Player.gd]

I'll venture to say that it is probably correct that they collide with nothing.
Enable "Visible Collision Shapes" in the "Debug" menu so that Godot shows colliders at runtime (including enabled RayCasts), and check that they are being positioned correctly.
I can think of a couple possible problems:
You are adding the RayCast before positioning it. This is not always a problem, and it is probably not one in your case. However, it might register a collision when added, before it is moved.
You are scaling the RayCast. This should not be a problem. However, there have been bugs related to applying scaling directly to collision shapes… So I would avoid this, just to be safe.
A suggestion: add a Position node as a child of your Shotgun, marking the origin for the RayCasts. Then you can copy the global transform of the Position to the global transform of the RayCasts - and then apply the random rotation - before adding them as children. This makes it easier to update the position if you need to (e.g. if you change the shotgun model, you can move the Position node accordingly), it avoids typos when writing coordinates in code, and it avoids the pitfalls mentioned above.
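A minimal sketch of that approach, assuming a Position node named Muzzle as a child of the Shotgun (the node name and the 10.0 divisor are illustrative):
# Sketch: spawn the RayCasts at a marker Position node, then randomize spread.
func gen_raycasts(player):
	raycasts.clear()
	var origin = $Muzzle.global_transform
	for number in PELLETS_COUNT:
		var ray = RayCast.new()
		ray.set_cast_to(Vector3(LENGHT, 0, 0))
		ray.set_enabled(true)
		# Express the marker's global transform in the player's local space,
		# so the ray is already in place when it enters the tree.
		ray.transform = player.global_transform.affine_inverse() * origin
		# Random shotgun spread (float division for finer steps).
		ray.rotate_y(deg2rad((-50 + randi() % 100) / 10.0))
		ray.rotate_x(deg2rad((-50 + randi() % 100) / 10.0))
		player.add_child(ray)
		raycasts.append(ray)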
I'll remind you that you are supposed to check is_colliding() before reading get_collider(). I guess you are skipping it for demonstration purposes.
I'll also remind you that the RayCasts update with the physics frame. So, when reading the RayCasts from somewhere other than _physics_process, you are getting what they registered during the last physics frame (i.e. just before _physics_process last ran). You can make Godot update a RayCast on demand by calling force_raycast_update on it.
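For example, a sketch reusing the raycasts array from the question:
# Force each RayCast to refresh immediately, instead of waiting
# for the next physics frame.
for ray in raycasts:
	ray.force_raycast_update()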
If you don't need the RayCasts often, having them update every physics frame might be overkill. In that case, you might be interested in a one-off query: get_world().direct_space_state.intersect_ray(start, end). See PhysicsDirectSpaceState. However, using RayCast should work; using intersect_ray is only an optimization.
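A minimal sketch of that query, assuming Godot 3.x (from and to are global-space points):
# One-off ray query without any RayCast nodes.
# Best called from _physics_process, when the physics space is not locked.
func fire_ray(from, to):
	var space_state = get_world().direct_space_state
	var result = space_state.intersect_ray(from, to)
	# intersect_ray returns an empty Dictionary on a miss.
	if result:
		print(result.collider)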
Unrelated: if you want to guarantee a uniform distribution from the random functions, call randomize once at the start instead of once per iteration. This is probably not noticeable anyway.

Related

In Unity3D, what would "setting" the bounds of a mesh do or achieve?

In a Unity code base, I saw this:
// the game object currently has no mesh attached
MeshFilter mFilter = gameObject.AddComponent<MeshFilter>();
gameObject.AddComponent<MeshRenderer>();
// no problem so far
mFilter.mesh = MakeASmallQuadMesh(0.1f);
// great stuff
mFilter.mesh.bounds = SomeSpecificBounds();
// what ?
The function "MakeASmallQuadMesh" has the usual completely normal code for making a mesh, so
Mesh mesh = new Mesh();
mesh.SetVertices(verts);
mesh.SetIndices(indices);
mesh.SetUVs(0, uvs);
mesh.RecalculateNormals();
mesh.RecalculateBounds();
return mesh;
No worries there. It makes a quad 10cm across.
But what about the line of code
mFilter.mesh.bounds = SomeSpecificBounds();
I was amazed to learn you can set mesh.bounds, I assumed it would be read-only.
What possible "meaning" is there to "setting" the bounds? It would be like this: there's a written measurement in a doctor's office stating that Jane is 6'. You change the record to 5'10". Of course, Jane's height does not change at all. You've just, bizarrely, made the record wrong.
Could it be this: the bounds of a mesh are used by various, indeed many, Unity systems (culling, etc.). Could the pattern be that by "forcing" the bounds like this (the bounds are now "totally wrong" for the object - they're just some value you forced in there), the programmer wanted Unity's systems (say, culling) to use those forced values, nonsensical as they are for the actual object?
Wild guess, maybe there's a pattern (I have never heard of) where you "force" the bounds of object A to be the same as object B - for some reason I cannot guess at?
What could possibly be the pattern / reason here?
I would just assume it's a basic mistake, but assumptions kill.
I was actually curious about this myself, and I happened to have a program that explicitly generated large numbers of custom meshes, so I decided to test a few things.
The first thing I wanted to confirm was that the bounds were in fact set automatically. A rudimentary inspection revealed that this was indeed the case: specifically, a mesh's bounds are automatically recalculated any time you set the mesh.vertices property. This probably explains why the property is a fixed-length array rather than a list. (Fun side note: if you try to assign secondary properties like UV coords or normals to the mesh before assigning vertex positions, Unity gently nags you about mismatched array lengths before promptly exploding. So Don't Do That.)
As for what this actually impacts, I had a hunch: I set the extents of my mesh bounds to zero. Immediately, meshes at the corner of my viewport started getting visually culled. This tells us a few things (a sketch of the experiment follows the list):
Setting bounds explicitly does have an effect.
Unity does actually make use of custom bounds data.
Unity uses mesh bounds to perform frustum culling.
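A minimal sketch of that experiment, assuming the script sits on an object that already has a MeshFilter (class name and method placement are illustrative):
using UnityEngine;

public class BoundsExperiment : MonoBehaviour
{
    void Start()
    {
        // Collapse the mesh's bounds to a point at its current center.
        // Frustum culling now treats the mesh as size-zero, so it
        // disappears as soon as its center leaves the view frustum.
        Mesh mesh = GetComponent<MeshFilter>().mesh;
        mesh.bounds = new Bounds(mesh.bounds.center, Vector3.zero);
    }
}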
According to Unity's manual, the Bounds structure is used in three places: Mesh.bounds, Renderer.bounds, and Collider.bounds. Of those three, Mesh.bounds is the only property that isn't read-only.
As for why anybody would want to set mesh bounds explicitly: it's not impossible that you could perform some clever culling optimization, like looking at a complex mesh through a window or some such, but if I had to guess, whoever wrote that code simply didn't trust Unity to set the mesh bounds accurately, so they set them explicitly.

Best way to move a game object in Unity 3D

I'm going through a few different Unity tutorials and the way a game object is moved around in each is a little different.
What are the pros/cons to each of these methods and which is preferred for a first person RPG?
// Here I use the MovePosition function on this component's rigid body
m_Rigidbody.MovePosition(m_Rigidbody.position + movement);
// Here I apply force to the rigid body and am able to choose a force mode
m_Rigidbody.AddForce(15f * Time.deltaTime, 0, 0, ForceMode.VelocityChange);
// Here I directly change a transform's position value, in this case the cam's
transform.position = playerTransform.position + cameraOffset;
Thanks!!
EDIT:
Something I have noticed is that the applied force seems to mimic wheeled vehicles, while the position changes mimic walking/running.
RigidBodies and Velocities/Physics
The only time I personally have used the Rigidbody system was when implementing my own boids (flocking behaviour), as you need to calculate a few separate vectors and apply them all to the unit.
m_Rigidbody.MovePosition(m_Rigidbody.position + movement);
This calculates a movement vector towards a target for you using the physics system, so the object's velocity and movement can still be affected by drag, angular drag and so on.
This particular function is a wrapper around Rigidbody.AddForce I believe.
Pros:
Good if realistic physical reactions are something you are going for.
Cons:
A bit unwieldy to use if all you are trying to achieve is moving an object from point A to point B.
Sometimes an errant setting set too high somewhere (for example: Mass > 10000000) can cause really screwy bugs in behaviour that can be quite a pain to pin down and mitigate.
Notes: Rigidbodies, when colliding with another Rigidbody, will bounce off each other depending on physics settings.
They are also affected by gravity. Basically, they try to mimic real-life objects, but it can sometimes be difficult to tame them and make them do exactly what you want.
And Rigidbody.AddForce is basically the same as above, except you calculate the vector yourself.
So, for example, to apply a force towards a target you would do:
Vector3 toTarget = target.position - transform.position;
m_Rigidbody.AddForce(toTarget * 15f * Time.deltaTime, ForceMode.VelocityChange);
If you don't plan on having any major physics mechanics in your game, I would suggest moving by interpolating the object's position, as it is far easier to get things to behave how you want - unless, of course, you are going for physical realism!
Interpolating the unit's position
Pros:
Perhaps a little strange to understand at first, but far simpler for making objects move how you want.
Cons:
If you wanted realistic reactions to impacts, you'd have to do a lot of the work yourself. But sometimes this is preferable to using a physics system and then, as I've said earlier, trying to tame it.
You would use this technique in, say, a Pokemon game: you don't stop in Pokemon and wait for Ash to stop skidding, or hit a wall and bounce uncontrollably backwards.
Setting the object's position directly is like teleporting, but you can also use it to move the character smoothly to a position. I suggest looking up 'tweens' for smoothly interpolating between variables (see the sketch after the snippet below).
// Move the character's x by +1 every tick.
transform.position += new Vector3(1f, 0f, 0f);
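A minimal sketch of the interpolation approach using Vector3.MoveTowards (targetPosition and speed are illustrative names):
using UnityEngine;

public class MoveToTarget : MonoBehaviour
{
    [SerializeField] private Vector3 targetPosition; // hypothetical destination
    [SerializeField] private float speed = 2f;       // units per second

    void Update()
    {
        // Step toward the target at a constant, frame-rate-independent speed.
        transform.position = Vector3.MoveTowards(
            transform.position, targetPosition, speed * Time.deltaTime);
    }
}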
m_Rigidbody.MovePosition(m_Rigidbody.position + movement);
From the docs:
If Rigidbody interpolation is enabled on the Rigidbody, calling Rigidbody.MovePosition results in a smooth transition between the two positions in any intermediate frames rendered. This should be used if you want to continuously move a rigidbody in each FixedUpdate.
https://docs.unity3d.com/ScriptReference/Rigidbody.MovePosition.html
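A sketch of that pattern (the speed field and the forward-movement vector are illustrative):
using UnityEngine;

public class ForwardMover : MonoBehaviour
{
    [SerializeField] private float speed = 3f; // illustrative speed
    private Rigidbody m_Rigidbody;

    void Awake() { m_Rigidbody = GetComponent<Rigidbody>(); }

    void FixedUpdate()
    {
        // With interpolation enabled on the Rigidbody, the rendered frames
        // between physics steps are smoothed between the two positions.
        Vector3 movement = transform.forward * speed * Time.fixedDeltaTime;
        m_Rigidbody.MovePosition(m_Rigidbody.position + movement);
    }
}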
m_Rigidbody.AddForce(15f * Time.deltaTime, 0, 0, ForceMode.VelocityChange);
This will make the object accelerate, so it won't travel at a constant velocity (this is because of Newton's second law, force = mass * acceleration). Also, if you have another force going in the opposite direction, this force could get cancelled out and the object won't move at all.
transform.position = playerTransform.position + cameraOffset;
This will teleport the object. No smooth transition, no interaction with any forces already in the game, just an instant change in position.

Setting SCNNode presentation position

I want to set the transform of a given SCNNode (scn below) and its current presentation node to a new value. However, I'm having trouble setting the presentation node. I've tried four ways:
Set presentation node's transform:
scn.position = newVal
scn.presentation.position = newVal
scn.presentation.transform is read-only - it can't be set. (BTW, setting the presentation node's position compiles with no errors; perhaps something to clean up.)
Use resetTransform():
scn.position = newVal
scn.physicsBody.resetTransform()
This does nothing. The docs say it "updates the position and orientation of a body in the physics simulation to match that of the node to which the body is attached", but nothing changes. Not clear why!
Remove the physicsBody while setting:
let pb = scn.physicsBody
scn.physicsBody = nil
scn.position = newVal
scn.physicsBody = pb
This sets the presentation transform to newVal ("yea!"), but Physics does not work ("boo!"). Perhaps one cannot reuse a physics body.
Add a new physics body after setting:
scn.position = newVal
scn.physicsBody = SCNPhysicsBody(type:.dynamic, shape:nil)
but alas, scn.presentation.position doesn't have newVal.
Thoughts?
I've been noticing this as well in my SceneKit project. The direct answer to your question is: "The presentation node automatically updates its transform to match the scene node's transform. If it doesn't seem to be updating, make sure the scene node's isPaused is set to false." However, it's likely that your scene nodes are un-paused and/or you're not using animations at all, and yet this issue occurs.
I started noticing this issue happening intermittently on iOS 11, when everything in my project had worked great on iOS 10. I haven't touched SCNAnimation or anything like that - I have my own animation system, so I just update my nodes' .position in every renderer(_:updateAtTime:) call.
So AFAICT, the issue here isn't anything you're doing - the issue is that SceneKit has always been buggy and poorly written (some parts of the API still haven't been fully updated for Swift), and will likely remain so, because the SceneKit team seems to have completely moved on to ARKit. So the solution to your problem seems to be "just try a bunch of stuff: set .position more often than necessary, or at a different time in the update loop, or find a really hacky work-around for the issues in SceneKit".
For me, personally, I'm slowly working towards abandoning SceneKit in my project— I've mostly switched to custom Metal shaders, and then I'll do my own scene render loop, and eventually I'll replace the scene graph and SCNNode usage leaving me with zero reliance on the buggy mess that is SceneKit.
I wish I had better news for you, but I've been dealing with the weirdness of SceneKit for 2 years now.  It's unfinished garbage.
Update: I ended up "fixing" my misbehaving presentation nodes by using a dirty hack that adds a tiny amount of "wiggle" to the node's position each frame:
public func updateNode()
{
    // _locationModel is this project's own source of truth for the node's
    // position; '+' here relies on a custom SCNVector3 operator.
    let wigglePositionValue = SCNVector3(
        x: Float.random(in: -0.00001...0.0001),
        y: Float.random(in: -0.00001...0.0001),
        z: Float.random(in: -0.00001...0.0001)
    )
    self.scnNode.position = _locationModel.value + wigglePositionValue
}
Side-note: as expected, "jiggling" node positions like this also prevents SceneKit from doing some of its batching/rendering optimizations, and (in my experience) incurs a performance hit. It's a horrible hacky work-around, only intended for worst-case scenarios ("the client says we have to go live with the game this week or we don't get paid, and now this stupid SceneKit bug?!").
So yeah… the real solution here is “don't use SceneKit”.
I have the same issue using ARKit. Because the physicsBody adjusts itself according to the presentation node instead of the node itself, and because the presentation node doesn't update immediately when you change the node's transform, the physicsBody you get might be weird.
There is one solution I found recently:
To change the presentation node's transform, use SCNAction. For example, if you want to set the position, use node.runAction(SCNAction.move(by: SCNVector3, duration: 0)) instead of simply node.position = SCNVector3. This way, the presentation node updates immediately and you get the right physics.
In addition, this approach only works on the node itself (it does not apply to its children). So if you run the action on the node's child, you still won't get the right presentation node and physics. To change a child node's transform and presentation transform, use the node.position = SCNVector3 approach.
To sum up: when changing the node's transform, run an action; when changing the node's children's transforms, set the transform directly (a sketch follows).
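A minimal sketch of that rule of thumb (the function, node names, and vectors are illustrative; note that move(by:) takes a relative offset):
import SceneKit

// A sketch: "action for the node itself, direct set for its children."
func moveImmediately(node: SCNNode, child: SCNNode) {
    // Moving the node itself: a zero-duration move action makes the
    // presentation node (and hence the physics body) update immediately.
    node.runAction(SCNAction.move(by: SCNVector3(1, 0, 0), duration: 0))

    // Moving a child node: set the transform directly instead.
    child.position = SCNVector3(0, 2, 0)
}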

What's the difference between these two SpriteKit functions?

I don't know what the difference is between these two functions.
First:
coin.run(SKAction.moveTo(y: -146.115, duration: 0))
Second:
coin.position.y = -146.115
The SKAction will not be processed until the next frame, directly after update(). If you call .run after didEvaluateActions(), your position will not be updated yet, and you may encounter bugs because of that.
The second line of code takes effect immediately, regardless of where you are in the SpriteKit frame loop.
For example, if you are using physics, call .run(.move(...)) on something in didBegin(_:), and then expect that sprite to have moved by didEnd(_:), you will have problems. In that situation, you want to adjust .position manually instead of using an action.
Secondly, the .run command is also less performant, because it requires the initialization of an SKAction object, which is somewhere between 20-30% slower than just adjusting the position manually.
Granted, that difference doesn't add up to much, but in complicated scenes it could be the difference between getting everything done in 16ms (60fps) or not.
Third, as others have mentioned, there is the duration parameter, which allows you to animate the movement over a period of time - say, 2 seconds, or however long you want.
SKAction.moveTo(y:duration:) has a duration parameter because it is an animated version of changing a node's position over the specified time interval. On the other hand, setting the position directly doesn't animate the movement.
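For example, a sketch of the animated variant (reusing coin from the question; two seconds instead of zero):
// Glide the coin to y = -146.115 over two seconds,
// instead of snapping there instantly.
coin.run(SKAction.moveTo(y: -146.115, duration: 2.0))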
In the first line, you are using your coin class object and calling the run function through it:
coin.run(SKAction.moveTo(y: -146.115, duration: 0))
In the second line, the coin object accesses its position.y property and assigns it a float value:
coin.position.y = -146.115
Hope you got it!

Unity3D - check for collision on non-moving objects

I have a sphere built from multiple objects. What I want to do is: when I touch/click an object, that object should find all adjacent objects. But because none of them are moving, no collision detection kicks in.
I can't find a way to detect these adjacent objects even though the colliders do overlap each other, as I can see in the scene view. I tried all the possibilities, but none of them work, because no objects are moving.
Is there a way to run collision detection manually, or some way to make Unity3D do it automatically?
You could keep a list of all those objects; when your event happens, you can send messages to all of them to do what you want them to do.
Let's assume you want your sphere to break into little pieces. You send a force message to the sphere. Then you use Newton's laws of motion to find out how much velocity each piece gets. Remember velocity is a vector, thus it has direction.
This is how I would do it and still keep the right amount of control over what happens in my game/simulation. Remember F = ma.
You could use a raycast (see RaycastHit: http://docs.unity3d.com/Documentation/ScriptReference/RaycastHit.html) for your collision check; this also works on non-moving objects, but it costs more performance (see the sketch below).
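A minimal sketch of that approach, casting from the camera through the mouse position (class name and logging are illustrative):
using UnityEngine;

public class ClickSelector : MonoBehaviour
{
    void Update()
    {
        if (Input.GetMouseButtonDown(0))
        {
            // Cast a ray from the camera through the click point;
            // static (non-moving) colliders are hit just like moving ones.
            Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
            if (Physics.Raycast(ray, out RaycastHit hit))
                Debug.Log("Clicked: " + hit.collider.name);
        }
    }
}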
You can add a Rigidbody to every object; when you touch one of them, apply a force to it, and it will then move and trigger the events of the adjacent objects.
Since you do not want to actually move the object you touch, you can cancel the movement in the OnCollisionEnter or OnTriggerEnter event handler.
I managed to work around this by checking the distance from the selected object to all other objects that are part of the sphere. If the distance is below a certain value, it is an adjacent object; a sketch follows.
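A minimal sketch of that workaround (the "SpherePart" tag and the threshold parameter are illustrative):
using System.Collections.Generic;
using UnityEngine;

public class AdjacencyFinder : MonoBehaviour
{
    // Collect every piece of the sphere whose center lies within
    // 'threshold' of the selected piece.
    public static List<GameObject> FindAdjacent(GameObject selected, float threshold)
    {
        var adjacent = new List<GameObject>();
        foreach (var part in GameObject.FindGameObjectsWithTag("SpherePart"))
        {
            if (part == selected) continue;
            if (Vector3.Distance(selected.transform.position,
                                 part.transform.position) < threshold)
                adjacent.Add(part);
        }
        return adjacent;
    }
}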
Although this is certainly not foolproof, it has worked without problems so far.
I am sorry I was not clear enough. Thanks for all the advice nonetheless.