I am working on a game which allows some buildings to be placed and
procedurally rotated.
I found the following rotation code which I am trying to incorporate:
function TransformModel(objects, center, new, recurse)
    for _, object in pairs(objects) do
        if object:IsA("BasePart") then
            object.CFrame = new:toWorldSpace(center:toObjectSpace(object.CFrame))
        end
        if recurse then
            TransformModel(object:GetChildren(), center, new, true)
        end
    end
end
which is invoked as follows in my building placement script:
local center = model:GetModelCFrame()
TransformModel( model:GetChildren(), center, center * CFrame.Angles(0,math.rad(45),0), true )
So the code works, somewhat... The model does sort of rotate, but only after it jumps into the air and lands haphazardly, finally settling in the rotated position. And it's far from an exact 45 degree rotation after the physics effect of jumping into the air and settling back down.
I am not quite sure but I suspect there is a better way to accomplish this where the model smoothly rotates. Any help would be greatly appreciated.
Thanks,
Andy
GetModelCFrame is a deprecated method. It is recommended to use PrimaryPart together with GetPrimaryPartCFrame and SetPrimaryPartCFrame instead. Setting the primary part designates a specific part in the model to serve as a base that you can move around via its CFrame. What's nice about this approach is that if you transform the PrimaryPart of a Model with these methods, all of the other parts in the Model will also transform to maintain their offset from the PrimaryPart.
For placing buildings and props, it is usually desired to have this part be a flat part at the bottom that represents the footprint of the building.
For example, consider the following model:
In this case the best part to make the primary part would be the one on the bottom. If your model doesn't have such a part incorporated in its geometry, then you can make an invisible part with CanCollide set to false to serve as the base. In the following example, there is a tree model that doesn't have a wide base, but we want to make sure the footprint is large enough that it won't overlap with other nearby objects:
In this case the yellow part should be made fully transparent at runtime.
So in terms of the code to actually orient a model in the way you want, all you have to do is call SetPrimaryPartCFrame with the new CFrame. The TransformModel function simply becomes:
local function TransformModel(model, newCFrame)
    model:SetPrimaryPartCFrame(newCFrame)
end
When you want to call this function to, say, rotate a model 45 degrees:
local model = game.Workspace.Model
model.PrimaryPart = model.Base
local rotatedCFrame = model:GetPrimaryPartCFrame() * CFrame.Angles(0, math.rad(45), 0)
TransformModel(model, rotatedCFrame)
I believe I found an answer that works for my problem, though I am not sure if this is the best way to fix it. I created an extra part to act as a base which is the exact width and height of the model, the model then sits on top of the new base part and the base is anchored. That seems to have fixed it - is there a better way?
Now, GetPrimaryPartCFrame and SetPrimaryPartCFrame are not recommended for new projects. Instead, it is best to use PivotTo() and GetPivot().
(I understand that this topic is from quite a while ago, so I'm saying this for people who stumble upon it.)
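For example, a minimal sketch of the same 45-degree rotation using the pivot APIs (assuming a model named Model in the Workspace, as above; no PrimaryPart is required):
local model = game.Workspace.Model
-- GetPivot returns the model's pivot CFrame; PivotTo moves the whole model there
local rotatedCFrame = model:GetPivot() * CFrame.Angles(0, math.rad(45), 0)
model:PivotTo(rotatedCFrame)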
In a Unity code base, I saw this:
// the game object currently has no mesh attached
MeshFilter mFilter = gameObject.AddComponent<MeshFilter>();
gameObject.AddComponent<MeshRenderer>();
// no problem so far
mFilter.mesh = MakeASmallQuadMesh(0.1f);
// great stuff
mFilter.mesh.bounds = SomeSpecificBounds();
// what ?
The function "MakeASmallQuadMesh" has the usual completely normal code for making a mesh, so
Mesh mesh = new Mesh();
mesh.SetVertices(verts);
mesh.SetIndices(indices, MeshTopology.Triangles, 0);
mesh.SetUVs(0, uvs);
mesh.RecalculateNormals();
mesh.RecalculateBounds();
return mesh;
No worries there. It makes a quad 10cm across.
But what about the line of code
mFilter.mesh.bounds = SomeSpecificBounds();
I was amazed to learn you can set mesh.bounds, I assumed it would be read-only.
What possible "meaning" is there to "setting" the bounds? It would be like: there's a written measurement in a doctor's office stating that Jane is 6'. You change the record to 5'10". Of course, Jane's height does not change at all. You've just, bizarrely made the record wrong.
Could it be that: so, the bounds of a mesh are used by various, indeed many, Unity systems. (Culling, etc etc.) Could the pattern be that by "forcing" the bounds like this (the bounds are now "totally wrong" for the object - they're just some value you forced in there) the programmer wanted (for some reason) Unity's system (say, culling) to use those forced, nonsensical (to the actual object) values?
Wild guess, maybe there's a pattern (I have never heard of) where you "force" the bounds of object A to be the same as object B - for some reason I cannot guess at?
What could possibly be the pattern / reason here?
I would just assume it's a basic mistake, but assumptions kill.
I was actually curious about this myself, and I happened to have a program that explicitly generated large numbers of custom meshes, so I decided to test a few things.
The first thing I wanted to confirm was that the bounds were in fact set automatically. A rudimentary inspection revealed that this was indeed the case: specifically, a mesh's bounds are automatically recalculated any time you set the mesh.vertices property. This probably explains why the property is a fixed-length array rather than a list. (Fun side note: if you try to assign secondary properties like UV coords or normals to the mesh before assigning vertex positions, Unity gently nags you about mismatched array lengths before promptly exploding. So Don't Do That.)
As for what this actually impacts, I had a hunch: I set the extents of my mesh bounds to 0. Immediately, meshes at the corner of my viewport started getting visually culled. This tells us a few things:
Setting bounds explicitly does have an effect.
Unity does actually make use of custom bounds data.
Unity uses mesh bounds to perform frustum culling.
According to Unity's manual, there are three cases where the Bounds class is used: Mesh.bounds, Renderer.bounds, and Collider.bounds. Of those three, Mesh.bounds is the only property that isn't read only.
As for the question of why anybody would want to set mesh bounds explicitly, it's not impossible that you could perform some clever culling optimization, like looking at a complex mesh through a window or some such, but if I had to guess, whoever wrote that code didn't trust Unity to set mesh bounds accurately, so they set them explicitly.
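To make the bounds-forcing concrete, here is a minimal hedged sketch. The quad values mirror the question; the oversized-bounds variant at the end is one plausible reason someone would force bounds (meshes deformed where Unity can't see it), not necessarily what the original author intended:
using System.Collections.Generic;
using UnityEngine;

public class BoundsOverrideExample : MonoBehaviour
{
    void Start()
    {
        MeshFilter mFilter = gameObject.AddComponent<MeshFilter>();
        gameObject.AddComponent<MeshRenderer>();

        // Build a quad roughly 10cm across, as in the question.
        Mesh mesh = new Mesh();
        mesh.SetVertices(new List<Vector3> {
            new Vector3(-0.05f, -0.05f, 0), new Vector3(0.05f, -0.05f, 0),
            new Vector3(-0.05f, 0.05f, 0), new Vector3(0.05f, 0.05f, 0)
        });
        mesh.SetIndices(new[] { 0, 2, 1, 2, 3, 1 }, MeshTopology.Triangles, 0);
        mesh.RecalculateNormals();
        mesh.RecalculateBounds(); // accurate 10cm bounds at this point
        mFilter.mesh = mesh;

        // Zero-size bounds: the renderer gets frustum-culled almost
        // immediately, reproducing the experiment described above.
        mFilter.mesh.bounds = new Bounds(Vector3.zero, Vector3.zero);

        // Oversized bounds: the renderer is effectively never frustum-culled,
        // a common workaround when vertices are displaced where Unity can't
        // see it (e.g., in a vertex shader).
        // mFilter.mesh.bounds = new Bounds(Vector3.zero, Vector3.one * 100f);
    }
}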
I'm trying to make a simplified version of Crazy Taxi. As a first step I need to make an infinite ground. I've searched online but couldn't find any examples.
Where can I find an example of how to do this?
https://www.youtube.com/watch?v=rhTPxrJICVg&list=PLLH3mUGkfFCXQcNBz_FZDpqJfQlupTznd
N3K makes an infinite runner in Subway Surfers style, meaning ground coming from the back and going towards the camera.
The above link is to his tutorial series.
This is a very broad question, so I'll give a very simplified answer to get you going.
To create an endless road you need some sort of procedural function that generates corners for your track. If you don't need to backtrack, you can cook something up yourself, like "in X distance, turn Y degrees to the right". If you do need to backtrack, you need something like Perlin/simplex noise, which always generates the same value given the same input values. You could use total distance traveled to get the curve of the road.
You simply keep generating the world on the fly and unload pieces of the world you don't need any more. If the player can alter the world, like destroying street furniture or leaving skid marks, you need to implement a chunk system. When you backtrack and your procedural function regenerates a certain part of the world, you can restore the permanent changes the player made in that specific part by saving to and loading from its chunk. Much like Minecraft does it, actually.
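As a rough sketch of that generate-ahead/unload-behind loop in Unity C# (all names and numbers here are illustrative, not from the tutorial above): segments spawn ahead of the player with a heading derived from Perlin noise over total distance, so the same distance always produces the same turn.
using System.Collections.Generic;
using UnityEngine;

public class EndlessRoad : MonoBehaviour
{
    public GameObject segmentPrefab;   // one straight piece of road, assigned in the Inspector
    public Transform player;           // assigned in the Inspector
    public float segmentLength = 20f;
    public float maxTurnDegrees = 15f; // max heading change per segment
    public int segmentsAhead = 10;

    readonly Queue<GameObject> activeSegments = new Queue<GameObject>();
    Vector3 nextPosition = Vector3.zero;
    float heading;       // current heading in degrees
    float totalDistance; // drives the noise, so the track is deterministic

    void Update()
    {
        // Keep spawning while the frontier is within "view distance" of the player.
        while (Vector3.Distance(player.position, nextPosition) < segmentsAhead * segmentLength)
            SpawnSegment();

        // Unload the oldest piece once enough road exists ahead of it.
        if (activeSegments.Count > segmentsAhead * 2)
            Destroy(activeSegments.Dequeue());
    }

    void SpawnSegment()
    {
        // Perlin noise over total distance: same distance, same turn,
        // which is what makes backtracking reproducible.
        float noise = Mathf.PerlinNoise(totalDistance * 0.01f, 0f); // 0..1
        heading += (noise - 0.5f) * 2f * maxTurnDegrees;

        Quaternion rotation = Quaternion.Euler(0f, heading, 0f);
        activeSegments.Enqueue(Instantiate(segmentPrefab, nextPosition, rotation));

        nextPosition += rotation * Vector3.forward * segmentLength;
        totalDistance += segmentLength;
    }
}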
I am developing (or at least trying to develop)
a decently big real-time tactics game (something similar to an RTS) using SpriteKit.
I am using GameplayKit for pathfinding.
Initially I used SKActions to move the sprites along the path, but soon enough I realized that was a big mistake.
Then I tried to implement it with GKAgents (this is my current state)
I feel that GKAgents are very raw and immature; they also seem to follow some strange Newton's first law that makes them move forever (I can't think of any scenario where that would be useful - maybe for presentations at WWDC).
I also see that they have some angular speed used to perform rotations,
which I don't need at all and can't really find out how to disable...
Likewise, a GKBehavior given some GKGoals seems to do weird things...
Setting a behavior to avoid obstacles makes my units wobble around the obstacles...
Setting a behavior with a follow-path goal completely ignores everything unless maxPredictionTime is low enough...
I am not even willing to describe what happens when I combine the two.
I feel broken...
I feel like I have 2 options now:
1) Struggle more with those agents, trying to make them behave as I wish.
2) Roll all the movement on my own with the help of GKObstacleGraph and pathfinding (which is buggy as well, I have to say: at some points it will generate the most awful path, like "go touch that obstacle, then reverse and touch that one, then go to the actual point", when the destination could have been reached with a straight line from the beginning).
Question is:
What would be the best out of those options ?
One of the best ways (in SpriteKit/GameplayKit) to get the kind of behavior you're after is to recognize that path planning and path following need not be the same operation. GameplayKit provides tools for both — GKObstacleGraph is good for planning and GKAgent is good for following a planned path — and they work best when you combine the strengths of each.
(It can be a bit misleading that GKAgent provides obstacle avoidance; don't think of this in the same way as finding a route around obstacles, more like reacting to sudden obstacles in your way.)
To put it another way, GKObstacleGraph and GKAgent are like the difference between navigating with a map and safely driving a car. The former is where you decide to take CA-85 and US-101 instead of I-280. (And maybe reevaluate your decision once in awhile — say, to pick a different set of roads around a traffic jam.) The latter is where you, continuously moment-to-moment, change lanes, avoid potholes, pass slower vehicles, slow down for heavy traffic, etc.
In Apple's DemoBots sample code, they break this out into two steps:
Use GKObstacleGraph to do high level path planning. That is, when the bad guys are "here" and the hero is "way over there", and there are some walls in between, select a series of waypoints that roughly approximates a route from here to there.
Use GKAgent behaviors to make the character roughly follow that path while also reacting to other factors (like making the bad guys not step on each other and giving them vaguely realistic movement curves instead of simply following the lines between waypoints).
You can find most of the relevant stuff behind this in TaskBotBehavior.swift in that sample code — start from addGoalsToFollowPath and look at both the places that gets called and the calls it makes.
As for the "moving forever" and "angular speed" issues...
The agent simulation is a weird mix of a motivation analogy (i.e. the agent does what's needed to move it toward where it "wants" within constraints) and a physics system (i.e. those movements are modeled like forces/impulses). If you take away an agent's goals, it doesn't know that it needs to stop — instead, you need to give it a goal of stopping. (That is, a movement speed goal of zero.) There might be a better model than what Apple's chosen here — file bugs if you have suggestions for design improvements.
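For instance, a minimal sketch of giving an agent a "stop" goal (the property values here are arbitrary):
import GameplayKit

let agent = GKAgent2D()
agent.maxSpeed = 100
agent.maxAcceleration = 50

// With no goals the agent coasts forever; a zero-speed goal makes it
// decelerate (within maxAcceleration) until it comes to rest.
let stopGoal = GKGoal(toReachTargetSpeed: 0)
agent.behavior = GKBehavior(goal: stopGoal, weight: 1.0)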
Angular speed is trickier. The notion of agents' intrinsic physical constraints being sort of analogous to, say, vehicles on land or boats at sea is pretty well baked into the system. It can't really handle things like space fighters that have to reorient to vector their thrust, or walking creatures that can just as happily walk sideways or backwards as forward — at least, not on its own. You can get some mileage toward changing the "feel" of agent movement with the maxAcceleration property, but you're limited by the fact that said property covers both linear and angular acceleration.
Remember, though, that the interface between what the agent system "wants" and what "actually happens" in your game world is under your control. The easiest way to implement GKAgentDelegate is to just sync the velocity and position properties of the agent and the sprite that it represents. However, you don't have to do it that way — you could calculate a different force/impulse and apply it to your sprite.
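A minimal sketch of that straightforward sync for a SpriteKit sprite (the class name is made up; the two callbacks are the real GKAgentDelegate methods):
import GameplayKit
import SpriteKit

class AgentSpriteSync: NSObject, GKAgentDelegate {
    let sprite: SKSpriteNode
    init(sprite: SKSpriteNode) { self.sprite = sprite }

    // Before each simulation tick, feed the sprite's actual state to the agent.
    func agentWillUpdate(_ agent: GKAgent) {
        guard let agent2D = agent as? GKAgent2D else { return }
        agent2D.position = vector_float2(Float(sprite.position.x), Float(sprite.position.y))
    }

    // After the tick, apply whatever the agent decided back onto the sprite
    // (or, as noted above, compute your own force/impulse here instead).
    func agentDidUpdate(_ agent: GKAgent) {
        guard let agent2D = agent as? GKAgent2D else { return }
        sprite.position = CGPoint(x: CGFloat(agent2D.position.x), y: CGFloat(agent2D.position.y))
        sprite.zRotation = CGFloat(agent2D.rotation)
    }
}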
I can't comment yet, so I'm posting this as an answer. I faced the same problem recently: the agent wiggling around the target, or the agent that keeps moving even after you remove the behavior. Then I realized that the behavior is just the algorithm controlling the movement; you can still access and set the agent's speed, position, and angle by hand.
In my case, I have a critter entity that chases food in the scene. When it makes contact with the food agent, the food entity is removed. I tried many things to make the critter stop after eating the food (it would keep going in a straight line), and all I had to do was set its speed to 0. That is because the behavior influences not the position directly, but the speed/angle combination instead (from what I understand). When there is no goal for the entity, it doesn't "want" to change its state, so whatever speed and direction it has reached, it will keep; it simply won't update or change them. So unless you create a goal that makes it want to stop, it will wiggle/keep going. The easy way is to set the behavior to nil and set the speed to 0 yourself.
If the behavior/goal system doesn't do it for the type of animation you are looking for, you can still use the agent system and customize the movement with the GKAgentDelegate protocol and the update method, and make it interact with other agents later on. You can even synchronize the agent with a node that is moved by the physics engine or by actions (or any other way).
I think the agent system is nice to keep around since you can use it later, even if it's only for special effects. But just as mixing actions and physics can give some weird results, mixing goals/behaviors with any other "automated" tool will probably result in erratic behavior.
You can also use the agent system for other stuff than moving an actual sprite around. For example, you could use an agent to act as a "target seeker" to simulate reaction time for your enemies. The agent moves around the scene and finds other agents, when it makes contact with a suitable target, the enemy entity would attack it (random idea).
It's not a "one size fits all" solution, but it's a very nice tool to have.
I am working on a drone-based video surveillance project, and I am required to implement object tracking as part of it. I have tried conventional approaches, but these seem to fail due to the non-static environment.
This is an example of what I would want to achieve. But this uses background subtraction, which is impossible to achieve with a non-static camera.
I have also tried feature based tracking using SURF features, but it fails for smaller objects and is prone to false positives.
What would be the best way to achieve the objective in this scenario?
Edit : An object can be anything within a defined region of interest. The object will usually be a person or a vehicle. The idea is that the user will make a bounding box which will define the region of interest. The drone now has to start tracking whatever is within this region of interest.
Tracking local features (like SURF) won't work in your case. Training a classifier (like Boosting with HAAR features) won't work either. Let me explain why.
Your object to track will be contained in a bounding box. Inside this bounding box there could be any object: not necessarily a person, a car, or whatever else you used to train your classifier.
Also, near the object inside the bounding box there will be background that changes as soon as your target object moves, even if the appearance of the object itself doesn't change.
Moreover, the appearance of your object may change (e.g. a person turns around or drops their jacket, a vehicle catches a reflection of the sun, etc...), or the object may get (partially or totally) occluded for a while. So tracking local features is very likely to lose the tracked object very soon.
So the first problem is that you must deal with potentially a lot of different objects, possibly unknown a priori, to track and you cannot train a classifier for each one of these.
The second problem is that you must follow an object whose appearance may change, so you need to update your model.
The third problem is that you need some logic that tells you that you lost the tracked object, and you need to detect it again in the scene.
So what to do? Well, you need a good long term tracker.
One of the best (to my knowledge) is Tracking-Learning-Detection (TLD) by Kalal et al. You can see a lot of example videos on the dedicated page, and you can see that it works pretty well with moving cameras, objects that change appearance, etc...
Luckily for us, OpenCV 3.0.0 has an implementation of TLD, and you can find sample code here (there is also a Matlab + C implementation on the aforementioned site).
The main drawback is that this method can be slow. You can test whether that's an issue for you. If so, you can downsample the video stream, upgrade your hardware, or switch to a faster tracking method, but this depends on your requirements and needs.
Good luck!
The simplest thing to try is frame differencing instead of background subtraction. Subtract the previous frame from the current frame, threshold the difference image to make it binary, and then use some morphology to clean up the noise. With this approach you typically only get the edges of the objects, but often that is enough for tracking.
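The functions below are from MATLAB's Computer Vision Toolbox; purely as an illustration, here is the same frame-differencing pipeline sketched in Python with OpenCV (the file name and threshold values are made up):
import cv2

cap = cv2.VideoCapture("drone.mp4")  # hypothetical input video
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev_gray)  # difference of consecutive frames
    _, binary = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    binary = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)   # remove speckle noise
    binary = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)  # fill small holes
    cv2.imshow("motion", binary)
    if cv2.waitKey(1) == 27:  # Esc to quit
        break
    prev_gray = gray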
You can also try to augment this approach using vision.PointTracker, which implements the KLT (Kanade-Lucas-Tomasi) point tracking algorithm.
Alternatively, you can try using dense optical flow. See opticalFlowLK, opticalFlowHS, and opticalFlowLKDoG.
I want to ask about jelly physics ( http://www.youtube.com/watch?v=I74rJFB_W1k ): where can I find a good place to start making things like that? I want to make a simulation of car crashes using this jelly physics, but I can't find much about it. I don't want to use an existing physics engine; I want to write my own :)
Something like what you see in the video you linked to could be accomplished with a mass-spring system. However, as you vary the number of masses and springs, keeping your spring constants the same, you will get wildly varying results. In short, mass-spring systems are not good approximations of a continuum of matter.
Typically, these sorts of animations are created using what is called the Finite Element Method (FEM). The FEM does converge to a continuum, which is nice. And although it does require a bit more know-how than a mass-spring system, it really isn't too bad. The basic idea, derived from the study of continuum mechanics, can be put this way:
Break the volume of your object up into many small pieces (elements), usually tetrahedra. Let's call the entire collection of these elements the mesh. You'll actually want to make two copies of this mesh. Label one the "rest" mesh, and the other the "world" mesh. I'll tell you why next.
For each tetrahedron in your world mesh, measure how deformed it is relative to its corresponding rest tetrahedron. This measure of deformation is called "strain". It is typically computed by first measuring what is known as the deformation gradient (often denoted F). There are several good papers that describe how to do this. Once you have F, one very typical way to define the strain e is:
e = 1/2(F^T F - I). This is known as Green's strain. It is invariant to rotations, which makes it very convenient.
Using the properties of the material you are trying to simulate (gelatin, rubber, steel, etc.), and using the strain you measured in the step above, derive the "stress" of each tetrahedron.
For each tetrahedron, visit each node (vertex, corner, point (these all mean the same thing)) and average the area-weighted normal vectors (in the rest shape) of the three triangular faces that share that node. Multiply the tetrahedron's stress by that averaged vector, and there's the elastic force acting on that node due to the stress of that tetrahedron. Of course, each node could potentially belong to multiple tetrahedra, so you'll want to be able to sum up these forces.
Integrate! There are easy ways to do this, and hard ways. Either way, you'll want to loop over every node in your world mesh and divide its forces by its mass to determine its acceleration. The easy way to proceed from here is to:
Multiply its acceleration by some small time value dt. This gives you a change in velocity, dv.
Add dv to the node's current velocity to get a new total velocity.
Multiply that velocity by dt to get a change in position, dx.
Add dx to the node's current position to get a new position.
This approach is known as explicit forward Euler integration. You will have to use very small values of dt to get it to work without blowing up, but it is so easy to implement that it works well as a starting point (a small sketch follows after these steps).
Repeat steps 2 through 5 for as long as you want.
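Here is a minimal sketch of that Euler update in Python/NumPy (array shapes and names are my own; the per-node elastic forces from steps 2 to 4 are stubbed out with gravity):
import numpy as np

def euler_step(positions, velocities, masses, forces, dt):
    # positions, velocities, forces: (N, 3) arrays; masses: (N,) array
    accelerations = forces / masses[:, None]      # a = F / m for every node
    velocities = velocities + accelerations * dt  # dv = a * dt
    positions = positions + velocities * dt       # dx = v * dt
    return positions, velocities

# Tiny usage example with a very small dt, as recommended above:
N = 4
positions = np.zeros((N, 3))
velocities = np.zeros((N, 3))
masses = np.ones(N)
forces = np.tile([0.0, -9.81, 0.0], (N, 1))  # stand-in for the elastic forces
positions, velocities = euler_step(positions, velocities, masses, forces, dt=1e-4)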
I've left out a lot of details and fancy extras, but hopefully you can infer a lot of what I've left out. Here is a link to some instructions I used the first time I did this. The webpage contains some useful pseudocode, as well as links to some relevant material.
http://sealab.cs.utah.edu/Courses/CS6967-F08/Project-2/
The following link is also very useful:
http://sealab.cs.utah.edu/Courses/CS6967-F08/FE-notes.pdf
This is a really fun topic, and I wish you the best of luck! If you get stuck, just drop me a comment.
That rolling jelly cube video was made with Blender, which uses the Bullet physics engine for soft body simulation. The Bullet documentation in general is very sparse, and for soft body dynamics it's almost nonexistent. Your best bet would be to read the source code.
Then write your own version ;)
Here is a page with some pretty good tutorials on it. The one you are looking for is probably in the (inverse) Kinematics and Mass & Spring Models sections.
Hint: A jelly can be seen as a 3 dimensional cloth ;-)
Also, try having a look at the search results for spring pressure soft body model - they might get you going in the right direction :-)
See Maciej Matyka's page on the topic of soft bodies.
Unfortunately they are 2D only, but JellyPhysics and JellyCar might be something to start with.