I am working on an application where I would like to make a 3D terrain model of my country in Unity3D, inclusive of hills, mountains and rivers. So far I've been able to use Mapbox to import the country map, because the Unity WRLD SDK doesn't yet support my country.
However, the end goal is to create an application capable of representing natural disasters. For example, I have the country; I would like to know how one would go about causing rain to occur that would affect the "water levels" of a river and essentially show a flood. Basically, after I bring in the terrain, how do I "act" on it to cause something like a flood or a landslide?
Any help, or a pointer to a tutorial on this, would be welcome.
You will need a different model for each natural disaster. You will only ever get a rough estimate of what may happen, as your data will never fully represent the actual terrain. (Take an earthquake, for example: you may be able to reproduce damage to structures, but you will never be able to predict whether there will be a shift in the earth itself.)
Rain / Flood
A really simple simulation of rising water is to slowly move a "water" plane up. This crude approach makes it easy to see which areas are going to end up under water. For a detailed flood you will need a fluid simulation of some kind (there are quite a few on the Asset Store).
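As a rough, engine-agnostic sketch of that crude approach (written in Swift for brevity; the same logic maps directly to a Unity C# script that raises a water plane's Y position each frame), assuming you keep a heightmap of the terrain. All names and rates here are made up for illustration:

```swift
// Hypothetical sketch: raise a flat water level over time and flag submerged cells.
// `terrainHeight` would come from your imported heightmap; names are illustrative only.
struct FloodSim {
    var terrainHeight: [[Float]]        // metres above sea level, per grid cell
    var waterLevel: Float = 0           // current height of the "water" plane
    var risePerSecond: Float = 0.05     // how fast the rain raises the water

    mutating func update(deltaTime: Float) {
        waterLevel += risePerSecond * deltaTime
    }

    // Cells whose terrain lies below the water plane are flooded.
    func floodedCells() -> [(x: Int, y: Int)] {
        var result: [(x: Int, y: Int)] = []
        for y in terrainHeight.indices {
            for x in terrainHeight[y].indices where terrainHeight[y][x] < waterLevel {
                result.append((x: x, y: y))
            }
        }
        return result
    }
}
```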
Avalanche
Treat it as a fluid system with a strong resistance.
Volcano
Almost the same as a flood, just with more viscosity.
Earthquake
You may be able to simulate the damage of an earthquake if all your objects have some kind of break point and the earthquake applies force to an area. A given force has a certain chance to destroy each object in the area. (Think of it like any castle-destruction game, e.g. Angry Birds: the projectile is your local earthquake and the castle is your terrain plus buildings/trees.)
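As a sketch of that idea (Swift, with illustrative names and numbers only): give every object a break threshold, make the quake's force fall off with distance from the epicentre, and destroy objects probabilistically when the force exceeds their threshold.

```swift
// Hypothetical sketch of the "force vs. break point" idea; all names/numbers are illustrative.
struct Structure {
    var x: Float, y: Float
    var breakThreshold: Float    // how much force the object can take before collapsing
    var isDestroyed = false
}

func applyEarthquake(epicentreX: Float, epicentreY: Float, strength: Float,
                     to structures: inout [Structure]) {
    for i in structures.indices where !structures[i].isDestroyed {
        let dx = structures[i].x - epicentreX
        let dy = structures[i].y - epicentreY
        let distance = (dx * dx + dy * dy).squareRoot()
        let force = strength / (1 + distance)            // force falls off with distance
        guard force > structures[i].breakThreshold else { continue }
        // The further over the threshold, the more likely the collapse.
        let chance = min(1, (force - structures[i].breakThreshold) / structures[i].breakThreshold)
        if Float.random(in: 0...1) < chance {
            structures[i].isDestroyed = true
        }
    }
}
```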
Fire
You will need something like a burn value. Higher value = burns longer, harder to put out, spreads faster. If a fire starts at any given point, it spreads outward from there. A river would have a value of 0, the same as bare mountains. A forest would have a high value, a grass plain a low value. If you want to simulate a hot, dry summer, your terrain could add a fixed value to everything: grass dries out and thus has a higher chance to spread fire.
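A sketch of that burn-value idea as a simple grid spread (Swift, with made-up values; 0 means the cell cannot burn):

```swift
// Hypothetical sketch: fire spreads from burning cells into neighbours
// with a probability proportional to the neighbour's burn value.
enum Terrain: Float {
    case river = 0.0, mountain = 0.05, grass = 0.3, forest = 0.8
}

func spreadFire(burning: inout [[Bool]], terrain: [[Terrain]], dryness: Float = 0) {
    let h = terrain.count, w = terrain[0].count
    var next = burning
    for y in 0..<h {
        for x in 0..<w where burning[y][x] {
            for (nx, ny) in [(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)]
                where nx >= 0 && nx < w && ny >= 0 && ny < h && !burning[ny][nx] {
                // A hot, dry summer ("dryness") raises the chance everywhere.
                let chance = min(1, terrain[ny][nx].rawValue + dryness)
                if Float.random(in: 0...1) < chance { next[ny][nx] = true }
            }
        }
    }
    burning = next
}
```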
I am developing (or at least trying to develop) a decently big real-time tactics game (something similar to an RTS) using SpriteKit.
I am using GameplayKit for pathfinding.
Initially I used SKActions to move the sprites along the path, but I quickly realized that was a big mistake.
Then I tried to implement it with GKAgents (this is my current state).
I feel that GKAgents are very raw and immature. They also seem to follow some strange Newton's first law that makes them move forever (I can't think of any scenario where that would be useful - maybe for presentations at WWDC).
They also have an angular speed used to perform rotations, which I don't need at all and can't really find a way to disable...
On top of that, GKBehaviors given GKGoals seem to do some weird things...
Setting a behavior to avoid obstacles makes my units jiggle around them...
Setting a behavior with a follow-path goal completely ignores everything unless maxPredictionTime is low enough...
I am not even willing to describe what happens when I combine both of them.
I feel broken...
I feel like I have two options now:
1) Struggle more with those agents and try to make them behave as I wish.
2) Roll all the movement on my own with the help of GKObstacleGraph and pathfinding (which is buggy as well, I have to say: at some points the path to a destination will be the most awful route, like "go touch that obstacle, then reverse and touch that one, then go to the actual point", when a straight line would have worked from the beginning).
The question is: which of those options would be best?
One of the best ways (in SpriteKit/GameplayKit) to get the kind of behavior you're after is to recognize that path planning and path following need not be the same operation. GameplayKit provides tools for both — GKObstacleGraph is good for planning and GKAgent is good for following a planned path — and they work best when you combine the strengths of each.
(It can be a bit misleading that GKAgent provides obstacle avoidance; don't think of this in the same way as finding a route around obstacles, more like reacting to sudden obstacles in your way.)
To put it another way, GKObstacleGraph and GKAgent are like the difference between navigating with a map and safely driving a car. The former is where you decide to take CA-85 and US-101 instead of I-280. (And maybe reevaluate your decision once in a while — say, to pick a different set of roads around a traffic jam.) The latter is where you, continuously moment-to-moment, change lanes, avoid potholes, pass slower vehicles, slow down for heavy traffic, etc.
In Apple's DemoBots sample code, they break this out into two steps:
Use GKObstacleGraph to do high level path planning. That is, when the bad guys are "here" and the hero is "way over there", and there are some walls in between, select a series of waypoints that roughly approximates a route from here to there.
Use GKAgent behaviors to make the character roughly follow that path while also reacting to other factors (like making the bad guys not step on each other and giving them vaguely realistic movement curves instead of simply following the lines between waypoints).
You can find most of the relevant stuff behind this in TaskBotBehavior.swift in that sample code — start from addGoalsToFollowPath and look at both the places that gets called and the calls it makes.
As for the "moving forever" and "angular speed" issues...
The agent simulation is a weird mix of a motivation analogy (i.e. the agent does what's needed to move it toward where it "wants" within constraints) and a physics system (i.e. those movements are modeled like forces/impulses). If you take away an agent's goals, it doesn't know that it needs to stop — instead, you need to give it a goal of stopping. (That is, a movement speed goal of zero.) There might be a better model than what Apple's chosen here — file bugs if you have suggestions for design improvements.
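For example, a minimal way to make an agent come to a stop is to replace its behavior with a single goal of reaching speed zero (a minimal sketch; the agent setup and weight are illustrative):

```swift
import GameplayKit

// Minimal sketch: an agent only stops when "stopping" is its current goal.
let agent = GKAgent2D()
agent.maxSpeed = 100
agent.maxAcceleration = 50

// ...whatever goals were driving it while moving would normally be set here...

// To stop: swap in a behavior whose only goal is a target speed of zero.
agent.behavior = GKBehavior(goal: GKGoal(toReachTargetSpeed: 0), weight: 1.0)
```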
Angular speed is trickier. The notion of agents' intrinsic physical constraints being sort of analogous to, say, vehicles on land or boats at sea is pretty well baked into the system. It can't really handle things like space fighters that have to reorient to vector their thrust, or walking creatures that can just as happily walk sideways or backwards as forward — at least, not on its own. You can get some mileage toward changing the "feel" of agent movement with the maxAcceleration property, but you're limited by the fact that said property covers both linear and angular acceleration.
Remember, though, that the interface between what the agent system "wants" and what "actually happens" in your game world is under your control. The easiest way to implement GKAgentDelegate is to just sync the velocity and position properties of the agent and the sprite that it represents. However, you don't have to do it that way — you could calculate a different force/impulse and apply it to your sprite.
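A minimal sketch of that delegate sync for the simple "mirror the agent" case (the class name is made up; you could just as well compute a force in agentDidUpdate and apply it to the sprite's physics body instead):

```swift
import SpriteKit
import GameplayKit

// Minimal sketch: keep an SKSpriteNode and its GKAgent2D in sync.
class SpriteAgentSync: NSObject, GKAgentDelegate {
    let sprite: SKSpriteNode

    init(sprite: SKSpriteNode) {
        self.sprite = sprite
        super.init()
    }

    // Before the agent simulates, feed it the sprite's actual state
    // (so physics, actions, etc. that moved the sprite are respected).
    func agentWillUpdate(_ agent: GKAgent) {
        guard let agent2D = agent as? GKAgent2D else { return }
        agent2D.position = vector_float2(Float(sprite.position.x), Float(sprite.position.y))
        agent2D.rotation = Float(sprite.zRotation)
    }

    // After the agent has simulated, move the sprite to match it.
    // You could instead derive a force here and apply it to sprite.physicsBody.
    func agentDidUpdate(_ agent: GKAgent) {
        guard let agent2D = agent as? GKAgent2D else { return }
        sprite.position = CGPoint(x: CGFloat(agent2D.position.x), y: CGFloat(agent2D.position.y))
        sprite.zRotation = CGFloat(agent2D.rotation)
    }
}
```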
I can't comment yet, so I'm posting this as an answer. I faced the same problem recently: the agent wiggling around its target, or the agent that keeps moving even after you remove the behavior. Then I realized that the behavior is just the algorithm controlling the movement, but you can still access and set the agent's speed, position and angle by hand.
In my case, I have a critter entity that chases food in the scene. When it makes contact with the food agent, the food entity is removed. I tried many things to make the critter stop after eating the food (it would keep going in a straight line), and all I had to do was set its speed to 0. That is because the behavior influences not the position directly, but the speed/angle combination instead (from what I understand). When there is no goal for the entity, it doesn't "want" to change its state, so whatever speed and direction it has reached, it will keep; it simply won't update or change them. So unless you create a goal to make it want to stop, it will wiggle or keep going. The easy way is to set the behavior to nil and set the speed to 0 yourself.
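In code that is just a couple of lines wherever you handle the "food eaten" contact (a sketch, assuming the critter has a GKAgent2D):

```swift
import GameplayKit

// Sketch: call this when the critter's agent makes contact with the food.
func stopAfterEating(_ agent: GKAgent2D) {
    agent.behavior = nil   // no goals left to pursue
    agent.speed = 0        // otherwise it keeps whatever speed it last reached
}
```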
If the behavior/goal system doesn't do it for the type of animation you are looking for, you can still use the agent system, customize the movement with the GKAgentDelegate protocol and the update method, and make it interact with other agents later on. You can even synchronize the agent with a node that is moved by the physics engine or by actions (or any other way).
I think the agent system is nice to keep around since you can use it later, even if it's only for special effects. But just as mixing actions and physics can give some weird results, mixing goal/behaviors and any other "automated" tool will probably result in an erratic behavior.
You can also use the agent system for other stuff than moving an actual sprite around. For example, you could use an agent to act as a "target seeker" to simulate reaction time for your enemies. The agent moves around the scene and finds other agents, when it makes contact with a suitable target, the enemy entity would attack it (random idea).
It's not a "one size fits all" solution, but it's a very nice tool to have.
I have a big, complex Unity scene including terrain, trees, grass, flowers and many other objects.
I'm having performance problems and I was wondering if it's possible to bake all static objects that never move or change, like terrain, trees, houses and other props, into one big static object to increase performance?
Thanks
Check out the Unity documentation on Static Objects here.
Many optimisations need to know if an object can move during gameplay. Information about a Static (ie, non-moving) object can often be precomputed in the editor in the knowledge that it will not be invalidated by a change in the object’s position. For example, rendering can be optimised by combining several static objects into a single, large object known as a batch.
To mark a GameObject as static, you simply need to check the Static box in the inspector window with the desired GameObject selected.
That wouldn't be a good idea. Having separate meshes is much more beneficial and efficient than combining them all into one huge mesh. This will allow you to set different LOD systems for the objects, billboarding of trees and detail objects, and will allow for finer control over your scene without having to rebuild that huge object again and again.
For large scenes, it is important that you set up a Level Of Detail (LOD) system. What it essentially means is that based on the distance of the object from the camera, a higher or lower quality model of that object will be rendered. At close distances, the highest polygon model will be rendered. At huge distances where detail is not required, lower polygon count models can be used. Consult the Unity Manual for more details. You can also look for scripts and tutorials on the internet.
Also, make sure that your terrain settings are reasonable. Setting the Pixel Error of the terrain to 1 is overkill, something like 4-5 is more than enough. Detail Density, Billboard Start, Detail and Tree Distance all these can be toned down.
Or just check for an unoptimized script or a shader gone awry; that's usually the problem. Unoptimized 3D models are hell too. You'll have to do more than just pick up a model from SketchUp's 3D Warehouse (speaking from personal experience). You will have to pay an artist to get high-quality, optimized models with their LOD meshes as well, unless you have the skills yourself.
I'm hoping to prototype some very basic physics/statics simulations for "voxel-based" games like Minecraft and Dwarf Fortress, so that the game can detect when a player has constructed a structure that should not be able to stand up on its own. Obviously this is a very fuzzy definition: whether a structure is impossible depends upon a multitude of material and environmental properties, but the general idea is to motivate players to build structures that resemble the buildings we see in the real world. I'll describe what I mean in a bit more detail below, but I generally want to know if anyone could suggest either a potential approach to the problem or a resource that I could use.
Here are some examples of buildings that could be impossible if the material were not strong enough.
Here are some example situations. My understanding of this subject is not great, but bear with me.
If this structure were to be made of concrete with dimensions of, say, 4m by 200m, it would probably not be able to stand up. Because the center of mass is not over its connection to the ground, I think it would either tip over or crack at the base.
The center of gravity of this arch lies between the columns holding it up, but if it was very big and made of a weak, heavy material, it would crumble under its own weight.
This tower has its center of gravity right over its base, but if it is sufficiently tall then it only takes a bit of force for the wind to topple it over.
Now, I expect that a full-scale, real-time simulation of these physics isn't really possible... but there are a lot of ways I could simplify the simulation. For example:
Tests for physics-defying structures could be performed infrequently and at random, so a bad building doesn't crumble the moment it is built, but perhaps a few minutes later.
Minecraft and Dwarf Fortress hardly perform rigid- or soft-body physics. For this reason, any piece of a building that is deemed to be physically impossible can simply "pop" into rubble instead of spawning a bunch of accurate physics props.
Have you considered taking an existing 3D physics engine and "rounding off" the orientations of objects? In the case of your first object (the L-shaped thing), you could run a simulation of a continuous, non-voxelized object of similar shape behind the scenes and then monitor that object for orientation changes. In a simple case, if the continuous representation of the object hits the ground, the object in the voxelized gameplay world could move its blocks to the ground.
I don't think there is a feasible way to do this. Minecraft has no notion of physical structure, so you will have to look at each block individually to determine whether it should fall (there are other considerations, but this is the minimum). You would therefore need a way to distinguish between ground and "not ground". This is a modeling problem first and foremost, not a programming problem (not even a simulation-design problem). I think this question is out of scope for SO.
For instance, consider the following model, which may give you an indication of the complexities involved:
Each block above height = 0 experiences a "down pull" P, where P may be either of the following:
0 if the block is supported by another block below it
m*g otherwise, if it is free (where m is its mass, which depends on material density * voxel volume)
F represents some "friction" or "glue" between the vertical faces of adjacent blocks; it counteracts P.
This friction should have a threshold beyond which it "breaks", and the block then has a net pull downwards.
If m*g < sum F, the block stays where it is. Otherwise, the block falls.
F depends on the pair of materials in contact.
It should hold for n = 2 at least, so you can form a line of blocks between two towers.
F is also what can make the net pull on a block larger than m*g. For instance, if you have three blocks a-b-c, with c resting on d, then a pulls on b, so b is "heavier" than m*g where it contacts c. If this net pull is > F, the pair a-b should fall.
You might be able to simulate the above and get interesting results, but you will find it really challenging to handle the case where there are two towers with a line of blocks between them: the towers are coupled together by the line of blocks, and there is no longer a "tip" to the line of blocks. At this stage you might as well get out your physics books, create a system of boxes and springs, and come up with equations that you might be able to solve numerically; but in a full 3D system you will have a 3D mesh of springs to iterate over in order to converge on force values for each box and determine which ones move.
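For what it's worth, here is a minimal sketch of the naive per-block rule above, ignoring the coupled-towers case entirely (Swift, with invented names; the grid is a simple 2D slice where row 0 is the ground):

```swift
// Hypothetical sketch of the naive per-block check described above.
// Grid cells are either empty or hold a material with a density and a "glue" value.
struct Material {
    let density: Float   // used for m = density * voxelVolume
    let glue: Float      // friction/adhesion F contributed per side face in contact
}

let voxelVolume: Float = 1
let g: Float = 9.81

// `world[y][x]` is nil for empty cells; y = 0 is the ground row.
func blocksThatShouldFall(world: [[Material?]]) -> [(x: Int, y: Int)] {
    var falling: [(x: Int, y: Int)] = []
    for y in 1..<world.count {                           // row 0 sits on the ground
        for x in world[y].indices {
            guard let material = world[y][x] else { continue }
            if world[y - 1][x] != nil { continue }       // supported below: P = 0
            // Down pull P = m * g.
            let pull = material.density * voxelVolume * g
            // Sum of side "friction" F from horizontally adjacent blocks;
            // using the weaker of the two materials as the pairwise value.
            var friction: Float = 0
            if x > 0, let left = world[y][x - 1] { friction += min(material.glue, left.glue) }
            if x + 1 < world[y].count, let right = world[y][x + 1] { friction += min(material.glue, right.glue) }
            if pull > friction { falling.append((x: x, y: y)) }
        }
    }
    return falling
}
```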
A professor of mine suggested that I look at this paper.
Additionally, I found the keyword for what I'm looking for: "structural analysis". I bought a textbook and I have a long road ahead of me.
I'm trying to re-create a 'falling sand' simulation, similar to those various web toys that are out there doing the same thing - and I'm failing pretty hard. I'm not really sure where to begin. I'm trying to use cellular automata to model the behavior of the sand particles, but I'm having trouble figuring out how to make the direction in which I update the 'world' not matter...
For example, one of the particle types I'd like to have is Plant. When Plant comes in contact with Water, Plant turns that Water particle into another Plant particle. The problem here though is that if I'm updating the game world from top to bottom and left to right, then a Plant particle placed in the middle of a sea of Water particles will immediately cause all of the Water particles to the right and below that new Plant particle to turn into Plants. This is not the behavior I am expecting. =(
One straightforward solution is to not do each iteration in place. Instead, every time you update the world, create a copy of it, then read from the original but write your updates into the copy. That way the order of updating no longer matters, because the updates you make never affect the cells you are still reading.
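A minimal sketch of that double-buffered update, using the plant-converts-water example from the question (Swift; the cell types are illustrative):

```swift
// Minimal sketch: read from the old grid, write into a copy,
// so update order within a pass cannot cascade.
enum Cell { case empty, water, sand, plant }

func step(_ world: [[Cell]]) -> [[Cell]] {
    var next = world                                   // the copy we write into
    let h = world.count, w = world[0].count
    for y in 0..<h {
        for x in 0..<w where world[y][x] == .plant {
            // Plant converts adjacent water, but only water that existed
            // at the start of this step, so it spreads one cell per step.
            for (nx, ny) in [(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)]
                where nx >= 0 && nx < w && ny >= 0 && ny < h && world[ny][nx] == .water {
                next[ny][nx] = .plant
            }
        }
    }
    return next
}
```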
Don't program it in a sequential way (looping over all particles) but use real simulation programming techniques in which every particle is treated as an individual object/agent that obeys the laws of physics and that can act (run) asynchronously and respond to "events" (interactions with other particles).
If making every sand particle a separate object is too fine-grained, then divide the world into small blocks of, let's say, 1000 particles and simulate the behavior of these blocks instead.
I think it would be fun to model a top view of a train following a track, traversing switches and so on, using a physics library like Box2D. What joints and motors would I need to make this work?
I'm curious about how to implement the forces needed to make the car follow a spline track so it can bump into other train cars, pedestrians, DeLoreans etc. Just saying "the car is now at spline(t)" for each time step would create excessive forces in the physics engine. If I understand correctly, you have to stick the car onto the track with one force, constrain its angle to tend towards parallel with the track with another (or stick the front and back of the car to the track with two forces), and create another force to propel the train forward. I'm looking for some details on how to accomplish these things.
I believe it would be easier without "real" physics, like the ball movement of games such as Luxor or Tumble Bugs. Meaning: let the train follow a spline which is defined by the tracks.
Using physics is probably overkill to make a train follow a track and could lead to all kinds of undesired side effects, including jerky motion, the train derailing, the train getting stuck on junctions, etc.
You could still join the individual wagons together using physics joints, however. Just make sure that only the locomotive gets acceleration forces; the rest of the train just follows or is pushed, but stays on the spline.
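A sketch of that spline-following approach, where each car is just a distance along the track and only the locomotive is driven (the Track protocol and its arc-length sampler are assumptions for illustration, not a real API):

```swift
// Hypothetical sketch: cars are just distances along the track, not free physics bodies.
protocol Track {
    var length: Float { get }
    func point(atDistance d: Float) -> (x: Float, y: Float)   // sample the spline by arc length
}

struct Train {
    var carDistances: [Float]      // carDistances[0] is the locomotive
    var speed: Float               // only the locomotive is "driven"
    let carSpacing: Float

    mutating func update(deltaTime: Float, on track: Track) {
        // Advance the locomotive; every other car trails at a fixed spacing,
        // so the whole train stays on the spline by construction.
        carDistances[0] = min(carDistances[0] + speed * deltaTime, track.length)
        for i in 1..<carDistances.count {
            carDistances[i] = max(0, carDistances[i - 1] - carSpacing)
        }
    }

    func carPositions(on track: Track) -> [(x: Float, y: Float)] {
        carDistances.map { track.point(atDistance: $0) }
    }
}
```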
Why are you worried about keeping it "on the tracks"? Where is it going to go? Gravity should keep it down, object intersection should keep it up, and so the only directions you need to worry about are forward and backwards. That's where a motor comes in, and you're done. The rest is decorations.
In response to edit of problem:
Siderails. And make the train long enough / rigid enough compared to its width that you can navigate crossings (make them closer to right angles to minimize the crossing problems).
A top-down view (i.e. seeing the train from the sky) doesn't really require a 2d physics engine - if I understand you correctly. In fact, it seems like it wouldn't really help with the problem (if you want a train simulation), but then maybe you just wanna try it out for fun. :)
However, what about putting something like a slider joint on the train and the cars, and a motor on the locomotive? The slider joint might need some special implementation; you probably want to run the train along a spline and not a series of straight segments, right?
Some sort of ball joint would connect the cars together.
The implementation is not so tough, and I was able to prototype something in a few hours that does the basic job. It will require a lot of work to make it run smoothly, but it's essentially just "siderails."
Being top-down, you obviously first must turn off gravity in Box2D. Second, build a train. Treat train wheels like car wheels and it'll suddenly get a lot simpler. For tracks you have a few choices:
Create your own game object (not in the Box2D world) that is a simple line the train will then "follow" (you can use motors on the train wheels to "steer" towards the line; see the sketch after this list). Then just overlay the line with some nice wide "rail" graphics and you have a nicely faked system. Tell the wheels to turn off if the train strays too far from the line and presto, you have a derailment.
Create actual physical rails: outside rails (like siderails) that the train's "wheels" will bump into. They will have to have gentle curves in this case, which could be very difficult given the limited resources you have (simulating a nice slow curve out of boxes in Box2D is rough on the processor).
Then just let your train go!
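For option 1 above, here is a sketch of the "steer back toward the guide line" idea (the line representation, steering gain and derail threshold are all made-up illustrative values):

```swift
// Hypothetical sketch of the faked-rail idea: measure how far a wheel is from
// the guide line and produce a steering correction toward it; give up if it strays too far.
struct GuideLine {
    let ax: Float, ay: Float   // segment start
    let bx: Float, by: Float   // segment end

    // Signed distance from a point to the guide line: positive on one side,
    // negative on the other, so the sign tells us which way to steer.
    func signedDistance(x: Float, y: Float) -> Float {
        let dx = bx - ax, dy = by - ay
        let length = (dx * dx + dy * dy).squareRoot()
        return ((x - ax) * dy - (y - ay) * dx) / length
    }
}

let derailDistance: Float = 10     // how far off the line counts as a derailment
let steeringGain: Float = 0.2      // how aggressively wheels steer back

// Returns a steering correction to feed to the wheel "motors", or nil if derailed.
func steeringCorrection(wheelX: Float, wheelY: Float, line: GuideLine) -> Float? {
    let offset = line.signedDistance(x: wheelX, y: wheelY)
    if abs(offset) > derailDistance { return nil }   // too far: turn the wheels off
    return -steeringGain * offset                    // steer back toward the line
}
```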