Why does the agent not go to the right attractor? - AnyLogic

I have a forklift that takes an item from the truck and stores it in a rectangular node.
The problem is that I want the forklift to store the agent according to the attractors. Instead, the operator goes to the same point every time. Why? (I did the same thing with another flowchart and the same blocks and there was no problem there; maybe it is a network problem?)

You have to tell it manually which attractor it should go to. Attractors are not actual physical spaces (that become "full" when something is stored there); they are just markers used to position things visually.
In your process blocks, you probably did not specify the attractors correctly, or did not tell agents to move to a random attractor. But even in the latter case, several items could end up stored at the same attractor.
Two options:
Either turn your attractors into actual agents with rectangular nodes that you set up as "empty/full" with statecharts (faster, but harder to design),
or use the Material Handling Library with its storage system, which does this for you (but is computationally slower).
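For example (one possible way to set up the first option; the names are assumed): model each storage slot as a small agent holding an "occupied" flag or a two-state empty/full statechart, keep the slots in a collection, and when the forklift needs a destination, search the collection for the first free slot, send it there, and flip the slot's state when the item is stored or retrieved.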


Transporter fleet not blocking each other (AnyLogic)

In my model I have a path-guided transporter fleet, but when the transporters are close to each other they block each other. Since this is not in the scope of my model (I just want them to pass each other), is there a way to disable this behavior? I have already tried setting the minimum distance to obstacle very low and using very small dimensions (see figure), but nothing seems to work.
The key aspect of Material Handling transporters is that they apply that spatial blocking.
If you do not want it, use moving resources from the Process Modeling Library. They act much the same but have no spatial awareness. However, they also cannot do free-space navigation; path-guided movement works, but without applying path-specific constraints.
So it is a trade-off. The process-library resources also require much less computational power...

AnyLogic: Is it possible to move transporters based on travel time, rather than distance and speed?

I would like to use transporters in my model in various places (tugboats, forklifts, reach stackers, trucks, etc.). However, my model paths and animation can't be drawn to scale; a detailed explanation is given in brackets below.
Is there a way I can move the transporter from one node to another based on travel time (similar to what a movable resource can do), rather than speed and distance?
The "Move By Transporter" block does not seem to allow this and I have not been able to find a solution online. Thank you for your help.
(Explanation of why I can't draw to scale: firstly, some destination locations (storage areas, etc.) are not known yet and will just be represented by a travel delay to get there; secondly, different areas of the model will be drawn to different scales, i.e. some network paths will represent multiple kilometers while others will only represent a few hundred meters, etc.)
You can draw the paths to suit your animation and then, when a transporter gets seized, set its speed so that the duration of the movement matches the travel time you need; when the transporter gets released, set the speed back to normal.
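For example (hypothetical numbers): if the drawn path is 150 m long and the real trip should take 10 minutes, set the seized transporter's speed to 150 m / 600 s = 0.25 m/s for that leg, and restore its normal speed when it is released.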

Looking for a way to analyse transporter traffic on a shop floor (Density Map) in AnyLogic

I built a shop floor where the material flow is handled by transporters (AGVs/AMRs) with free-space navigation. I am looking for a way to observe traffic at certain spots (e.g. workstations, storage areas), or even across the whole shop floor, so I can compare different scenarios for the material flow and the supply strategy of the workstations with a view on the traffic. I tried the Density Map, but since it observes the whole layout, which is quite big, the values quickly get too low for the scale, so it doesn't perform the way I want. Is there a way to set up something like an "area density map" so I can observe just a defined rectangle, or another functionality that could help me here?
Happy about all ideas! :-)
You can use normal Rectangular Node elements and trigger "On enter" code to count drive-throughs or similar, for example by incrementing a counter variable each time a transporter enters the node.
Just make sure to set the capacity to infinity so normal traffic flow is not interrupted :)

To use GKAgents or not

I am developing (or at least trying to develop) a decently big real-time tactics game (something similar to an RTS) using SpriteKit, and I am using GameplayKit for pathfinding.
Initially I used SKActions to move the sprites along the path, but I quickly realized that was a big mistake.
Then I tried to implement it with GKAgents (this is my current state).
I feel that GKAgents are very raw and immature. They also seem to obey some strange version of Newton's first law that makes them move forever (I can't think of any scenario where that would be useful, maybe for presentations at WWDC), and they have an angular speed used for rotations, which I don't need at all and can't really find out how to disable.
Likewise, GKBehaviors built from GKGoals seem to do weird things: setting a behavior to avoid obstacles makes my units joggle around them, and setting a behavior with a follow-path goal completely ignores everything unless maxPredictionTime is low enough. I am not even willing to tell you what happens when I combine the two.
I feel broken...
I feel like I have two options now:
1) Struggle more with those agents and try to make them behave as I wish.
2) Roll all the movement on my own with the help of GKObstacleGraph and pathfinding (which is buggy as well, I have to say: at some points the generated route will be the most awful path, like "go touch that obstacle, then reverse and touch that one, then go to the actual point", when a straight line would have worked from the beginning).
The question is: which of those options would be best?
One of the best ways (in SpriteKit/GameplayKit) to get the kind of behavior you're after is to recognize that path planning and path following need not be the same operation. GameplayKit provides tools for both — GKObstacleGraph is good for planning and GKAgent is good for following a planned path — and they work best when you combine the strengths of each.
(It can be a bit misleading that GKAgent provides obstacle avoidance; don't think of this in the same way as finding a route around obstacles, more like reacting to sudden obstacles in your way.)
To put it another way, GKObstacleGraph and GKAgent are like the difference between navigating with a map and safely driving a car. The former is where you decide to take CA-85 and US-101 instead of I-280. (And maybe reevaluate your decision once in awhile — say, to pick a different set of roads around a traffic jam.) The latter is where you, continuously moment-to-moment, change lanes, avoid potholes, pass slower vehicles, slow down for heavy traffic, etc.
In Apple's DemoBots sample code, they break this out into two steps:
Use GKObstacleGraph to do high level path planning. That is, when the bad guys are "here" and the hero is "way over there", and there are some walls in between, select a series of waypoints that roughly approximates a route from here to there.
Use GKAgent behaviors to make the character roughly follow that path while also reacting to other factors (like making the bad guys not step on each other and giving them vaguely realistic movement curves instead of simply following the lines between waypoints).
You can find most of the relevant stuff behind this in TaskBotBehavior.swift in that sample code — start from addGoalsToFollowPath and look at both the places that gets called and the calls it makes.
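A rough sketch of that two-step split could look like the following (the obstacle list, start and goal points, goal weights and prediction times are placeholders, not DemoBots' actual code):

    import GameplayKit

    // Plan once with GKObstacleGraph, then hand the result to an agent to follow.
    let obstacles: [GKPolygonObstacle] = []  // your level geometry
    let graph = GKObstacleGraph(obstacles: obstacles, bufferRadius: 16)

    // 1) High-level planning: a coarse series of waypoints around the walls.
    let start = GKGraphNode2D(point: vector_float2(0, 0))
    let goal = GKGraphNode2D(point: vector_float2(500, 300))
    graph.connectUsingObstacles(node: start)
    graph.connectUsingObstacles(node: goal)
    let waypoints = graph.findPath(from: start, to: goal).compactMap { $0 as? GKGraphNode2D }

    // 2) Low-level following: the agent smooths out the route and reacts locally.
    let path = GKPath(graphNodes: waypoints, radius: 12)
    let behavior = GKBehavior(goals: [
        GKGoal(toFollow: path, maxPredictionTime: 0.5, forward: true),
        GKGoal(toStayOn: path, maxPredictionTime: 0.5),
        GKGoal(toAvoid: obstacles, maxPredictionTime: 0.5)
    ], andWeights: [1.0, 0.6, 2.0])

    let agent = GKAgent2D()
    agent.maxSpeed = 120
    agent.maxAcceleration = 60
    agent.behavior = behavior

The weights and prediction times are the knobs you end up tuning; DemoBots does essentially this, just with more goals and per-character values.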
As for the "moving forever" and "angular speed" issues...
The agent simulation is a weird mix of a motivation analogy (i.e. the agent does what's needed to move it toward where it "wants" within constraints) and a physics system (i.e. those movements are modeled like forces/impulses). If you take away an agent's goals, it doesn't know that it needs to stop — instead, you need to give it a goal of stopping. (That is, a movement speed goal of zero.) There might be a better model than what Apple's chosen here — file bugs if you have suggestions for design improvements.
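For instance, a minimal sketch of such a stop goal (the agent variable and the weight are assumed):

    // Give the agent a heavily weighted "reach speed zero" goal instead of removing its goals.
    // `agent` is assumed to be the GKAgent2D you are steering.
    agent.behavior = GKBehavior(goal: GKGoal(toReachTargetSpeed: 0), weight: 1.0)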
Angular speed is trickier. The notion of agents' intrinsic physical constraints being sort of analogous to, say, vehicles on land or boats at sea is pretty well baked into the system. It can't really handle things like space fighters that have to reorient to vector their thrust, or walking creatures that can just as happily walk sideways or backwards as forward — at least, not on its own. You can get some mileage toward changing the "feel" of agent movement with the maxAcceleration property, but you're limited by the fact that said property covers both linear and angular acceleration.
Remember, though, that the interface between what the agent system "wants" and what "actually happens" in your game world is under your control. The easiest way to implement GKAgentDelegate is to just sync the velocity and position properties of the agent and the sprite that it represents. However, you don't have to do it that way — you could calculate a different force/impulse and apply it to your sprite.
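A minimal sketch of that delegate-based sync, assuming a one-sprite-per-agent setup (the class and property names are made up):

    import GameplayKit
    import SpriteKit

    // Copy state sprite -> agent before the simulation step and agent -> sprite after.
    // Nothing forces a verbatim copy; agentDidUpdate is where you could instead
    // reinterpret the agent's output, e.g. as a physics impulse on the sprite.
    class SpriteAgentSync: NSObject, GKAgentDelegate {
        unowned let sprite: SKSpriteNode

        init(sprite: SKSpriteNode) {
            self.sprite = sprite
            super.init()
        }

        func agentWillUpdate(_ agent: GKAgent) {
            guard let agent2D = agent as? GKAgent2D else { return }
            // Feed the sprite's actual position back in, so external moves
            // (actions, physics) don't desynchronize the simulation.
            agent2D.position = vector_float2(Float(sprite.position.x), Float(sprite.position.y))
        }

        func agentDidUpdate(_ agent: GKAgent) {
            guard let agent2D = agent as? GKAgent2D else { return }
            // Simplest option: mirror the agent's result onto the sprite.
            sprite.position = CGPoint(x: CGFloat(agent2D.position.x), y: CGFloat(agent2D.position.y))
            sprite.zRotation = CGFloat(agent2D.rotation)
        }
    }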
I can't comment yet, so I post this as an answer. I faced the same problem recently: an agent wiggling around the target, or an agent that keeps moving even if you remove the behavior. Then I realized that the behavior is just the algorithm controlling the movement, but you can still access and set the agent's speed, position and angle by hand.
In my case, I have a critter entity that chases food in the scene. When it makes contact with the food agent, the food entity is removed. I tried many things to make the critter stop after eating the food (it would keep going in a straight line), and all I had to do was set its speed to 0. That is because the behavior does not influence the position directly, but the speed/angle combination instead (from what I understand). When there is no goal for the entity, it doesn't "want" to change its state, so whatever speed and direction it has reached, it will keep; it simply will not update or change them. So unless you create a goal to make it want to stop, it will wiggle or keep going. The easy way is to set the behavior to nil and set the speed to 0 yourself.
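A small sketch of that manual stop (the function name and call site are assumed):

    // When the food entity disappears, drop the behavior so nothing keeps
    // steering the critter, then zero out the residual forward speed by hand.
    func critterDidEatFood(_ critter: GKAgent2D) {
        critter.behavior = nil
        critter.speed = 0
    }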
If the behavior/goal system doesn't do it for the type of animation you are looking for, you can still use the agent system, customize the movement with the GKAgentDelegate protocol and the update method, and make it interact with other agents later on. You can even synchronize the agent with a node that is moved by the physics engine or by actions (or any other way).
I think the agent system is nice to keep around since you can use it later, even if only for special effects. But just as mixing actions and physics can give weird results, mixing goals/behaviors with any other "automated" tool will probably result in erratic behavior.
You can also use the agent system for things other than moving an actual sprite around. For example, you could use an agent as a "target seeker" to simulate reaction time for your enemies: the agent moves around the scene and finds other agents, and when it makes contact with a suitable target, the enemy entity attacks it (random idea).
It's not a "one size fits all" solution, but it's a very nice tool to have.

Simulating physics for voxel constructions (Minecraft, Dwarf Fortress, etc)?

I'm hoping to prototype some very basic physics/statics simulations for "voxel-based" games like Minecraft and Dwarf Fortress, so that the game can detect when a player has constructed a structure that should not be able to stand up on its own. Obviously this is a very fuzzy definition -- whether a structure is impossible depends upon a multitude of material and environmental properties -- but the general idea is to motivate players to build structures that resemble the buildings we see in the real world. I'll describe what I mean in a bit more detail below, but I generally want to know if anyone could suggest either a potential approach to the problem or a resource I could use.
Here are some examples of buildings that could be impossible if the material were not strong enough.
Here are some example situations. My understanding of this subject is not great, but bear with me.
If this structure were to be made of concrete with dimensions of, say, 4m by 200m, it would probably not be able to stand up. Because the center of mass is not over its connection to the ground, I think it would either tip over or crack at the base.
The center of gravity of this arch lies between the columns holding it up, but if it was very big and made of a weak, heavy material, it would crumble under its own weight.
This tower has its center of gravity right over its base, but if it is sufficiently tall then it only takes a bit of force for the wind to topple it over.
Now, I expect that a full-scale real-time simulation of these physics isn't really possible... but there are a lot of ways I could simplify the simulation. For example:
Tests for physics-defying structures could be performed infrequently and at random, so a bad building doesn't crumble the moment it is built, but perhaps a few minutes later.
Minecraft and Dwarf Fortress hardly perform rigid- or soft-body physics. For this reason, any piece of a building that is deemed physically impossible can simply "pop" into rubble instead of spawning a bunch of accurate physics props.
Have you considered taking an existing 3D physics engine and "rounding off" the orientations of objects? In the case of your first object (the L-shaped thing), you could run a simulation of a continuous, non-voxelized object of similar shape behind the scenes and then monitor that object for orientation changes. In a simple case, if the continuous representation of the object hits the ground, the object in the voxelized gameplay world could move its blocks to the ground.
I don't think there is a feasible way to do this. Minecraft has no notion of physical structure, so you will have to look at each block individually to determine whether it should fall (there are other considerations, but this is the minimum). You would therefore need a way to distinguish between ground and "not ground". This is a modeling problem first and foremost, not a programming problem (not even a simulation-design problem). I think this question is out of scope for SO.
For instance, consider the following model, which may give you an indication of the complexities involved:
Each block above height = 0 experiences a "down pull" P, which may be either of the following:
0, if the block is supported by another block below it;
m*g (where m is its mass, i.e. material density * voxel volume) otherwise, if it is hanging free.
F represents some "friction" or "glue" between the vertical faces of adjacent blocks; it counteracts P.
This friction should have a threshold beyond which it "breaks", at which point the block has a net pull downwards.
If m*g < sum F, the block stays where it is; otherwise, the block falls.
F depends on the pair of materials in contact.
F should be strong enough to carry a short run of free-hanging blocks (say n = 2), so that you can form a line of blocks between two towers.
F also transmits weight, which is what can make the net pull on a block larger than its own m*g. For instance, if you have blocks a-b-c hanging in a row with c resting on d, then a pulls on b, so at the b-c contact b is effectively "heavier" than m*g; if that net pull exceeds F, the pair a-b should fall.
You might be able to simulate the above and get interesting results, but you will find it really challenging to handle the case where there are two towers with a line of blocks between them: the towers are coupled together by the line of blocks, and there is no longer a "tip" to it. At this stage you might as well get out your physics books to set up a system of boxes and springs and come up with equations that you might be able to solve numerically, but in a full 3D system you will have a 3D mesh of springs to iterate over to converge on force values for each box and determine which ones move.
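A minimal sketch of the simple per-block rule above (the Block/BlockPos types are assumed, a single flat friction threshold stands in for per-material-pair values, and the coupled-towers case just described is deliberately ignored):

    // Dictionary-backed voxel grid; y = 0 is the ground level.
    struct BlockPos: Hashable { let x, y, z: Int }

    struct Block {
        let material: String
        let mass: Float        // material density * voxel volume
    }

    let g: Float = 9.81

    // Friction/"glue" threshold for two materials touching on a vertical face.
    // Placeholder value; a real model would look this up per material pair.
    func frictionThreshold(_ a: String, _ b: String) -> Float {
        return 50.0
    }

    // A block stays if it rests on the ground or on another block; otherwise it
    // stays only while the side glue from its horizontal neighbours can carry it.
    func shouldFall(_ pos: BlockPos, in world: [BlockPos: Block]) -> Bool {
        guard let block = world[pos] else { return false }
        if pos.y == 0 { return false }                                   // on the ground
        if world[BlockPos(x: pos.x, y: pos.y - 1, z: pos.z)] != nil {    // supported below
            return false
        }
        var totalF: Float = 0
        for (dx, dz) in [(1, 0), (-1, 0), (0, 1), (0, -1)] {
            if let neighbour = world[BlockPos(x: pos.x + dx, y: pos.y, z: pos.z + dz)] {
                totalF += frictionThreshold(block.material, neighbour.material)
            }
        }
        return block.mass * g > totalF                                   // glue broke: it falls
    }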
A professor of mine suggested that I look at this paper.
Additionally, I found the keyword for what I'm looking for: "structural analysis". I bought a textbook and have a long road ahead of me.