NetLogo: create a stopping rule considering multiple conditions

I am modelling a network of agents who interact with each other. Each agent is randomly connected to 8 others in the network. Each agent has an initial float value between 0 and 1. If two agents' values are close enough (determined by the threshold "x", e.g. x = 0.4, v1 = 0.1, v2 = 0.3, so a distance of 0.2 < x), one agent influences the other so that the values move even closer together.
However, if the difference is greater than the other threshold "y", e.g. y = 0.8, one agent influences the other in the opposite direction, so that the values drift further away from each other. If the difference lies between x and y, no influence takes place.
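To make the interaction rule concrete, here is a small sketch in Python (not NetLogo); updating both agents symmetrically and the convergence rate mu are my own assumptions, not part of the model described above:

def interact(v1, v2, x, y, mu=0.5):
    # x = attraction threshold, y = repulsion threshold, mu = assumed step size
    diff = abs(v1 - v2)
    if diff < x:                    # close enough: the values move towards each other
        v1, v2 = v1 + mu * (v2 - v1), v2 + mu * (v1 - v2)
    elif diff > y:                  # too far apart: the values drift further apart
        v1, v2 = v1 - mu * (v2 - v1), v2 - mu * (v1 - v2)
    # otherwise (x <= diff <= y): no influence takes place
    return min(max(v1, 0.0), 1.0), min(max(v2, 0.0), 1.0)   # keep values in [0, 1]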
Now I want to create a stopping rule for my network that detects when an equilibrium has been reached.
I started with the following: if the total number of links (connections between agents) equals the sum of 1.) the number of links whose value difference is below 0.001 (so the two agents have almost the same value) and 2.) the number of links whose difference is greater than 0.9, the iterations shall stop. But this is practically never the case. Is there a stopping rule that does not trigger too late and that works for whatever values x and y take?
Thank you very much!
Here is my initial code:
if count links = (count links with [value_difference < 0.001] + count links with [value_difference > 0.9])
[
  print number-of-clusters-conti
  stop
]

Related

Simulating first come first serve agent distribution

I currently have agents of type patient seizing an exam room when it becomes available. Then, to mimic a first-come-first-serve system where different healthcare practitioners see the patient, I have used a SelectOutputIn and 4 SelectOutputOut blocks corresponding to the 4 different practitioners that can see the patient. Each SelectOutputOut block has the same probability corresponding to its resource type. The problem arises when all practitioners are busy: it seems to send patients only to the Physio path and overload it. Is this because it is physically the last block in the order? How can I make the distribution of patients random even when all practitioners are busy?
That is correct: if all probabilities are zero, AnyLogic seems to pick the last option. To address your issue, we need to add a condition for the case when all practitioners are busy. So you will have 3 possible outcomes, as follows:
Surgeons.idle() + Fellows.idle() + Residents.idle() + Physios.idle() == 0 ? 0.25 : Surgeons.idle() > 0 ? 0.25 : 0
The difference between the first and second 0.25 is that in the first case, all ports will have a 0.25 probability. Whereas in the second case, 3 of 4, or 2 of 4 or even 1 of 4 will have a 0.25 probability depending on how many resources are available. AnyLogic normalizes the probabilities. So if two ports have 0.25 probability, then it's like saying it's a 50/50 chance.
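If it helps to see that logic outside AnyLogic, here is the same guard as a small Python sketch (the idle counts are made up; this is not AnyLogic code):

def port_probability(this_idle, surgeons_idle, fellows_idle, residents_idle, physios_idle):
    # 0.25 for every port when everyone is busy (so all four stay equally likely),
    # 0.25 when this practitioner type has someone idle, 0 otherwise;
    # AnyLogic then normalizes the non-zero values across the ports
    if surgeons_idle + fellows_idle + residents_idle + physios_idle == 0:
        return 0.25
    return 0.25 if this_idle > 0 else 0.0

# e.g. only Surgeons and Physios idle: the four ports get 0.25, 0, 0, 0.25,
# which after normalization is a 50/50 split between those two ports
print(port_probability(2, 2, 0, 0, 1))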
Finally, if the expression seems too long, you can replace the first part with a function to keep the code cleaner and shorter.
The function body could be:
return Surgeons.idle() + Fellows.idle() + Residents.idle() + Physios.idle();
Assuming you named the function function, you can simplify the code in the probability field as follows:
function() == 0 ? 0.25 : Surgeons.idle() > 0 ? 0.25 : 0

SAT based motion planning

A simple motion planning problem can be remodelled as a SAT solving problem. Can anyone explain how this is possible?
In this problem, we have to find a collision-free path from a start to an end location.
The simplest example could look like this.
Let's introduce a 2D grid of N rows and M columns; a moving agent A starts at a node (x, y), and its target T has coordinates (x_i, y_j):
To reach the target, the agent performs several steps - moving left, right, up or down, one move at a time. We don't know how many steps it needs, so we have to bound this number ourselves. Let's say we are searching for a plan that consists of K steps. In this case, we should add N*M*K Boolean variables: N and M represent the coordinates, K the time. If a variable is True, then the agent is currently at node (x, y) at time k.
Next, we add various constraints:
The agent must change its position at each step (this is optional, actually)
If the robot is at position (x, y) at step k, then at step k+1 it must be at one of the four adjacent nodes
The SAT formula is satisfied if and only if the agent is at the target node at the last step K
I won't discuss a detailed implementation of the constraints here; it's not that difficult. A similar approach can be used for multi-agent planning.
This example is just an illustration. People use satplan and STRIPS in real life.
EDIT1
In the case of a collision-free path you should add additional constraints:
If a node contains an obstacle, an agent can't visit it, i.e. the corresponding Boolean variables can't be True at any timestep (they are always False)
If we are talking about a multi-agent system, then two Boolean variables, corresponding to two different agents being at the same node at the same timestep, can't both be True simultaneously:
AND (agent1_x_y_t, agent2_x_y_t) <=> False
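A minimal sketch of how this pairwise constraint could be generated as CNF clauses (the variable-numbering helper and all names here are my own, not part of the answer above; -v denotes the negation of variable v):

def var(agent, x, y, t, N, M, K):
    # map (agent, x, y, t) to a positive integer variable id (DIMACS style)
    return agent * (N * M * (K + 1)) + (x * M + y) * (K + 1) + t + 1

def no_collision_clauses(num_agents, N, M, K):
    # for every node and timestep, at most one agent may occupy it:
    # NOT(agent_i at (x,y,t) AND agent_j at (x,y,t))  ==  (-v_i OR -v_j)
    clauses = []
    for x in range(N):
        for y in range(M):
            for t in range(K + 1):
                for i in range(num_agents):
                    for j in range(i + 1, num_agents):
                        clauses.append([-var(i, x, y, t, N, M, K),
                                        -var(j, x, y, t, N, M, K)])
    return clauses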
EDIT2
How do we build a formula that can then be satisfied? Iterate over all nodes and all timesteps, i.e. over each Boolean variable, and add the constraints for each of them (I'll use Python-like pseudocode):
formula = TRUE()
for x in range(N):
    for y in range(M):
        for t in range(K):
            current_var = all_vars[x][y][t]

            # a node containing an obstacle may never be visited
            if obstacle(x, y):
                formula = AND(formula, NOT(current_var))

            # the agent should change its location at each step (optional)
            prev_step = get_prev_step(x, y, t)   # same node at step t - 1
            change = NOT(AND(current_var, prev_step))
            formula = AND(formula, change)

            # if the agent is at (x, y) at step t, it must be at exactly one
            # of the four adjacent nodes at step t + 1
            adjacent_nodes = get_adj(x, y, t + 1)
            constr = IMPLIES(current_var, only_one_is_true(adjacent_nodes))
            formula = AND(formula, constr)

satisfy(formula)
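For completeness, here is a small runnable version of the single-agent encoding sketched above, using the python-sat (pysat) package. The grid size, the obstacles, and the choice to let the agent wait in place are my own assumptions, not part of the answer:

from itertools import combinations
from pysat.solvers import Glucose3

N, M, K = 4, 4, 8                        # grid size and plan length (assumed)
start, goal = (0, 0), (3, 3)
obstacles = {(1, 1), (2, 1), (1, 2)}

def var(x, y, t):
    # map (x, y, t) to a positive DIMACS variable id
    return (x * M + y) * (K + 1) + t + 1

clauses = [[var(*start, 0)], [var(*goal, K)]]    # start and goal positions

# exactly one position per timestep, and never inside an obstacle
for t in range(K + 1):
    cells = [var(x, y, t) for x in range(N) for y in range(M)]
    clauses.append(cells)                                          # at least one cell
    clauses.extend([-a, -b] for a, b in combinations(cells, 2))    # at most one cell
    for ox, oy in obstacles:
        clauses.append([-var(ox, oy, t)])

# transition: if at (x, y) at time t, then at t + 1 stay put or move to a neighbour
for t in range(K):
    for x in range(N):
        for y in range(M):
            succ = [(x, y), (x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)]
            succ = [(a, b) for a, b in succ if 0 <= a < N and 0 <= b < M]
            clauses.append([-var(x, y, t)] + [var(a, b, t + 1) for a, b in succ])

with Glucose3(bootstrap_with=clauses) as solver:
    if solver.solve():
        model = set(solver.get_model())
        path = [(x, y) for t in range(K + 1)
                for x in range(N) for y in range(M) if var(x, y, t) in model]
        print(path)
    else:
        print("no plan of length", K)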

explanation of roulette wheel selection example

I have a Lisp program on roulette wheel selection, and I am trying to understand the theory behind it, but I cannot understand anything.
How do I calculate the fitness of the selected string?
For example, if I have the string 01101, how did they get the fitness value 169?
Is it that the binary coding of 01101 evaluates to 13, so I square the value and get the answer 169?
That sounds lame but somehow I am getting the right answers by doing that.
The fitness function you have is therefore F = x^2.
The roulette wheel calculates the proportion of the whole that each individual (string) takes, according to its fitness; this is then used to randomly select the set of strings for the next generation.
I suggest you read this a few times.
The "fitness function" for a given problem is chosen (often) arbitrarily keeping in mind that as the "fitness" metric rises, the solution should approach optimality. For example for a problem in which the objective is to minimize a positive value, the natural choice for F(x) would be 1/x.
For the problem at hand, it seems that the fitness function has been given as F(x) = val(x)*val(x) though one cannot be certain from just a single value pair of (x,F(x)).
Roulette-wheel selection is just a commonly employed method of fitness-based pseudo-random selection. This is easy to understand if you've ever played roulette or watched 'Wheel of Fortune'.
Let us consider the simplest case, where F(x) = val(x).
Suppose we have four values, 1,2,3 and 4.
This implies that these "individuals" have fitnesses 1,2,3 and 4 respectively. Now the probability of selection of an individual 'x1' is calculated as F(x1)/(sum of all F(x)). That is to say here, since the sum of the fitnesses would be 10, the probabilities of selection would be, respectively, 0.1,0.2,0.3 and 0.4.
Now if we consider these probabilities from a cumulative perspective, the values of x would be mapped to the following ranges of probability:
1 ---> (0.0, 0.1]
2 ---> (0.1, (0.1 + 0.2)] ---> (0.1, 0.3]
3 ---> (0.3, (0.1 + 0.2 + 0.3)] ---> (0.3, 0.6]
4 ---> (0.6, (0.1 + 0.2 + 0.3 + 0.4)] ---> (0.6, 1.0]
That is, an instance R of a uniformly distributed random variable generated in the normalised interval (0, 1] is four times as likely to fall in the interval corresponding to 4 as in the one corresponding to 1.
To put it another way, suppose you were to spin a roulette-wheel-type structure with each x assigned a sector whose area is in proportion to its value of F(x); then the probability that the indicator stops in any given sector is directly proportional to the value of F(x) for that x.
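Putting the example above into code, here is a minimal roulette-wheel selection sketch in Python (the fitnesses 1, 2, 3 and 4 are the ones from the example; the function names are my own):

import random

def roulette_select(population, fitness):
    # select one individual with probability proportional to its fitness
    fitnesses = [fitness(ind) for ind in population]
    r = random.uniform(0, sum(fitnesses))    # spin the wheel
    cumulative = 0.0
    for ind, f in zip(population, fitnesses):
        cumulative += f
        if r <= cumulative:
            return ind
    return population[-1]                    # guard against floating-point round-off

# individuals 1, 2, 3, 4 with F(x) = x: selection probabilities 0.1, 0.2, 0.3, 0.4
population = [1, 2, 3, 4]
picks = [roulette_select(population, lambda x: x) for _ in range(10000)]
print([round(picks.count(i) / len(picks), 2) for i in population])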

Clusters merge threshold

I'm working with mean shift; this procedure calculates where every point in the data set converges. I can also calculate the Euclidean distance between the coordinates where 2 distinct points converged, but I have to give a threshold to say: if (distance < threshold) then these points belong to the same cluster and I can merge them.
How can I find the correct value to use as the threshold?
(I can use any value, and the result depends on it, but I need the optimal value.)
I've implemented mean-shift clustering several times and have run into this same issue. Depending on how many iterations you're willing to shift each point for, or what your termination criterion is, there is usually some post-processing step where you have to group the shifted points into clusters. Points that theoretically shift to the same mode need not practically end up directly on top of each other.
I think the best and most general way to do this is to use a threshold based on the kernel bandwidth, as suggested in the comments. In the past, my code for this post-processing has usually looked something like this:
threshold = 0.5 * kernel_bandwidth
clusters = []
for p in shifted_points:
    cluster = findExistingClusterWithinThresholdOfPoint(p, clusters, threshold)
    if cluster is None:
        # create a new cluster with p as its first point
        new_cluster = [p]
        clusters.append(new_cluster)
    else:
        # add p to the existing cluster
        cluster.append(p)
For the findExistingClusterWithinThresholdOfPoint function I usually use the minimum distance of p to each currently defined cluster.
This seems to work pretty well. Hope this helps.
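A possible implementation of findExistingClusterWithinThresholdOfPoint along those lines (this is my own sketch, not the original code; it returns the cluster whose nearest member is closest to p, provided that distance is below the threshold):

import math

def findExistingClusterWithinThresholdOfPoint(p, clusters, threshold):
    best_cluster, best_dist = None, threshold
    for cluster in clusters:
        # minimum distance of p to any point already assigned to this cluster
        d = min(math.dist(p, q) for q in cluster)
        if d < best_dist:
            best_cluster, best_dist = cluster, d
    return best_cluster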

Dijkstra's algorithm with negative weights

Can we use Dijkstra's algorithm with negative weights?
STOP! Before you think "lol nub you can just endlessly hop between two points and get an infinitely cheap path", I'm more thinking of one-way paths.
An application for this would be a mountainous terrain with points on it. Obviously going from high to low doesn't take energy; in fact, it generates energy (thus a negative path weight)! But going back again just wouldn't work that way, unless you are Chuck Norris.
I was thinking of incrementing the weight of all points until they are non-negative, but I'm not sure whether that will work.
As long as the graph does not contain a negative cycle (a directed cycle whose edge weights have a negative sum), it will have a shortest path between any two points, but Dijkstra's algorithm is not designed to find them. The best-known algorithm for finding single-source shortest paths in a directed graph with negative edge weights is the Bellman-Ford algorithm. This comes at a cost, however: Bellman-Ford requires O(|V|·|E|) time, while Dijkstra's requires O(|E| + |V|log|V|) time, which is asymptotically faster for both sparse graphs (where E is O(|V|)) and dense graphs (where E is O(|V|^2)).
In your example of a mountainous terrain (necessarily a directed graph, since going up and down an incline have different weights) there is no possibility of a negative cycle, since this would imply leaving a point and then returning to it with a net energy gain - which could be used to create a perpetual motion machine.
Increasing all the weights by a constant value so that they are non-negative will not work. To see this, consider the graph where there are two paths from A to B, one traversing a single edge of length 2, and one traversing edges of length 1, 1, and -2. The second path is shorter, but if you increase all edge weights by 2, the first path now has length 4, and the second path has length 6, reversing the shortest paths. This tactic will only work if all possible paths between the two points use the same number of edges.
If you read the proof of optimality, one of the assumptions made is that all the weights are non-negative. So, no. As Bart recommends, use Bellman-Ford if there are no negative cycles in your graph.
You have to understand that a negative edge isn't just a negative number --- it implies a reduction in the cost of the path. If you add a negative edge to your path, you have reduced the cost of the path --- if you increment the weights so that this edge is now non-negative, it does not have that reducing property anymore and thus this is a different graph.
I encourage you to read the proof of optimality --- there you will see that the assumption that adding an edge to an existing path can only increase (or not affect) the cost of the path is critical.
You can use Dijkstra's on a negative-weighted graph, but you first have to find the proper offset for each vertex. That is essentially what Johnson's algorithm does. But that would be overkill, since Johnson's uses Bellman-Ford to find the weight offset(s). Johnson's is designed to find the shortest paths between all pairs of vertices.
http://en.wikipedia.org/wiki/Johnson%27s_algorithm
There is actually an algorithm which uses Dijkstra's algorithm in a negative path environment; it does so by removing all the negative edges and rebalancing the graph first. This algorithm is called 'Johnson's Algorithm'.
The way it works is by adding a new node (let's say Q) which has 0-cost edges to every other node in the graph. It then runs Bellman-Ford on the graph from point Q, getting a cost for each node with respect to Q, which we will call q[x]; this will either be 0 or a negative number (as it used one of the negative paths).
E.g. a -> -3 -> b, so if we add a node Q which has 0-cost edges to all of these nodes, then q[a] = 0, q[b] = -3.
We then rebalance the edges using the formula: weight + q[source] - q[destination], so the new weight of a->b is -3 + 0 - (-3) = 0. We do this for all other edges in the graph, then remove Q and its outgoing edges, and voila! We now have a rebalanced graph with no negative edges on which we can run Dijkstra's!
The running time is O(nm) [Bellman-Ford] + n x O(m log n) [n runs of Dijkstra's] + O(m) [reweighting] = O(nm log n) time.
More info: http://joonki-jeong.blogspot.co.uk/2013/01/johnsons-algorithm.html
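To make the reweighting step concrete, here is a compact Python sketch of Johnson's algorithm (the graph representation, the heapq-based Dijkstra and all names are my own choices, not from the linked post):

import heapq

def bellman_ford(nodes, edges, source):
    # shortest distances from source; raises if a negative cycle exists
    dist = {v: float("inf") for v in nodes}
    dist[source] = 0
    for _ in range(len(nodes) - 1):
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    for u, v, w in edges:
        if dist[u] + w < dist[v]:
            raise ValueError("negative cycle")
    return dist

def dijkstra(adj, source):
    # standard Dijkstra with a binary heap; assumes non-negative weights
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

def johnson(nodes, edges):
    # 1. add a virtual node Q with a 0-weight edge to every node, run Bellman-Ford
    q = object()
    h = bellman_ford(list(nodes) + [q], edges + [(q, v, 0) for v in nodes], q)
    # 2. reweight every edge: w'(u, v) = w(u, v) + h[u] - h[v]  (never negative)
    adj = {v: [] for v in nodes}
    for u, v, w in edges:
        adj[u].append((v, w + h[u] - h[v]))
    # 3. run Dijkstra from every node and undo the reweighting
    result = {}
    for u in nodes:
        for v, d in dijkstra(adj, u).items():
            result[(u, v)] = d - h[u] + h[v]
    return result

# the a -> b example from above: edge weight -3, so q[a] = 0 and q[b] = -3
print(johnson(["a", "b"], [("a", "b", -3)]))   # {('a', 'a'): 0, ('a', 'b'): -3, ('b', 'b'): 0}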
Actually, I think it will work to modify the edge weights, not with an offset but with a factor. Assume that instead of measuring the distance, you are measuring the time required to get from point A to B.
weight = time = distance / velocity
You could even adapt the velocity depending on the slope to use the physical one, if your task involves real mountains and a car/bike.
Yes, you could do that by adding one step at the end, i.e.
If v ∈ Q, Then Decrease-Key(Q, v, v.d)
Else Insert(Q, v) and S = S \ {v}.

An expression tree is a binary tree in which all leaves are operands (constants or variables) and the non-leaf nodes are binary operators (+, -, /, *, ^). Implement this tree to model polynomials with the basic methods of the tree, including the following:
A function that calculates the first derivative of a polynomial.
Evaluate a polynomial for a given value of x.
Use the following rules for the derivative:
Derivative(constant) = 0
Derivative(x) = 1
Derivative(P(x) + Q(y)) = Derivative(P(x)) + Derivative(Q(y))
Derivative(P(x) - Q(y)) = Derivative(P(x)) - Derivative(Q(y))
Derivative(P(x) * Q(y)) = P(x)*Derivative(Q(y)) + Q(y)*Derivative(P(x))
Derivative(P(x) / Q(y)) = (Q(y)*Derivative(P(x)) - P(x)*Derivative(Q(y))) / Q(y)^2
Derivative(P(x) ^ Q(y)) = Q(y) * (P(x) ^ (Q(y) - 1)) * Derivative(P(x))
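A minimal sketch of such a tree in Python (the node layout and method names are my own; only the constant, x, +, - and * rules are implemented, the rest is left out):

import operator

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul,
       "/": operator.truediv, "^": operator.pow}

class Node:
    # a leaf holds a constant or "x"; an inner node holds an operator and two children
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

    def evaluate(self, x):
        if self.left is None:                    # leaf
            return x if self.value == "x" else self.value
        return OPS[self.value](self.left.evaluate(x), self.right.evaluate(x))

    def derivative(self):
        if self.left is None:                    # Derivative(constant) = 0, Derivative(x) = 1
            return Node(1 if self.value == "x" else 0)
        if self.value in "+-":                   # sum and difference rules
            return Node(self.value, self.left.derivative(), self.right.derivative())
        if self.value == "*":                    # product rule
            return Node("+",
                        Node("*", self.left, self.right.derivative()),
                        Node("*", self.right, self.left.derivative()))
        raise NotImplementedError("quotient and power rules are left as an exercise")

# p(x) = x * x + 3
p = Node("+", Node("*", Node("x"), Node("x")), Node(3))
print(p.evaluate(2))                  # 7
print(p.derivative().evaluate(2))     # 4, since p'(x) = 2x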