I have two patches that are located a short distance from each other.
I put some chemical on each one, and then I run the diffuse command. What happens to the patches between the two sources? Does the chemical from the first patch add to the chemical from the second?
Yes, the contributions add: diffusion is linear, so a patch between the two sources simply accumulates chemical from both. If there is no degradation, the total is conserved: the sum of the neighboring patches' chemical and the original patch's chemical will equal the original amount.
How much each neighboring patch receives depends on your diffusion rate.
Try programming it to convince yourself.
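Here is a minimal Python sketch (standing in for NetLogo) of how diffuse behaves, assuming the usual rule that each patch gives away a fraction equal to the diffusion rate of its chemical, split equally among its 8 neighbors. Because the update is linear, the two sources' contributions simply add on the patches in between:

```python
import numpy as np

def diffuse(grid, rate):
    """One NetLogo-style diffusion step on a wrapping 2-D grid:
    each cell gives away `rate` of its chemical, split equally
    among its 8 Moore neighbors."""
    shared = grid * rate / 8.0
    new = grid * (1.0 - rate)
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            if dx == 0 and dy == 0:
                continue
            new += np.roll(np.roll(shared, dx, axis=0), dy, axis=1)
    return new

grid = np.zeros((11, 11))
grid[5, 3] = 100.0   # first source
grid[5, 7] = 100.0   # second source

for _ in range(20):
    grid = diffuse(grid, 0.5)

# Total chemical is conserved (no degradation), and the patch
# midway between the sources has accumulated chemical from both;
# by linearity its amount equals the sum of what each source
# alone would have contributed.
print(round(grid.sum(), 6))   # 200.0
print(grid[5, 5] > 0)         # True
```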
Related
I am computing a standard deviation at each tick. I'd like to get the mean of this through time with a moving average. Is there a smart way to do this built-in or should I create a list and a function that computes the MA each tick?
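As far as I know there is no built-in moving average in NetLogo, so the list-plus-function approach you describe is the usual one. A minimal Python sketch of the idea (the class and names are my own; a deque drops old values automatically once the window is full):

```python
from collections import deque

class MovingAverage:
    """Keep the last `window` values and report their mean:
    the 'list + function computed each tick' approach."""
    def __init__(self, window):
        self.values = deque(maxlen=window)

    def update(self, x):
        self.values.append(x)
        return sum(self.values) / len(self.values)

ma = MovingAverage(window=3)
print(ma.update(2.0))  # 2.0
print(ma.update(4.0))  # 3.0
print(ma.update(6.0))  # 4.0
print(ma.update(8.0))  # 6.0  (window now holds 4, 6, 8)
```

If instead you want the mean over all ticks so far, a running sum and a tick counter are enough; no list is needed.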
I have a 9 by 9 cubic lattice. In each site of the lattice there are N molecules, where N is a large random number that differs from site to site. I want each molecule to take a random-walk step at each time step.
Could anyone give an answer to this question?
I don't want to track each individual particle; I only want to track, for each site, the number of particles there.
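One standard way to avoid tracking individuals is to draw, for each site, a multinomial split of its molecules over the possible moves: since each molecule moves independently and uniformly, the number leaving in each direction is exactly multinomial. A Python sketch, assuming a 9 x 9 x 9 lattice, moves to the 6 face neighbors, and periodic boundaries (all assumptions on my part):

```python
import numpy as np

rng = np.random.default_rng(42)

def step(counts, rng):
    """One random-walk step tracking only per-site counts.
    counts[i, j, k] = number of molecules at that site."""
    moves = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
             (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    new = np.zeros_like(counts)
    for idx in np.ndindex(counts.shape):
        n = counts[idx]
        if n == 0:
            continue
        # how many of the n molecules go in each of the 6 directions
        split = rng.multinomial(n, [1 / 6] * 6)
        for m, (dx, dy, dz) in zip(split, moves):
            tx = (idx[0] + dx) % counts.shape[0]
            ty = (idx[1] + dy) % counts.shape[1]
            tz = (idx[2] + dz) % counts.shape[2]
            new[tx, ty, tz] += m
    return new

counts = rng.integers(100, 1000, size=(9, 9, 9))
total = counts.sum()
counts = step(counts, rng)
print(counts.sum() == total)  # True: molecules are conserved
```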
I need to create a very large grid of patches to hold GIS information for a very large network (such as a city-wide network). My question is: how do I get NetLogo to model such a world? When I set max-pxcor and max-pycor to large numbers, it stops working. I need a world of size 50000 * 50000, for example.
Thanks for your help.
See http://ccl.northwestern.edu/netlogo/docs/faq.html#howbig, which says in part: "The NetLogo engine has no fixed limits on size..."
It's highly unlikely that you'll be able to fit a 50,000 x 50,000 world in your computer, though: that's 2.5 billion patches. Memory usage in NetLogo is proportional to the number of agents, and patches are agents too.
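A quick back-of-envelope calculation shows why (the bytes-per-patch figure is a rough assumption of mine, not an official NetLogo number; real usage depends on how many patch variables you declare):

```python
# Rough memory estimate for a 50,000 x 50,000 NetLogo world.
patches = 50_000 * 50_000      # 2.5 billion patches
bytes_per_patch = 100          # assumed ballpark, not an official figure
gib = patches * bytes_per_patch / 2**30

print(patches)       # 2500000000
print(round(gib))    # 233  (hundreds of GiB, far beyond a typical machine)
```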
You might take Stephen Guerin's advice at http://netlogo-users.18673.x6.nabble.com/Re-Rumors-of-Relogo-tp4869241p4869247.html on how to avoid needing an enormous patch grid when modeling transportation networks.
In my engineering class we are programming a "non-trivial" predator/prey pursuit problem.
Here's the gist of the situation:
There is a prey that is trying to escape a predator. Each can be modeled as a particle that can be animated in MATLAB (we have to use this coding language).
The prey:
can maneuver (turn) more easily than the predator can
The predator:
can move faster than the prey
I have to create code for both the predator and the prey, which will be used in a class competition.
This is basically what the final product will look like:
http://www.brown.edu/Departments/Engineering/Courses/En4/Projects/pred_prey.gif
The goal is to catch the other team's prey in the shortest amount of time, and for my prey to become un-catchable for the other team's predator (or at least escape for a long period of time).
Here are the specific design constraints:
Predator and prey can only move in the x-y plane
Simulations will run for a time period of 250 seconds.
Both predator and prey will be subjected to three forces: (a) the propulsive force; (b) a viscous drag force; and (c) a random time-varying force (all equations given)
The propulsive forces will be determined by functions provided by the two competing groups
The predator is assumed to catch the prey if the distance between predator and prey drops below 1m.
You may not use the rand() function in computing your predator/prey forces: the only random forces
should be those generated by the script provided. (EOM with random forces are impossible for the
ODE solver to integrate, and it ends up in an infinite loop.)
For the competition, we will provide the MATLAB code that will compute and animate the trajectories of
the competitors, and will determine the winner of each contest. The test code will be working in SI units.
I am looking for any resources that may be able to help me with some strategy. I have looked at basic pursuit curves, but I would love to look at some examples where the prey is not moving in a straight line. Any other coding advice or strategies would be greatly appreciated!
It's a good idea to start with the fundamentals in any field, and you can't go past the work of Isaacs (Differential Games: A Mathematical Theory with Applications to Warfare and Pursuit, Control and Optimization). It will almost certainly end up as a reference in any academic write-up of your project.
Steven LaValle's excellent book Planning Algorithms has a number of aspects that may be of interest, including a section on visibility-based pursuit-evasion.
As for many mathematical topics, Wolfram Mathworld has some good diagrams and links that might get you thinking in the right direction (eg Pursuit Curves).
If you want to have a look at a curious problem in the area that is well understood, try the homicidal chauffeur problem; this will at least give you some grounds for comparing the complexity and efficiency of different techniques. In particular, it is probably a good way to get a feel for level-set methods (the paper "Homicidal Chauffeur Game: Computation of Level Sets of the Value Function" by Patsko and Turova appears to have a number of images that might be helpful).
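As a concrete starting point, here is a toy pure-pursuit simulation in Python (your contest code must be MATLAB, but the logic translates directly). It ignores the assignment's propulsive/drag/random forces and turning limits and only illustrates the simplest strategy, with all names and numbers my own:

```python
import math

def pure_pursuit(pred, prey, pred_speed, prey_speed,
                 catch_dist=1.0, dt=0.1, t_max=250.0):
    """Toy pursuit: the predator always heads straight at the
    prey's current position; the prey (here) flees in a straight
    line along +x. Returns the capture time, or None if the prey
    survives the full t_max seconds."""
    t = 0.0
    while t < t_max:
        dx = prey[0] - pred[0]
        dy = prey[1] - pred[1]
        dist = math.hypot(dx, dy)
        if dist < catch_dist:
            return t
        # unit vector toward the prey, scaled by predator speed
        pred[0] += pred_speed * dt * dx / dist
        pred[1] += pred_speed * dt * dy / dist
        prey[0] += prey_speed * dt
        t += dt
    return None

t_catch = pure_pursuit(pred=[0.0, 0.0], prey=[10.0, 10.0],
                       pred_speed=2.0, prey_speed=1.0)
print(t_catch is not None)  # True: the faster predator closes the gap
```

Against a straight-line prey, pure pursuit always wins for a faster predator; the interesting part of the contest is that the prey will turn, which is exactly where the pursuit-curve and differential-game literature above comes in.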
I have a dataset consisting of a large collection of points in three-dimensional Euclidean space. In this collection of points, I am trying to find the point that is nearest to the region with the highest density of points.
So my problem consists of two steps:
1: Determine where the density of the distribution of points is highest
2: Determine which point is nearest to the point found in 1
Point 2 I can manage, but I'm not sure how to solve point 1. I know there are a lot of functions for density estimation in MATLAB, but I'm not sure which one would be the most suitable or straightforward to use.
Does anyone know?
My command of statistics is a little rusty, but as far as I can tell, this type of problem calls for multivariate analysis. Someone suggested I use multivariate kernel density estimation, but I'm not really sure that's the best solution.
Density is a measure of mass per unit volume. On the assumption that your points all have the same mass, you are, I suppose, trying to measure the number of points per unit volume. So one approach is to divide your subset of Euclidean space into lots of little unit volumes (let's call them voxels, like everyone does) and count how many points there are in each one. The voxel with the most points is where the density of points is at its highest. This is, of course, numerical integration of a sort. If your points were distributed according to some analytic function (and I guess they are not), you could solve the problem with pencil and paper.
You might make this approach as sophisticated as you like, perhaps initially dividing your space into 2 x 2 x 2 voxels, then choosing the voxel with most points and sub-dividing that in turn until your criteria are satisfied.
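The voxel-counting idea can be sketched in a few lines; I'll use Python here rather than MATLAB, and the bin count and test data are my own choices, so treat this as an illustration of the approach rather than a finished solution:

```python
import numpy as np

def densest_point(points, bins=8):
    """Histogram the points into a bins^3 voxel grid, find the
    fullest voxel, and return the input point nearest to that
    voxel's center (steps 1 and 2 of the question)."""
    hist, edges = np.histogramdd(points, bins=bins)
    idx = np.unravel_index(np.argmax(hist), hist.shape)  # fullest voxel
    center = np.array([(e[i] + e[i + 1]) / 2.0
                       for e, i in zip(edges, idx)])
    dists = np.linalg.norm(points - center, axis=1)
    return points[np.argmin(dists)]

rng = np.random.default_rng(0)
# a diffuse uniform background plus a tight cluster near (5, 5, 5)
background = rng.uniform(0, 10, size=(200, 3))
cluster = rng.normal(5.0, 0.3, size=(100, 3))
points = np.vstack([background, cluster])

# the returned point should sit inside the cluster
print(np.allclose(densest_point(points), [5, 5, 5], atol=1.5))  # True
```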
I hope this will get you started on your point 1; you seem to be OK with point 2 so I'll stop now.
EDIT
It looks as if triplequad might be what you are looking for.