How to build a suitability map in NetLogo

I'm attaching an image of something I want to replicate in NetLogo: a suitability analysis map. Is it possible to load several .shp layers, each one representing a land attribute (use, value, slope, natural area, etc.), so they can be taken into account when an agent occupies a patch? For example, urban occupation is more suitable where land value is low, slopes are gentle, and natural areas are nearby.
Thanks in advance for any help; it will be greatly appreciated.
Javier
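For reference, the rule described in the question (low land value, low slope, proximity to natural areas) boils down to a weighted overlay of the layers. A minimal sketch of just that arithmetic, in plain Python/NumPy rather than NetLogo, with made-up layer arrays and weights:

    import numpy as np

    # Hypothetical rasterized layers, one value per patch, all normalized to 0..1.
    land_value = np.random.rand(50, 50)    # higher = more expensive land
    slope = np.random.rand(50, 50)         # higher = steeper
    dist_natural = np.random.rand(50, 50)  # higher = farther from natural areas

    # Urban suitability: low land value, low slope, near natural areas.
    # The weights are arbitrary placeholders.
    weights = {"value": 0.4, "slope": 0.3, "natural": 0.3}
    suitability = (weights["value"] * (1 - land_value)
                   + weights["slope"] * (1 - slope)
                   + weights["natural"] * (1 - dist_natural))

    # Each cell now holds a 0..1 score an agent could consult before occupying a patch.
    print(suitability.shape, suitability.min(), suitability.max())

Once each layer has been read into NetLogo (the gis extension can load shapefiles), the same score could be stored as a patch variable, but that part is outside this sketch.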

Related

How to design a multi-level highway intersection

So I am trying to design a multi-level highway system with the Road Traffic Library in AnyLogic. The highways have multiple levels, and I am having trouble depicting the difference in the levels of the roads in my model.
I looked at the help content related to the RTL, specifically the Library Reference Guides and Tutorials, but they don't mention adding grades/inclination to a road to get a multi-level system.
I apologize in advance if I missed documentation related to this, but I would like to know how to do this in AnyLogic.
Also, there is a Highway Junction model among the sample models that come with the installation; it implements an increase in the z-value of the roads, but I am not sure how to do that when designing a road.
When you draw a Road object in the RTL, the Points section of the drawn road's properties panel lets you set the Z value of each point of the road. So you should use more than two points to draw your road, even if it is a straight one. This way you can easily set the Z value of different points of your road and build up the needed levels, grades, or slope.
Hope this helps.

PostGIS: Dissolve multi-polygons to enable joined classification

I have a number of spatial databases through which I have identified particular types of land cover. The topographic layer (defining land cover) is made up of multi-polygons, and I am using a separate point layer to join a classification type to it. However, the areas portrayed in the topographic layer may be formed from several individual polygons, without a total perimeter outline to identify the area. Therefore, the classification point may sit in just one individual polygon of the overall area (see below).
Example: the image is of a park, which consists of over 20 individual polygons. The point that classifies the area as a park sits within one of the polygons and cannot be attached to the entire area.
I would like to be able to apply the point-based classification to the whole park area. I have tried to use the ST_Union function to do so, but have been unable to. Does anyone know of a way to dissolve the area into a single shape/remove the pathways? This is a small example of a much larger data set, where there is little scope for manually defining areas and buffers in order to classify the data, so I wondered whether there was a practicable solution.
If anyone can help, it would be hugely appreciated....
SELECT ST_Collect(t.the_geom) AS singlegeom FROM your_table t;
(Replace your_table with the name of your topography table.) See http://postgis.net/docs/ST_Collect.html
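If it turns out to be easier to do this outside the database, the same dissolve-then-join idea can be sketched in Python with GeoPandas. This is only a rough sketch: the file names are placeholders, it assumes a reasonably recent GeoPandas for the explode/sjoin keyword arguments, and if the park polygons are separated by pathway polygons rather than touching, a small buffer or a filter would still be needed first.

    import geopandas as gpd

    # Placeholder file names -- substitute your own layers.
    polys = gpd.read_file("topography.shp")        # multi-polygon land-cover layer
    points = gpd.read_file("classification.shp")   # point layer carrying the class labels

    # Merge all touching/overlapping polygons, then split the result back into
    # one row per contiguous area (so the whole park becomes a single polygon).
    merged = polys.unary_union
    dissolved = gpd.GeoDataFrame(geometry=[merged], crs=polys.crs).explode(index_parts=False)

    # Each dissolved area picks up the classification of the point(s) it contains.
    classified = gpd.sjoin(dissolved, points, how="left", predicate="contains")
    print(classified.head())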

Static image calibration

I am capturing static images of particulate biological materials on the millimeter scale and then processing them in MATLAB. My routine is working well so far, but I am using a rudimentary calibration procedure where I include some coins in the image, automatically find them based on their size and circularity, count their pixels, and then remove them. This lets me generate a calibration line relating area in mm^2 to area in pixels, which I then use to convert the pixel area of the particles into physical units of square millimeters.
My question is: is there a better calibrant object that I can use, such as a stage graticule or "phantom" as some people seem to call them? Do you know where I could purchase such a thing? I can't even seem to find a possible vendor. Is there another rigorous way to approach this problem without using calibrant objects in the field of view?
Thanks in advance.
Clay
Image calibration is always done using features of known size or distance.
You could calculate the scale based on nominal specifications, but your imaging equipment will always have some production tolerances, and your object distance is only known to a certain accuracy...
So it's always safer and simpler to actually calibrate your scale.
As a calibrant you can use anything that meets your requirements. If you know its size well enough and you are able to extract its dimensions in pixels properly, you can use it.
I don't know your requirements or your budget, but if you want something very precise and fancy you can use glass masks.
There are temperature-stable glass slides that are coated with chrome, for example. Many companies produce such masks custom-made (IMT AG, BVM maskshop, ...). Also, most optics lab equipment suppliers have such things in stock: Edmund Optics, Newport, ...
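Whatever calibrant you choose, the conversion itself is just a ratio of known physical size to measured pixel size. A minimal sketch of that arithmetic (in Python rather than MATLAB; all numbers are made-up placeholders):

    import numpy as np

    # Hypothetical measurements of a calibrant of known physical size.
    known_area_mm2 = 285.0                      # true area of the reference object
    measured_area_px = [11870, 11920, 11850]    # its pixel area in a few images

    # mm^2 per pixel, averaged over the measurements.
    mm2_per_px = known_area_mm2 / np.mean(measured_area_px)

    # Convert a particle's pixel area into physical units.
    particle_area_px = 2430
    particle_area_mm2 = particle_area_px * mm2_per_px
    print(f"{particle_area_mm2:.3f} mm^2")

With several calibrants of different sizes you can fit a line instead of a single ratio and check that its intercept is close to zero, which is essentially the calibration line described in the question.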

Using a neural network with a genetic algorithm for Pong or Super Mario

I'm trying to use a GA to train an ANN whose job is to move a bar vertically so that it makes a ball bounce without hitting the wall behind the bar; in other words, a single-bar Pong.
I'm going to ask directly, because I think I know what the problem is.
The game window is 200x200 pixels, so I created 40,000 input neurons.
The obvious doubt is: can a GA handle chromosomes of 40000 (input) * 10 (hidden) * 2 elements (genes)?
Since I think the answer is no (I implemented this solution and it doesn't seem to work), the solution seems simple: I feed the NN with only 4 parameters, the x, y coordinates of the bar and the ball. Nailed it.
Nice solution, but the problem is: how can I apply such a solution in a game like Super Mario, where the number of enemies on the screen is not fixed? Surely I cannot create an NN with a dynamic number of inputs.
I hope you can help me.
You have to use features to represent your state. For example, you can divide the screen into tiles and assign each tile a value according to a function that takes the enemies into account (e.g., a boolean indicating whether an enemy is in the tile, or the distance to the closest enemy); a rough sketch is given at the end of this answer.
You can still use pixels, but you might need to preprocess them in order to reduce their size (e.g., use a recurrent NN).
By the way, an NN might not be able to handle 200x200 pixels, but one was able to learn to play Atari games using a state representation of preprocessed pixels of size 84x84x4 (see this paper).
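To make the tile-feature idea concrete, here is a rough sketch in plain Python/NumPy (the grid size, screen size, and enemy positions are arbitrary placeholders):

    import numpy as np

    SCREEN_W, SCREEN_H = 200, 200
    GRID = 10    # 10x10 tiles -> a fixed-length feature vector of 100

    def tile_features(enemies, grid=GRID, w=SCREEN_W, h=SCREEN_H):
        """Encode a variable number of enemy positions as a fixed-size occupancy grid."""
        feats = np.zeros((grid, grid))
        for (x, y) in enemies:
            col = min(int(x / w * grid), grid - 1)
            row = min(int(y / h * grid), grid - 1)
            feats[row, col] = 1.0    # or a distance-based value instead of a boolean
        return feats.ravel()         # same length no matter how many enemies exist

    # Works for 2 enemies or 20 -- the NN input size stays at grid*grid.
    print(tile_features([(35, 80), (150, 190)]).shape)   # (100,)
    print(tile_features([(10, 10)] * 20).shape)          # (100,)

The point is that the feature vector has a fixed length of grid*grid regardless of how many enemies are on screen, so the NN input size never changes.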

Bald detection using image processing

I was wondering if someone could give me a guideline for detecting whether a person in a picture is bald or not, or even better, how much hair s/he has.
So far I have tried to detect the face and the eye positions. From that information, I roughly estimate the forehead and bald area by cutting out the area above the eyes, as high as some portion of the face.
Then I extract HOG features and train the system on bald and not-bald images using an SVM.
Now when I look at the test results, I see some pictures classified as bald, but some of them actually have blonde hair or a high forehead, so the hair is not visible after the cutting process. I'm using MATLAB for these operations.
So I know the method seems a bit naive, but can you suggest a way of finding the bald area or extracting the hair, if it exists? What method would be most appropriate for this kind of problem?
The question is very general, so the answer is general unless further info is provided:
1. Use computer vision (e.g. the MATLAB Computer Vision toolbox) to detect the face/head.
2. Human faces have fairly regular proportions; using these, you can get the area of the head where hair or baldness would be (it seems you already have this).
3. Calculate the (probabilistic color-space model) range in which the person's skin lies (most people fall in a similar skin color-space range).
4. Calculate the percentage of skin versus other colors (meaning hair) in that area (a sketch of this step is given at the end of this answer).
You have it!
To estimate a skin color model, check the following papers:
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.56.8637&rep=rep1&type=pdf
http://infoscience.epfl.ch/record/135966
http://www.eurasip.org/Proceedings/Eusipco/Eusipco2010/Contents/papers/1569293757.pdf
If an area does not fit the skin model well, it can be taken as non-skin (meaning hair, assuming no hats etc. are present in the samples).
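A rough sketch of steps 3 and 4 (Python with OpenCV rather than MATLAB; the input file, the YCrCb thresholds, and the size of the region above the face are assumptions, not tuned values):

    import cv2
    import numpy as np

    img = cv2.imread("person.jpg")    # hypothetical input image

    # Detect the face with a stock Haar cascade (the sketch assumes one face is found).
    cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY), 1.1, 5)
    x, y, w, h = faces[0]

    # Take a band above the face as the "hair or bald" region
    # (here: half a face-height above the detected box, clipped to the image).
    top = max(0, y - h // 2)
    region = img[top:y, x:x + w]

    # Very rough skin mask in YCrCb (a common rule-of-thumb range, not tuned values).
    ycrcb = cv2.cvtColor(region, cv2.COLOR_BGR2YCrCb)
    skin = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))

    # Fraction of skin-colored pixels in the region: high -> likely bald.
    skin_fraction = np.count_nonzero(skin) / skin.size
    print(f"skin fraction above face: {skin_fraction:.2f}")

A high skin fraction in the region above the face suggests baldness; a skin model estimated per person, as in the papers above, should work better than fixed thresholds.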
The head region is very small; hence, using HOG for classification doesn't make much sense.
You can use prior information, like detecting faces; baldness/hair is certain to be found in the area above the face. Also, use some denser feature descriptors.
You are probably ending up with a very sparse representation, or equivalently too little information, which is why your classifier is not able to classify correctly.