NetLogo Dimensions In Space

1. How is the coordinate system in NetLogo expressed in terms of centimeters, as a horizontal coordinate (cm) and a vertical coordinate (cm)? (The Settings tab does give values in pixels, but I unfortunately don't know the conversion between pixels and centimeters.)
2. How does turtle size correlate with pixels, or how is it specified in pixels?
UPDATE:
Is there any possible way, given my screen resolution, that I can accomplish the above conversion?
I found some links, such as http://www.unitconversion.org/typography/pixels-x-to-centimeters-conversion.html, claiming to do this, but I don't know anything about their credibility.

1. How is the coordinate system in NetLogo expressed in terms of centimeters, as a horizontal coordinate (cm) and a vertical coordinate (cm)? (The Settings tab does give values in pixels, but I unfortunately don't know the conversion between pixels and centimeters.)
It doesn't. There is no general conversion between pixels and centimeters, nor should there be. The physical size of a pixel depends on your screen size and resolution. For the purpose of a model, you can always decide that, e.g., 10 pixels represent 1cm, but this would have no correlation to actual physical size on screen.
2. How does turtle size correlate with pixels, or how is it specified in pixels?
Ah! This one actually has an answer: a turtle of size 1.0 is the same size as a patch, and patch-size gives you the size of a patch in pixels. The size of a turtle in pixels is thus size * patch-size. Note, however, that this is the length of the side of the square occupied by the turtle, not the actual area of the shape displayed on the screen.
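For example, a minimal sketch of that arithmetic in Python (the function name and example numbers are mine, not part of NetLogo):

    # A turtle of size 1.0 covers one patch, so its on-screen side length in
    # pixels is its NetLogo size multiplied by the patch size in pixels.
    def turtle_side_in_pixels(turtle_size: float, patch_size_px: float) -> float:
        return turtle_size * patch_size_px

    print(turtle_side_in_pixels(1.5, 13.0))  # size 1.5 at patch-size 13 -> 19.5 px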

Is there any possible way, given my screen resolution, that I can accomplish the above conversion?
This depends not only on the resolution of the monitor, but on the monitor itself. For instance, if your monitor is 1440x900 and you project onto a screen, or plug into an external monitor that scales the output, the pixels-per-cm will obviously change dramatically, even though the resolution stays the same. Even within the same monitor, this can change. For instance, many modern laptops (notably MacBook Pros) have so-called hi-dpi screens with huge resolutions. Applications on these screens can run in scaled or non-scaled modes, which completely changes the pixels-per-cm (e.g. NetLogo 5.0.5 ran in scaled mode on OS X, but 5.1 runs in non-scaled mode; you'll notice that text on retina screens looks considerably sharper and less pixelated). Even just within NetLogo, you can zoom in and out, which changes the scale of all the elements (see the Zoom menu).
So, the only way to determine the pixels-per-cm is for a specific application on a specific monitor running under a specific resolution with specific settings. In that case, your best bet for measuring the size of patches and turtles is probably a ruler. You can probably find applications that give you a "screen ruler", but the only trustworthy ones I'm aware of give the answer in pixels, and I wouldn't trust anything that claims to give centimeters.
I think you're having trouble getting the answer you want here because, in some sense, the question doesn't really make sense. The measurement of patches in centimeters can always be changed at will and will always vary with the environment. So perhaps the best answer to your first question is "whatever you want it to be".
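That said, if you know the physical width of the panel and the horizontal resolution it is actually driven at, you can estimate a nominal pixels-per-cm yourself. A rough sketch in Python (the example numbers are invented, and any OS or application scaling breaks the estimate):

    # Nominal pixels-per-cm from the panel's physical width and resolution.
    # Only meaningful for unscaled output on that exact panel.
    def pixels_per_cm(h_resolution_px: int, panel_width_cm: float) -> float:
        return h_resolution_px / panel_width_cm

    # e.g. a panel driven at 1440 pixels across that measures 33.1 cm wide:
    print(pixels_per_cm(1440, 33.1))  # ~43.5 px/cm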

Related

AnyLogic - Can density map be more accurate?

Can you change the size of pixels in Density Map?
I suspect that the size of density map pixels is based on agent/pedestrian size. Can it be modified so that the pixels are smaller and leave a more precise trace?
Currently, my density map leaves huge pixels that are very difficult to use as reliable information.
I am pretty sure it's not possible: the density map has a resolution of 1 meter (whatever the equivalent of 1 meter is according to your scale object), and there's no way to change it (as far as I know).
But what you do have to make up for this is the canvas object, which you can find in the Presentation palette. With the canvas object you can define your own resolution, but you also have to code your own density map using your own personalized rules. Check the help documentation to understand how to use it, and check the Wandering Elephants example model to understand how to make changes dynamically.
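AnyLogic models are written in Java, but the bookkeeping behind such a custom density map is the same in any language. Here is a rough, language-neutral sketch in Python; the cell size, decay factor, and update rule are all assumptions you would replace with your own rules:

    import numpy as np

    # Sketch of a hand-rolled density map with a configurable cell size.
    CELL_SIZE = 0.25   # meters per cell -- the resolution you choose yourself
    DECAY = 0.95       # per-tick decay factor (an assumed rule)

    def update(grid: np.ndarray, agent_positions) -> None:
        """Decay the whole grid, then add heat at every agent's cell.
        agent_positions: iterable of (x, y) in meters."""
        grid *= DECAY  # in place
        for x, y in agent_positions:
            grid[int(y / CELL_SIZE), int(x / CELL_SIZE)] += 1.0

    # a 100 m x 100 m area at 0.25 m cells; repaint the canvas from `grid`
    grid = np.zeros((400, 400))
    update(grid, [(12.3, 45.6), (12.4, 45.5)])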

Scale graphics with pygame_util

I am building a model with pymunk and I need to use real dimensions (physical size of model is approximately 1 meter). Is there a way to scale the graphics in pygame_util so that 1 meter corresponds to 800 pixels?
Pymunk itself is unitless, as described here: http://www.pymunk.org/en/latest/overview.html#mass-weight-and-units
Pymunk 6.1 (and later)
With Pymunk 6.1 it's now possible to set a Transform on the SpaceDebugDrawOptions object (or on one of the library-specific implementations like pymunk.pygame_util.DrawOptions), as documented here: http://www.pymunk.org/en/latest/pymunk.html#pymunk.SpaceDebugDrawOptions.transform
With this new feature it should be possible to set a scaling Transform to achieve what you are asking about.
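A minimal sketch of that approach, assuming an 800x800 window and that you want 1 pymunk unit (1 meter) drawn as 800 pixels:

    import pygame
    import pymunk
    import pymunk.pygame_util

    pygame.init()
    screen = pygame.display.set_mode((800, 800))

    space = pymunk.Space()
    space.gravity = (0, -9.81)  # work in real-world units: meters and seconds

    draw_options = pymunk.pygame_util.DrawOptions(screen)
    # Scale the debug drawing so 1 pymunk unit (1 meter) maps to 800 pixels.
    draw_options.transform = pymunk.Transform.scaling(800)

    # ... create bodies and shapes sized in meters, then each frame call:
    # space.debug_draw(draw_options)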
Pymunk 6.0 (and earlier)
When used with pygame_util, distances are measured in pixels, e.g. a 10x20 box shape (create_box(size=(10,20))) will be drawn as a 10x20-pixel rectangle. This means that the easiest way to achieve what you ask about is to just define that the Pymunk length unit is 0.125 cm, so that the box shape above is 1.25 cm x 2.5 cm.
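In other words, with Pymunk 6.0 you keep a single conversion constant and create every shape in pixel units. A small sketch (the constant and helper are mine):

    import pymunk

    PX_PER_M = 800.0  # the asker's target: 1 meter of model = 800 pixels

    def m(meters: float) -> float:
        """Convert a real-world length in meters to pymunk/pixel units."""
        return meters * PX_PER_M

    body = pymunk.Body(mass=1.0, moment=10.0)
    # a 1 cm x 2 cm box, expressed in pixel units (8 x 16 pixels):
    shape = pymunk.Poly.create_box(body, size=(m(0.01), m(0.02)))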
An alternative would be to scale the surface once drawing is complete. So instead of using the screen surface in pymunk.pygame_util.DrawOptions(), you use a custom surface that you scale once the space has been drawn, and then blit the result to the screen. I don't think this option is as good as the first one, since there might be scaling artifacts, but depending on your exact use case it might work.
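A rough sketch of that alternative, with arbitrary example surface sizes:

    import pygame
    import pymunk
    import pymunk.pygame_util

    pygame.init()
    screen = pygame.display.set_mode((800, 800))
    canvas = pygame.Surface((100, 100))  # pymunk draws here at 1 unit = 1 px
    draw_options = pymunk.pygame_util.DrawOptions(canvas)
    space = pymunk.Space()

    # Each frame: draw the space small, then upscale 8x and show it.
    canvas.fill((255, 255, 255))
    space.debug_draw(draw_options)
    scaled = pygame.transform.smoothscale(canvas, screen.get_size())
    screen.blit(scaled, (0, 0))
    pygame.display.flip()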

NetLogo - Graphics window grows every time model is opened (patch size < 1)

In NetLogo, I have to change the patch size to less than 1 in order to fit the graphics window on my computer screen, for example from 1 to 0.5.
This works well; however, once I save the file, close it, and reopen it, the graphics window (not the coordinates, but the physical size of the window on my computer) is back to its previous size.
When I check the patch size, however, it is still 0.5. Now for my model to fit, I must make the patch size 0.25. This is a cycle, and I eventually need to make the patch size ridiculously small, like 0.001.
I am using NetLogo 6.1.1 and have only used the Model Settings box to modify the patch size. (Screenshots of this behavior omitted.)
I understand that the patch size does not affect the model functionality, however, I would like to fix this so that the model is presentable.
Has anyone run into this issue, or has any idea of how to avoid/fix it?
Any suggestions are welcome!
This is a known NetLogo issue, and it would probably be rather difficult for the NetLogo developers to fix it. (I know, I used to be one of them.)
Details are here: https://github.com/NetLogo/NetLogo/issues/409

SpriteKit : Keep consistent sizes and speeds across devices

TL;DR: I want to find a method of giving an impulse to an object so that the speed of the object is precisely proportional to the scene size.
I am currently building a SpriteKit game that will be available on many different screen sizes. My scene resizes itself to be the same size in points as its view (scene.scaleMode = .ResizeFill). When I launched my game on devices other than the one I had developed it on, I noticed that:
The size of nodes was too small
The speed of the objects was too low (the way I give speed to my objects is by calling applyImpulse(_:) on their physics body).
I think I fixed the size issue with a simple proportionality operation: I looked at the objectArea/sceneArea ratio of the scene that had the correct object size, and then, instead of giving fixed dimensions to my objects, I gave them dimensions such that the ratio is always the same regardless of the scene area.
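For concreteness, a small sketch of that size calibration (the reference values are invented):

    import math

    # Keep objectArea / sceneArea constant across devices.
    REF_RATIO = (40 * 40) / (320 * 568)   # ratio that looked right (invented)

    def object_side(scene_w: float, scene_h: float) -> float:
        """Side of a square object whose area keeps REF_RATIO of the scene."""
        return math.sqrt(REF_RATIO * scene_w * scene_h)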
For the object speed, it was trickier...
I first thought it was due to the physics body's mass being higher since the object itself was bigger, but since I assign objects their mass directly via their mass property, the objects have the exact same mass regardless of their size.
I finally figured out that it was just due to the screen size being different therefore, an object, even by moving at the same speed, would seem to move slower on a bigger screen.
My problem is that I don't know exactly how to tune the strength of my impulse so that it is consistent across different scene sizes, my current approach is this one :
force = sqrt(area) * k
where k is also a proportionality coefficient. I couldn't make the force proportional to the area itself, otherwise the speed would have grown with the square of the linear scene size rather than linearly (so I used the square root of the area instead).
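A sketch of that calibration, with invented reference numbers:

    import math

    # Pick k from a reference scene where the impulse felt right, then
    # scale by sqrt(area) on every other scene size.
    REF_AREA = 320 * 568          # reference scene size in points (invented)
    REF_FORCE = 50.0              # impulse that felt right there (invented)
    K = REF_FORCE / math.sqrt(REF_AREA)

    def impulse_for(width: float, height: float) -> float:
        return K * math.sqrt(width * height)

    print(impulse_for(768, 1024))  # bigger scene -> proportionally bigger impulse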
While this approach works, I only found this way of calculating it through intuition. While I know that object areas are correctly proportional to the scene size, I can't really know whether the speed can be considered equivalent on all screen sizes.
Do you know what I should do to ensure that the speed will always be equivalent on all devices?
Don't do it
Changing the physics world in relation to the screen of the device is wrong.
The physics world should be absolutely agnostic about its graphics representation. And it should definitely have the same properties (size, mass, distance, ...) regardless of the screen.
I understand you don't want the scene to be smaller than the screen when the game runs on an iPad Pro instead of an iPhone 5, but you need to solve this problem in another way.
I suggest you try another scaleMode like aspectFill (capitalized if you're on Xcode 7: AspectFill). This way the scene is zoomed and all your sprites will appear bigger.
Another point of view
In the comments below, @Knight0fDragon pointed out some scenarios where you might actually want to make some properties of the physics world depend on the UI. I suggest the reader of this answer take a look at the comments below for a point of view different from mine.

Expanding object/feature pixel area

Which method is commonly used to evaluate the remaining 'boundary' pixels after an initial segmentation (based on thresholds)?
I thought about classification based on a standard deviation from the threshold values, but I don't know if that is common practice in image analysis. This would be a region growing method, but based on the answer to this question (http://www.mathworks.com/matlabcentral/answers/53351-how-can-i-segment-a-color-image-with-region-growing) it is not sensible to use the region growing algorithm. Someone suggested imdilate, but that method seems arbitrary, useful when enhancing images for aesthetic purposes or to improve visibility. For my problem, the assignment of the pixels has to be correct, because I have to take measurements of these extracted objects/features, and a few pixels make a huge difference.
What I was looking for:
To collect the boundary pixels of the BW image from the first segmentation (which I found: http://nl.mathworks.com/help/images/ref/bwboundaries.html)
A decision rule (nearest neighbor?) to classify those boundary pixels. It would be helpful if there were multiple methods to do this, because that would make a relative accuracy check of the classification possible.
I would really appreciate input/advice from someone with more experience in this area to point me in the right direction (functions, tutorials, etc.).
Thank you!
What will work for you depends very much on the images you have. There is no one-size-fits-all algorithm.
First, you need to answer the question: Given a pixel close to a segmented feature, what would make you believe that this pixel belongs to the feature? Also: what is "close"?
The answer to the second question determines your search area. Here, imdilate is useful for identifying candidate pixels (i.e. you dilate your feature, subtract the feature, and you are left with a ring of candidate pixels around each feature). If you test all pixels instead, the risk is not so much that it could take forever, but that for some images your region growing mechanism expands to the entire image.
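A minimal sketch of that dilate-and-subtract step, written here in Python with SciPy as a stand-in for MATLAB's imdilate:

    import numpy as np
    from scipy.ndimage import binary_dilation

    def candidate_ring(feature: np.ndarray, width: int = 1) -> np.ndarray:
        """Dilate a boolean feature mask `width` pixels and subtract the
        original, leaving a ring of candidate pixels around each feature."""
        dilated = binary_dilation(feature, iterations=width)
        return dilated & ~feature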
The answer to the first question determines what algorithm you'll use. Do you look for a gradient, i.e. "if pixel p is closer in intensity to the adjacent feature than to most of its neighbors, then I take it"? Do you look for texture? Do you look for a local threshold (hysteresis thresholding)? The answer, again, depends very much on the images you are segmenting. Make sure you test on a large set of images, because what may look good on one image may totally fail on a different one.
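To make the first question concrete, here is one possible decision rule as a Python sketch; it only illustrates the kind of test you might apply, not a recommendation for your specific images:

    import numpy as np

    def accept_candidates(image: np.ndarray,
                          feature: np.ndarray,
                          candidates: np.ndarray) -> np.ndarray:
        """Accept a candidate pixel when its intensity is closer to the
        feature's mean intensity than to the background's mean intensity."""
        feat_mean = image[feature].mean()
        bg_mean = image[~feature & ~candidates].mean()
        closer = np.abs(image - feat_mean) < np.abs(image - bg_mean)
        return feature | (candidates & closer)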