Can you change the size of pixels in Density Map?
I suspect that the size of density map pixels is based on the agent/pedestrian size. Can it be modified so that the pixels are smaller and leave a more precise trace?
Currently, my density map leaves huge pixels that are very difficult to use as reliable information.
EDIT: Screenshot below,
Thanks,
Peter
I am pretty sure it's not possible: the density map has a fixed resolution of 1 meter (or whatever the equivalent of 1 meter is according to your scale object), and there's no way to change it (as far as I know).
What you can use to make up for this is the Canvas object, which you can find in the Presentation palette. With the Canvas you can define your own resolution, but you also have to code your own density map using your own personalized rules. Check the help documentation to understand how to use it, and check the Wandering Elephants example model to understand how to make changes dynamically.
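To give an idea of the kind of bookkeeping involved, here is a minimal sketch of the counting logic only (plain MATLAB-style code, not AnyLogic canvas API; areaWidth, areaHeight, agentX, agentY and the cell size are placeholders you would take from your own model):
cellSize = 0.25;                                   % finer than the built-in 1 m resolution
nx = ceil(areaWidth  / cellSize);                  % number of cells along x
ny = ceil(areaHeight / cellSize);                  % number of cells along y
ix = min(nx, max(1, ceil(agentX / cellSize)));     % column index of each agent position
iy = min(ny, max(1, ceil(agentY / cellSize)));     % row index of each agent position
density = accumarray([iy(:), ix(:)], 1, [ny, nx]); % occupancy counts per cell, to be drawn on the canvas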
I am attempting to place a hexagon (centred over co-ordinates) which I can interact with, hover/onclick. The method I am using is to LoadImage(..._Hexagon.png) and then addLayer. Eventually the idea is to have many hexagons over specific areas.
I have obtained the desired interaction with the shape, but I would like this layer to be invariant under zoom (i.e. I want the hexagons to cover an area of x square km at all times, regardless of zoom). Is there an efficient way to do this, or would another method be better?
Thank you in advance for any and all advice!
If you really want to scale an icon such that it gets bigger as you zoom in, you can use an exponential scale:
"icon-scale": ["*", ["interpolate", ["exponential", 2], ["zoom"]], SCALE]
where SCALE is some constant you pick.
It probably makes more sense to actually generate hexagonal polygon data (e.g. using Turf) and display that instead, though.
I am capturing static images of particulate biological materials on the millimeter scale and then processing them in MATLAB. My routine is working well so far, but I am using a rudimentary calibration procedure: I include some coins in the image, automatically find them based on their size and circularity, count their pixels, and then remove them. This allows me to generate a calibration line with input "area (mm^2)" and output "area (pixels)", which I then use to convert the pixel area of the particles into physical units of millimeters squared.
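A stripped-down sketch of what that routine boils down to (the variable names, the circularity cutoff and the minimum coin area below are placeholders, and the calibration line is reduced to a single scale factor):
stats       = regionprops(BW, 'Area', 'Perimeter');          % BW = binarized image
circularity = 4*pi*[stats.Area] ./ [stats.Perimeter].^2;     % close to 1 for circular blobs
isCoin      = circularity > 0.9 & [stats.Area] > minCoinAreaPx;
pxPerMM2    = mean([stats(isCoin).Area]) / coinAreaMM2;      % coinAreaMM2 = known coin area in mm^2
particleAreasMM2 = [stats(~isCoin).Area] / pxPerMM2;         % particle areas converted to mm^2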
My question is: is there a better calibrant object that I can use, such as a stage graticule or "phantom" as some people seem to call them? Do you know where I could purchase such a thing? I can't even seem to find a possible vendor. Is there another rigorous way to approach this problem without using calibrant objects in the field of view?
Thanks in advance.
Clay
Image calibration is always done using features of known size or distance.
You could calculate the scale from nominal specifications, but your imaging equipment will always have some production tolerances, your object distance is only known to a certain accuracy...
So it's always safer and simpler to actually calibrate your scale.
As a calibrant you can use anything that meets your requirements. If you know its size well enough, and if you are able to extract its dimensions in pixels properly, you can use it...
I don't know your requirements and your budget, but if you want something very precise and fancy you can use glass masks.
There are temperature-stable glass slides that are coated with chrome, for example. Many companies produce such masks to custom specifications (IMT AG, BVM maskshop, ...), and most optics lab equipment suppliers also have such things in stock: Edmund Optics, Newport, ...
I have an image (see attached) and I am trying to calculate the variance of the image inside the region of interest (the dark region) using the stdfilt function. Image here.
The dark side is what I need to work on. When I use stdfilt on this image, it shows me the boundaries between the dark and bright regions.
My idea is that we can threshold the image to show only the dark side and tell MATLAB to work only with this region of interest. So far, I did not find a proper way of doing this.
The area is not a perfect polygon, which would make things way easier. At this point, I'm not sure what to do, so any suggestions are welcome.
Cheers
If the spatial location of the pixels is not relevant, you could just do:
datatoprocess = I(I < threshold);   % keep only the pixels darker than the threshold
Here threshold is a value that separates white from black; graythresh is a fantastic function for finding it. datatoprocess will be a 1xN array with the pixel values.
If, instead, the spatial location of the pixels is relevant, then you need to make your processing ignore specific pixels. The best approach for this is generally setting NaN values in the pixels you don't want to take into account.
Itoprocess = I;
Itoprocess(I > threshold) = NaN;   % mark the bright pixels so they are ignored
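For example, putting both options together for your case (a minimal sketch, assuming I is your grayscale image):
I = im2double(I);                    % work in [0,1] so the threshold is comparable
threshold = graythresh(I);           % Otsu threshold separating dark from bright
mask = I < threshold;                % the dark region of interest
roiVariance = var(I(mask));          % variance over the whole dark region

S = stdfilt(I);                      % local standard deviation everywhere
S(~mask) = NaN;                      % keep only the values inside the dark region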
Without more information on what exactly you are doing with the image, this is about the best anyone can suggest.
Which method is commonly used to evaluate the remaining 'boundary' pixels after an initial segmentation (based on thresholds)?
I thought about classification based on a standard deviation from the threshold values, but I don't know if that is common practice in image analysis. This would be a region growing method, but based on the answer to this question ( http://www.mathworks.com/matlabcentral/answers/53351-how-can-i-segment-a-color-image-with-region-growing ) it is not sensible to use the region growing algorithm. Someone suggested imdilate, but that method seems arbitrary: it is useful when enhancing images for aesthetic purposes or to improve visibility. For my problem the assignment of the pixels has to be correct, because I have to do measurements on the extracted objects/features and a few pixels make a huge difference.
What I was looking for:
To collect the boundary pixels of the BW image from the first segmentation (which I found: http://nl.mathworks.com/help/images/ref/bwboundaries.html)
A decision rule (nearest neighbor?) to classify those boundary pixels. It would be helpful if there were multiple methods to do this, because that makes a relative accuracy check of the classification possible.
I would really appreciate the input/advice of someone with more experience in this area to point me in the right direction (functions, tutorials, etc.)
Thank you !
What will work for you depends very much on the images you have. There is no one-size-fits-all algorithm.
First, you need to answer the question: Given a pixel close to a segmented feature, what would make you believe that this pixel belongs to the feature? Also: what is "close"?
The answer to the second question determines your search area. Here, imdilate is useful to identify candidate pixels (i.e. you dilate your feature, subtract the feature, and you are left with a ring of candidate pixels around each feature). If you test on all pixels, the risk is not so much that it could take forever, but that for some images, your region growing mechanism expands to the entire image.
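For example, the candidate ring can be built like this in MATLAB (BW is the binary mask from the first segmentation; the disk radius is just a placeholder for whatever you consider "close"):
se         = strel('disk', 2);        % search radius, i.e. what counts as "close"
dilated    = imdilate(BW, se);        % grow each feature outward
candidates = dilated & ~BW;           % ring of candidate pixels around each feature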
The answer to the first question determines what algorithm you'll use. Do you look for a gradient, i.e. "if pixel p is closer in intensity to the adjacent feature than to most of its neighbors, then I take it"? Do you look for texture? Do you look for a local threshold (hysteresis thresholding)? The answer, again, depends very much on the images you are segmenting. Make sure you test on a large set of images, because what may look good on one image may totally fail on a different one.
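As one deliberately simple example of such a rule (assuming a grayscale image I, the feature mask BW, and the candidates ring from the sketch above; the tolerance factor k is an assumption you would tune): accept a candidate if its intensity is close to the feature's.
featVals = double(I(BW));                                 % intensities of the segmented feature
featMean = mean(featVals);
featStd  = std(featVals);
k        = 2;                                             % tolerance factor, to be tuned
accepted = candidates & (abs(double(I) - featMean) <= k*featStd);
grown    = BW | accepted;                                 % feature plus the accepted boundary pixels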
Can someone point me to a paper/algorithm/resource/whatever that tells me how to implement a texture minification filter (which applies when texels are smaller than pixels) in a ray tracer?
thanks!
Since you are using ray tracing, I suspect you are looking for high-quality filtering that changes the sampling dynamically based on the amount of "error". Based on this assumption, I would say take a look at "ray differentials". There's a nice paper on this here: http://graphics.stanford.edu/papers/trd/ and it takes effects like refraction and reflection into account.
Your answer to yourself sounds like the right approach, but since others may stumble across this page I'll add a resource link as requested. In addition to discussing mipmapping (ripmapping is basically more advanced mipmapping), the paper discusses the effects of reflection and refraction on derivatives and mip-level selection.
Homan Igehy. "Tracing Ray Differentials." 1999. Proceedings of SIGGRAPH. http://graphics.stanford.edu/papers/trd/
Upon closer reading I see that Rehno Lindeque mentioned this paper. At first I didn't realize that it was the right reference, because he says that the method samples dynamically based on the error of the sampling, which is incorrect. Filtering is done based on the size of the pixel's footprint and uses only one ray, just as you described.
Edit:
Another reference that might be useful ( http://www.cs.unc.edu/~awilson/class/238/#challenges ). Scroll to the section "Derivatives of Texture Coordinates." He suggests backward mapping of texture derivatives from the surface to the screen. I think this would be incorrect for reflected and refracted rays, but is possibly easier to implement and should be okay for primary rays.
I think you mean mipmapping.
Here is an article talking about using them.
But neither says how to choose which mipmap level to use; in practice the two nearest levels (the bigger and the smaller mipmap) are often blended (see the sketch below).
Here's one more article about how Google Earth works, and it talks about how they mipmap the earth.
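A rough sketch of that blending (trilinear filtering between two levels), where footprintInTexels is the pixel footprint measured in texels and sampleMip is a hypothetical lookup into a given mipmap level at texture coordinates (u, v):
lod   = log2(max(1, footprintInTexels));   % continuous level of detail
lo    = floor(lod);                        % the bigger (more detailed) mipmap
hi    = lo + 1;                            % the smaller (less detailed) mipmap
t     = lod - lo;                          % blend factor between the two levels
color = (1 - t) * sampleMip(lo, u, v) + t * sampleMip(hi, u, v);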
Thank you guys for your answers, but since I didn't find any appropriate technique I created something myself, which turned out to work very well:
I assume my ray to be a cone with a cone radius of half a pixel on the image plane. When the ray hits a surface, I calculate the ellipse which is projected onto the surface (the ellipse from the plane-cone intersection). Then, using the texture coordinate derivatives at the intersection point, I project this ellipse into texture space. Now I know which part of the texture lies under my pixel and can subsample this area.
I also use rip-maps to improve the quality, and I choose the rip-map level based on the size of the ellipse in texture space.
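Roughly, that level selection can be sketched like this (duv1 and duv2 are placeholder names for the ellipse axes of the footprint in texture space, already scaled to texel units; the exact mapping from ellipse size to level is just one reasonable choice):
len1 = norm(duv1);                                           % extent along the first ellipse axis
len2 = norm(duv2);                                           % extent along the second ellipse axis
mipLevel  = max(0, log2(max(len1, len2)));                   % isotropic mip level from the larger extent
ripLevelU = max(0, log2(max(abs(duv1(1)), abs(duv2(1)))));   % rip-map level along u
ripLevelV = max(0, log2(max(abs(duv1(2)), abs(duv2(2)))));   % rip-map level along v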