I want to run a series of experiments in AnyLogic, scaling my warehouse up and down to see the effects of increasing or decreasing floor space. I think the simplest way to do this would be to play with the scale. However, I can't figure out how to set the scale programmatically. Since I have multiple designs that I want to scale up and down, it would be much more efficient if I could do this programmatically rather than manually.
Although you cannot change the scale of agents dynamically, you can easily change it during setup on the agent's animation presentation.
See the example below. If you set "Scale is" to "Defined Explicitly", you can change the scale.
This is the best you can do, AFAIK.
In my model I have a path-guided transporter fleet, but when the transporters are close to each other they block each other. Since this is not in the scope of my model (I just want them to pass through each other), is there a way to disable this behaviour? I have already tried setting the minimum distance to obstacle very low and using very small dimensions (see figure), but nothing seems to work.
The key aspect of Material-Handling transporters is that they apply that spatial blocking.
If you do not want it, use moving resources from the Process Modeling Library. They behave the same but have no spatial awareness. However, they also cannot do free-space navigation: path-guided movement works, but path-specific constraints are not applied.
So it is a trade-off. The Process Modeling Library resources also require much less computational power...
I built a shop floor where material flow is handled by transporters (AGV/AMR) with free-space navigation. I am looking for a way to observe traffic at certain spots (e.g. workstations, storage areas), or even on the whole shop floor, so I can compare different scenarios of material flow and workstation supply strategy with respect to traffic. I tried the Density Map, but since it observes the whole layout, which is quite large, the values quickly become too low for the scale, so it does not perform the way I want. Is there a way to set up something like an "area density map" so I can observe just a defined rectangle, or is there another feature that could help here?
Happy about all ideas! :-)
You can use normal Rectangular Node elements and trigger "on enter" code to count drive-throughs or similar, as below.
Just make sure to set the capacity to infinity so normal traffic flow is not interrupted :)
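AnyLogic's "On enter" field takes Java, but the bookkeeping it performs is simple enough to sketch language-agnostically. This Python sketch tallies logged transporter positions (or enter events) per observation rectangle, which is essentially what the per-zone counting gives you; the zone names and coordinates are made-up examples, not anything from the model above.

```python
# Sketch of an "area density map": tally transporter position samples
# (or enter events) per observation rectangle instead of over the
# whole floor. Zone names and coordinates are hypothetical.
zones = {
    "station_1": (0, 0, 10, 10),    # (x_min, y_min, x_max, y_max)
    "storage":   (20, 5, 35, 15),
}

def count_in_zones(positions, zones):
    """Count how many (x, y) samples fall inside each rectangle."""
    counts = {name: 0 for name in zones}
    for x, y in positions:
        for name, (x0, y0, x1, y1) in zones.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                counts[name] += 1
    return counts

samples = [(2, 3), (25, 10), (30, 12), (50, 50)]   # logged positions
print(count_in_zones(samples, zones))  # {'station_1': 1, 'storage': 2}
```

Normalizing these counts by zone area and observation time gives a per-zone traffic density you can compare across scenarios.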
I am doing a project on face recognition. I have a dataset containing images of 21 actors (150 each). Now I want to increase the number of images of each actor to 300+ for training purposes. How can I do this using MATLAB? One option is to vary the contrast/brightness level of each image, but what other transformations can I use to increase the number of images?
One option is to flip the images: if a person is looking to the right, after the flip they will be looking to the left.
Furthermore, depending on your toolkit and set of skills, you could use more advanced techniques. If you can find some distinctive characteristics in the pictures, like eyes, nose, mouth, or background, you could make some intelligent transformations: swap people's eyes, change the background, switch noses.
There are some parts of the faces you could also distort, like the eyes and nose (stretch them). Maybe for bald guys you could build some synthetic hair, and so on...
You could change the contrast/brightness level, but that usually doesn't do so well, as your features probably have (almost) nothing to do with it, so it will just duplicate your data.
Anyway, as it's not a very large dataset, if you don't have the skills to pull off the more advanced options I proposed, or the time to deal with them, you can do some of this manually. It won't take as long as you think, and with that amount of data it usually gives a good boost to your results.
What you are looking for is called "data augmentation". Common transformations are mirroring (flipping the image left/right) and rotating the image. You might also be able to zoom in on (crop) a part of the image.
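The question asks for MATLAB (where `flip`, `imrotate`, and `imcrop` do the same jobs), but here is a minimal sketch of these augmentations in Python/NumPy. The brightness offset range and crop margins are arbitrary choices, not values from the question.

```python
import numpy as np

def augment(image, rng):
    """Return simple augmented variants of one grayscale image (H x W,
    uint8): mirror, 90-degree rotation, brightness shift, center crop.
    A real pipeline would use finer rotation angles and random crops."""
    variants = []
    variants.append(np.fliplr(image))                 # horizontal mirror
    variants.append(np.rot90(image))                  # 90-degree rotation
    shift = rng.integers(-30, 31)                     # brightness offset
    variants.append(np.clip(image.astype(int) + shift, 0, 255).astype(np.uint8))
    h, w = image.shape[:2]
    variants.append(image[h // 8 : -h // 8, w // 8 : -w // 8])  # "zoom" crop
    return variants

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in image
out = augment(img, rng)
print(len(out))  # 4 new variants per input image
```

Applying even these four transformations to each of the 150 originals easily pushes the per-actor count past 300.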
Scaled versions, along with the rotated ones, may also help. If your features are not robust to changes such as lighting and contrast, you can modify the images accordingly.
1. How is the coordinate system in NetLogo expressed in terms of "cms", i.e. a horizontal coordinate (cm) and a vertical coordinate (cm)? (The Settings tab does give sizes in pixels, but I unfortunately don't know the conversion between pixels and cms.)
2. How does turtle size correlate with pixels, or how is it specified in pixels?
UPDATE.
Is there any possible way, given my screen resolution, that I can accomplish the above conversion?
I found some links (e.g. http://www.unitconversion.org/typography/pixels-x-to-centimeters-conversion.html) claiming to do the above, but I don't know about their credibility.
1. How is the coordinate system in NetLogo expressed in terms of "cms", i.e. a horizontal coordinate (cm) and a vertical coordinate (cm)? (The Settings tab does give sizes in pixels, but I unfortunately don't know the conversion between pixels and cms.)
It doesn't. There is no general conversion between pixels and centimeters, nor should there be. The physical size of a pixel depends on your screen size and resolution. For the purpose of a model, you can always decide that, e.g., 10 pixels represent 1cm, but this would have no correlation to actual physical size on screen.
2. How does turtle size correlate with pixels, or how is it specified in pixels?
Ah! This one actually has an answer: a turtle of size 1.0 is the same size as a patch, and patch-size gives you the size of a patch in pixels. The size of a turtle in pixels is thus size * patch-size. Note, however, that this is the side length of the square occupied by the turtle, not the actual area of the shape displayed on screen.
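As a one-line sanity check of the relation above (the patch-size and turtle-size values below are arbitrary, not NetLogo defaults):

```python
def turtle_pixel_size(turtle_size: float, patch_size: float) -> float:
    """Side length, in pixels, of the square a turtle occupies on screen.

    Mirrors the NetLogo relation: a size-1.0 turtle spans one patch,
    and `patch-size` reports a patch's on-screen size in pixels."""
    return turtle_size * patch_size

print(turtle_pixel_size(1.5, 13))  # 19.5 pixels per side
```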
Is there any possible way, given my screen resolution, that I can accomplish the above conversion?
This depends not only on the resolution of the monitor, but on the monitor itself. For instance, if your monitor is 1440x900 and you project onto a screen, or plug into an external monitor that scales the output, the pixels-per-cm will obviously change dramatically, even though the resolution stays the same. Even within the same monitor this can change. For instance, many modern laptops (notably MacBook Pros) have so-called hi-dpi screens with huge resolutions. Applications on these screens can run in scaled or non-scaled modes, which completely changes the pixels-per-cm (e.g. NetLogo 5.0.5 ran in scaled mode on OS X, but 5.1 runs in non-scaled mode; you'll notice that the text on retina screens looks considerably sharper and less pixelated). Even just in NetLogo, you can zoom in and out, which changes the scale of all the elements (see the Zoom menu).
So the pixels-per-cm can only be determined for a specific application on a specific monitor running under a specific resolution with specific settings. In that case, your best bet for measuring the size of patches and turtles is probably a ruler. You can probably find applications that give you a "screen ruler", but the only trustworthy ones I'm aware of give the answer in pixels, and I probably wouldn't trust anything that claims to give cm.
I think you're having trouble getting the answer you want here because, in some sense, the question doesn't really make sense. The measurement of patches in cm can always be changed at will and will always change depending on environment. So perhaps the best answer to your first question is "whatever you want it to be".
I've done work on software used for controlling imaging hardware, such as microscopes, that are sometimes hard to get time on. This means it is difficult to test out new/different algorithms which would require access to the instrument. I'd like to create a synthetic instrument that could be used for some of these testing purposes, and I was thinking of using some kind of fractal image generation to create the synthetic images. The key would be to be able to generate features at many different 'magnifications' and locations in some sort of deterministic manner. This is because some of the algorithms being tested may need to pan/zoom and relocate previously 'imaged' areas. Onto these base images I can then apply whatever instrument 'defects' are appropriate (focus, noise, saturation, etc.).
I'm at a bit of a loss on how to select/implement a good fractal algorithm for the base image. Any help would be appreciated. Preferably it would have the following qualities:
Be fast at rendering new image areas.
Fairly wide 'feature' coverage at as many locations and scales as possible.
Be deterministic (but initialized from random starting parameters).
Ability to tune to make images look more like 'real' images.
Item 2 is important; for example, a Mandelbrot set, with its large smooth/empty regions, might not be good, since the software controlling the synthetic scope might fall into one of those areas.
So far I've thought of using something like a Mandelbrot set, but randomly shifting/rotating/scaling and merging two or more fractal sets to get more complete 'feature' coverage.
I've also seen images of the fractal flame algorithms and they seem to generate images that might be useful (and nice to look at).
Finally, I've thought of using some sort of paused particle simulation run to generate images that are more cell-like (my current imaging target), but I'm not sure if this approach can be made to work with the other requirements.
Edit:
@Jeffrey - So it sounds like some kind of terrain generation might be the way to go, as long as I have complete control over the PRNG. Perhaps I can use some stored initial seed + x position + y position to generate my random numbers? But then I am unsure how to consistently generate the terrains across scales, except, as you mentioned, to create the base terrain at the coarsest scale and, at certain predetermined 'magnifications', add new deterministic pseudo-random variations to this base. I'd also have to be careful about when to generate the next level of terrain, since if I'm too aggressive I'd have to generate and integrate the results appropriately for display at the coarser level... This is why I initially leaned toward a more 'traditional' fractal, since this integration from finer scales would be handled more implicitly (I think).
The idea behind a fractal terrain creation algorithm is to build the image at each scale separately. For a landscape it's easy: just make a small array of height values, and set them randomly. Then scale it up to a larger array, averaging the values so that the contour is smooth, and then add small random amounts to those values. Then scale it up, etc. The original small bumps have become mountains, and they are filled with complex terrain.
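Assuming NumPy, the scale-up / smooth / perturb loop described above can be sketched like this. The grid size, number of levels, and roughness factor are arbitrary choices for illustration:

```python
import numpy as np

def fractal_terrain(levels=5, size=4, roughness=0.5, seed=0):
    """Coarse-to-fine terrain: start with a tiny random height grid,
    repeatedly upsample with smoothing, then add ever-smaller noise."""
    rng = np.random.default_rng(seed)
    grid = rng.random((size, size))
    amplitude = 1.0
    for _ in range(levels):
        # Upscale 2x by repeating cells...
        grid = np.repeat(np.repeat(grid, 2, axis=0), 2, axis=1)
        # ...then smooth with a small box filter so the contour stays
        # continuous (neighbour average over a plus-shaped stencil).
        p = np.pad(grid, 1, mode="edge")
        grid = (p[:-2, 1:-1] + p[2:, 1:-1] +
                p[1:-1, :-2] + p[1:-1, 2:] + p[1:-1, 1:-1]) / 5.0
        # Add random detail at half the previous amplitude.
        amplitude *= roughness
        grid += amplitude * (rng.random(grid.shape) - 0.5)
    return grid

terrain = fractal_terrain()
print(terrain.shape)  # (128, 128): the 4x4 base doubled 5 times
```

Because everything is driven by one seeded generator, the same seed reproduces the same terrain exactly, which matters for the deterministic re-imaging requirement above.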
There are two particular difficulties with the problem posed here, though. First, you don't want to store any of these values, since it would be potentially huge. Secondly, the features at each scale are of a different kind than the features at other scales.
These problems are not insurmountable.
Basically, you would divide the image up into a grid and, using deterministic pseudorandom numbers, establish the key features of each square in the grid. For example, each square could have a certain density of cell types.
At the next level of magnification, subdivide each square into another grid, and apply a gradient of values across that grid based on the values of the containing square and its surrounding squares. Then apply pseudorandom variations seeded with the containing square's grid coordinates. For the random seed, always use the coordinates of the square immediately containing the subdivision under consideration, regardless of where the image is cropped, to ensure that it is recreated correctly across multiple runs.
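The coordinate-based seeding described above can be sketched like this; the seed components (global seed, magnification level, square coordinates) and the NumPy generator are one possible realization, not the only one:

```python
import numpy as np

def square_rng(seed: int, level: int, x: int, y: int) -> np.random.Generator:
    """Deterministic generator for one grid square.

    Seeding from (global seed, magnification level, square coordinates)
    makes the square's features identical every time it is revisited,
    regardless of how the view was panned or zoomed to reach it."""
    return np.random.default_rng([seed, level, x, y])

# The same square always yields the same feature values...
a = square_rng(42, 3, 10, 7).random(4)
b = square_rng(42, 3, 10, 7).random(4)
# ...while a neighbouring square gets an independent stream.
c = square_rng(42, 3, 11, 7).random(4)
print(np.array_equal(a, b), np.array_equal(a, c))  # True False
```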
At some level of magnification the random values go from being densities of particle types to particle locations. Then for each particle there are particle features, then features on those features.
Although arbitrary left/right and up/down scrolling will be desired, the image at all levels of magnification above the current scene will have to be calculated each time the frame is shifted, to ensure that all necessary features are included. This way the image can be scrolled from one cell to another without loss of consistency. Particle simulations can be used to ensure that cells or cell features don't overlap, and this can be done in a repeatable, deterministic manner.
And don't forget to apply a smoothing gradient based on averages of surrounding squares at higher levels before adding in the random variations. Otherwise, the abrupt changes will make the squares themselves appear in the images!
This answer is somewhat rambling and probably confusing, but that is the best I can explain it right now. I hope it helps!