I have to code a model in which one part runs on patches and turtles, and
another part only displays an image. I want to show these two parts in
separate views simultaneously.
For example, my model creates 100 turtles with random xcor & ycor, and I
have coded these turtles to move around the view along random paths. Now I
want to show an image in another, separate view, so that the two views
(1. the view of the moving turtles, 2. the view displaying the image) appear
at the same time. My problem is how to get two separate views in NetLogo.
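For concreteness, the moving-turtles part might look something like this
(a minimal sketch):

```
to setup
  clear-all
  create-turtles 100 [ setxy random-xcor random-ycor ]
  reset-ticks
end

to go
  ask turtles [
    right random 360  ;; pick a random heading
    forward 1         ;; take a step
  ]
  tick
end
```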
Thanks a lot in advance.
The feature you want doesn't really exist in NetLogo.
Not really. But check out https://github.com/NetLogo/NetLogo/wiki/Multiple-views-via-HubNet :
Users often ask if they can have more than one view in their model and show different things, or show the same things differently, in each one.
Hopefully in the future we'll provide this capability directly. But it is already possible now using HubNet with local clients. It's a bit awkward, but it works. [...]
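A rough outline of that workaround, as a sketch only (the wiki page and the
HubNet Client Editor under the Tools menu have the real details):

```
to setup-second-view
  ;; sketch: assumes a client interface containing its own view has been
  ;; built with the HubNet Client Editor (Tools menu)
  hubnet-reset   ;; start the HubNet system
  ;; then launch a local client from the HubNet Control Center; its view
  ;; can show different things than the main view
end
```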
This is a follow-up to my earlier question about modelling population density in a mock model: Modelling population density in AnyLogic
Now I wanted to implement those changes in my main model. The functionality works, and the agents are generated in the correct places, but somehow they appear behind the image. If I set the image to invisible, I can see that the agents are actually generated.
Short recap of how the functionality works:
I have traced the borders of regions using a polyline on top of an image of the country. Next, I have created a function to let agents appear in a random location inside one of the polyline shapes.
I am probably overlooking something simple, but could you give me any pointers?
Thanks!
This should be simple. Bring your agents to the front: in the graphical
editor, right-click the agent presentation and choose the option that brings
it to the front. You can also experiment with sending your image to the back.
How can I selectively grab only the world and just 2 of 3 plots in my NetLogo model run?
movie-grab-view grabs only the world view, while movie-grab-interface grabs everything. Is there another way to do what I described?
Note: this of course excludes buttons, sliders, etc.
I would suggest moving the unwanted interface items to the edge of the Interface tab, recording the movie, then cropping the movie in movie-editing software to remove the unwanted portion.
Alternatively, you could cover the unwanted interface items with one or more opaque white notes, or cover them with other interface items.
I'm certain nothing exists to automate this process, unless you build it yourself.
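For reference, a minimal recording loop built from the movie primitives
mentioned above (`setup`/`go` and the file name are placeholders):

```
to record-run
  setup
  movie-start "run.mov"
  repeat 500 [
    go
    movie-grab-interface  ;; captures the whole Interface tab;
                          ;; crop the unwanted region afterwards
  ]
  movie-close
end
```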
I have developed a scratch card effect. I am stuck on the logic: how can I know when the object behind the scratch card image has become visible, so that I can show the reward screen?
PS: with modifications from this link, I was able to get this scratch card effect working in uGUI.
There are many ways you could go about this. Assuming you know the dimensions of the red "target image" that the user is trying to uncover, you could take a fixed number of samples from the area that the target is under. Once, say, 80% of those samples are transparent (i.e. the target is visible at those positions), you can consider the object visible and show the reward screen.
You can use GetPixel to get the individual samples from the scratch texture.
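One way to sketch that sampling check in a Unity script (all names, the 8×8
grid, and the thresholds here are illustrative assumptions):

```csharp
using UnityEngine;

public class ScratchRevealChecker : MonoBehaviour
{
    public Texture2D scratchTexture;   // the overlay being scratched away
                                       // (must be marked Read/Write enabled)
    public int targetX, targetY;       // target image's area, in pixels
    public int targetWidth, targetHeight;
    public int samplesPerAxis = 8;     // 8 x 8 = 64 sample points

    // True once ~80% of the sampled points inside the target area
    // have been scratched transparent.
    public bool IsTargetRevealed()
    {
        int transparent = 0;
        for (int i = 0; i < samplesPerAxis; i++)
        for (int j = 0; j < samplesPerAxis; j++)
        {
            int x = targetX + targetWidth  * i / samplesPerAxis;
            int y = targetY + targetHeight * j / samplesPerAxis;
            if (scratchTexture.GetPixel(x, y).a < 0.1f)
                transparent++;
        }
        return transparent >= 0.8f * samplesPerAxis * samplesPerAxis;
    }
}
```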
I want to create a traffic simulation model where the user can move patches (that would act as nodes or junctions). Firstly, is this possible, and if so, would it be possible to see a code example? Secondly, once this is done, will it be possible for the user to draw the necessary links between nodes, upon which turtles can travel?
Thanks!
Yes, what you describe is very possible by combining a couple of ideas. Check out two models in the Code Examples folder of the Models Library: "Link-Walking Turtles" and "Mouse Drag One".
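To give a flavor of how those two examples combine, here is a minimal sketch
(the `nodes` breed and the procedure name are my own):

```
breed [ nodes node ]

;; run this from a forever button; while the mouse button is held,
;; the nearest node (if close enough) follows the pointer
to drag-nodes
  if mouse-down? [
    let grabbed min-one-of nodes [ distancexy mouse-xcor mouse-ycor ]
    if grabbed != nobody and [ distancexy mouse-xcor mouse-ycor ] of grabbed < 1 [
      ask grabbed [ setxy mouse-xcor mouse-ycor ]
    ]
  ]
end
```

Creating links between the node turtles (for example with create-link-with)
then gives other turtles a network to walk along, as in "Link-Walking
Turtles".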
I have a few different OBJ files that I am able to parse and display. This code is based on Jeff LaMarche's The Start of a WaveFront OBJ File Loader Class. However, I need some means of detecting which coordinates I have selected within a displayed model. Usually one model is displayed at a time, but sometimes there will be two or more on the screen, and I want to set up an NSNotificationCenter object to notify other sections of code as to which object is "selected". I have also looked at javacom's "OpenGL ES for iPhone : A Simple Tutorial" and would like to model the behavior of what I'm trying to program after his.
This is my current line of logic:
Setup a means to detect where a user has touched the screen
Have those coordinates compared with the current coordinates of an OBJ-based model
If they match, indicate said touch as being within the bounds of the object
The touchable set of coordinates must scale with the model. Currently the model can scale, so I will most likely need to follow this scaling.
Also note, I don't need to move the model around on the screen; I just need to detect when it's been touched, whether there is one model or several being displayed.
While this is most likely quite simple, I've been stumped by this for months now. I would really appreciate any light others can shed on this topic.
Use gluUnProject on the touch coordinates to get a vector going from the screen into the world, and then intersect it with your models to see if one of them has been touched. gluUnProject isn't available on iPhone by default, but you can find implementations of it; http://www.mesa3d.org/ has an open source one.
Read about gluUnProject here: http://web.iiit.ac.in/~vkrishna/data/unproj.html
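Here is a rough sketch of that approach in C (assuming a gluUnProject
implementation such as Mesa's is linked in; all names are illustrative):

```c
// Minimal sketch: unproject a touch twice (near and far plane) to build
// a world-space picking ray.
#include <GL/glu.h>   // or wherever your gluUnProject port lives on iOS

typedef struct { double x, y, z; } Vec3;

void rayFromTouch(double touchX, double touchY,
                  const GLdouble modelview[16],
                  const GLdouble projection[16],
                  const GLint viewport[4],
                  Vec3 *origin, Vec3 *dir)
{
    // Window y is flipped relative to touch coordinates on iOS.
    double winY = viewport[3] - touchY;

    Vec3 nearPt, farPt;
    gluUnProject(touchX, winY, 0.0, modelview, projection, viewport,
                 &nearPt.x, &nearPt.y, &nearPt.z);   // point on near plane
    gluUnProject(touchX, winY, 1.0, modelview, projection, viewport,
                 &farPt.x, &farPt.y, &farPt.z);      // point on far plane

    *origin = nearPt;
    dir->x = farPt.x - nearPt.x;
    dir->y = farPt.y - nearPt.y;
    dir->z = farPt.z - nearPt.z;
}
```

The resulting ray can then be intersected with each model's bounding box or
sphere, which is usually accurate enough for picking; since the ray lives in
world space, it follows any scaling applied to the models.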