I have created some links between agents (turtles) in NetLogo. These links change at each time step. My aim is to export this data (i.e., the turtles and the links between them) as a graph with vertices (turtles) and edges (links), which can be given as input to Gephi. Is it possible to see the changes that occur in NetLogo reflected in the graph when it is linked with Gephi? Can someone help me out? Thanks.
To export your network data in a format usable by Gephi, I would suggest using the nw:save-graphml primitive from NetLogo's NW extension. This will produce a file in the GraphML format, which Gephi can read.
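A minimal sketch of what that could look like (the file name is just an example):

    extensions [ nw ]

    to export-network
      ; tell the NW extension which turtles and links make up the network
      nw:set-context turtles links
      ; write the current network out as GraphML, which Gephi can open
      nw:save-graphml "my-network.graphml"
    end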
I guess you could re-save your network at each time step and overwrite your file, but I'm not sure if Gephi can display your changes dynamically. And depending on the size of your network, it might be slow.
Are you trying to use Gephi to see how a changing network generated by NetLogo evolves over time? That's what Nicolas Payette's answer suggests, so I'll make the same assumption.
Gephi can display "dynamic graphs", i.e. networks that change over time. My understanding is that there are two file formats that allow Gephi to import dynamic graphs: GEXF, and a special CSV (comma-separated) format that Gephi calls "Spreadsheet". Nicolas mentioned GraphML, which is a very nice network data format, but it doesn't handle dynamic graphs. And as far as I know, NetLogo doesn't generate GEXF or Gephi's "Spreadsheet" format.
However, the Gephi Spreadsheet format is very simple, and it would not be difficult to write a NetLogo procedure that writes a file in that format, appending new rows to the "Spreadsheet" CSV file on each NetLogo tick. Then Gephi could read the file in, and you'd be able to move back and forth in time, seeing how the graph changes. (You might need a bit of trial and error to figure out how to write Spreadsheet files based on the description on the Gephi site.)
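A rough sketch of such a procedure, using who numbers as node ids and ticks as the time value; note that the exact column names Gephi expects for dynamic graphs are an assumption here, so check them against the Gephi documentation:

    globals [ edge-file ]

    to setup-edge-export
      set edge-file "edges.csv"
      if file-exists? edge-file [ file-delete edge-file ]
      file-open edge-file
      ; header row: Source and Target are standard; the name of the
      ; time column is a guess -- adjust to what your Gephi version wants
      file-print "Source,Target,Time"
      file-close
    end

    to export-edges  ; call this once per tick, e.g. at the end of go
      file-open edge-file
      ask links [
        file-print (word [who] of end1 "," [who] of end2 "," ticks)
      ]
      file-close
    end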
Another option would be to stream the evolving graph live using the GraphStream protocol. Plugins for both NetLogo and Gephi provide support for this.
I'm working on bark-beetle outbreaks in the Alpine forests. To do so, I work with UAV-acquired orthophotos with multispectral bands (RGB, NIR, RE). I want to perform a supervised classification of a raster (VRT), based on field-acquired ROIs.
I successfully did this using the SAGA GUI. I'm now trying to repeat the same process using QGIS, as I have all my working layers sorted in a project there. I want to get the same supervised classification with the built-in SAGA extension, but here (unlike in the SAGA GUI) the algorithm asks for a mandatory "Load Statistics from File" parameter.
How should I set this parameter?
From the SAGA documentation, I gathered it should be the path to a statistics file (about the raster to classify?), but no further information is provided about the content of this stats file. I don't know how to create it, nor whether there is a way to create it using QGIS or the SAGA GUI.
Nor did I find any help on this in the SAGA documentation or anywhere else on the internet.
Dear AnyLogic community,
I am struggling to find the right approach for storing my simulation results. I have created datasets that keep track of every value I am interested in. They live in Main (see below).
My aim is to do a parameter variation experiment. In every run, I change the value of p_nDrones (see below).
After the experiment, I would like to store all the datasets in one Excel sheet.
However, when I do the parameter variation experiment and afterwards check the log of the dataset (datasets_log), the changed values do not even show up (2 is the value I set in the normal simulation).
Now my question: do I need to create another type of dataset if I want to track the values that are produced in the experiments? Why are they not stored after executing the experiment?
I would really appreciate it if someone could share the best way to set up this export of experiment results. I would like to store the whole time series for every dataset.
Thank you!
The best option would be to write the outputs to some external file at the end of each model run.
If you want to use Excel, you can; it even has a nice excelFile.writeDataSet() function. Personally, though, I would not advise it.
I would rather write the data to a text file: you have much more control over the writing and the file itself, it is thread-safe, and the output is usable on many more platforms than Microsoft Excel.
See my example below:
Set up parameters of type TextFile in your model that you will write the data to at the end of the run. Here I used the model's On destroy code to write out the data from the datasets.
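As a sketch, the On destroy code could look something like this -- the names outputFile (the TextFile parameter) and myDataset are placeholders for your own:

    // Main's "On destroy" code: dump every point of the dataset,
    // tagging each row with this run's parameter value
    for (int i = 0; i < myDataset.size(); i++) {
        outputFile.println(p_nDrones + "\t"
                + myDataset.getX(i) + "\t"
                + myDataset.getY(i));
    }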
Here you can immediately see the benefit of using the text file: you can add the number of drones you are simulating (or a scenario name, or any other parameter) as a column, whereas with Excel this would be a pain...
Now you can pass a specific text file for the model to use by adding it to the parameter variation page and providing it to the model through the parameters.
You will see that I also set up some headers for the text file in the Initial experiment setup section, and then, at the very end, I close the text files in the After experiment section so that they can be used.
Here is the result if you simply right-click on the text files and open them in Excel. (Excel will always have a purpose, even if it is just to open text files ;-) )
I have been trying to load some GIS data from a PostGIS database into Cytoscape 3.6, in order to get inDegree and outDegree values. I have used the SIF file format.
As long as the data is written out in the following format:
source_point\tinteracts with\ttarget_point
Cytoscape is happy to read it.
I am just wondering if there is any way of including my own metric for the cost of getting between source_point and target_point.
Sure! There are several ways to read text files into Cytoscape -- SIF is just one of them. I would create a file that looks like SIF but is actually a more complete text file:
Source\tTarget\tScore
source_point\ttarget_point\t1.1
...
Then use "File->Import Network->File", choose your source and target columns, and leave Score as an edge attribute. You can have as many attributes on each line as you want, and can even mix edge attributes, source node attributes, and target node attributes.
-- scooter
I've written a NetLogo model of agent movement in a landscape. I'd like to run this model from the command prompt, using AWS/Google Compute. The model uses about 500 MB of input rasters and shapefiles and writes rasters and CSV files. It also uses the gis, rnd, cf, table, and csv extensions.
Would this be possible using the Controlling API (https://github.com/NetLogo/NetLogo/wiki/Controlling-API)? Can I just follow the steps listed in the link? I have not tried running NetLogo from the command prompt before.
Also, I do not want to use BehaviorSpace, as it is not relevant to this model.
A BehaviorSpace experiment can consist of only a single run, so BehaviorSpace may actually be relevant to you here. You only need to write one short XML file (or no new files at all, if the experiment setup you want is already part of the model) to do it this way.
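For example, an experiment setup file along these lines (the names and step limit are placeholders; see the BehaviorSpace documentation for the exact format):

    <?xml version="1.0" encoding="us-ascii"?>
    <!DOCTYPE experiments SYSTEM "behaviorspace.dtd">
    <experiments>
      <experiment name="single-run" repetitions="1" runMetricsEveryStep="false">
        <setup>setup</setup>
        <go>go</go>
        <timeLimit steps="1000"/>
      </experiment>
    </experiments>

could then be run from the command line with something like:

    netlogo-headless.sh --model my-model.nlogo --setup-file single-run.xml --experiment single-run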
Whereas if you go through the Controlling API, you will have to write and compile Java (or Scala) code, which is a substantially more complex task.
But if you decide to go the Controlling API route: yes, that works too, and it is documented, as you've already noticed.
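The skeleton from that wiki page looks roughly like this (the model path and commands are placeholders, and the exact open() signature may vary by NetLogo version):

    import org.nlogo.headless.HeadlessWorkspace;

    public class RunModel {
        public static void main(String[] args) {
            // create a headless workspace and drive the model from Java
            HeadlessWorkspace workspace = HeadlessWorkspace.newInstance();
            try {
                workspace.open("my-model.nlogo");
                workspace.command("setup");
                workspace.command("repeat 1000 [ go ]");
                workspace.dispose();
            } catch (Exception ex) {
                ex.printStackTrace();
            }
        }
    }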
I'm looking for a way to visualize arbitrary information about my repository over time, which might be some version-dependent number, such as:
lines of code
number of lines in a LaTeX document
time between commits
anything that can be output by a script
What is the best way to visualize this information?
More specifically, I'm using Mercurial and would ideally like something with a decent interface, with plot resizing/scrolling/etc. Jenkins' Plot plugin is decent but not great; more importantly, it's not possible to visualize past data (say, after adding a new metric).
I would suggest splitting your task up to simplify everything a little bit. It is likely you will need several different tools in order to collect and visualize all the required information. The historical view seems to be another big challenge.
Lines of code
There are several plugins available for Jenkins, but almost all are highly specialized. The SLOCCount plug-in seems to be the most universal, but it does not provide any graphical output.
NSIQ Collector Plugin
SLOCCount plug-in
JavaNCSS Plugin
There might be other options for your language. For example, CCCC will provide the required information for C and C++ code.
Number of lines in a LaTeX document
I see several options to achieve that:
adapt an existing solution/plugin
use a repository statistics tool (Pepper, for example, can do the trick)
use a simple shell script to count lines and report it (see the sketch below)
Pepper will generate a chart of the line count over time. Please check the Pepper gallery for examples. There are other tools as well, for example hgchart.
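If you go the shell-script route, a minimal sketch (file names are examples; it assumes a Mercurial working copy):

    #!/bin/sh
    # count lines across all .tex files and record them against
    # the current Mercurial revision number
    REV=$(hg id -n)
    LINES=$(cat *.tex | wc -l)
    echo "$REV,$LINES" >> latex-linecount.csv

Run from a Jenkins build step, this appends one data point per build, which the plotting side can then pick up.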
Time between commits
The simplest solution is to let a commit trigger some trivial job, so that Jenkins will provide all the information as part of the build history (with a timeline, etc.).
Another solution is to use a repository statistics tool once again.
Anything that can be output by a script
There are several good plug-ins for that.
The Plot plugin can visualize multiple values provided as properties or CSV files (see the example after this list).
The Measurement Plots plugin scans the build output for values to be visualized.
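For instance, the Plot plugin's properties format is just a single YVALUE entry per data file, regenerated on every build (the file name here is an example):

    # loc.properties -- written by a build step, plotted by the Plot plugin
    YVALUE=12345

A build step along the lines of echo "YVALUE=$(cat *.tex | wc -l)" > loc.properties would keep it up to date.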
Happy continuous integration.