I would like to parse all the calls to patches statements in a given .nlogo script. For instance, for the Ants model, I'd like to get:
Seq(
  "sum [food] of patches with [pcolor = cyan]",
  "sum [food] of patches with [pcolor = sky]",
  "sum [food] of patches with [pcolor = blue]")
The idea is to build a wizard in OpenMOLE that automatically generates an OpenMOLE script with the relevant inputs and outputs, so that it is ready to run through OpenMOLE.
Thanks
I'll assume you actually want to pull code out of monitors and plots, as we discussed in the comments above.
The .nlogo format is documented, somewhat incompletely and poorly, at https://github.com/NetLogo/NetLogo/wiki/Model-file-format.
There aren't already API calls for doing exactly what you want here, so you're going to have to go delving into the NetLogo source code to figure out how to assemble what you need out of the pieces that are available. The best file to begin by looking at is:
https://github.com/NetLogo/NetLogo/blob/5.x/src/main/org/nlogo/headless/HeadlessModelOpener.scala
It shows the needed calls for parsing a model file as a whole and splitting it into sections.
For parsing the interface section specifically, you'll see (in that same source file) that WidgetParser.parseWidgets does that, and there are cases for each widget type. So for plots and monitors the relevant code is:
https://github.com/NetLogo/NetLogo/blob/acb5fb5f576fe235032f95617da0835126048ef5/src/main/org/nlogo/headless/HeadlessModelOpener.scala#L201-L207
https://github.com/NetLogo/NetLogo/blob/acb5fb5f576fe235032f95617da0835126048ef5/src/main/org/nlogo/headless/HeadlessModelOpener.scala#L181-L186
As you can see, in both cases there are calls out to other code elsewhere that does the actual parsing.
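If you'd rather not depend on NetLogo internals, a cruder route is to split the file yourself on the @#$#@#$#@ section delimiter (documented on the wiki page above) and scan the interface section for MONITOR widgets. Here is a minimal Scala sketch of that approach; the widget field offsets and the blank-line separation between widgets are assumptions from eyeballing sample models, so verify them against the parsing code linked above.

import scala.io.Source

object MonitorSources {
  // The section delimiter documented in the .nlogo file format.
  val Delimiter = "@#$#@#$#@"

  def main(args: Array[String]): Unit = {
    val model    = Source.fromFile(args(0)).mkString
    val sections = model.split(java.util.regex.Pattern.quote(Delimiter))
    // Section 0 is the Code tab, section 1 is the interface;
    // widgets inside the interface section are separated by blank lines.
    val widgets = sections(1).trim.split("\n\n")
    for (w <- widgets if w.startsWith("MONITOR")) {
      // Assumed MONITOR block layout: MONITOR, left, top, right,
      // bottom, display name, source, precision, ...
      val lines = w.split("\n")
      if (lines.length > 6) println(lines(6))
    }
  }
}

Run against the Ants model, this should print the three sum [food] of patches with [...] reporters from the food monitors.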
I am using R/exams to generate Moodle exams (thanks, Achim and team). I would like to make an introductory page that sets the scenario for the exam. Is there a way to do this? (Right now, I am generating an schoice question with an empty answerlist.)
Thanks!
João Marôco
Usually, I wouldn't do this "inside" the exam but "outside". In Moodle you can include a "Description" in the "General Settings" when editing the quiz. This is where I would put all the general information so that students read this before starting with the actual questions.
If you want to include R-generated content (R output, graphics, data, ...) in this description I would usually include this in "Question 1" rather than as a "Question 0" without any actual questions.
The "description" question type could be used for the latter, though. However, it is currently not supported in exams2moodle() (I'll put it on the wishlist). You could manually work around this in the following steps:
1. Create a string question with the desired content and set the associated expoints to 0.
2. Generate the Moodle XML output as usual with exams2moodle().
3. Open the XML file in a text editor (or simply within RStudio) and replace <question type="shortanswer"> with <question type="description"> for the relevant questions, as illustrated below.
4. In the XML file, omit the <answer>...</answer> element for the relevant questions.
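For illustration, the edit in the Moodle XML would look roughly like this (the surrounding elements are abbreviated with ... and the exact structure depends on your exercise):

<!-- before: as generated by exams2moodle() -->
<question type="shortanswer">
  ...
  <answer fraction="100">
    <text>dummy</text>
  </answer>
</question>

<!-- after: manually edited -->
<question type="description">
  ...
</question>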
Caveat: As you are aware, it is technically possible to share the same data across subsequent exercises within the same exam. If .Rnw exercises are used, all variables from the exercises are created in the global environment (.GlobalEnv) and can easily be accessed anyway. If .Rmd exercises are used, it is necessary to set the envir argument to a dedicated shared environment (e.g., .GlobalEnv or a new.env()) in exams2moodle(..., envir = ...). However, if this is done, you must not let Moodle draw random exercises, because that would break the connections between the exercises (i.e., the first replication of Question 1 would not necessarily be followed by the first replication of Question 2). Instead, you have to put together tests with a fixed selection of exercises (i.e., always the first replication for all questions, or the second replication for all questions, etc.).
I'm using the sbt-scoverage plugin to measure the code (statement) coverage in our project. After months of not worrying about coverage or our tests, we decided to set a threshold enforcing a minimum coverage percentage: if you are writing code, at least try to leave the project with the same coverage percentage it had when you found it. E.g., if you started your feature branch on a project with 63% coverage, you have to leave the coverage at that same value after finishing your feature.
With this we want to ensure a gradual adoption of better practices instead of setting a fixed coverage value (something like coverageMinimum := XX).
Having said that, I'm considering the possibility of storing the last value of the analysis in a file and then comparing it with a new execution triggered by the developer.
Another option that I'm considering is to retrieve this value from our SonarQube server based on the data stored there.
My question is: is there a way to do something like this with sbt-scoverage? I've dug into the docs and their Google Groups forum, but I can't find anything about it.
Thanks in advance!
The coverageMinimum setting doesn't have to be a constant; you can write any function that computes it dynamically, e.g.:
coverageMinimum := {
  val tmp = 2 + 4
  10 * tmp // returns 60 :)
}
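Building on that, here is a minimal sketch of your file-based idea for build.sbt, assuming a coverage-baseline.txt file in the project root that contains a single number such as 63.0 (both the file name and its format are made up for illustration):

// Use the previously recorded coverage percentage as the minimum,
// so the build fails when coverage drops below the baseline.
coverageMinimum := {
  val baseline = baseDirectory.value / "coverage-baseline.txt"
  if (baseline.exists) IO.read(baseline).trim.toDouble
  else 0.0 // no baseline recorded yet: don't fail the build
}
coverageFailOnMinimum := true

You would still need a small sbt task or CI step that rewrites the baseline file from the scoverage report after a successful run; fetching the number from SonarQube's web API instead would follow the same pattern.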
I want to make a network graph which shows the distribution of our documents in our folder structure.
I have the nodefile, edgefile and gephi graph file in this location:
https://1drv.ms/f/s!AuVfRBdVHkO7hgs5K9r9f7jBBAUH
What I do is:
Run the ForceAtlas2 layout algorithm with scaling 10-20, "Dissuade Hubs" and "Prevent Overlap" checked, and all other settings at their defaults.
What I get is a graph with groups distributed radially/spherically. However, what I want is a directed tree layout.
Does anyone know how I can adjust Gephi to produce this?
Thanks!
I just found a solution.
I tested the file format as shown on the yEd site's "import excel file" page:
http://yed.yworks.com/support/manual/import_excel.html
This gave me the yEd import dialog (it took me a lifetime to figure out that it is a pop-up dialog and not reachable through the standard menus).
Anyway, it worked, and I adjusted the test files with the data I had prepared for Gephi. This was pretty easy; I could reuse the source/target IDs etc. with simple copy-paste.
I loaded it into yEd and used some directed and radial clustering algorithms on it. Works fine!
Below you can find the Excel node/edge file used for the yEd import, and the graph file you can open with yEd to see the final radial result.
https://1drv.ms/f/s!AuVfRBdVHkO7hg6DExK_eVkm5_mR
The only thing left to figure out is how to map the weight (which represents the number of documents) to the node size.
Unfortunately, as of version 0.9.0, Gephi no longer supports hierarchical graphs. Maybe try using a previous version?
Other alternatives involve more complex software, such as Graphviz, but you need a .dot file instead of your .csv. I looked all over, but could not find an easy-to-use csv to dot converter.
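Rolling your own is not much work, though. Here is a minimal Scala sketch, assuming an edges.csv with a header row whose first two columns are the source and target node IDs (a typical Gephi edge list; the file name and column order are assumptions):

import scala.io.Source
import java.io.PrintWriter

object CsvToDot {
  def main(args: Array[String]): Unit = {
    // Skip the header row; emit one DOT edge per CSV line.
    val edges = Source.fromFile("edges.csv").getLines().drop(1).map { line =>
      val cols = line.split(",")
      s""""${cols(0)}" -> "${cols(1)}";"""
    }
    val out = new PrintWriter("graph.dot")
    out.println("digraph docs {")
    edges.foreach(out.println)
    out.println("}")
    out.close()
  }
}

Rendering the result with dot -Tpng graph.dot -o graph.png then gives the layered, tree-like drawing that ForceAtlas2 won't produce.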
You could also try d3-hierarchy, a JavaScript (D3/node.js) module, but then again you need to use the not-so-user-friendly npm. If you look at the link, it looks like it can produce the kind of diagram you're looking for.
I'd like to write a plugin for a Windows application, and it must be a DLL. I'd really love to try doing it in a mix of Red and Red/System. But asking in the Rebol&Red chatroom here on SO, I got mixed responses as to whether it is currently possible in both Red and Red/System, or in Red/System only. What's the definitive answer?
Yes, it is possible. You can check the announcement on the Red blog for version 0.3.3.
First of all, here is a short snippet describing the process for Red/System:
Shared lib generation
For the past year we have been working on shared library generation, and it is now available in the main branch. New features were added to support library generation, like a way to declare the exported symbols and special callback functions for when the library is loaded and freed.
Here is a simple example of a Red/System library:
Red/System [
File: %testlib.reds
]
inc: func [n [integer!] return: [integer!]][n + 1]
#export [inc]
You compile such a shared library using the new -dlib command-line option:
do/args %rsc.r "-dlib testlib.reds"
The output binary name will have a platform-specific suffix (.dll, .so, or .dylib).
Secondly, I was finally able to get a single simple Red script to compile to a .dll. The #export directive needs to be in a Red/System context, which the #system-global directive provides. Any function you have in Red needs to be wrapped in a Red/System wrapper; you can do this using #call, as shown below:
Red []

hello: does [print "hello"]

#system-global [
    hellosystem: does [
        #call [hello]
    ]
    #export cdecl [hellosystem]
]
I wanted to use Phactory to test part of the submit action source as given below:
http://awesomescreenshot.com/050a0p2a6
Using Phactory, I have verified that the data is going into the DB, but the problem is with code coverage, as some of the lines are still highlighted in red.
How do I test those lines so that the code coverage shows them in green?
The code in orange indicates code that has run but is not covered by any of your tests. In the screenshot, it looks like your if condition is false, so that branch is never exercised by your test. The expression in question suggests your data is not valid, perhaps because your fields are failing validation.