Headless BehaviorSpace with table extension - NetLogo

I currently have a project that I've completed using the NetLogo interface.
My project includes multiple .nls files, and one of them uses the table extension. I've created a BehaviorSpace experiment and named it experiment for simplicity.
I'm trying to run the experiment headless using the following command in my NetLogo application's directory, where NetLogo.jar is located.
java -Xmx1024m -Dfile.encoding=UTF-8 -cp ./Netlogo.jar org.nlogo.headless.Main --model /path/to/file/experiment.nlogo --experiment experiment --table /path/to/output/table-output.csv --spreadsheet /path/to/output/spreadsheet-output.csv
I'm getting the following error. It seems that the extension used in my .nls file isn't being detected. How can this be fixed?
Exception in thread "main" Nothing named TABLE:PUT has been defined at position 895 in
at org.nlogo.compiler.CompilerExceptionThrowers$.exception(CompilerExceptionThrowers.scala:26)
at org.nlogo.compiler.IdentifierParser.org$nlogo$compiler$IdentifierParser$$getAgentVariableReporter(IdentifierParser.scala:107)
at org.nlogo.compiler.IdentifierParser$$anonfun$processToken2$2.apply(IdentifierParser.scala:75)
at org.nlogo.compiler.IdentifierParser$$anonfun$processToken2$2.apply(IdentifierParser.scala:68)
at scala.Option.getOrElse(Option.scala:108)
at org.nlogo.compiler.IdentifierParser.processToken2(IdentifierParser.scala:68)
at org.nlogo.compiler.IdentifierParser.processToken$1(IdentifierParser.scala:31)
at org.nlogo.compiler.IdentifierParser$$anonfun$process$1.apply(IdentifierParser.scala:34)
at org.nlogo.compiler.IdentifierParser$$anonfun$process$1.apply(IdentifierParser.scala:34)
at scala.collection.Iterator$$anon$19.next(Iterator.scala:401)
at scala.collection.Iterator$class.toStream(Iterator.scala:1181)
at scala.collection.Iterator$$anon$19.toStream(Iterator.scala:399)
at scala.collection.Iterator$$anonfun$toStream$1.apply(Iterator.scala:1181)
at scala.collection.Iterator$$anonfun$toStream$1.apply(Iterator.scala:1181)
at scala.collection.immutable.Stream$Cons.tail(Stream.scala:1060)
at scala.collection.immutable.Stream$Cons.tail(Stream.scala:1052)
at scala.collection.immutable.StreamIterator$$anonfun$next$1.apply(Stream.scala:952)
at scala.collection.immutable.StreamIterator$$anonfun$next$1.apply(Stream.scala:952)
at scala.collection.immutable.StreamIterator$LazyCell.v(Stream.scala:941)
at scala.collection.immutable.StreamIterator.hasNext(Stream.scala:946)
at scala.collection.Iterator$class.isEmpty(Iterator.scala:329)
at scala.collection.immutable.StreamIterator.isEmpty(Stream.scala:933)
at scala.collection.immutable.StreamIterator.next(Stream.scala:948)
at scala.collection.Iterator$$anon$2.next(Iterator.scala:898)
at scala.collection.Iterator$$anon$2.head(Iterator.scala:885)
at org.nlogo.compiler.ExpressionParser.recurse$1(ExpressionParser.scala:474)
at org.nlogo.compiler.ExpressionParser.delayBlock(ExpressionParser.scala:479)
at org.nlogo.compiler.ExpressionParser.parseExpressionInternal(ExpressionParser.scala:339)
at org.nlogo.compiler.ExpressionParser.org$nlogo$compiler$ExpressionParser$$parseArgExpression(ExpressionParser.scala:290)
at org.nlogo.compiler.ExpressionParser$$anonfun$parseArguments$1.apply$mcVI$sp(ExpressionParser.scala:96)
at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:81)
at org.nlogo.compiler.ExpressionParser.parseArguments(ExpressionParser.scala:95)
at org.nlogo.compiler.ExpressionParser.parseStatement(ExpressionParser.scala:80)
at org.nlogo.compiler.ExpressionParser.parse(ExpressionParser.scala:55)
at org.nlogo.compiler.CompilerMain$$anonfun$compile$1.apply(CompilerMain.scala:34)
at org.nlogo.compiler.CompilerMain$$anonfun$compile$1.apply(CompilerMain.scala:29)
at scala.collection.Iterator$class.foreach(Iterator.scala:772)
at scala.collection.JavaConversions$JIteratorWrapper.foreach(JavaConversions.scala:573)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:73)
at scala.collection.JavaConversions$JCollectionWrapper.foreach(JavaConversions.scala:592)
at org.nlogo.compiler.CompilerMain$.compile(CompilerMain.scala:29)
at org.nlogo.compiler.Compiler$.compileProgram(Compiler.scala:28)
at org.nlogo.headless.HeadlessModelOpener.openFromMap(HeadlessModelOpener.scala:53)
at org.nlogo.headless.HeadlessWorkspace.openString(HeadlessWorkspace.scala:531)
at org.nlogo.headless.HeadlessWorkspace.open(HeadlessWorkspace.scala:513)
at org.nlogo.headless.Main$.newWorkspace$1(Main.scala:19)
at org.nlogo.headless.Main$$anonfun$runExperiment$1.apply(Main.scala:24)
at org.nlogo.headless.Main$$anonfun$runExperiment$1.apply(Main.scala:24)
at org.nlogo.lab.Lab$$anonfun$1.apply(Lab.scala:33)
at org.nlogo.lab.Lab$$anonfun$1.apply(Lab.scala:33)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:233)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:233)
at scala.collection.immutable.Range.foreach(Range.scala:78)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:233)
at scala.collection.immutable.Range.map(Range.scala:46)
at org.nlogo.lab.Lab.run(Lab.scala:33)
at org.nlogo.headless.Main$.runExperiment(Main.scala:24)
at org.nlogo.headless.Main$$anonfun$main$1.apply(Main.scala:14)
at org.nlogo.headless.Main$$anonfun$main$1.apply(Main.scala:14)
at scala.Option.foreach(Option.scala:197)
at org.nlogo.headless.Main$.main(Main.scala:14)
at org.nlogo.headless.Main.main(Main.scala)

It turns out that the extensions declaration must be in the .nlogo file itself, rather than in the included .nls file.
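For illustration, the top of the main model's Code tab might then look something like this (a minimal sketch; the .nls file name is just an example):
extensions [table]
__includes ["my-procedures.nls"]
With the declaration in the .nlogo file, the table:put calls made inside the included .nls file compile in headless mode as well.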

Related

How can I get "HelloWorld - BitBake Style" working on a newer version of Yocto?

In the book "Embedded Linux Systems with the Yocto Project", Chapter 4 contains a sample called "HelloWorld - BitBake style". I encountered a bunch of problems trying to get the old example working against the "Sumo" release 2.5.
If you're like me, the first error you encountered following the book's instructions was that you copied across bitbake.conf and got:
ERROR: ParseError at /tmp/bbhello/conf/bitbake.conf:749: Could not include required file conf/abi_version.conf
And after copying over abi_version.conf as well, you kept finding more and more cross-connected files that needed to be moved, and then some relative-path errors after that... Is there a better way?
Here's a series of steps which can allow you to bitbake nano based on the book's instructions.
Unless otherwise specified, these samples and instructions are all based on the online copy of the book's code-samples. While convenient for copy-pasting, the online resource is not totally consistent with the printed copy, and contains at least one extra bug.
Initial workspace setup
This guide assumes that you're working with Yocto release 2.5 ("sumo"), installed into /tmp/poky, and that the build environment will go into /tmp/bbhello. If you don't have the Poky tools+libraries already, the easiest way is to clone them with:
$ git clone -b sumo git://git.yoctoproject.org/poky.git /tmp/poky
Then you can initialize the workspace with:
$ source /tmp/poky/oe-init-build-env /tmp/bbhello/
If you start a new terminal window, you'll need to repeat the previous command to get your shell environment set up again, but it should not replace any of the files created inside the workspace the first time.
Wiring up the defaults
The oe-init-build-env script should have just created these files for you:
bbhello/conf/local.conf
bbhello/conf/templateconf.cfg
bbhello/conf/bblayers.conf
Keep these; they supersede some of the book's instructions, meaning that you should not create or keep the files:
bbhello/classes/base.bbclass
bbhello/conf/bitbake.conf
Similarly, do not overwrite bbhello/conf/bblayers.conf with the book's sample. Instead, edit it to add a single line pointing to your own meta-hello folder, ex:
BBLAYERS ?= " \
${TOPDIR}/meta-hello \
/tmp/poky/meta \
/tmp/poky/meta-poky \
/tmp/poky/meta-yocto-bsp \
"
Creating the layer and recipe
Go ahead and create the following files from the book-samples:
meta-hello/conf/layer.conf
meta-hello/recipes-editor/nano/nano.bb
We'll edit these files gradually as we hit errors.
Can't find recipe error
The error:
ERROR: BBFILE_PATTERN_hello not defined
It is caused by the book website's bbhello/meta-hello/conf/layer.conf being internally inconsistent: it uses the collection name "hello" but on the next two lines uses _test suffixes. Just change them to _hello to match:
# Set layer search pattern and priority
BBFILE_COLLECTIONS += "hello"
BBFILE_PATTERN_hello := "^${LAYERDIR}/"
BBFILE_PRIORITY_hello = "5"
Interestingly, this error is not present in the printed copy of the book.
No license error
The error:
ERROR: /tmp/bbhello/meta-hello/recipes-editor/nano/nano.bb: This recipe does not have the LICENSE field set (nano)
ERROR: Failed to parse recipe: /tmp/bbhello/meta-hello/recipes-editor/nano/nano.bb
This can be fixed by adding a license setting with one of the values that bitbake recognizes. In this case, add a line to nano.bb:
LICENSE = "GPLv3"
Recipe parse error
ERROR: ExpansionError during parsing /tmp/bbhello/meta-hello/recipes-editor/nano/nano.bb
[...]
bb.data_smart.ExpansionError: Failure expanding variable PV_MAJOR, expression was ${@bb.data.getVar('PV',d,1).split('.')[0]} which triggered exception AttributeError: module 'bb.data' has no attribute 'getVar'
This is fixed by updating the inline Python used in the recipe, because the bb.data module's getVar was deprecated and has since been removed. Instead, call getVar on the datastore d directly, ex:
PV_MAJOR = "${@d.getVar('PV', True).split('.')[0]}"
PV_MINOR = "${@d.getVar('PV', True).split('.')[1]}"
License checksum failure
ERROR: nano-2.2.6-r0 do_populate_lic: QA Issue: nano: Recipe file fetches files and does not have license file information (LIC_FILES_CHKSUM) [license-checksum]
This can be fixed by adding a directive to the recipe telling it what license-info-containing file to grab, and what checksum we expect it to have.
We can follow the way the recipe generates the SRC_URI, and modify it slightly to point at the COPYING file in the same web-directory. Add this line to nano.bb:
LIC_FILES_CHKSUM = "${SITE}/v${PV_MAJOR}.${PV_MINOR}/COPYING;md5=f27defe1e96c2e1ecd4e0c9be8967949"
The MD5 checksum in this case came from manually downloading and inspecting the matching file.
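If you want to reproduce such a checksum yourself, one way is (a sketch; substitute whatever ${SITE}, ${PV_MAJOR} and ${PV_MINOR} expand to in your recipe):
$ wget <SITE>/v<PV_MAJOR>.<PV_MINOR>/COPYING
$ md5sum COPYING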
Done!
Now bitbake nano ought to work, and when it completes you should see that it built nano:
/tmp/bbhello $ find ./tmp/deploy/ -name "*nano*.rpm*"
./tmp/deploy/rpm/i586/nano-dbg-2.2.6-r0.i586.rpm
./tmp/deploy/rpm/i586/nano-dev-2.2.6-r0.i586.rpm
I recently worked on that hands-on hello world project. In my opinion, the source code in the book contains some bugs. Below is a list of suggested fixes:
Inheriting native class
When you build with the bitbake you got from Poky, it builds only for the target, unless you state in your recipe that you are building for the host machine (native). You can do the latter by adding this line at the end of your recipe:
inherit native
Adding license information
It is worth mentioning that the LICENSE variable must be set in any recipe, otherwise bitbake raises an error. In our case, we are building version 2.2.6 of the nano editor, whose license is GPLv3, hence it should be set as follows:
LICENSE = "GPLv3"
Using os.system calls
As the book states, you cannot dereference metadata directly from a Python function, which means you must access metadata through the d dictionary. Below is a suggestion for the do_unpack Python function; you can apply the same idea to the next tasks (do_configure, do_compile):
python do_unpack() {
    # Pull the needed paths out of the BitBake datastore 'd'
    workdir = d.getVar("WORKDIR", True)
    dl_dir = d.getVar("DL_DIR", True)
    p = d.getVar("P", True)
    tarball_name = os.path.join(dl_dir, p + ".tar.gz")
    bb.plain("Unpacking tarball")
    # Extract the downloaded tarball into the work directory
    os.system("tar -x -C " + workdir + " -f " + tarball_name)
    bb.plain("tarball unpacked successfully")
}
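For the remaining tasks, a rough do_compile following the same pattern might look like this (an illustrative sketch only, assuming nano's standard configure/make build and that the tarball unpacks into ${WORKDIR}/${P}):
python do_compile() {
    # Again, read paths from the datastore instead of dereferencing metadata directly
    workdir = d.getVar("WORKDIR", True)
    p = d.getVar("P", True)
    src_dir = os.path.join(workdir, p)
    bb.plain("Configuring and compiling " + p)
    # Run nano's configure script and make inside the unpacked source tree
    os.system("cd " + src_dir + " && ./configure && make")
    bb.plain("compile finished")
}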
Launching the nano editor
After successfully building your nano editor package, you can find the nano executable in the following directory, in case you are using Ubuntu (arch x86_64):
./tmp/work/x86_64-linux/nano/2.2.6-r0/src/nano
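As a quick sanity check, you could try running that binary directly (assuming the build-tree layout above):
/tmp/bbhello $ ./tmp/work/x86_64-linux/nano/2.2.6-r0/src/nano --version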
Should you have any comments or questions, don't hesitate to ask!

Unable to set the package name for classpath in randoop

Here is the project structure cloned from GitHub after compiling successfully on Ubuntu:
javaml
  bin
    net/sf/javaml/core/Dataset.class
  src
    net/sf/javaml/core/Dataset.java
When the following command was given:
java -ea -classpath /home/shahid/git/javaml/bin:/home/shahid/a_f_w/randoop-3.1.5/randoop-all-3.1.5.jar randoop.main.Main gentests --testclass=net.sf.javaml.core.Dataset --literals-file=CLASSES
It generated the error:
Ignoring interface net.sf.javaml.core.Dataset specified via --classlist or --testclass.
No classes to test
Meanwhile, the other command, java -ea -classpath /home/shahid/git/java-ml/bin:/home/shahid/a_f_w/randoop-3.1.5/randoop-all-3.1.5.jar randoop.main.Main gentests --testclass=DataSet --literals-file=CLASSES, without a package, works perfectly for another project.
Any help will be appreciated.
The error message gives you the answer:
Ignoring interface net.sf.javaml.core.Dataset specified via --classlist or --testclass. No classes to test
You are supposed to provide a class, not an interface, to the --testclass command-line argument.
By passing --testclass=net.sf.javaml.core.Dataset to Randoop, you indicated that you only want Randoop to create objects of type net.sf.javaml.core.Dataset. However, since that is an interface, it cannot be instantiated, and Randoop cannot create any objects, nor any tests.
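For example, pointing Randoop at a concrete class should let it construct objects and generate tests. An untested sketch, assuming net.sf.javaml.core.DefaultDataset (the concrete Dataset implementation in Java-ML) is present in your bin directory:
java -ea -classpath /home/shahid/git/javaml/bin:/home/shahid/a_f_w/randoop-3.1.5/randoop-all-3.1.5.jar randoop.main.Main gentests --testclass=net.sf.javaml.core.DefaultDataset --literals-file=CLASSES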
The following command worked for me:
java -ea -classpath ~/javaml/bin/:~/Randoop/randoop-all-3.1.4.jar randoop.main.Main gentests --testclass=net.sf.javaml.core.Complex --literals-file=CLASSES
@mernst thanks for your kind response.

PredictionIO - getting error when build and run Evaluation metrics

I followed this quickstart:
https://docs.prediction.io/templates/classification/quickstart/
and this document for evaluation metrics
https://docs.prediction.io/evaluation/paramtuning/
Everything seems OK until the step to build and run the evaluation metrics:
pio eval org.template.classification.AccuracyEvaluation \
org.template.classification.EngineParamsList
I am getting the exception:
Exception in thread "main" scala.reflect.internal.MissingRequirementError: object org.template.classification.AccuracyEvaluation not found.
at scala.reflect.internal.MissingRequirementError$.signal(MissingRequirementError.scala:16)
at scala.reflect.internal.MissingRequirementError$.notFound(MissingRequirementError.scala:17)
at scala.reflect.internal.Mirrors$RootsBase.ensureModuleSymbol(Mirrors.scala:126)
at scala.reflect.internal.Mirrors$RootsBase.staticModule(Mirrors.scala:161)
at scala.reflect.internal.Mirrors$RootsBase.staticModule(Mirrors.scala:21)
at io.prediction.workflow.WorkflowUtils$.getEvaluation(WorkflowUtils.scala:103)
at io.prediction.workflow.CreateWorkflow$$anonfun$19.apply(CreateWorkflow.scala:146)
at io.prediction.workflow.CreateWorkflow$$anonfun$19.apply(CreateWorkflow.scala:144)
Could anyone help me with this?
Thank you very much.
Had the exact same problem. Fixed it by doing the following:
For each .scala file in engine_dir/src/main/scala/org/template/engine_name/ you need to change the first line from...
package <SomeTemplateName>
To the following (replacing engine_name with the name of the folder in the path mentioned above):
package org.template.<engine_name>
Then, in engine.json you need to change the following line...
"engineFactory": "<template name>.<template engine>",
To the following (once again replacing engine_name with the name of the folder in the path mentioned above):
"engineFactory": "org.template.<engine name>.<template engine>",
Now re-run...
pio build
pio train
pio deploy
Then you should be able to run the model evaluation without errors.
Simply run it like this
$ pio eval org.example.classification.AccuracyEvaluation \
org.example.classification.EngineParamsList
You don't have to change anything. The class package in the sample was org.example.classification, not org.template.classification.

The input line is too long. The syntax of the command is incorrect

When I start Play (Scala) in production mode, it throws this kind of error. Could anyone please give me a clear idea of what's wrong?
F:\New_CMS\trunk\server\cms>activator start
[info] Loading project definition from F:\New_CMS\trunk\server\cms\project
[info] Set current project to cms (in build file:/F:/New_CMS/trunk/server/cms/)
[info] Wrote F:\New_CMS\trunk\server\cms\target\scala-2.11\cms_2.11-1.0-SNAPSHOT.pom
Starting server. Type Ctrl+D to exit logs, the server will remain in background
The input line is too long.
The syntax of the command is incorrect.
Follow these steps as a Windows solution:
activator stage in the command line
Copy the stage directory from target\universal\stage to c:\stage to avoid issues with long file paths
To avoid the Bad Application Path issue, just create a new .bat file containing the following (my project is called proj):
set PROJ_OPTS="-Dconfig.file=../conf/application.conf"
proj.bat
Note: change PROJ_OPTS to YOURPROJECTNAME_OPTS and proj.bat to yourprojectname.bat

Default configuration in Play 2.0 causes IOException?

Is this a bug with the default configuration of Play 2.0? I have just installed Play 2.0 and when I create an application for the first time, I get this:
Error during sbt execution: java.io.IOException: Cannot write parent directory: Path(/home/hanxue/play/myFirstApp/app) of /home/hanxue/play/myFirstApp/app/views
The app subdirectory does not have write permissions:
hanxue@ubuntu-dev:~/play$ ls -l myFirstApp/
total 16
dr-xr-xr-x 2 hanxue hanxue 4096 2012-03-13 11:22 app
It also seems that the $PLAY/framework/sbt/boot/ directory needs to be world-writable, or otherwise Play will throw an IOException about not being able to create /opt/play-2.0/framework/sbt/boot/sbt.boot.lock . Is this by design?
I solved it by giving it world-writable permissions:
chmod -R o+w /opt/play-2.0/framework/sbt/boot/
This is not a bug; it is just how sbt works (and Play 2.0 uses sbt). Sbt downloads all necessary parts the first time it is started, and the default behavior of Play is that all dependencies go to the same directory, which is the place where you extracted play-2.0, so that directory needs write permissions. You will probably need write permissions on /opt/play-2.0/repository too.
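Following the same approach as above, something like this should cover that directory too (a guess at the equivalent command; adjust the path to wherever you extracted Play):
chmod -R o+w /opt/play-2.0/repository/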