I'm trying to get to grips with command line operations of NetLogo on a Windows 10 machine. I want to run the Fire.nlogo model provided.
I set the directory with cd C:\Program Files\NetLogo 6.0.2
Then I try to run a simple experiment called experiment1, which I've set up beforehand in BehaviorSpace:
netlogo-headless --model Fire.nlogo --experiment experiment1
This gives me the following error:
Exception in thread "main" java.io.FileNotFoundException: C:\Program Files\NetLogo 6.0.2\Fire.nlogo (The system cannot find the file specified)
at java.io.FileInputStream.open0(Native Method)
at java.io.FileInputStream.open(FileInputStream.java:195)
at java.io.FileInputStream.<init>(FileInputStream.java:138)
at scala.io.Source$.fromFile(Source.scala:91)
at scala.io.Source$.fromFile(Source.scala:76)
at scala.io.Source$.fromURI(Source.scala:121)
at org.nlogo.fileformat.AbstractNLogoFormat.$anonfun$sections$1(NLogoFormat.scala:37)
at scala.util.Try$.apply(Try.scala:209)
at org.nlogo.fileformat.AbstractNLogoFormat.sections(NLogoFormat.scala:36)
at org.nlogo.fileformat.AbstractNLogoFormat.sections$(NLogoFormat.scala:34)
at org.nlogo.fileformat.NLogoFormat.sections(NLogoFormat.scala:16)
at org.nlogo.api.ModelFormat.load(ModelFormat.scala:53)
at org.nlogo.api.ModelFormat.load$(ModelFormat.scala:51)
at org.nlogo.fileformat.NLogoFormat.load(NLogoFormat.scala:16)
at org.nlogo.api.FormatterPair.load(ModelLoader.scala:26)
at org.nlogo.api.ModelLoader.readModel(ModelLoader.scala:60)
at org.nlogo.api.ModelLoader.readModel$(ModelLoader.scala:57)
at org.nlogo.api.ConfigurableModelLoader.readModel(ModelLoader.scala:90)
at org.nlogo.headless.HeadlessWorkspace.open(HeadlessWorkspace.scala:491)
at org.nlogo.headless.Main$.newWorkspace$1(Main.scala:18)
at org.nlogo.headless.Main$.runExperiment(Main.scala:21)
at org.nlogo.headless.Main$.$anonfun$main$1(Main.scala:12)
at org.nlogo.headless.Main$.$anonfun$main$1$adapted(Main.scala:12)
at scala.Option.foreach(Option.scala:257)
at org.nlogo.headless.Main$.main(Main.scala:12)
at org.nlogo.headless.Main.main(Main.scala)
I notice that the output gives the path as C:\Program Files\NetLogo 6.0.2\Fire.nlogo, but the model is actually located at C:\Program Files\NetLogo 6.0.2\app\models\Sample Models\Earth Science\Fire.nlogo
Yet I seem to be following the tutorial as it's written here: https://ccl.northwestern.edu/netlogo/docs/behaviorspace.html
Any ideas where I'm going wrong here? Thanks.
A quick look suggests that you need to give the full file path to the --model argument. So the command would look like:
netlogo-headless --model "C:\Program Files\NetLogo 6.0.2\app\models\Sample Models\Earth Science\Fire.nlogo" --experiment experiment1
Since you have already set the working directory with cd C:\Program Files\NetLogo 6.0.2, you can probably get away with a relative path:
netlogo-headless --model "app\models\Sample Models\Earth Science\Fire.nlogo" --experiment experiment1
Alternatively, you can change to the directory that contains the model you want to run, and instead provide the path (again quoted, because of the spaces) to the .bat file:
"c:\Program Files\NetLogo 6.0.2\netlogo-headless.bat" --model Fire.nlogo --experiment experiment1
I get an "output location validation failed" exception in my pig script on EMR.
It fails when saving data back S3.
I use this simple script to narrow the problem:
REGISTER /home/hadoop/lib/mongo-java-driver-2.13.0.jar
REGISTER /home/hadoop/lib/mongo-hadoop-core-1.3.2.jar
REGISTER /home/hadoop/lib/mongo-hadoop-pig-1.3.2.jar
example = LOAD 's3://xxx/example-full.bson'
USING com.mongodb.hadoop.pig.BSONLoader();
STORE example INTO 's3n://xxx/out/example.bson' USING com.mongodb.hadoop.pig.BSONStorage();
This is the stack trace produced:
================================================================================
Pig Stack Trace
---------------
ERROR 6000:
<line 8, column 0> Output Location Validation Failed for: 's3://xxx/out/example.bson More info to follow:
Output directory not set.
org.apache.pig.impl.logicalLayer.FrontendException: ERROR 1002: Unable to store alias example
at org.apache.pig.PigServer$Graph.registerQuery(PigServer.java:1637)
at org.apache.pig.PigServer.registerQuery(PigServer.java:577)
at org.apache.pig.tools.grunt.GruntParser.processPig(GruntParser.java:1091)
at org.apache.pig.tools.pigscript.parser.PigScriptParser.parse(PigScriptParser.java:501)
at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:198)
at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:173)
at org.apache.pig.tools.grunt.Grunt.run(Grunt.java:69)
at org.apache.pig.Main.run(Main.java:543)
at org.apache.pig.Main.main(Main.java:156)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
Caused by: org.apache.pig.impl.plan.VisitorException: ERROR 6000:
<line 8, column 0> Output Location Validation Failed for: 's3://xxx/out/example.bson More info to follow:
Output directory not set.
at org.apache.pig.newplan.logical.rules.InputOutputFileValidator$InputOutputFileVisitor.visit(InputOutputFileValidator.java:95)
at org.apache.pig.newplan.logical.relational.LOStore.accept(LOStore.java:66)
at org.apache.pig.newplan.DepthFirstWalker.depthFirst(DepthFirstWalker.java:64)
at org.apache.pig.newplan.DepthFirstWalker.depthFirst(DepthFirstWalker.java:66)
at org.apache.pig.newplan.DepthFirstWalker.depthFirst(DepthFirstWalker.java:66)
at org.apache.pig.newplan.DepthFirstWalker.walk(DepthFirstWalker.java:53)
at org.apache.pig.newplan.PlanVisitor.visit(PlanVisitor.java:52)
at org.apache.pig.newplan.logical.rules.InputOutputFileValidator.validate(InputOutputFileValidator.java:45)
at org.apache.pig.backend.hadoop.executionengine.HExecutionEngine.compile(HExecutionEngine.java:317)
at org.apache.pig.PigServer.compilePp(PigServer.java:1382)
at org.apache.pig.PigServer.executeCompiledLogicalPlan(PigServer.java:1307)
at org.apache.pig.PigServer.execute(PigServer.java:1299)
at org.apache.pig.PigServer.access$400(PigServer.java:124)
at org.apache.pig.PigServer$Graph.registerQuery(PigServer.java:1632)
... 13 more
Caused by: org.apache.hadoop.mapred.InvalidJobConfException: Output directory not set.
at org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:138)
at org.apache.pig.newplan.logical.rules.InputOutputFileValidator$InputOutputFileVisitor.visit(InputOutputFileValidator.java:80)
... 26 more
To set up the MongoDB connector I used this bootstrap script:
#!/bin/sh
wget -P /home/hadoop/lib http://central.maven.org/maven2/org/mongodb/mongo-java-driver/2.13.0/mongo-java-driver-2.13.0.jar
wget -P /home/hadoop/lib https://github.com/mongodb/mongo-hadoop/releases/download/r1.3.2/mongo-hadoop-core-1.3.2.jar
wget -P /home/hadoop/lib https://github.com/mongodb/mongo-hadoop/releases/download/r1.3.2/mongo-hadoop-pig-1.3.2.jar
wget -P /home/hadoop/lib https://github.com/mongodb/mongo-hadoop/releases/download/r1.3.2/mongo-hadoop-hive-1.3.2.jar
cp /home/hadoop/lib/mongo* /home/hadoop/hive/lib
cp /home/hadoop/lib/mongo* /home/hadoop/pig/lib
The error suggests that the output directory does not exist, so the obvious fix is to create it.
For a quick check you can also point the output at the same location as the input. If the directory actually does exist, it may be a permissions issue.
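If you want to verify from the master node first, the hadoop fs commands accept S3 URIs on EMR; a sketch, using the placeholder bucket name from the script above:
hadoop fs -ls s3n://xxx/out/
hadoop fs -mkdir s3n://xxx/out/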
I am getting an error in sbt saying that a file name is too long.
[info] Compiling 29 Scala sources to /home/chris/dev/suredbits-core/target/scala-2.11/classes...
[error] File name too long
[error] one error found
[error] (compile:compile) Compilation failed
[error] Total time: 7 s, completed Feb 17, 2015 8:10:25 AM
How do I find out which file name is too long so I can shorten it? I have added the compiler flag -Xmax-classfile-name and set it to 254.
If your /home is an encrypted file system (e.g. LUKS), you might run into this issue.
Setting -Xmax-classfile-name to 254 barely reduces anything, since the default is already 254 (or possibly 255). You should probably be aiming for a maximum length closer to 70 - 100. You can set it for all your projects by creating ~/.sbt/0.13/local.sbt with the scalac override:
scalacOptions ++= Seq("-Xmax-classfile-name","78")
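To confirm that a project actually picks the flag up, you can inspect the merged options from the sbt shell (sbt 0.13 syntax):
show compile:scalacOptions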
This is how I solved my problem: I moved the target directory onto an unencrypted file system and symlinked it back into the project.
mkdir /tmp/myproject-target          # /tmp is typically not encrypted
cd ~/workspace/myproject
rm -rf target                        # drop the old target inside the encrypted home
ln -s /tmp/myproject-target target   # compile output now lands outside /home
I encountered this problem in IntelliJ IDEA Ultimate 2016.1.2 (where the setup resembles IntelliJ 14). I solved it by setting:
-Xmax-classfile-name 78
In File > Settings... > Build, Execution, Deployment > Compiler > Scala Compiler > Additional Compiler Options.
NOTE: there is a space between the option name and its value ("78"), not an equals sign.
Try using a shell script like this:
#!/bin/sh
# List the files whose name is longer than 254 characters.
for file in *; do
    if [ "$(printf '%s' "$file" | wc -m)" -gt 254 ]; then
        echo "$file"
    fi
done
Running this in your src/main/scala directory should show you which file names have more than 254 characters. I hope this answers your question.
Lowering the class file name limit might be unsafe; I could not find any official documentation confirming that this solution is safe.
Using an unencrypted directory is not safe either.
So I want to offer a different approach:
install veracrypt (in ubuntu with apt)
create a non-encrypted directory (outside the encrypted user home dir)
create a veracrypt file container in the new directory
mount the container (sbt works fine even if the mount point is inside the encrypted directory)
It is possible to create the container with a complex password and mount it automatically on login:
veracrypt -t --pim=0 --protect-hidden=no -k "" -p $PASSWORD $ENCRYPTED_CONTAINER $MOUNT_DIR
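It can be dismounted again when you are done (same mount directory as above):
veracrypt -d $MOUNT_DIR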
Has anyone encountered the error below when compiling omniORB 4.1.6 64-bit for Windows?
'RegQueryValueEx failed - error 109'
I followed the procedure in readme.win32 and I got linking errors in omniDynamic, codesets etc. Someone suggested rebuilding omniorb_root/src/tools/win32 and copying it into bin/x86_win32/. That's what I did, and when I recompile the whole of omniORB the error is as follows:
../../../../bin/x86_win32/omkdepend -D__cplusplus -D_MSC_VER -DIDLMODULE_VERSION="0x2630" -DMSDOS -DOMNIIDL_EXECUTABLE -Ic:/python27/include -Ic:/python27/PC -Ic:/python27/include/python2.7 -DPYTHON_INCLUDE=<Python.h> -I. -I. -I../../../../include -D__WIN32__ -D_WIN32_WINNT=0x0501 -D__x86__ -D__NT__ -D__OSVERSION__=4 -D_CRT_SECURE_NO_DEPRECATE=1 idlc.cc idlpython.cc idlfixed.cc idlconfig.cc idldump.cc idlvalidate.cc idlast.cc idlexpr.cc idlscope.cc idlrepoId.cc idltype.cc idlutil.cc idlerr.cc lex.yy.cc y.tab.cc
RegQueryValueEx failed - error 109
-----------------------------------------------------------------------------------------------
make[4]: Entering directory `/cygdrive/c/Software/COTS/omniORB/omniORB_4.1.6/src/tool/omniidl/cxx/cccp'
../../../../../bin/x86_win32/clwrapper -gnuwin32 -c -O2 -MD -GS -GR -Zi -nologo -DHAVE_CONFIG_H -I. -I. -I. -I../../../../../include -D__WIN32__ -D_WIN32_WINNT=0x0501 -D__x86__ -D__NT__ -D__OSVERSION__=4 -D_CRT_SECURE_NO_DEPRECATE=1 -Focexp.o cexp.c
RegQueryValueEx failed - error 109
I'm going to answer my own question because it seems nobody else has encountered this problem, and the mailing list is very quiet.
Someone suggested that I recompile src\tools\win32, so that's what I did, copying the generated .exe files to bin\x86_win32.
I then compiled all of omniORB and got the RegQueryValueEx error.
The reason is that the GetMounts(void) function in src\tools\win32\bccwrapper.c looks for this path in the registry:
Software\Cygnus Solutions\CYGWIN.DLL setup\b15.0\mounts\%02X
When I checked with regedit, I noticed that the mounts sub-keys (00, 01, 02, 03, etc.) contained no 'unix' and 'native' string values.
So I deleted all of the keys except 00, and added 'unix' and 'native' string values to it.
After that, I recompiled src\tools\win32, copied the newly created .exe files over to bin\x86_win32, and finally, when I recompiled all of omniORB, it compiled (you need to copy the ssl libs too) and finished successfully.
I really don't even know how the following got into my registry:
Software\Cygnus Solutions\CYGWIN.DLL setup\b15.0\mounts\%02X.
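If you try the same fix, it's worth backing the key up first so it can be restored; a sketch with reg.exe, assuming the key sits under HKEY_CURRENT_USER (it may be under HKEY_LOCAL_MACHINE, depending on how Cygwin was installed):
reg export "HKCU\Software\Cygnus Solutions" cygnus-backup.reg
reg query "HKCU\Software\Cygnus Solutions\CYGWIN.DLL setup\b15.0\mounts" /s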
Best regards,
Mark
I spent quite some time trying to compile OmniORB on Windows 10 with Visual Studio 2017.
Assuming Cygwin64 is installed in c:\software\cygwin64, the compilation of OmniORB is quite straightforward:
open a command terminal (cmd)
in that terminal, set up the Visual Studio environment:
"C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Auxiliary\Build\vcvarsall.bat" x64
then append Cygwin to the PATH (yes, append and not prepend, so the Visual Studio tools keep precedence):
set PATH=%PATH%;c:\software\cygwin64\bin
then, in file config\config.mk, uncomment this line
platform = x86_win32_vs_15
in file platforms\x86_win32_vs_15, set PYTHON to target the python executable, in my case Python 3.6.5
PYTHON = /cygdrive/c/software/Python/python
finally start the compilation with make:
make export
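For reference, the whole session in one go (the omniORB source path here is only an example; per the readme, make is run from the src directory):
"C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Auxiliary\Build\vcvarsall.bat" x64
set PATH=%PATH%;c:\software\cygwin64\bin
cd c:\software\omniORB-4.2.2\src
make export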
Hope this helps.
I am trying to get SBT running on my Mac.
So far, I downloaded the jar launcher and installed it into the /bin folder.
Then I created an SBT script containing the following lines:
export PATH=/System/Library/Java/JavaVirtualMachines/1.6.0.jdk/Contents/Home/bin:$PATH
java -Xmx512M -jar `$0` /bin/sbt-launch-0.7.7.jar "$@"
When I call SBT on the Console, I receive the following series of error messages:
> /bin/sbt: fork: Resource temporarily unavailable
java.io.IOException: Cannot run program "sh": error=35, Resource temporarily unavailable
at java.lang.ProcessBuilder.start(ProcessBuilder.java:460)
at java.lang.Runtime.exec(Runtime.java:593)
at java.lang.Runtime.exec(Runtime.java:466)
at jline.UnixTerminal.exec(UnixTerminal.java:297)
at jline.UnixTerminal.exec(UnixTerminal.java:282)
at jline.UnixTerminal.stty(UnixTerminal.java:273)
at jline.UnixTerminal.initializeTerminal(UnixTerminal.java:77)
at jline.Terminal.setupTerminal(Terminal.java:75)
at jline.Terminal.getTerminal(Terminal.java:26)
at xsbt.boot.JLine$.terminal(SimpleReader.scala:20)
at xsbt.boot.JLine$.createReader(SimpleReader.scala:22)
at xsbt.boot.SimpleReader$.<init>(SimpleReader.scala:42)
at xsbt.boot.SimpleReader$.<clinit>(SimpleReader.scala)
at xsbt.boot.Initialize$.create(Create.scala:17)
at xsbt.boot.Launch$.parsed(Launch.scala:28)
at xsbt.boot.Launch$.configured(Launch.scala:21)
at xsbt.boot.Launch$.apply(Launch.scala:16)
at xsbt.boot.Launch$.apply(Launch.scala:13)
at xsbt.boot.Boot$.runImpl(Boot.scala:24)
at xsbt.boot.Boot$.run(Boot.scala:19)
at xsbt.boot.Boot$.main(Boot.scala:15)
at xsbt.boot.Boot.main(Boot.scala)
Caused by: java.io.IOException: error=35, Resource temporarily unavailable
at java.lang.UNIXProcess.forkAndExec(Native Method)
at java.lang.UNIXProcess.<init>(UNIXProcess.java:53)
at java.lang.ProcessImpl.start(ProcessImpl.java:91)
at java.lang.ProcessBuilder.start(ProcessBuilder.java:453)
... 21 more
What is going wrong here?
Looks like you have a stray `$0` in there: the backticks execute the script itself, so the script keeps forking copies of itself until the system runs out of process resources (error=35, Resource temporarily unavailable). Try
java -Xmx512M -jar /bin/sbt-launch-0.7.7.jar "$@"
instead. That should get you up and running. The usual way is to call sbt like so:
java -Xmx512M -jar `dirname $0`/sbt-launch.jar "$@"
provided you have the shell script sbt in the same folder as sbt-launch.jar, because that is where dirname $0 points.
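So a complete minimal launcher for your setup would look like this (assuming the script lives next to sbt-launch-0.7.7.jar in /bin, as you describe):
#!/bin/sh
java -Xmx512M -jar `dirname $0`/sbt-launch-0.7.7.jar "$@"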
I read a post here on Stack Overflow saying this link is very good for Hadoop deployment on Windows: http://v-lad.org/Tutorials/Hadoop/12%20-%20format%20the%20namendoe.html
The problem is that when I format the namenode as described on that page, with
bin/hadoop namenode -format
I get the following error (maybe it's a problem with my environment variables, but I'm not sure):
bin/hadoop: line 330: C:\Program: command not found
bin/hadoop: line 395: C:\Program Files\java\jdk1.7.0\bin/bin/java: No such file/directory
bin/hadoop: line 395: exec: C:\Program Files\java\jdk1.7.0\bin/bin/java: cannot execute: No such file or directory
Just set your JAVA_HOME properly; that resolved my issue:
anwar#dell-pc ~/hadoop-0.20.203.0
$ export JAVA_HOME="C:\Program Files\Java\jdk1.6.0_02"
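If the space in Program Files keeps tripping up the Hadoop shell scripts, a common workaround is the 8.3 short name for the directory, which contains no space (a sketch assuming the same install location; cygpath -d will print the exact short form for any path):
$ export JAVA_HOME="C:/Progra~1/Java/jdk1.6.0_02"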