Running a single BehaviorSpace experiment on NetLogo

I need to perform a single run of a BehaviorSpace experiment so that I can run the NetLogo model headless on Google Cloud/AWS.
I tried writing a simple test that just prints an output after the 'setup' command. However, it prints the output twice. Am I doing something wrong? I tried entering 0 runs in parallel, but this threw an IllegalArgumentException.
Here is the setup of the experiment:
Repetitions: 1
Setup commands: setup
Go commands: setup
Time limit: 1
Simultaneous runs in parallel: 1

Answered by Charles in the comment section:
I was running setup twice! Doh.
(In the experiment above, the Go commands field is set to setup as well, so each run executes setup a second time as the go command.)
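For the headless part of the question, NetLogo ships a headless launcher that runs a named BehaviorSpace experiment from the command line; a typical invocation (the model file, experiment name and output path here are placeholders) looks like:
netlogo-headless.sh --model MyModel.nlogo --experiment experiment1 --table results.csv
With Repetitions set to 1, this performs exactly one run and writes the results in table format.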

Related

Robot Framework: Is there a way of checking the report.html even though the run paused?

Situation: Visual Studio Code (Browser library) runs a couple of .robot files (started manually).
Then it pauses because of an error...
At that point the process breaks and there is no final report.html.
If you stop the run it doesn't generate a report.html, which is not what you want. You actually want the results up to that point (or, better described: you still want the output.xml, log.html and report.html links).
You should be able to generate log.html and report.html using the rebot command; however, you need output.xml for this. output.xml is created while you run the tests, so when you break the run you will probably not have all the resources you need.
I would suggest assigning a test timeout to the test that causes the pause. When the timeout is reached, the test will be stopped automatically and you should have all reports. You can also set it globally for all tests, e.g.:
*** Settings ***
Test Timeout    2 minutes
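For the rebot route mentioned above, a minimal invocation once an output.xml exists (path assumed) is:
rebot output.xml
This regenerates log.html and report.html from the recorded results without re-running any tests.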

Why do Selenium tests behave differently on different machines?

I couldn't find much information on Google regarding this topic. Below, I have provided three results from the same Selenium tests. Why am I getting different results when running the tests from different places?
INFO:
So our architecture: Bitbucket, Bamboo Stage 1 (Build, Deploy to QA), Bamboo Stage 2 (start Amazon EC2 instance "Test", run tests from Test against recently deployed QA)
Using Chrome Webdriver.
For all three of the variations I am using the same QA URL that our application is deployed on.
I am running all tests as Parallelizable, per fixture
The EC2 instance is running Windows Server 2012 R2 with the Chrome browser installed
I have made sure that the test solution has been properly deployed to the EC2 "test" instance. It is indeed the exact same solution and builds correctly.
First, Local:
Second, from EC2 Via SSM Script that invokes the tests:
Note that the PowerShell script calls nunit3-console.exe just as it is used in my third example from the command line.
Lastly, RDP in on EC2 and run tests from the command line:
This has me perplexed... Any reasons why Selenium runs differently on different machines?
This really should be a comment, but I can't comment yet, so...
I don't know enough about the application you are testing to say for sure, but this seems like something I've seen testing the application I'm working on.
I have seen two issues. First, Selenium checks for the element before it's created. Sometimes it works and sometimes it fails; it just depends on how quickly the page loads when the test runs. There's no rhyme or reason to it. Second, the app I'm testing is pretty dumb: when you touch a field, enter data and move on to the next, it effectively posts all editable fields back to the database and refreshes all the fields. So Selenium enters the value, moves to the next field, and pops either a stale element error or a can't-find-element error, depending on where in the post/refresh cycle it attempts to interact with the element.
The solution I have found is moderately ugly. I tried a wait-until, but because it's the same element name, the element is already visible and is grabbed immediately, which returns a stale element. As a result, the only thing I have found is that by using explicit waits between calls I can get it to run correctly and consistently. Below is an example of what I have to do with the app I'm testing. (I am aware that I can condense the code; I am working within my company's style manual.)
// Fixed wait to let the app's post/refresh cycle settle before touching the field.
Thread.Sleep(2000);
By nBaseLocator = By.XPath("//*[@id='attr_seq_1240']");
IWebElement baseRate = driver.FindElement(nBaseLocator);
baseRate.SendKeys(Keys.Home + xBaseRate + Keys.Tab);
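For reference, a common alternative to the fixed sleep is to wait for the previously found element to go stale and then re-find it. This is not the poster's code, just a hedged sketch reusing nBaseLocator and xBaseRate from above, with WebDriverWait from Selenium's support package:
using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Support.UI;

var wait = new WebDriverWait(driver, TimeSpan.FromSeconds(10));
IWebElement oldField = driver.FindElement(nBaseLocator);
// Block until the post/refresh cycle detaches the old element from the DOM.
wait.Until(d =>
{
    try { oldField.GetAttribute("id"); return false; }
    catch (StaleElementReferenceException) { return true; }
});
// The refresh has completed; grab the fresh element and interact with it.
IWebElement baseRate = driver.FindElement(nBaseLocator);
baseRate.SendKeys(Keys.Home + xBaseRate + Keys.Tab);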
If this doesn't help, please tell us more about the app and how it's functioning so we can help you find a solution.
@Florent B. Thank you!
EDIT: This ended up not working...
The tests still run differently when called remotely with a PowerShell script, but they do run correctly locally on both the EC2 instance and my machine.
The headless command switch allowed me to replicate my failed tests locally.
Next, I found out that a headless Chrome browser is used during the tests when running via script on an EC2 instance. That is automatic, so the tests were indeed running and the errors were valid.
Finally, I figured out that the screen size was indeed the culprit: it was stuck at 600x400.
So after many tries, the only usable screen-size option for Windows, C# and ChromeDriver 2.32 is to set your WebDriver options when you initialize your driver:
ChromeOptions chromeOpt = new ChromeOptions();
chromeOpt.AddArgument("--headless");              // run Chrome without a visible window
chromeOpt.AddArgument("--window-size=1920,1080"); // the headless default viewport is small; force a desktop size
chromeOpt.AddArgument("--disable-gpu");           // historically required for headless Chrome on Windows
webDriver = new ChromeDriver(chromeOpt);
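If in doubt whether the flags took effect, the effective viewport can be logged at runtime with the standard Selenium API (this line is an addition, not from the original post):
Console.WriteLine(webDriver.Manage().Window.Size); // e.g. {Width=1920, Height=1080}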
FINISH EDIT:
Just to update: the screen size is now large enough, but I am still attempting to solve the issue. Has anyone else run into this?
AWS SSM Command -> PowerShell -> run Selenium tests with Start-Process -> any test that requires an element fails with ElementNotFound or ElementNotVisible exceptions.
I am using POM for the tests; the FindsBy attributes in C# are not finding elements.
Running the tests locally on the EC2 instance works fine from cmd, PowerShell and PowerShell ISE.
The tests do not work correctly when executed with the AWS SSM Command, and I cannot find any resources to fix the problem.

SSH: Torch-Lua Script Unexpectedly Stops

I'm trying to run a Torch-Lua script under the luajit interpreter over SSH on a remote (Ubuntu 14.04) machine. It runs for only two iterations, displaying all the outputs as expected, but as soon as the third iteration is about to complete it seems to stop by itself for some unexpected reason, and I am returned to the remote machine's shell prompt.
It doesn't display any standard OS messages, such as the 'luajit' process being killed or terminated by a signal. I used 'top' to check whether it was still running in the background, but it wasn't. The remote machine isn't shutting down, and I'm not losing the connection, because I stay connected to the remote machine over SSH. The script itself shouldn't be the issue, since the exact same script runs to completion on my local machine. I should also mention that I have sudoer permissions on the remote machine.
I am posting this because I have tried the same thing on two different, independent remote machines and it behaves the same way. Can someone please help by sharing what might cause this "mysterious" behaviour and possible solutions I could try?
Thanks in advance.
EDIT:
The following is the output which I receive on the terminal every time I run the same script:
==> the main loop
==> online epoch # 1 [batchSize = 128]
[==================== 15/15 ==================>] Tot: 46s400ms | Step: 3s314ms
Train accuracy: 4.90 % time: 50.33 s
==> testing
Test accuracy: 1.50 % time: 3.05 s
==> online epoch # 2 [batchSize = 128]
[==================== 15/15 ==================>] Tot: 49s439ms | Step: 3s531ms
Train accuracy: 5.05 % time: 50.44 s
==> testing
Test accuracy: 1.50 % time: 2.92 s
==> online epoch # 3 [batchSize = 128]
[==================== 15/15 ==================>] Tot: 50s620ms | Step: 3s615ms
Train accuracy: 5.00 % time: 51.38 s
user-name@my-remote-machine:~/path/to/script$
(As you can see from the output, the script is essentially a training-testing procedure for a conv-net.)
After some thinking and debugging, I found the issue with my script and resolved it.
Neither SSH nor the system's configuration was terminating the script's execution; the problem was a small one in my script. The remote machine I was connecting to was not accessible as a standard desktop (by which I mean it didn't have a desktop environment like GNOME), so I couldn't use 'ssh -X' to the machine. All interaction with the machine had to happen on the command line.
My script had a "live plot" feature which took the training/testing logs being written after each epoch and displayed a training/testing accuracy-versus-epoch plot (using 'gnuplot'). In my original script (which ran on my CPU-only, desktop-environment-enabled machine) this feature was enabled, and since I initially reused the same script on the remote machine, the same setting was causing this strange problem there. After I disabled it, the epochs ran and the training-testing procedure worked correctly, as I expected it to. In my script it was just a flag to set to true/false to enable/disable the "live plot" feature (similar to the way it is done in this tutorial).
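A hedged sketch of the kind of guard involved; the opt.plot flag and logger names follow the tutorial mentioned above and may differ from the actual script:
-- Only drive gnuplot when live plotting is explicitly enabled;
-- on a headless SSH session leave it off.
opt.plot = false
if opt.plot then
   trainLogger:style{['% mean class accuracy (train set)'] = '-'}
   trainLogger:plot()
end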

How can I confirm how many tests are running in parallel?

I'm running my unit tests on a 16-core machine. I can't see any difference in elapsed time when using no parallelization parameter, --workers=1 and --workers=32. I'd like to confirm that NUnit really is running the expected number of simultaneous tests, so that I don't spend time hunting down a non-existent problem in my test code.
I have [Parallelizable] (default scope, ParallelScope.Self) on the common base class. It is not defined on any other class or method. I'm using nunit3-console, both via Jenkins and on my local command line.
Is there a way to tell that tests are running in parallel? NUnit is reporting the correct number of worker threads, but there's no report saying (for example) how many tests were run in each thread.
Run Settings
ProcessModel: Multiple
RuntimeFramework: net-4.5
WorkDirectory: C:\Jenkins\workspace\myproject
NumberOfTestWorkers: 16
I can log the start and finish times of each test then manually check that there's a reasonable number of overlaps; is there any simpler and more repeatable way of getting what I want?
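For what it's worth, a minimal sketch of that manual overlap check, assuming a shared base fixture (the class name is made up); TestContext.Progress writes immediately rather than buffering until the test ends:
using System;
using System.Threading;
using NUnit.Framework;

public abstract class TimedTestBase
{
    // Record start and end times plus the worker thread, so overlapping
    // intervals in the output indicate genuinely parallel execution.
    [SetUp]
    public void LogStart() => TestContext.Progress.WriteLine(
        $"{DateTime.Now:HH:mm:ss.fff} START {TestContext.CurrentContext.Test.FullName} on thread {Thread.CurrentThread.ManagedThreadId}");

    [TearDown]
    public void LogEnd() => TestContext.Progress.WriteLine(
        $"{DateTime.Now:HH:mm:ss.fff} END   {TestContext.CurrentContext.Test.FullName} on thread {Thread.CurrentThread.ManagedThreadId}");
}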
I turned on --trace=Verbose which produces a few files. InternalTrace.<pid1>.log and InternalTrace.<pid2>.<dll_name>.log together contained a thorough description of what was happening during the tests. The per-agent log (the one with the DLL name in the log file name) was pretty clear about the state of parallelization.
16:13:40.701 Debug [10] WorkItemDispatcher: Directly executing test1
16:13:47.506 Debug [10] WorkItemDispatcher: Directly executing test2
16:13:52.847 Debug [10] WorkItemDispatcher: Directly executing test3
16:13:58.922 Debug [10] WorkItemDispatcher: Directly executing test4
16:14:04.492 Debug [10] WorkItemDispatcher: Directly executing test5("param1")
16:14:09.720 Debug [10] WorkItemDispatcher: Directly executing test5("param2")
16:14:14.618 Debug [10] WorkItemDispatcher: Directly executing test5("param3")
That third field looks like a thread ID to me. So I believe that the agent is running only one test at a time, even though (I think..) I've made all the tests parallelizable and they're all in different test fixtures.
Now I've just got to figure out what I've done wrong and why they're not running in parallel...
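For reference, NUnit 3's documented mechanism for fixture-level parallelism is an assembly-level attribute plus a worker count; a minimal sketch for comparison (the values here are assumptions, not the poster's configuration):
using NUnit.Framework;

// Let all fixtures in this assembly run in parallel with one another.
[assembly: Parallelizable(ParallelScope.Fixtures)]
// Cap the worker threads; without this NUnit uses the processor count
// or the --workers value passed to nunit3-console.
[assembly: LevelOfParallelism(16)]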
I could be mistaken, but it seems to me that setting the [Parallelizable] attribute is not enough to make tests run in parallel. You also need to create nodes that will run the tests, so if you use Jenkins you have to create Jenkins slaves, and only then will your tests run in parallel.
What I mean is that, as I understand it, there is no possibility to run tests in parallel on one PC.
If there is such a possibility (to run tests in parallel on the same machine) it would be really great, and I would be eager to hear about it!

Spring Batch: allow only one instance at a time

I want to allow only one batch run at a time. I tried to detect other runs with this code (in a job listener):
Set<JobExecution> jobExecutions = jobExplorer.findRunningJobExecutions(jobExecution.getJobInstance().getJobName());
if (jobExecutions.size() > 1) {
    System.exit(2); // another execution of this job is already running
}
When I run 2 batches from different terminals this code doesn't work: the check never fires.
P.S. In dev environment I use HSQL.
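For context, a hedged sketch of the listener wiring this check implies (the class name and constructor injection are assumptions). One thing worth checking: if each terminal starts its own JVM with an embedded in-memory HSQL job repository, each process sees only its own metadata, so findRunningJobExecutions can never observe the other run; a shared job-repository database would be needed for the check to work across processes.
import java.util.Set;

import org.springframework.batch.core.JobExecution;
import org.springframework.batch.core.JobExecutionListener;
import org.springframework.batch.core.explore.JobExplorer;

// Hypothetical wiring: register this listener on the job so the check
// runs before any step starts.
public class SingleInstanceListener implements JobExecutionListener {

    private final JobExplorer jobExplorer;

    public SingleInstanceListener(JobExplorer jobExplorer) {
        this.jobExplorer = jobExplorer;
    }

    @Override
    public void beforeJob(JobExecution jobExecution) {
        // All executions of this job currently in a running state,
        // including the one that is just starting.
        Set<JobExecution> running = jobExplorer.findRunningJobExecutions(
                jobExecution.getJobInstance().getJobName());
        if (running.size() > 1) {
            // Another instance is active; abort this one.
            System.exit(2);
        }
    }

    @Override
    public void afterJob(JobExecution jobExecution) {
        // Nothing to clean up.
    }
}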