SWTBot becomes very slow

I want to test an Eclipse RCP application with SWTBot. The tests themselves run fine. The problem is performance: the first test cases complete in about 1 minute, but after a while they take 1 hour or longer. Each test case seems to take significantly longer than its predecessor, although the test cases aren't more complex.
I suspected that some operations in my tests cause the bot to wait for timeouts a lot, but that's not the case. A main culprit seems to be SWTBotMenu#contextMenu, which takes a lot of the time, and I can't figure out why; these are simple operations like tree().contextMenu("Save").click();

You can check the SWTBotPreferences.TIMEOUT constant. By default it is 5000 ms.
It should not take one hour when the first test case completes in one minute.
How much time does the application take if the same user actions are performed manually instead of through SWTBot? Also try debugging to find where exactly it stops in your SWTBot tests.
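
If the timeout turns out to be the culprit, it can be lowered globally before the tests start. A minimal sketch, assuming JUnit 4 (the 1000 ms value is an arbitrary illustration, not a recommendation):

import org.eclipse.swtbot.swt.finder.utils.SWTBotPreferences;
import org.junit.BeforeClass;

public class MyBotTests {

    @BeforeClass
    public static void lowerTimeout() {
        // Failed widget lookups now give up after 1 second instead of
        // blocking for the default 5000 ms on every miss.
        SWTBotPreferences.TIMEOUT = 1000;
    }

    // ... test methods ...
}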

Related

The simulation model time slows down in virtual mode

I am currently building a model of a manufacturing process line, and the simulation was running fine without errors. Suddenly, when I switched to virtual mode to run the simulation quickly, the model started to slow down even though the step rate is high. I am trying to identify where the issue is, but nothing is working. At a certain point, the simulation just stops while the step counter keeps running.
Here is a picture of the palette; maybe the experiment is causing this.
You created an infinite loop; this can be triggered by various things in your model.
Likely, you have a 'while' loop that never finishes; it could also be a condition-based transition.
You need to find this yourself, though. Three options:
(easy): Check the model logic yourself and find the problem
(easy): nudge yourself closer to where it stops with traceln commands (see where they stop showing up, getting you closer to the culprit)
(harder): Use a profiler (google "AnyLogic profiling" or similar if you are not familiar)
Benjamin is correct: you have created an infinite loop. Click on the "Events" tab in the developer panel and see which events are scheduled to occur at about the time that your model slows down to 0 days/sec. You can also watch the "Step:" counter at the bottom of the developer panel and see where the step count spikes; e.g., if your model normally takes roughly 10k steps per day and suddenly starts climbing to 400k steps around 25.99 days, you can look at which things are happening in your logic at that time and narrow down where the infinite loop is created. traceln will also help immensely.
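
For the traceln approach, here is a hypothetical sketch of how to bracket a suspect block in AnyLogic action code (pendingCount and processNext() are placeholders, not elements of the original model):

// Hypothetical action code of a suspect block.
traceln("entering loop at model time " + time());
int iterations = 0;
while (pendingCount > 0) {   // suspected never-false condition
    processNext();
    iterations++;
    // If this keeps printing while model time stands still,
    // this loop is the infinite loop.
    if (iterations % 10000 == 0) {
        traceln("still looping: iterations = " + iterations + ", time = " + time());
    }
}
traceln("left loop at model time " + time());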

Selenoid queue priority

The question: is it possible to set test execution priority in Selenoid?
Problem: there is a suite of more than 20 tests, and at startup it fills the queue. After that, another test is launched, and it goes to the end of the line.
Is there an option to make it run as soon as a browser is freed, without waiting for all the tests ahead of it to run?
No, this is not possible in the current implementation. All incoming requests have equal priority. Two alternatives:
I think such issues should be addressed in the test framework of your choice. For example, for py.test a quick search turns up a plugin for ordering your tests: https://github.com/ftobia/pytest-ordering (not sure whether it works; a TestNG-flavored sketch of the same idea is shown after these alternatives).
You could also install Ggr and use different Selenoids and quota names for different tests, but this seems too complicated for your case.
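
Going back to the first alternative, here is a hedged sketch of what ordering looks like in a Java/TestNG suite (TestNG is just one example of a framework with such a knob; the test names are made up):

import org.testng.annotations.Test;

public class OrderedTests {

    // Lower priority values are scheduled first, so the urgent test reaches
    // Selenoid's queue before the bulk of the suite.
    @Test(priority = 0)
    public void urgentSmokeTest() {
        // ... open a session against Selenoid and check the basics ...
    }

    @Test(priority = 10)
    public void bulkRegressionTest() {
        // ... long-running checks that can wait their turn ...
    }
}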

Matlab code taking a long time to run

I have a Matlab code (from a journal paper) and I'm trying to re-simulate their data.
I started running the code one week ago and it is taking a very long time. Matlab is still busy and using 50% of my CPU.
I was wondering whether the process might have run into some error somewhere in the code. My questions are:
When I see no errors, can I be sure that everything is fine with this running process, and can I wait until it is finished?
Is there any way to check which part of the code is being run right now (without stopping the execution)?
Or should I stop the program and try something else?
I really don't want to lose this week of computation, so if you think everything is fine, I would wait until the code stops.
(The authors of the paper didn't reply to my question, and I don't know how long it should naturally take... They just mentioned it may take a long time to simulate the data.)
Unfortunately, there is little we can do for you.
When I see no errors, can I be sure that everything is fine with this running process?
That's pretty much the definition of an error: if no error is raised, all it tells you is that the program is still running.
Is there any way to check which part of code is being run now (without stopping the execution)?
Unfortunately no. For long execution times like that, a good development practice is to display some information from time to time to inform the end user of the execution status.
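
For illustration, the pattern looks like this (a sketch written in Java since the idea is language-agnostic; in Matlab the equivalent would be an fprintf inside the main loop; totalSteps and doStep are placeholders):

public class ProgressDemo {

    public static void main(String[] args) {
        int totalSteps = 1_000_000; // placeholder for the real workload size

        for (int step = 1; step <= totalSteps; step++) {
            doStep(step); // placeholder for one unit of real work

            // Report every 10000 iterations: a silent hang becomes visible
            // without flooding the console.
            if (step % 10_000 == 0) {
                System.out.printf("step %d of %d (%.1f%%)%n",
                        step, totalSteps, 100.0 * step / totalSteps);
            }
        }
    }

    private static void doStep(int step) {
        // stand-in for the actual computation
    }
}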
Alternatively, if the program produces files along the way (for instance at every step of an iterative simulation), you can check that the files are being produced, and the production rate will give you a rough idea of the total execution time.
For all your other questions, well, it's up to you to decide what to do (stop it or let it run). Be aware that the execution time can differ significantly from one machine to another, so the time it took on the authors' machine may not be very informative for you.
In the future, I would advise you to react faster than within a week. When you launch code that has a long execution time and see no output within the first hour, stop it, modify it so that it regularly displays information, and re-run it. It's better to lose one hour than one week.
Best,

Gatling test that uses a different action every request

We're trying to stress test our REST-ish app with Gatling. We want our users to make a POST with a different fileBody on every request.
Our scenario looks like:
scenario("100%")
.during(15 minutes) {
exec(requestStream.next())
.pause(118 seconds, 120 seconds)
}
.users(2)
.delay(2 minutes)
.protocolConfig(httpConf)
...build up several scenarios...
setUp(severalScenarios)
This runs fine, but it appears that the block with the exec is only executed once, when each scenario is built for the first time. We thought that the block would be executed every time the during(...) loop comes around, giving each user a new Request from the iterator on every iteration.
Are we missing something? Is there a smarter way of doing this?
No, that's not the way the DSL works. The DSL elements are actually builders that are resolved once and for all when the simulation is loaded.
What you want is to inject dynamic data into your scenario elements; for that you have to use Feeders, the user Session, Gatling EL, etc. What does your requestStream look like?
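
For illustration only, here is roughly what a feeder-based version could look like in the modern Gatling Java DSL (the question uses the older 1.x Scala DSL; payloads.csv, its file column, and the /upload endpoint are assumptions):

import static io.gatling.javaapi.core.CoreDsl.*;
import static io.gatling.javaapi.http.HttpDsl.*;

import io.gatling.javaapi.core.*;
import java.time.Duration;

public class UploadSimulation extends Simulation {

    // Each record holds a path to one request body; circular() restarts
    // from the top when the file is exhausted.
    FeederBuilder<String> payloads = csv("payloads.csv").circular();

    ScenarioBuilder scn = scenario("100%")
        .during(Duration.ofMinutes(15)).on(
            // feed() pulls a fresh record on every loop iteration, which is
            // exactly what the once-resolved exec(requestStream.next()) cannot do.
            feed(payloads)
                .exec(http("upload").post("/upload")
                    .body(RawFileBody("#{file}")))
                .pause(Duration.ofSeconds(118), Duration.ofSeconds(120))
        );

    {
        setUp(scn.injectOpen(atOnceUsers(2)));
    }
}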

CC.Net Build Server -- how to find the NUnit test that does not complete at all?

I have a test suite of ~1500 tests, and they generally run and finish within a 'reasonable time'.
Recently, however, I've changed parts of the code to use threads -- and now my builds fail from time to time by simply timing out. I imagine that a thread refuses to die and the build waits until reaching the maximum build time.
My problem is how to detect which test is causing the problem?
Can I activate some logging that shows me when a test has started/finished? It can of course be done by inserting code in every single test method, or just the fixtures, but that is a LOT of work that I'd rather avoid.
I'd suggest upgrading to NUnit 2.5 and decorating your tests with the Timeout attribute, specifying a maximum per-test run time. For example, you can put this in your AssemblyInfo.cs:
[assembly: NUnit.Framework.Timeout(100)]
NUnit will now launch each test in a separate thread and cancel it if it exceeds its time slot. However, this might be costly, so it's probably better to identify long-running tests first and then replace the assembly-level attribute with per-fixture time slots. You can also override this on individual tests, assigning them more time to run.
This way you move the timeout/hang detection from CruiseControl.Net to NUnit and get information in the report about the tests that did not complete properly. Unfortunately, there's no way for CC.Net to get this information when it has to kill the process because of a timeout.