Run browser commands concurrently in Protractor

Is there any way to write a Protractor test in such a way that would allow simulation of many users at once? E.g. simulate 100 users all making a checkout at the same time, or 100 users all logging in at the same time. The purpose being to detect any possible race conditions or locking issues.
Protractor appears to be designed so that everything runs sequentially, even across multiple browser instances (i.e. forks). Is there any way to accomplish what I'm describing in Protractor, or am I out of luck?
Note that I'm not referring to running tests concurrently (be that running tests under both Firefox and Chrome or spreading describes over multiple instances to speed it up), rather the ability to spawn new "threads" inside one test case to execute commands in parallel.
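For a sense of what the question is asking for - many independent browser sessions driven at the same time - here is a minimal sketch outside Protractor's serialized control flow, using plain Selenium in Java. This is an illustration of the concept, not a Protractor solution; the hub URL, user count, and checkout URL are all assumptions:

import java.net.URL;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;

public class ConcurrentCheckoutSim {
    public static void main(String[] args) throws Exception {
        int users = 100; // number of simulated users (assumption)
        ExecutorService pool = Executors.newFixedThreadPool(users);
        for (int i = 0; i < users; i++) {
            pool.submit(() -> {
                RemoteWebDriver driver = null;
                try {
                    // One WebDriver session per simulated user, all hitting the app at once
                    driver = new RemoteWebDriver(
                            new URL("http://localhost:4444/wd/hub"), // Grid hub URL (assumption)
                            DesiredCapabilities.chrome());
                    driver.get("http://localhost:8080/checkout"); // app under test (assumption)
                    // ... drive the checkout steps here ...
                } catch (Exception e) {
                    e.printStackTrace();
                } finally {
                    if (driver != null) {
                        driver.quit();
                    }
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.MINUTES);
    }
}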

Related

Protractor - Why should I implement waiting or sleeps in test script

I have read that "Protractor can automatically execute the next step in your test the moment the webpage finishes pending tasks, so you don’t have to worry about waiting."
But I had to add waits or sleeps to my test scripts to make them all pass.
Can anyone help me understand this waiting behavior?
Read at: http://www.protractortest.org/#/
Automatic Waiting:
You no longer need to add waits and sleeps to your test. Protractor can automatically execute the next step in your test the moment the webpage finishes pending tasks, so you don’t have to worry about waiting for your test and webpage to sync.
Right, I find this description as confusing as you do. I think it describes an ideal world with no network delays and timeouts, no animations, and no layout issues.
The description originates from the following:
Protractor runs an extra command before performing any action on the browser to ensure that the application being tested has stabilized. This extra command is an async script which asks Angular to respond when the application is done with all timeouts and asynchronous requests, and ready for the test to resume.
Now, what does that "application is ready" statement mean? It basically means that there are no pending requests, promises, or "macro tasks" inside the running Angular application (source for angular testability).
From what I understand, this covers most timing and waiting issues, but if there is pending JS code executing outside of Angular, or if there are pending animations or other UI-related changes, your test stability may suffer - for instance, an element might not yet be visible or clickable, an input may not yet be enabled, etc.
In practice, this alone does not make the feedback from end-to-end tests stable and helpful - for example, in our project we often find ourselves adding browser.wait()s here and there to tackle occasionally failing tests (a sketch of such an explicit wait follows the link below). Also, here is a set of things that helped us tackle this flakiness:
Protractor flakiness
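As an illustration of the explicit-wait workaround mentioned above: Protractor's browser.wait() wraps the same explicit-wait mechanism that WebDriver itself exposes. A minimal sketch of the idea in Selenium's Java API, where the locator and the 10-second timeout are arbitrary assumptions:

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

public class ExplicitWaitHelper {
    // Block until the element is actually clickable instead of trusting
    // automatic synchronization to have caught animations or non-Angular
    // async work; fails with a TimeoutException after 10 seconds.
    static void clickWhenReady(WebDriver driver, By locator) {
        WebElement element = new WebDriverWait(driver, 10)
                .until(ExpectedConditions.elementToBeClickable(locator));
        element.click();
    }
}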

TeamCity cross-browser Webdriver multiple test failure information

I have a suite of Webdriver tests in NUnit that I have running repeatedly on a TeamCity server. I'm getting the driver type from a config file that is altered between test runs; this was the most elegant way of doing cross-browser testing within NUnit that I could come up with. Fortunately, TeamCity combines the output of all the test runs quite neatly. Unfortunately, when a test fails more than once, only one failure and its accompanying stack trace is displayed, with an annotation of "2 failures in one build." Since these are actually different tests, being in different browsers, I would like to view the error outputs separately. Is there a way to do this?
For cross-browser testing, you can consider using Grid 2 instead of changing the browser type via the configuration file.
In addition, you'll be able to run your tests in parallel on different browsers, and since each browser run is then a separate test execution, that also solves your multiple-test-failure information problem.

Is Selenium Grid2 really capable to run tests in parallel on its own?

Well, Stack Overflow is such a good site - most of my Google search results lead here, really.
I've seen many posts about Selenium Grid 2, both on and off this site. They all explain that Grid 2 is capable of running tests in parallel, and how to set up the hub and nodes. But no one told me how to actually run tests through Selenium Grid 2; all I got was "set up the hub and nodes, then run tests, then everything becomes parallel". But how do you trigger the run through Selenium Grid 2?
Then I found an answer myself: trigger the run with another runner, e.g. NUnit. However, NUnit can only run tests serially, not in parallel. I've also tried other runners, but they don't work well with Grid 2.
So I started to doubt: is Selenium Grid 2 really capable of running tests in parallel on its own? If so, how? What is the whole workflow?
If not, then a third-party tool is needed to trigger the run. What's more, that third-party tool must be able to trigger multiple tests at one time (multi-threaded, something like that?), so that Grid 2 can deliver those tests to its nodes and run them at the same time. Only then could we call it "parallel running".
What third-party tool would be a good choice? NAnt? Jenkins?
I have a long story of coping with Grid 2 these days; the statements above are just part of it. If you can come up with anything, please tell me - it would be really appreciated.
I'm still confident in my English, and many thanks to everyone for your help here! Thank you!
Selenium Grid 2 is capable of executing tests in parallel "provided you pass multiple commands simultaneously to the hub". You need a separate framework, such as NUnit or TestNG, to run multiple test cases simultaneously. I use TestNG for triggering multiple tests in parallel, and it works absolutely fine without any issues. You can find some help on getting started here.
WebDriver driver = new RemoteWebDriver(new URL("http://localhost:4444/wd/hub"), capability);
as described here:
http://code.google.com/p/selenium/wiki/Grid2
Tests are passed to a node, which executes them.
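To make the TestNG route concrete, here is a minimal sketch of a test class that Grid can run in parallel. Parallelism itself is switched on in testng.xml (for example parallel="tests" with one <test> block per browser); the hub URL, parameter name, and page URL are assumptions:

import java.net.URL;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;
import org.testng.annotations.AfterMethod;
import org.testng.annotations.BeforeMethod;
import org.testng.annotations.Parameters;
import org.testng.annotations.Test;

public class GridParallelTest {
    private WebDriver driver;

    @Parameters("browser") // supplied per <test> block in testng.xml
    @BeforeMethod
    public void setUp(String browser) throws Exception {
        DesiredCapabilities capability = new DesiredCapabilities();
        capability.setBrowserName(browser); // e.g. "chrome" or "firefox"
        driver = new RemoteWebDriver(new URL("http://localhost:4444/wd/hub"), capability);
    }

    @Test
    public void homePageLoads() {
        driver.get("http://localhost:8080/"); // app under test (assumption)
    }

    @AfterMethod
    public void tearDown() {
        driver.quit();
    }
}

With two <test> entries running in parallel, the hub receives both session requests at once and hands each to a free node - that simultaneous submission is the "multiple commands" the answer refers to.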

Is it possible to get Selenium IDE to run the whole test suite even though errors are encountered

We often use test suites to ensure we start from a known, working situation.
In this case, we want the whole test suite to run to test all the pages. If some fail, we still want the other tests to run.
The test suite seems to stop the moment it finds an error in one of the tests.
Can it be set up to keep going and run all the tests, irrespective of results?
Check that you are using verify commands rather than assert commands - a failed assert stops the run, while a failed verify logs the failure and continues.
Have each test case restart from a fixed point (like loading the base URL), so one failure doesn't corrupt the starting state of the next test.
Set your timeout (how long you want the system to wait before moving on).

Quartz job fires multiple times

I have a building block which sets up a Quartz job to send out emails every morning. The job is fired three times every morning instead of once. We have a hosted instance of Blackboard, which I am told runs on three virtual servers. I am guessing this is what is causing the problem, as the building block was previously working fine on a single server installation.
Does anyone have Quartz experience, or could anyone suggest how one might prevent the job from firing multiple times?
Thanks,
You didn't describe in detail how your Quartz instance(s) are being instantiated and started, but be aware that undefined behavior will result if you run multiple Quartz instances against the same job store database at the same time, unless you enable clustering (see http://www.quartz-scheduler.org/docs/configuration/ConfigJDBCJobStoreClustering.html).
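For reference, clustering can also be enabled programmatically rather than through a quartz.properties file. The property names below are the standard Quartz configuration keys; the scheduler name and data source details are placeholders:

import java.util.Properties;
import org.quartz.Scheduler;
import org.quartz.impl.StdSchedulerFactory;

public class ClusteredSchedulerFactory {
    static Scheduler create() throws Exception {
        Properties props = new Properties();
        props.setProperty("org.quartz.scheduler.instanceName", "EmailScheduler");
        // Every server must have a unique instance id; AUTO generates one.
        props.setProperty("org.quartz.scheduler.instanceId", "AUTO");
        props.setProperty("org.quartz.threadPool.class", "org.quartz.simpl.SimpleThreadPool");
        props.setProperty("org.quartz.threadPool.threadCount", "5");
        // Clustering requires the JDBC job store; all instances share one database.
        props.setProperty("org.quartz.jobStore.class", "org.quartz.impl.jdbcjobstore.JobStoreTX");
        props.setProperty("org.quartz.jobStore.driverDelegateClass",
                "org.quartz.impl.jdbcjobstore.StdJDBCDelegate");
        props.setProperty("org.quartz.jobStore.dataSource", "quartzDS");
        // The key setting: with clustering on, only one node fires each trigger.
        props.setProperty("org.quartz.jobStore.isClustered", "true");
        // Placeholder connection details for the shared job store database:
        props.setProperty("org.quartz.dataSource.quartzDS.driver", "com.mysql.jdbc.Driver");
        props.setProperty("org.quartz.dataSource.quartzDS.URL", "jdbc:mysql://dbhost/quartz");
        props.setProperty("org.quartz.dataSource.quartzDS.user", "quartz");
        props.setProperty("org.quartz.dataSource.quartzDS.password", "secret");
        return new StdSchedulerFactory(props).getScheduler();
    }
}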
I guess I'm a little late responding to this, but we have a similar sort of scenario with our application. We have 4 servers running jobs, some of which can run on multiple servers concurrently, and some should only be run once. As Will's response said, you can look into the clustering features of Quartz.
Our approach was a bit different, as we had a home-grown solution in place before we switched to Quartz. Our jobs use a database table that stores the cron triggers and other job information, and then "lock" the entry for a job so that none of the other servers can execute it. This keeps jobs from running multiple times across the servers, and has been fairly effective so far.
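The answer doesn't show the locking code, but the usual shape of such a home-grown scheme is a conditional claim on a row in the jobs table: whichever server updates the row first wins, and the others skip the run. A sketch with plain JDBC, where the table and column names are hypothetical:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class JobLock {
    // The UPDATE succeeds on exactly one server because it requires locked_by
    // to still be NULL; every other server sees 0 rows affected and skips.
    static boolean tryAcquire(Connection conn, String jobName, String serverId)
            throws SQLException {
        String sql = "UPDATE scheduled_jobs SET locked_by = ?, locked_at = NOW() "
                   + "WHERE job_name = ? AND locked_by IS NULL";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, serverId);
            ps.setString(2, jobName);
            return ps.executeUpdate() == 1;
        }
    }
}

A real implementation also needs to clear or expire the lock after the run finishes, or a crashed server would block the job forever.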
Hope that helps.
I had the same issue, but I discovered that I was calling scheduler.scheduleJob(job, trigger); to update the job data while the job was running, which randomly triggered the job 5-6 times each run. I had to use scheduler.addJob(job, true); instead, which updates the job data without updating the trigger.
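To spell out the fix: a minimal sketch, assuming the job already exists in the scheduler and its JobDetail is durable. scheduleJob() would register the trigger again, while addJob() with replace=true only overwrites the stored JobDetail:

import org.quartz.JobDetail;
import org.quartz.Scheduler;
import org.quartz.SchedulerException;

public class JobDataUpdater {
    // Refresh a job's JobDataMap without touching its trigger.
    static void refresh(Scheduler scheduler, JobDetail job, String key, String value)
            throws SchedulerException {
        job.getJobDataMap().put(key, value);
        // "true" = replace the existing JobDetail; no new trigger is registered,
        // so the job keeps firing on its original schedule only.
        scheduler.addJob(job, true);
    }
}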