Create Pool of logins that can be checked in and out by protractor parallel runs - protractor

I have a large suite of Protractor tests that is currently set up to run with a unique login per spec. This allows us to run the specs in parallel with each other. But now we are looking to use Protractor's built-in parallel runs, where it runs the tests within a spec in parallel. The problem is that each test needs to log in with its own unique login. Rather than creating a unique login for each and every test, what I am trying to do is create a pool of logins that the tests would check out when they start and check back in when finished. This way we could have something like 10 logins for a spec of 50 tests and run 10 of the tests at the same time, each one checking out one of the logins and then checking it in for the next test to use.
My original thought was to create a two-dimensional array with a list of logins and a boolean that says whether each login is in use or not. Then I figured the beforeEach function could log in to the next available account and mark that login as checked out, and the afterEach could log out and check the account back in. But I am struggling to find any way for the afterEach to be aware of the login that needs to be checked back in.
So I need some way for afterEach to know which login the test that just finished was using.
Is there a way to do this? Or is there a better way to manage a login pool?
Thanks in advance.

We've done something similar using a .temp directory and files. What you could do is, before the tests run, generate a list of 10 users represented by 10 files that each contain a JSON object with the username and password, for example. When a test starts running, it reserves a user by deleting the file so that no other test can use it. When the test is done running, it writes the file back to the temp directory so that another test can then "reserve" the user.
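A rough sketch of that approach inside a Protractor spec, using Node's fs module. The .temp/logins directory and the loginPage helper are assumptions; adapt them to your project. Keeping the checked-out user in a variable shared by the hooks is also what lets afterEach know which login to return.

var fs = require('fs');
var path = require('path');
var POOL_DIR = '.temp/logins';

function checkOutLogin() {
  var files = fs.readdirSync(POOL_DIR);
  for (var i = 0; i < files.length; i++) {
    var filePath = path.join(POOL_DIR, files[i]);
    try {
      var user = JSON.parse(fs.readFileSync(filePath, 'utf8'));
      fs.unlinkSync(filePath);   // deleting the file reserves the user
      user.fileName = files[i];
      return user;
    } catch (e) {
      // another process grabbed this file first; try the next one
    }
  }
  throw new Error('No free logins left in the pool');
}

function checkInLogin(user) {
  // writing the file back releases the user for the next test
  fs.writeFileSync(path.join(POOL_DIR, user.fileName),
      JSON.stringify({ username: user.username, password: user.password }));
}

describe('my feature', function () {
  var currentUser;

  beforeEach(function () {
    currentUser = checkOutLogin();
    loginPage.login(currentUser.username, currentUser.password);  // hypothetical page object
  });

  afterEach(function () {
    loginPage.logout();
    checkInLogin(currentUser);
  });
});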

Related

How to write Integration test: Forced logout by logging in from another terminal? #Flutter

I am using a webview for logging in to my application. If I log in to that account on device A, device B automatically logs out. Thanks for the help.
There are a couple different paths you could take on this one:
1. Fake the check that decides whether the device should log out.
For this one, take whatever call you make to see if the user should automatically log out, and inject a mock or a fake to force it to say "yes, log out".
I'd suggest combining this with a separate test on the backend that ensures that when someone logs in, the old credentials are flagged as expired as well. Between the two, you have good coverage that the system as a whole works.
2. Manually test this functionality with two devices.
This is pretty self-explanatory, but trying to get two separate automated tests to run in the proper order on two different devices consistently is probably not worth the effort, especially when a quick manual test can verify this functionality whenever a full end-to-end validation is required.
It wouldn't be a bad idea to do both options above: #1 for quick validations during development, then #2 as part of your major release process.

How to write E2E tests for a search feature

I have a feature in my application where a data collection is created dynamically by a job that runs nightly (it collects data from SQL and creates a Mongo collection). I've created a page to search the data from that collection, and upon clicking a result it takes the user to edit the data and then save it back. As my test database is different from my actual database, how can I test this feature? Any ideas or input are highly appreciated. I can copy the collection from my dev database to my test database, but I'm wondering how to do that when I run my tests in my CI environment.
Build cleanup functions to ensure that your environment is the same every time you run the tests.
In Protractor you just need to send keys to whatever search box you have created.
Sorry I'll need more info in order to give more useful advice. Try protractortest.org
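Assuming the nightly job (or a cleanup/seeding step as suggested above) has put a known record in the test collection, a minimal Protractor test could look like this sketch. The URL and CSS selectors are assumptions about your search page.

it('finds a seeded record and opens it for editing', function () {
  browser.get('/search');
  element(by.css('input.search-box')).sendKeys('known seed record');
  element(by.css('button.search-submit')).click();

  var firstResult = element.all(by.css('.search-result a')).first();
  expect(firstResult.isPresent()).toBe(true);

  firstResult.click();
  expect(browser.getCurrentUrl()).toContain('/edit');
});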

How to obtain a persistent login when using a CasperJS login script?

I have a CasperJS script which logs into our test website platform. Our web application produces dynamic data which is updated every second, and normally, using a web browser, the login is left running (as you would with webmail).
The script logs into the website as a user, waits five seconds for the page to populate with data, and uses the this.capture function to grab a screenshot to confirm the details are correct.
What I want to do is follow on from the login, as I've noticed the CasperJS script does not stay logged in, whereas our customers' logins are persistent.
I want to do this because we are load testing a new proof of concept platform.
Does anyone know how I make CasperJS do this?
I also want to parse a CSV list of usernames/passwords to simulate logins - I'm presuming that I have to do this via a shell script or get PhantomJS to invoke each login sequentially?
(Background: I'm not a web developer, but someone with 20 years of IT and Unix/infrastructure experience, so I would class myself as having intermediate scripting skills.)
Persistent login
This is what the --cookies-file command-line option is for. It stores all cookies in a file on disk, and on subsequent invocations of the script it will use the stored cookies to restore the session. So just run your script like this:
casperjs --cookies-file=cookies.txt yourScript.js
yourScript.js should then be able to tell that you are already logged in.
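For illustration, a minimal sketch of what yourScript.js could do: check for something that only exists when a session is active and fill the login form only when it is missing. The URL, the form selector and the #logout-link check are assumptions about your site.

var casper = require('casper').create();

casper.start('https://example.com/', function () {
  if (!this.exists('#logout-link')) {
    // no session restored from the cookies file, so log in
    this.fill('form#login', { username: 'user', password: 'pass' }, true);
  }
});

casper.then(function () {
  this.wait(5000);   // give the dynamic data time to populate
});

casper.then(function () {
  this.capture('dashboard.png');
});

casper.run();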
Multiple credentials
Your other problem can be solved in different ways, but none of them should simply reuse the single --cookies-file from above:
1. Since a CSV is a simple file format, you can read it through the PhantomJS fs module and iterate over the credentials with casper.eachThen. For each iteration you would need to log in, do your thing and, just as you would in a browser session, not forget to log out (a sketch of this approach follows below).
2. Parse the CSV somehow in the shell and pass each pair into CasperJS, then read it from casper.cli to log in. With this option you don't need to log out, since each invocation runs in its own PhantomJS instance and doesn't share cookies. This option can be combined with your first question, if that is what you want: add --cookies-file=cookies_<username>.txt on each invocation, so you can run the shell script multiple times without logging in each time.
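A rough sketch of option 1. The credentials.csv format (one username,password pair per line), the login/logout URLs and the form field names are assumptions about your site.

var fs = require('fs');                       // PhantomJS fs module
var casper = require('casper').create();

var lines = fs.read('credentials.csv').trim().split('\n');
var users = lines.map(function (line) {
  var parts = line.split(',');
  return { username: parts[0], password: parts[1] };
});

casper.start();

casper.eachThen(users, function (response) {
  var user = response.data;                   // the current credential pair
  this.thenOpen('https://example.com/login', function () {
    this.fill('form#login', {
      username: user.username,
      password: user.password
    }, true);                                 // true submits the form
  });
  this.then(function () {
    this.wait(5000);                          // let the page populate with data
  });
  this.then(function () {
    this.capture('after_login_' + user.username + '.png');
  });
  this.thenOpen('https://example.com/logout'); // log out before the next user
});

casper.run();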
Load testing
If I understood correctly, then the web application is password protected. You would need to run a separate CasperJS process for each username/password pair. You should check the memory footprint for one script invocation and scale up. Memory is the primary limiting factor which you can calculate for your test machine, but CPU will also hit a limit somewhere.
PhantomJS/CasperJS instances are full browsers and are therefore much heavier than a slim web server, so you will probably need multiple machines, each running many instances of your script, to load test the web server.

Pattern for Google Alerts-style service

I'm building an application that is constantly collecting data. I want to provide a customizable alerts system for users where they can specify parameters for the types of information they want to be notified about. On top of that, I'd like the user to be able to specify the frequency of alerts (as they come in, daily digest, weekly digest).
Are there any best practices or guides on this topic?
My instincts tell me queues and workers will be involved, but I'm not exactly sure how.
I'm using Parse.com as my database and will also likely index everything with Lucene-style search. So that opens up the possibility of a user specifying a query string to specify what alerts s/he wants.
If you're using Rails and Heroku and Parse, we've done something similar. We actually created a second Heroku app that does not have a web dyno -- it just has a worker dyno. That one can still access the same Parse.com account and runs all of its tasks in a rake task like they specify here:
https://devcenter.heroku.com/articles/scheduler#defining-tasks
We have a few classes that can handle the heavy lifting:
class EmailWorker
  def self.send_daily_emails
    # queries Parse for what it needs, loops through, sends emails
  end
end
We also have the scheduler.rake in lib/tasks:
require 'parse-ruby-client'

task :send_daily_emails => :environment do
  EmailWorker.send_daily_emails
end
Our scheduler panel in Heroku is something like this:
rake send_daily_emails
We set it to run every night. Note that the public-facing Heroku web app doesn't do this work; the "scheduler" version does. You just need to make sure you push to both every time you update your code. This way it's free, but if you ever wanted to combine them it's simple, since they're the same code base.
You can also test it by running heroku run rake send_daily_emails from your dev machine.

Script to generate test users with random data for Facebook application

I want to test my Facebook application with the maximum of 500 test users available. I've had a go at using the interface which Facebook provides, and another good tool called "FacebookTestUserManager", but these create blank user profiles and I want to populate certain parts of the profiles with random information, e.g. profile picture, education, etc.
I don't think getting this data should be too difficult (I'm thinking a list of options and getting a random number generator to select a choice), but I'm confused as to how I input this information into the accounts and how I run my script.
This http://developers.facebook.com/docs/test_users/ is basically the only resource I can find on the matter, but it is very brief. My questions are:
1) Before I start, are there any public scripts which already do this?
2) How do I run my script which does this account generation process? I presume it's not written inside my application since I only want it run once!
How do I run my script which does this account generation process?
Like you run any other script …
I presume it's not written inside my application since I only want it run once!
It does not have to be "inside" of anything; it just has to use your app access token while doing its Graph API calls.
I think everything the document you referred to says should be easily understandable to a developer with a solid basic knowledge of how apps and their interactions with the Graph API work. Should you not have such knowledge yet … then I don't see much use in testing an app with 500 test users just yet.
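As an illustration of "use your app access token", here is a rough sketch of such a script in Node (18+, for the built-in fetch), hitting the test-users endpoint from the document you linked. The app id, token, name list and exact parameters are placeholders/assumptions; check the current Graph API documentation for what your API version accepts.

// placeholders - fill in your own app id and app access token
const APP_ID = 'YOUR_APP_ID';
const APP_ACCESS_TOKEN = 'YOUR_APP_ACCESS_TOKEN';
const names = ['Alice Test', 'Bob Test', 'Carol Test'];   // or generate randomly

async function createTestUser(name) {
  const params = new URLSearchParams({
    installed: 'true',
    name: name,
    access_token: APP_ACCESS_TOKEN
  });
  const res = await fetch(
    'https://graph.facebook.com/' + APP_ID + '/accounts/test-users?' + params,
    { method: 'POST' }
  );
  return res.json();   // should contain the new user's id, email, password, token
}

(async () => {
  for (const name of names) {
    const user = await createTestUser(name);
    console.log('created test user', user.id);
    // further Graph API calls with user.access_token could fill in extra
    // profile fields, where the API allows it
  }
})();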