How to obtain a persistent login when using a CasperJS login script? - webserver

I have a CasperJS script which logs into our test website platform. Our web application produces dynamic data which is updated every second, and normally, using a web browser, the login session is left running (as you would with webmail).
The script logs into the website as a user, waits five seconds for the page to populate with data, and uses the this.capture() function to grab a screenshot to confirm the details are correct.
What I want to do is follow on from the login, as I've noticed the CasperJS script does not stay logged in, whereas our customers' logins are persistent.
I want to do this because we are load testing a new proof of concept platform.
Does anyone know how to make CasperJS do this?
I also want to parse a CSV list of username/password pairs to simulate logins - I presume I have to do this via a shell script, or have PhantomJS invoke each login sequentially?
(Background: I'm not a web developer, but someone with 20 years of IT and Unix/infrastructure experience - so I would class myself as intermediate at scripting.)

Persistent login
This is what the --cookies-file command-line option is for. It stores all cookies in a file on disk, and subsequent invocations of the script will use the stored cookies to restore the session. So just run your script like this:
casperjs --cookies-file=cookies.txt yourScript.js
yourScript.js should then be able to tell that you are already logged in.
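For illustration, here's a minimal sketch of such a script; the URL, the #logout selector, and the form fields are assumptions about your application, not CasperJS requirements:

    // yourScript.js - run as: casperjs --cookies-file=cookies.txt yourScript.js
    var casper = require('casper').create();

    casper.start('https://example.com/', function () {
        if (this.exists('#logout')) {
            // the cookies file restored the previous session
            this.echo('Already logged in');
        } else {
            // no valid session - log in again; cookies are saved for the next run
            this.fill('form#login', {
                username: 'testuser',
                password: 'secret'
            }, true);
        }
    });

    casper.wait(5000, function () {
        this.capture('dashboard.png'); // give the page time to populate
    });

    casper.run();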
Multiple credentials
Your other problem can be solved in two different ways, but neither should be run with a single shared --cookies-file, since each new login would overwrite the previous user's session cookies.
Since CSV is a simple file format, you can read it through the PhantomJS fs module and iterate over the credential pairs with casper.eachThen. For each iteration you would need to log in, do your thing, and not forget to log out, just as you would in a browser session.
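Here is a minimal sketch of that approach, assuming a users.csv with one "username,password" pair per line; the URLs and form selectors are hypothetical and need adjusting to your application:

    var casper = require('casper').create();
    var fs = require('fs'); // PhantomJS file system module

    // read the CSV and split it into {user, pass} objects
    var credentials = fs.read('users.csv').trim().split('\n').map(function (line) {
        var parts = line.split(',');
        return { user: parts[0], pass: parts[1] };
    });

    casper.start().eachThen(credentials, function (response) {
        var cred = response.data; // the current credential pair
        this.thenOpen('https://example.com/login', function () {
            this.fill('form#login', {
                username: cred.user,
                password: cred.pass
            }, true);
        });
        this.wait(5000, function () {
            this.capture('login_' + cred.user + '.png');
        });
        // log out so the next iteration starts from a clean session
        this.thenOpen('https://example.com/logout');
    });

    casper.run();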
Alternatively, parse the CSV in the shell and pass each pair into CasperJS; the script can then read the credentials from casper.cli and log in. With this option you don't need to log out, since each invocation runs in its own PhantomJS instance and doesn't share cookies.
This option can also be combined with your first question, if that is what you want: add --cookies-file=cookies_<username>.txt on each invocation, so you can run the shell script multiple times without logging in each time. A sketch of this variant follows.
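A minimal sketch of that variant; the shell loop in the comment, the URLs, and the form selectors are assumptions, not part of the casper.cli API:

    // login.js - invoked once per credential pair, e.g. from a shell loop:
    //   while IFS=, read user pass; do
    //       casperjs --cookies-file=cookies_"$user".txt login.js "$user" "$pass"
    //   done < users.csv
    var casper = require('casper').create();
    var user = casper.cli.get(0); // first positional command-line argument
    var pass = casper.cli.get(1); // second positional command-line argument

    casper.start('https://example.com/login', function () {
        this.fill('form#login', { username: user, password: pass }, true);
    });

    casper.wait(5000, function () {
        this.capture('login_' + user + '.png');
    });

    casper.run();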
Load testing
If I understood correctly, the web application is password protected, so you would need to run a separate CasperJS process for each username/password pair. Check the memory footprint of a single script invocation and scale up from there. Memory is the primary limiting factor and is easy to calculate for your test machine (for example, if one instance needs roughly 100 MB, a machine with 8 GB of free RAM tops out at around 80 concurrent instances), but CPU will also hit a limit somewhere.
PhantomJS/CasperJS instances are full browsers and are therefore much heavier than a slim web server, so you will probably need multiple machines, each running many instances of your script, to load test the web server.

Related

Create Pool of logins that can be checked in and out by protractor parallel runs

I have a large suite of Protractor tests that are currently set up to run with a unique login per spec. This allows us to run the specs in parallel with each other. But now we are looking to use Protractor's built-in parallel runs, where it runs the tests within a spec in parallel. The problem is that we need the tests to each use their own unique login. Rather than creating a unique login for each and every test, what I am trying to do is create a pool of logins that the tests would check out when they start and check back in when finished. This way we could have something like 10 logins for a spec of 50 tests and run 10 of the tests at the same time, each one checking out one of the logins and then checking it in for the next test to use.
My original thought was to create a two-dimensional array with a list of logins and a boolean that says whether each login is in use or not. Then I figured the beforeEach function could log in to the next available account and mark that login as checked out, and the afterEach could log out and check the account back in. But I am struggling to find any way for the afterEach to be aware of the login that needs to be checked back in.
So I need some way for afterEach to know which login the test that just finished was using.
Is there a way to do this? Or is there a better way to manage a login pool?
Thanks in advance.
We've done something similar using a .temp directory and files. What you could do is, before the tests run, generate a list of 10 users represented by 10 files, each containing a JSON object with the username and password, for example. When a test starts running, it reserves a user by deleting the file so that no other test can use it. When the test is done running, it writes the file back to the temp directory so that another test can then "reserve" the user.
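A minimal sketch of that idea in Protractor/Jasmine hooks, assuming a hypothetical .temp/users directory pre-populated with one JSON file per pool user; deleting the file is the atomic "reserve" step, so in truly parallel runs a test that loses the race should simply retry with the next file:

    var fs = require('fs');
    var path = require('path');

    var USER_DIR = '.temp/users'; // hypothetical pool directory
    var currentUser = null;       // remembered so afterEach knows what to check in

    beforeEach(function () {
        var files = fs.readdirSync(USER_DIR);
        if (files.length === 0) {
            throw new Error('No free logins left in the pool');
        }
        var file = path.join(USER_DIR, files[0]);
        currentUser = JSON.parse(fs.readFileSync(file, 'utf8'));
        fs.unlinkSync(file); // reserve: no other test can use this login now
        // ... log in with currentUser.username / currentUser.password ...
    });

    afterEach(function () {
        // ... log out ...
        // check the user back in by recreating its file
        var file = path.join(USER_DIR, currentUser.username + '.json');
        fs.writeFileSync(file, JSON.stringify(currentUser));
        currentUser = null;
    });

Keeping the reserved user in a variable visible to both hooks is what lets afterEach know which login the finished test was using.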

Running different Powershell startup Scripts on Different Users

Recently I decided to split my PC into two users, one for gaming and one for work, to increase my productivity. Now I am wondering whether there is a way to:
run a PowerShell script on user login that opens certain apps/programs, and maybe even supplies custom input like login info for immediate login;
run different scripts depending on the logged-in user.

Script to generate test users with random data for Facebook application

I want to test my Facebook application with the maximum of 500 test users available. I've had a go at using the interface which Facebook provides and another good tool called "FacebookTestUserManager", but these create blank user profiles, and I want to populate certain parts of the profiles with random information, e.g. profile picture, education, etc.
I don't think generating this data should be too difficult (I'm thinking of a list of options and a random number generator to select a choice), but I'm confused as to how I input this information into the accounts and how I run my script.
This page, http://developers.facebook.com/docs/test_users/, is basically the only resource I can find on the matter, but it is very brief. My questions are:
1) Before I start, are there any public scripts which already do this?
2) How do I run the script which does this account generation process? I presume it's not written inside my application, since I only want to run it once!
How do I run my script which does this account generation process?
Like you run any other script …
I presume it's not written inside my application since I only want it run once!
It does not have to be “inside” of anything; it just has to use your app access token while making its Graph API calls.
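For illustration, a minimal Node.js sketch of one such Graph API call, based on the test-users endpoint described in the page you linked; APP_ID and APP_ACCESS_TOKEN are placeholders for your own app's values:

    var https = require('https');
    var querystring = require('querystring');

    var APP_ID = 'YOUR_APP_ID';                     // placeholder
    var APP_ACCESS_TOKEN = 'YOUR_APP_ACCESS_TOKEN'; // placeholder

    var params = querystring.stringify({
        installed: 'true',
        name: 'Alice Test', // randomize per user as needed
        access_token: APP_ACCESS_TOKEN
    });

    // POST /{app-id}/accounts/test-users creates a single test user
    var req = https.request({
        host: 'graph.facebook.com',
        path: '/' + APP_ID + '/accounts/test-users',
        method: 'POST',
        headers: { 'Content-Type': 'application/x-www-form-urlencoded' }
    }, function (res) {
        var body = '';
        res.on('data', function (chunk) { body += chunk; });
        res.on('end', function () {
            console.log(body); // response includes the new user's id, email and password
        });
    });
    req.write(params);
    req.end();

Run it in a loop, with randomized profile fields, to create as many test users as you need, up to the 500-user limit.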
I think everything that the document you referred to says should be easily understandable to a developer with a solid basic knowledge of how apps and their interactions with the Graph API work. Should you not have such knowledge yet … then I don't see much use in testing an app with 500 test users just yet.

Perl application move causing my head to explode...please help

I'm attempting to move a web app we have (written in Perl) from an IIS6 server to an IIS7.5 server.
Everything seems to be parsing correctly, I'm just having some issues getting the app to actually work.
The app is basically a couple of forms. You fill the first one out, click submit, and it presents you with another form based on what checkboxes you selected (using includes and such).
I can get past the first form once... but after that it stops working and pops up the generated error message. Looking into the code, the message basically states that there aren't any checkboxes selected.
I know the app writes data into .dat files... (at what point, I'm not sure yet), but I don't see those being created. I've looked at file/directory permissions, and seemingly I have MORE permissions on the new server than I did on the old one. The user/group for the files/dirs are different, though...
Would that have anything to do with it? Why would it pass me on to the next form, displaying the correct "modules" I checked the first time, and then not any time after that? (It seems to reset itself after a while.)
I know this is complicated, so if you have any questions for me, please ask and I'll answer to the best of my ability :).
Btw, total idiot when it comes to Perl.
EDIT AGAIN
I've removed the source so as not to reveal any security vulnerabilities... Thanks for pointing that out.
I'm not sure what else to do to show exactly what's going on with this though :(.
I'd recommend verifying, step by step, that what you think is happening is really happening. Start by watching the HTTP request from your browser to the web server - are the arguments your second Perl script expects actually being passed to the server? If not, you'll need to fix the first script.
(start edit)
There's lots of tools to watch the network traffic.
Wireshark will read the traffic as it passes over the network (you can run it on the sending or receiving system, or on any system in the collision domain).
You can use a proxy server such as WebScarab (free), Burp, or Paros. You'll have to configure your browser to send traffic to the proxy server, which will then forward the requests to the web server. These particular proxies are intended to aid testing, in that you'll be able to mess with the requests as they go by (and much more).
As Sinan indicates, you can use browser add-ons like Firefox's LiveHttpHeaders or Tamper Data, or Internet Explorer's developer tools (IIRC).
(end edit)
Next, you should print out all the CGI arguments that the second Perl script receives. That way, you'll know what the script really thinks it gets.
Then, you can enable verbose logging in IIS, so that it logs the full HTTP request.
This will get you closer to the source of the problem - you'll know whether (a) the first script is not creating correct HTML, resulting in an incomplete HTTP request from the browser, (b) the IIS server is not receiving the CGI arguments for some odd reason, or (c) the arguments aren't getting from the IIS server into the Perl script (or, possibly, the Perl script is not correctly accessing them).
Good luck!
What you need to do is clear.
There is a lot of weird excess baggage in the script. There seem to be no subroutines - just one long series of commands using global variables.
It is time to start refactoring.
Get one thing running at a time.
I saw HTML::Template there, but you still had raw HTML mixed in with the code. Separate code from presentation.

How can I debug a Perl CGI script?

I inherited a legacy Perl script from an old server which is being removed. The script needs to be implemented on a new server. I've got it on the new server.
The script is pretty simple; it connects to network devices via Expect and SSH and gathers data. For debugging purposes, I'm only working with the portion that gathers the list of interfaces from the device.
The script on the new server always shows me a page within about 5 seconds of reloading it. Rarely, it includes the list of interfaces from the remote device. Most commonly, it contains all the HTML elements except the list of interfaces.
Now, on the old server, sometimes the script would take 20 seconds to output the data. That was fine.
Based on this, it seems that Apache on the new server is displaying the data before the Perl script has finished returning it, though that could certainly be incorrect.
Additional Information:
Unfortunately I cannot post any code - work policy. However, I'm pretty sure it's not a problem with Expect. The expect portions are written as expect() or die('error msg'), and I do not see the error messages. However, if I set the expect timeout to 0, then I do see the error messages.
The expect timeout value used in the script is normally 20 seconds... but as I mentioned above, Apache displays the static content from the script after about 5 seconds and, 95% of the time, does not display the content that should be retrieved via expect. Additionally, the script writes the expect content to a file on disk - even when the page does not display it.
I just added my Troubleshooting Perl CGI scripts guide to Stack Overflow. :)
You might try CGI::Inspect. I haven't needed to try it myself, but I saw it demonstrated at YAPC, and it looked awesome.