How to write an integration test: Forced logout by logging in from another terminal? #Flutter

I am using a webview for logging in to my application. If I log in to that account on device A, device B automatically logs out. Thanks for the help.

There are a couple different paths you could take on this one:
1. Fake the check if another device logs out.
For this one, take whatever call you make to see whether the user should automatically log out, and inject a mock or a fake to force it to answer "yes, log out".
I'd suggest combining this with a separate test on the backend that ensures that when someone logs in, the old credentials are flagged as expired as well. Between the two, you have good coverage that the system as a whole works.
2. Manually test this functionality with two devices.
This is pretty self-explanatory, but trying to get two separate automated tests to run in the proper order on two different devices consistently is probably not worth the effort, especially when a quick manual test can verify the behavior whenever full end-to-end validation is required.
It wouldn't be a bad idea to do both options above: #1 for quick validation during development, then #2 as part of your major release process.

Related

Create Pool of logins that can be checked in and out by protractor parallel runs

I have a large suite of Protractor tests that are currently set up to run with a unique login per spec. This allows us to run the specs in parallel with each other. But now we are looking to use Protractor's built-in parallel runs, where it runs the tests within a spec in parallel. The problem is that the tests all need their own unique login, and we would rather not create a unique login for each and every test. What I am trying to do is create a pool of logins that the tests would check out when they start and check back in when finished. This way we could have something like 10 logins for a spec of 50 tests, and run 10 of the tests at the same time, each one checking out one of the logins and then checking it in for the next test to use.
My original thought was to create a two-dimensional array with a list of logins and a boolean saying whether each login is in use. Then I figured the beforeEach function could log in to the next available account and mark that login as checked out, and the afterEach could log out and check the account back in. But I am struggling to find any way for the afterEach to be aware of the login that needs to be checked back in.
So I need some way for afterEach to know which login the test that just finished was using.
Is there a way to do this? Or is there a better way to manage a login pool?
Thanks in advance.
We've done something similar using a .temp directory and files. What you could do is, before the tests run, generate a pool of 10 users represented by 10 files, each containing a JSON object with the username and password, for example. When a test starts running, it reserves a user by deleting the file so that no other test can use it. When the test is done running, it writes the file back to the temp directory so that another test can then "reserve" the user.
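The scheme is language-agnostic; the crucial detail is that the reservation step must be atomic so two parallel runs can't grab the same login. Below is a minimal sketch of the idea in Perl (the directory name, file layout, and _reservation field are all assumptions; in a Protractor suite you would do the same with Node's fs module). It uses an atomic rename instead of delete-and-rewrite, which reserves the file in one step and keeps the credentials on disk:

use strict;
use warnings;
use File::Spec;
use JSON::PP qw(decode_json);

my $pool_dir = '.temp/users';   # hypothetical pool directory

# Reserve a free login: the atomic rename is what stops two parallel
# runs from checking out the same user.
sub checkout_user {
    opendir my $dh, $pool_dir or die "can't open $pool_dir: $!";
    for my $file (grep { /\.json$/ } readdir $dh) {
        my $free   = File::Spec->catfile($pool_dir, $file);
        my $in_use = "$free.in-use";
        next unless rename $free, $in_use;   # someone else won the race
        open my $fh, '<', $in_use or die "can't read $in_use: $!";
        local $/;
        my $user = decode_json(<$fh>);
        $user->{_reservation} = $in_use;     # so check-in knows the file
        return $user;
    }
    die "no free logins in $pool_dir";
}

# Release the login so the next test can use it.
sub checkin_user {
    my ($user) = @_;
    my $in_use = delete $user->{_reservation};
    (my $free = $in_use) =~ s/\.in-use\z//;
    rename $in_use, $free or die "can't release $in_use: $!";
}

Stashing the reservation path on the checked-out user object also answers the afterEach problem above: whatever beforeEach checks out carries exactly the information afterEach needs to check it back in.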

How do you dynamically insert console logs on a development server

When you're developing on localhost, you've got full access to a terminal and can log anywhere you want. But in a project I work on (I am new to team collaboration as a whole), they use something called weavescope to view logs that developers have created at the time of coding.
The difference from logging locally is that every time you make a change in the code, you have to send a pull request, have it approved and merged, and deploy it before we finally see the output in the log. Sometimes the state of local and deployed things doesn't match, and it really makes us want to log dynamically on the development server without going through all these cycles again. Is there any solution already around that lets us insert some quick log statements without going through the routine PR, merge, deploy cycle?
EDIT: I think from the discussions I had below, the tool I am looking for is more or less a logging-statement code injection tool: a tool that would keep track of the logs I'm inserting into the production code and toggle them on and off with a single command.
This seems like something that logging levels can help with (unless I'm misunderstanding). Something I typically do is leave debug-level log messages on commonly problematic or complex functions, but change the logging level to something higher when I move out of local. Depending on the app and access, these can sometimes be configured in the environment rather than in the build.
For example, there are Spring libraries that let you import a static logger and set the level of each message you log. Locally you can keep the level at DEBUG, in UAT the level can be INFO, and if you only want ERROR or WARN messages in prod you can separate that too. At deployment time you set which environment it is, and keep a separate app.properties or YAML file for each environment storing the desired level.
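The Spring specifics are Java, but the environment-driven level idea looks the same in any logging library. Here is a minimal sketch of it in Perl with Log::Log4perl (the APP_ENV variable name and the three environment names are assumptions):

use strict;
use warnings;
use Log::Log4perl qw(:easy);

# Pick the threshold from the environment: local runs see DEBUG,
# UAT sees INFO, production only WARN and above.
my %level = (local => $DEBUG, uat => $INFO, prod => $WARN);
Log::Log4perl->easy_init($level{ $ENV{APP_ENV} // 'local' });

DEBUG 'detailed state dump, visible locally only';
INFO  'normal progress message';
WARN  'something worth investigating, visible everywhere';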
Of course there is a solution for fast-paced code changes.
Maybe this kind of hot reloading is what you're looking for. This way you can insert new calls to a logger or console.log quickly.
Although it does come with a disclaimer from the author.
I honestly haven't looked into whether this method of hot reloading would provide stable production zero-downtime deploys, however my "gut feel" says don't do it. And production deployments are probably one area where we should stick to known, trusted procedures unless we have good reason.

Testing a Product that Includes Syncing and other Network Requests

I am nearing the release of an iOS app that syncs and otherwise interacts with a server. I am struggling with a testing procedure that can cover most/all possible situations. I don't have any experience with automated testing so I have been doing everything manually so far with the iPhone simulator and a physical device.
How would I start designing automated tests that can help me get better coverage of possible situations and also serve me well in the future as I make changes and add new features?
You probably need to be more specific in your question, i.e. outline how you communicate with your server, what technology is being employed, etc.
But as a general approach, the first thing I would do is find a way to get reproducible results from the server. For example, if I send a message asking for a record with an id of 'x', then the server will always return the same record with the same data. There are several ways to do this: one would be to load a set of test data into your server; another would be to create a local test server and talk to that instead; another option is to avoid the server altogether in your automated tests and mock out the communication classes in your app. It totally depends on what you are trying to test and how.
Once you have your back end dealt with, you can then look into automating the tests. This very much depends on how you have dealt with the server. For example, if you are performing an integration-style test where you actually talk to a server, the test might take this form (sketched below):
1. Reset or clear the server data.
2. Load it with predictable data.
3. Run the iOS app using some testing framework and verify any data sent from the server.
4. Access the server and verify any changes made there.
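The server-side half of such a test can be scripted in any language. Here is a minimal sketch in Perl with LWP; every endpoint path is hypothetical, so substitute whatever reset/load hooks your server actually exposes:

use strict;
use warnings;
use Test::More tests => 2;
use LWP::UserAgent;
use JSON::PP qw(encode_json decode_json);

my $ua   = LWP::UserAgent->new;
my $base = 'http://test-server.local';   # hypothetical test server

# Steps 1-2: reset the server and load predictable data.
$ua->post("$base/test/reset");
$ua->post("$base/test/load",
    Content => encode_json({ records => [ { id => 1, name => 'known' } ] }));

# Step 3 happens outside this script: drive the iOS app with a UI
# testing framework and let it sync against the seeded server.

# Step 4: verify the state the app should have left on the server.
my $res = $ua->get("$base/records/1");
ok $res->is_success, 'record 1 is retrievable after sync';
is decode_json($res->decoded_content)->{name}, 'known',
    'record holds the seeded data';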

What is the perlish way to test a web application, especially with regards to concurrent access?

I am working on a web application that uses CGI.pm to implement the user interaction. Now I would like to test my changes by implementing unit tests. These are my requirements:
Perlish way.
Low effort to implement a unit test validating a simple workflow. My web application consists mainly of two forms displaying and changing the content of a flat-file database.
Allow testing concurrent access by multiple users. This should ensure that, e.g., the locking is performed the right way. I am not interested in performance measurement.
Integration with Eclipse (EPIC).
Readable, expressive unit tests.
So far I have found these packages: CGI::Test, Test::HTTP, HTTP::WebTest and Test::WWW::Mechanize.
CGI::Test as a project seems rather dead; the last change was in Oct. 2003.
Test::HTTP focuses on the HTTP connection.
HTTP::WebTest runs tests from test specifications. Many more packages, but the last change was in Sep. 2003.
Test::WWW::Mechanize has a comprehensive and modern interface and has been maintained for some time by multiple people. Readable tests, but it seems to focus on testing static pages (perhaps this is not correct and is based only on the quantity of methods). Filling in a form is possible using submit_form_ok, but there is no example showing that it is possible to check the returned page. How to test concurrent access is not obvious to me either.
So my research leads me to Test::WWW::Mechanize. Is this the correct way to go?
Thanks in advance for your help.
Test::WWW::Mechanize is a good way to go.
Test::WWW::Mechanize is a subclass of WWW::Mechanize and Test::WWW::Mechanize->new returns an object that is a subclass of LWP::UserAgent. So it would help you a lot if you read and understand the documentation for those libraries. For example, the WWW::Mechanize documentation will explain to you how to submit a form and retrieve its content.
Example
Here is an example that tests simultaneous access by 2 users and shows how to check the results (adapted from the Catalyst testing tutorial):
my $ua1 = Test::WWW::Mechanize->new; # user agent 1, Bud
my $ua2 = Test::WWW::Mechanize->new; # user agent 2, Ace
# Log in as each user
$ua1->get_ok("http://localhost/login?username=Bud&password=xxx", "Login 'Bud'");
$ua2->get_ok("http://localhost/login?username=Ace&password=xxx", "Login 'Ace'");
# Go back to the login page and it should show that we are already logged in
$_->get_ok("http://localhost/login", "Return to '/login'") for $ua1, $ua2;
$_->title_is("Login", "Check for login page") for $ua1, $ua2;
$_->content_contains("Please Note: You are already logged in as ",
    "Check we ARE logged in") for $ua1, $ua2;
Brief explanation:
get_ok($url, $msg):
Checks to make sure $url can be retrieved. $msg is displayed when the test fails.
title_is($title, $msg):
Checks the contents of the <title>...</title> tags. $msg is displayed when the test fails.
content_contains($content, $msg):
Checks whether the string $content appears anywhere in the HTML body (use content_like for a regular expression). $msg is displayed when the test fails.
More things to think about
You might want to look at Test::WWW::Mechanize::CGI. It allows you to test without running a webserver.
The WWW::Mechanize::FAQ could be useful to you if you are looking for examples.
I propose to split testing into two parts:
Testing the interface with Test::WWW::Mechanize. Test::WWW::Mechanize is good for both static and dynamic pages, but its purpose is dynamic pages. After submit_form_ok you need to use methods like content_contains, content_like, and the other methods in the "CONTENT CHECKING" group. Also, Test::WWW::Mechanize is a WWW::Mechanize subclass, so you can use any WWW::Mechanize method, like content.
Testing parallel access. Split that part of your program into a separate library and test it using Test::More and fork, as sketched below.
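A minimal sketch of that second part, assuming the flat-file code has been pulled out into a hypothetical MyApp::DB module with append_record and read_records functions (all three names are made up and stand in for your extracted library). Each forked child writes through the library and reports success via its exit code; the parent runs the Test::More assertions, which keeps the test counter in a single process:

use strict;
use warnings;
use Test::More tests => 2;
use MyApp::DB;   # hypothetical library wrapping the flat-file access

my $db_file = 'test.db';
unlink $db_file;   # start from a clean slate

my $children = 10;
my @pids;
for my $n (1 .. $children) {
    my $pid = fork();
    die "fork failed: $!" unless defined $pid;
    if ($pid == 0) {
        # Child: append one record through the library; exit 0 on success.
        my $ok = eval { MyApp::DB::append_record($db_file, "row-$n"); 1 };
        exit($ok ? 0 : 1);
    }
    push @pids, $pid;
}

my $failures = 0;
for my $pid (@pids) {
    waitpid $pid, 0;
    $failures++ if $? != 0;
}
is $failures, 0, 'every concurrent writer succeeded';
is scalar @{ MyApp::DB::read_records($db_file) }, $children,
    'no records were lost to missing locks';

If the locking is wrong, the typical symptom is the second assertion failing: two children read the file at the same time and one overwrites the other's record.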

Perl application move causing my head to explode...please help

I'm attempting to move a web app we have (written in Perl) from an IIS6 server to an IIS7.5 server.
Everything seems to be parsing correctly, I'm just having some issues getting the app to actually work.
The app is basically a couple of forms. You fill the first one out, click submit, and it presents you with another form based on what checkboxes you selected (using includes and such).
I can get past the first form once... but after that it stops working and pops up the generated error message. After looking into the code, the error basically states that no checkboxes are selected.
I know the app writes data into .dat files... (at what point, I'm not sure yet), but I don't see those being created. I've looked at file/directory permissions, and seemingly I have MORE permissions on the new server than I did on the last. The user/group for the files/dirs are different, though...
Would that have anything to do with it? Why would it pass me on to the next form, displaying the correct "modules" I checked the first time, and then not any other time after that? (It seems to reset itself after a while.)
I know this is complicated so if you have any questions for me, please ask and I'll answer to the best of my ability :).
Btw, total idiot when it comes to Perl.
EDIT AGAIN
I've removed the source as to not reveal any security vulnerabilities... Thanks for pointing that out.
I'm not sure what else to do to show exactly what's going on with this though :(.
I'd recommend verifying, step by step, that what you think is happening is really happening. Start by watching the HTTP request from your browser to the web server - are the arguments your second perl script expects actually being passed to the server? If not, you'll need to fix the first script.
(start edit)
There are lots of tools for watching the network traffic.
Wireshark will read the traffic as it passes over the network (you can run it on the sending or receiving system, or any system on the collision domain).
You can use a proxy server, like WebScarab (free), Burp, Paros, etc. You'll have to configure your browser to send traffic to the proxy server, which will then forward the requests to the real server. These particular proxies are intended to aid testing, in that you'll be able to mess with the requests as they go by (and much more).
As Sinan indicates, you can use browser add-ons like Fx LiveHttpHeaders or Tamper Data, or Internet Explorer's developer kit (IIRC).
(end edit)
Next, you should print out all CGI arguments that the second perl script receives. That way, you'll know what the script really thinks it gets.
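A throwaway script is enough for this; a minimal sketch (point the first form's action at it temporarily):

#!/usr/bin/perl
use strict;
use warnings;
use CGI;

# Dumps every CGI parameter the server hands us, one per line.
my $q = CGI->new;
print $q->header('text/plain');
for my $name ($q->param) {
    my @values = $q->param($name);   # may warn on very new CGI.pm; harmless here
    print "$name=@values\n";
}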
Then, you can enable verbose logging in IIS, so that it logs the full HTTP request.
This will get you closer to the source of the problem: you'll know whether it's (a) the first script not creating correct HTML, resulting in an incomplete HTTP request from the browser, (b) the IIS server not receiving the CGI arguments for some odd reason, or (c) the arguments not getting from the IIS server into the perl script (or, possibly, the perl script not correctly accessing the arguments).
Good luck!
What you need to do is clear.
There is a lot of weird excess baggage in the script. There seem to be no subroutines, just one long series of commands with global variables.
It is time to start refactoring.
Get one thing running at a time.
I saw HTML::Template there, but you still had raw HTML mixed in with the code. Separate code from presentation.