click_no_wait and then execute_script - watir-webdriver

I'm using the page object gem with a page that contains a link.
When the link is clicked, the browser navigates to a new page, and the HTML renders as expected.
However, my script always fails with Timeout::Error. Looking at the browser's network activity, I noticed a pattern: a long-running GET request that never completes.
If I run the script and then issue the command $.connection.hub.stop() in the browser console, the script does not time out.
Is there a way to perform a click_no_wait, or to click and then execute a script, using the page-object gem?
Here is my page-object attempt, but it still results in a timeout.
class MyThing
  include PageObject
  include PageObject::PageFactory

  link(:show_details, :id => 'detailLink')

  def click_show_details_no_wait
    show_details_element.click
  rescue Exception
    # If the click times out, stop the SignalR hub so the
    # long-running GET is aborted and the script can continue.
    execute_script('$.connection.hub.stop();')
  end
end

It is hard to suggest anything without seeing your page. My first idea is to try
show_details_element.fire_event('click')
instead of a usual click.
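For example, the no-wait method in the page object above could be written like this (a minimal sketch; fire_event triggers the handler via JavaScript, so Watir does not block waiting for the resulting page load):

def click_show_details_no_wait
  # Fire the DOM click event directly instead of using Watir's click,
  # which waits for the new page to finish loading.
  show_details_element.fire_event('click')
end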
The second idea is to take a look at these related questions:
Stop loading page watir-webdriver
Is there have method to stop page loading in watir-webdriver
how to click on browser “stop loading this page” using ruby watir?
Good luck.

Related

Perl WWW::Mechanize::Firefox timeout implementation

I am using WWW::Mechanize::Firefox along with the MozRepl plugin in Firefox. The code works properly to fetch content from sites by sending them an HTTP GET request.
I am going through a list of URLs and sending an HTTP GET Request to each of them.
However, if the request hangs on a particular URL, it keeps waiting.
Please note that I am referring to cases where part of the web page content has loaded while some of it is still pending. This happens when a web page loads a lot of content from third-party sites and one of the resources (an image, for instance) cannot be loaded; the browser keeps waiting for it.
I want the request to timeout after 'n' seconds so that I can read the next URL from the list and continue with the code execution.
In the WWW::Mechanize Perl module, the constructor supports a timeout option, as shown below:
$mech = WWW::Mechanize->new(timeout => 10);
However, I could not find a similar option in the documentation for the Perl module WWW::Mechanize::Firefox here:
http://metacpan.org/pod/WWW::Mechanize::Firefox
I tried this:
$mech = WWW::Mechanize::Firefox->new(timeout => 10);
But it does not seem to work, as there are still some sites for which the request hangs.
WWW::Mechanize::Firefox uses MozRepl to connect to the Firefox browser, so declaring a timeout parameter has no effect; Firefox itself waits for the page to load.
If you want to check whether the site has really fully loaded, check that the element you want (e.g. a div) is present:
while (!$mech->is_visible( xpath => '//div[@class="myDivClassAtHtml"]' )) {
    sleep 1;
}
# do something with your page
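Since the original goal was to give up after 'n' seconds and move on to the next URL, the same polling loop can be capped (a sketch built on the code above; the XPath and the 10-second limit are placeholders):

my $timeout = 10;   # give up after this many seconds
my $waited  = 0;
until ($mech->is_visible( xpath => '//div[@class="myDivClassAtHtml"]' )) {
    sleep 1;
    if (++$waited >= $timeout) {
        warn "Page did not finish loading within $timeout seconds, skipping\n";
        last;
    }
}
# do something with your page, or move on to the next URL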

Having trouble AFTER form submission with zombie.js

I have the following setup on my site:
You enter credentials on a login page, which takes you to a second page (one that normally produces no screen output); that page validates the user and redirects them to the appropriate homepage.
My step definitions consist of three steps:
Load up initial login page.
Enter credentials and submit.
Verify (by checking the page title) that I made it to the homepage.
My first step passes with flying colors.
My second step claims to pass.
My third step fails.
Upon review, I found that it's because the second step, while it didn't officially fail, didn't do what it was supposed to do. Zombie got stuck on the validation page. At first I thought it was just missing the redirect, but it seems that it doesn't execute ANYTHING on the validation page. I even commented out the entire page and simply put an output of "Hello" at the top. If browser.html() can be believed, it doesn't even see that. I know I make it to the second page because I have
console.log("\n" + browser.location.href);
which shows me the URL of the second page.
I then have
console.log(browser.html());
which is empty.
I even have a:
browser.wait(10000,callback);
beforehand to give it some processing time but to no avail.
Some information that might be relevant:
This is a ColdFusion site. I know Zombie handles ColdFusion pages, since it loads the login page initially, although there's not much actual CF processing going on there.
There's DB access happening. If Zombie accesses the site like a regular browser, it shouldn't make a difference, but it's there. Although even when I comment everything out, it still doesn't work, so I doubt that's actually relevant.
This is my script portion for the login step. Please advise if I'm approaching this the wrong way.
this.When(/^I input my credentials$/, function(callback) {
  browser.fill("login", "myusername").fill("password", "mypassword");
  browser.document.forms[0].submit();
  // Put in here to account for redirect time it will take to get past validation page to actual home page
  browser.wait(10000, callback);
  callback();
});
If you need any other information, please let me know. I would appreciate any help whatsoever in being able to make this work!
I am not sure I fully understand your problem, but it looks like this may be an issue with the way you are handling asynchronous calls. At any rate, you should not need browser.wait for something like this at all. Try something like the following:
this.When(/^I input my credentials$/, function(callback) {
  browser
    .fill("login", "myusername")
    .fill("password", "mypassword")
    .pressButton("#selectorForYourButton", function (err) {
      // Check for errors or any other behaviour this test is actually about
      callback();
    });
});
First, the pressButton method is preferable because it gets closer to testing actual browser interaction. But more importantly, callback() is only executed after all the events fired off by pressing the button have been resolved.
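The same reasoning extends to the verification step: once pressButton's callback has fired, the redirects have resolved and the final document is available (a sketch; the expected title string is hypothetical):

this.Then(/^I should be on the homepage$/, function(callback) {
  // By this point Zombie has followed the whole redirect chain.
  if (browser.text("title") === "My Homepage") {  // hypothetical expected title
    callback();
  } else {
    callback(new Error("Still on " + browser.location.href));
  }
});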

Redirecting to second page in GWT doesn't load GWT components in second page

I am using GWT on App Engine and deploying my application on localhost.
I want to redirect to a second page when the user completes his registration: when he clicks the "Submit" button, the browser has to redirect automatically to his profile page with his registration details.
I used the following code to redirect from the first page to the second:
String url = GWT.getHostPageBaseURL() + "/UserViewProfile.html";
Window.Location.replace(url);
In my case the first page's URL is like:
http://127.0.0.1:8888/UserRegistration.html?gwt.codesvr=127.0.0.1:9997
When I click the "Submit" button, it redirects to a URL like:
http://127.0.0.1:8888/UserViewProfile.html
On the second page (UserViewProfile.html) I put simple HTML content and a simple TextBox widget to check its functionality. But I see only the HTML content, not the TextBox.
To see the text box, I have to type a URL like:
http://127.0.0.1:8888/UserViewProfile.html?gwt.codesvr=127.0.0.1:9997
How can I append the last part, "?gwt.codesvr=127.0.0.1:9997", to the end of my URL automatically? If I add it manually, it may lead to problems when the app is hosted. If anybody can give a solution, that would be great.
I do not understand the use case. Anyway, I guess you need to check whether you are in DevMode or ProdMode, and add the gwt.codesvr=127.0.0.1:9997 query string accordingly. Something like:
String url = GWT.getHostPageBaseURL() + "/UserViewProfile.html";
if (GWT.isProdMode()) {
  Window.Location.replace(url);
} else {
  Window.Location.replace(url + "?gwt.codesvr=127.0.0.1:9997");
}
The gwt.codesvr=127.0.0.1:9997 query string parameter is used by GWT to (simplifying) bootstrap your app in the so-called Development Mode, instead of Production Mode (the actual compiled version of your application). Without this check, if you are in DevMode you end up requesting a UserViewProfile.html that looks for the compiled version of your app (which shows nothing if you have never compiled it, or if you have recently cleaned the project).
Do also note that URL rewriting (as opposed to simply changing the # fragment identifier) means the application reloads.
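A slightly more general variant (a sketch) carries over whatever gwt.codesvr value is already present in the current URL, instead of hard-coding the host and port:

// Propagate the current gwt.codesvr parameter, if any, so the
// redirect stays in Development Mode without a hard-coded address.
String url = GWT.getHostPageBaseURL() + "/UserViewProfile.html";
String codesvr = Window.Location.getParameter("gwt.codesvr");
if (codesvr != null) {
  url += "?gwt.codesvr=" + codesvr;
}
Window.Location.replace(url);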

How to redirect to an external url with Selenium, and come back?

I am working in Perl with Selenium RC, server version 2.19.0-b09, and I cannot figure out whether it is even possible to redirect to an external URL and come back to my application. I am trying to test Facebook OAuth in my application, which means I have to go to Facebook and come back to my app.
use Test::WWW::Selenium::Catalyst 'MyApp', -selenium_args => 'injectProxyMode -trustAllSSLCertificates -debug -log /home/me/browserlog.txt -firefoxProfileTemplate /home/me/.mozilla/firefox/SeleniumUser.default/';
my $selenium = Test::WWW::Selenium::Catalyst->start({
    browser => '*chrome',
});
The reason I think this is possible at all is that a custom Firefox profile and the -injectProxyMode, *chrome browser, and -trustAllSSLCertificates options enable me to post and to see all the redirects in my debug log, but my Remote Control window always disappears after the redirects. I can see the proxy URL to which Facebook is trying to send me back, e.g., a URL on my own base domain. But it looks like there is no window for it to return to. In multiWindow mode I am left with my application in a Firefox window. In singleWindow mode my tests just end and all the windows close.
I have tried both -singleWindow and -multiWindow modes. I have gotten the list of windows after I make my post to https://www.facebook.com:443/login.php... and before all the redirects. I see a single window that is never available to select_window, and it always disappears on the second iteration if I run get_all_window_names in a while loop: a window with a name like "_e_0RWG".
So, how could I conceivably do what I am trying to accomplish with Selenium? It seems so near and yet so far.

Greasemonkey script not executed when unusual content loading is being used

I'm trying to write a Greasemonkey script for Facebook and having some trouble with the funky page/content loading that they do (I don't quite understand this - a lot of the links actually just change the GET parameters, but I think they do some kind of server redirect to make the URL look the same to the browser too?). Essentially the only test required is putting a GM_log() on its own in the script. If you click around Facebook, even with facebook.com/* as the pattern, it is often not executed. Is there anything I can do, or is the idea of a "page load" fixed in Greasemonkey, and FB is "tricking" it into not running by using a single URL?
If I try to do some basic content manipulation like this:
GM_log("starting");
var GM_FB = new Object;
// GM_FB.birthdayRegex is assumed to be defined elsewhere in the full script
GM_FB.birthdays = document.evaluate("//div[@class='UIUpcoming_Item']", document, null,
    XPathResult.UNORDERED_NODE_SNAPSHOT_TYPE, null);
for (var i = GM_FB.birthdays.snapshotLength - 1; i >= 0; i--) {
  if (GM_FB.birthdayRegex.test(GM_FB.birthdays.snapshotItem(i).innerHTML)) {
    GM_FB.birthdays.snapshotItem(i).setAttribute('style', 'font-weight: bold; background: #fffe88');
  }
}
The result is that sometimes only a manual page refresh will make it work. Pulling up the Firebug console and forcing the code to run works fine. Note that this isn't due to late loading of certain parts of the DOM: I added some code later to wait for the relevant elements and, crucially, the message never gets logged for certain transitions, for example when I switch from Messages to News Feed and back.
Aren't they using Ajax to load content into a div? You can find the element that is being updated by using Firebug, for example.
When you click something and the URL changes, but with a # in the URL followed by some text, that text is not a path but a parameter; the browser does not leave the page you are on. Since Greasemonkey injects the script when the page loads, it will not inject it again, because the page is not reloading.
In your example, the URL facebook.com/#!/sk=messages does not navigate away from facebook.com/, so it will not fire the window load event.
So you need to find which element is being changed and add an event listener to that element; you can do this using Firebug, as mentioned before.
Once you find out which element is getting the content, you have to add the listener to that element and not to the page (Greasemonkey adds its handler only on the window load event).
So in your GM script you would have ("air code"):
// a div never fires a "load" event, so listen for content being inserted instead
document.getElementById('dynamic_div').addEventListener('DOMNodeInserted', function() { /* your script */ }, false);
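Since the fragment is the only thing that changes on these pseudo-navigations, another option is to re-run the script on every hashchange event (a sketch; highlightBirthdays is a hypothetical name standing in for the question's highlighting code):

// Facebook's #! navigation never reloads the page, but it does
// change the fragment, so hashchange fires on each pseudo-navigation.
function highlightBirthdays() {
    // ... the XPath/highlight code from the question goes here ...
}
window.addEventListener('hashchange', highlightBirthdays, false);
highlightBirthdays(); // also run once on the initial page load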