XPath matching one node but not working in WebDriver - Eclipse

Is it normal for an XPath that was validated in FirePath and matched exactly one node to fail in Selenium WebDriver (Java)? I have a dynamic element and I generated an XPath using contains(), which matched a single node that happens to be exactly the element I was looking for. In Eclipse, WebDriver throws a NoSuchElementException because it is unable to find the element. Just when you think you have mastered the tricks behind XPath, some stubborn web elements uncover your flaws.
For the attached HTML, I have generated the XPath below. Can anyone help generate an XPath, or even a CSS selector, that would work?
//div[contains(@id, 'gwt-uid') and @aria-selected='true']

Yes. There is always a possibility that an XPath which matched in Firebug/FirePath (at development time, against the page you visited manually) will not match at run time (against the browser launched by Selenium). It is not that Firebug shows you something wrong; rather, the HTML the XPath is evaluated against at run time may not be the same (it might have changed, perhaps subtly).
I would strongly suggest pausing (not stopping) the execution at run time, for example with a long Thread.sleep() (note the argument is in milliseconds, so Thread.sleep(100000) is 100 seconds), to give yourself enough time to evaluate your XPath again and see what it matches; see the sketch below. Then post your observations.
The XPath itself looks fine to me.
My suspicion is that aria-selected is set to false at that moment.
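As an illustration, here is a minimal Java sketch of that debugging pause; the URL is a placeholder and the long sleep is only there to keep the browser open while you re-check the XPath in FirePath:
import java.util.List;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.firefox.FirefoxDriver;

public class XpathDebugPause {
    public static void main(String[] args) throws InterruptedException {
        WebDriver driver = new FirefoxDriver();
        driver.get("http://your-app-under-test/"); // placeholder URL

        // Pause long enough to open Firebug/FirePath in the Selenium-launched
        // browser and re-evaluate the XPath against the DOM as it is *now*.
        Thread.sleep(100000); // milliseconds, i.e. 100 seconds

        // Then check what the same XPath matches from WebDriver's point of view.
        List<WebElement> matches = driver.findElements(
                By.xpath("//div[contains(@id, 'gwt-uid') and @aria-selected='true']"));
        System.out.println("Matched " + matches.size() + " node(s)");

        driver.quit();
    }
}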

I would think that finding the element with By.CssSelector would be easier:
driver.FindElement(By.CssSelector("div[id^='gwt-uid']"));
Though I would be concerned that there may be other elements with ids prefixed with 'gwt-uid', because of what I assume is a dynamically generated unique id at the end. You could find the closest known parent (id='consumerTree') first to make sure you don't end up with the wrong element. In C#:
IWebElement parent = driver.FindElement(By.Id("consumerTree"));
IWebElement element = parent.FindElement(By.CssSelector("div[id^='gwt-uid']"));

(Assuming you're using Java.) If you are getting a NoSuchElementException, there may be the following reasons:
Maybe, at the moment you try to find the element, it is not yet present in the DOM. In that case you should use a WebDriverWait to wait until the element is visible, as below:
WebDriverWait wait = new WebDriverWait(driver, 10);
WebElement el = wait.until(ExpectedConditions.visibilityOfElementLocated(By.cssSelector("div#consumerTree div.v-tree-node[id*='gwt-uid']")));
Maybe the element is inside a frame or iframe. If it is, you need to switch to that frame or iframe before finding the element, as below:
WebDriverWait wait = new WebDriverWait(driver, 10);
//Find frame or iframe and switch
wait.until(ExpectedConditions.frameToBeAvailableAndSwitchToIt("your frame id or name"));
//Now find the element
WebElement el = wait.until(ExpectedConditions.visibilityOfElementLocated(By.cssSelector("div#consumerTree div.v-tree-node[id*='gwt-uid']")));
//Once all your stuff done with this frame need to switch back to default
driver.switchTo().defaultContent();

Related

vscode.workspace.openTextDocument fails silently

With the same value for uri, openTextDocument fails to have any discernible effect, yet executeCommand successfully opens the document.
vscode.workspace.openTextDocument(uri);
vscode.commands.executeCommand("vscode.open", uri);
Are there any known problems with vscode.workspace.openTextDocument?
This might simply be a misunderstanding of what openTextDocument() does. It just creates a vscode.TextDocument instance; actually showing it in the UI is a separate step. That's why it lives in the vscode.workspace namespace rather than vscode.window.
vscode.window.showTextDocument is used for actually showing a document:
Show the given document in a text editor. A column can be provided
to control where the editor is being shown. Might change the active editor.
vscode.workspace.openTextDocument(...).then(
document => vscode.window.showTextDocument(document));

XPath started returning None in Scrapy

I'm trying to crawl a site and, to do so, I'm using Scrapy. When making requests to nested pages, the procedure usually gets the information correctly on the first attempts, but on later requests the nodes start to return None. I'm extracting with XPath. Below I'm pasting some lines of the parse function:
(I tried this one with the approach of explicitly comparing the class value)
title = response.xpath('//span[@class="inlineFree"]/text()').extract_first()
(With this one I used the contains function)
view = response.xpath('//span[contains(@class,"count")]/text()').extract_first()
(I've also used this one when I found more suitable)
comments = response.css('div.commentMessage > span::text').extract()
Am I doing something wrong on paths?
Is there any reason for the crawler to stop reading the nodes correctly?
I cannot say what the problem is without the log messages or the spider code, but...
What happens most of the time is that websites do not follow a strict HTML structure. For some items the title may be directly inside the span,
but for the next item it may be under
//span[@class="inlineFree"]/h1/text() or any other tag,
so you should check the HTML of the pages for which the extraction returns None.

Protractor Implementation in Angular2 without using ids

I have an application in Angular 2, and the developers have not been using ids in it. Now I have to implement Protractor tests on the same application. Is there any way to implement Protractor without using "absolute XPath"?
Thanks in advance!
Please find a huge range of locator-possibilities on the official Protractortest API Page
Every element on a page needs to be uniquely identifiable... else the page wouldn't work, no matter which technology. Therefore with the help of any of the above provided locator-possibilities you'll always find the element you're looking for.
And there is never a need for XPath, except for the one case of getting a parent element (though a parentElementArrayFinder has been introduced in the meantime, so not even that exception is valid anymore).
UPDATE
If you could use XPath, you can certainly use CSS locators instead.
Here some examples for locators:
$('div.class#id[anyAttribute="anyValue"] div.child.somewhere-below-div-point-class')
element(by.cssContainingText('div[data-index="2"]', 'select this option'))
Or, as a specific example, the "Learn More" button of the "Tree List" section of https://js.devexpress.com/:
treeListSection = element(by.cssContainingText('div.tab-content h2', 'Tree List')).getDriver();
learnMoreBtn = treeListSection.element(by.cssContainingText('a.tab-button','Learn More'));
learnMoreBtn.click();
Those are just examples, but there is always a way to do it.
If you provide some example-HTML in your Question, I can direct you towards a solution.
UPDATE 2
For getting the parent WebElement, one could use getDriver() as well.

Apache Wicket event on Page "page was mounted on ..."

I have mounted a Page in this form (with one predefined parameter):
mountPage("/lista/${variant}", StronaEntityV2.class);
When the parameter "variant" is given, all is OK. But when the parameter is absent (which is also fine from the application's point of view), the URL is built in the form
wicket/bookmarkable/all....package...StronaEntityV2?8
That is OK too, but I would like to know about that situation. In the simple case (one predefined parameter) checking the parameter works, but in more complicated cases it isn't so simple (and the code must be maintained in several distinct places).
My ideal, imagined solution would be an event like
page.OnPageIsMountedOn(URL to_me)
I will accept a wide range of solutions.
FORMAL: please merge the synonym tags wicket-1.6 & wicket-6, and create a new wicket-7 tag.
Your page is configured to listen to /lista/${variant}.
When you do: setResponsePage(StronaEntityV2.class, paramsWithVariant) then Wicket will use the mount point and produce: /lista/variantValue.
But if you do: setResponsePage(StronaEntityV2.class), i.e. no PageParameters provided, then Wicket will ignore /lista/${variant} (because it doesn't match) and will produce a "default" page url, i.e. /wicket/bookmarkable/com.example.StronaEntityV2.
So the application controls which url should be used.
You can use an optional parameter placeholder: /lista/#{variant}. Note the # instead of $. This way Wicket will produce /lista/ when no variant parameter is provided. In the page constructor you will know that the url is always "/lista", but the parameter may be null, so it is better to use pageParameters.get("variant").toXyz(defaultValue) or .toOptionalXyz().
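A minimal sketch of what reading the optional parameter could look like in the page constructor (the default value and how it is used are just illustrative assumptions; the mount call is the one from this answer):
import org.apache.wicket.markup.html.WebPage;
import org.apache.wicket.request.mapper.parameter.PageParameters;
import org.apache.wicket.util.string.StringValue;

public class StronaEntityV2 extends WebPage {

    public StronaEntityV2(PageParameters parameters) {
        super(parameters);

        // Mounted with: mountPage("/lista/#{variant}", StronaEntityV2.class);
        // The page is then always reachable under /lista, but the "variant"
        // segment may be missing, so read it with a fallback value.
        StringValue variant = parameters.get("variant");
        String variantValue = variant.toString("defaultVariant"); // assumed default
        // ... build the page based on variantValue ...
    }
}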

How do I delay selecting from a dropdown until it's populated?

I'm using Watir WebDriver. I'm new to Ruby.
The following dropdown list is always present. Selecting from it fails unless I precede it with sleep(1). The developer said that the dropdown is not populated until the previous controls are set.
Which of the Wait commands do I need for this? I think in Selenium I waited until the hidden contents of the list contained the value that I wanted, then I selected that value.
def enterCompany(company)
@browser.select_list(:id, "ddlCompanyName").select(company)
end
A question was just asked of me offline on this one, so I wanted to provide an updated answer for the latest Watir versions that avoids the deprecated #when_present method:
browser.select_list(id: 'ddlCompanyName').wait_until { |el| el.include? company }.select(company)
You can use the when_present method to wait until the option is present before selecting it. Basically, Watir will wait up to 30 seconds for the option to appear. If it appears sooner than 30 seconds, it will proceed with the action (i.e. select). Otherwise, a timeout exception is thrown.
@browser.select_list(:id, "ddlCompanyName").option(:text => company).when_present.select
Note that the above assumes that company is the text of the option.
Another option is to wait for anything to appear in the dropdown.
@browser.wait_until { @browser.select_list(:id, "ddlCompanyName").options.length > 0 }