This may sound like a pretty basic question, but I could not find a proper answer to it: how does Selenium's element location actually work? For example, when doing findElement by ID, does Selenium (some engine or implementation) traverse the entire DOM? I assume it does.
In that case, how is findElement by ID faster than findElement by XPath? If I provide an XPath like //input[@id=''], then Selenium (some engine or implementation) would not traverse the entire DOM but would search input elements directly, which should result in a faster search.
The implementations of XPath, ID, and the rest (all the Bys) differ depending on which browser you are running against. Certain browsers will run some Bys faster than others because they have native support for searching that way, while others require Selenium to fudge the implementation so it works. An example of this is XPath, which, if I remember correctly, is not implemented natively in IE, so using XPath there is significantly slower than using ID.
WebDriver closely models the underlying JavaScript implementation of the browser.
According to the W3C WebDriver draft, for example, a locator by ID should be functionally equivalent to the JavaScript document.getElementById() call.
Implementations of WebDriver for different browsers should use the browser's native support for location strategies such as XPath. If native support is not available, a pure JavaScript implementation may be used. Because of the varied range of browsers and their native implementations, the performance of each location strategy will differ across each combination of browser and locator.
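As a concrete illustration, here is a minimal Java sketch of the two locator strategies discussed above (the URL and the "username" id are placeholders, not from any real site):

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.chrome.ChromeDriver;

public class LocatorComparison {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        try {
            driver.get("https://example.com/login"); // placeholder URL

            // By.id maps to the driver's native lookup, roughly the
            // equivalent of document.getElementById("username") in the page.
            WebElement byId = driver.findElement(By.id("username"));

            // The same element via XPath; whether this is as fast depends on
            // whether the browser exposes a native XPath engine or the driver
            // has to fall back to a JavaScript implementation.
            WebElement byXpath = driver.findElement(By.xpath("//input[@id='username']"));

            System.out.println(byId.equals(byXpath)); // same element, two strategies
        } finally {
            driver.quit();
        }
    }
}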
References and further reading:
WebDriver - W3C Editor's Draft - Element Location Strategies
Selenium Webdriver Architecture - Simon Stewart
I'm writing a Scala.js frontend framework whose key feature is server-side rendering. The idea is that there are components that manipulate the DOM with document.createElement, element.appendChild, and the like. On the server I'd subclass HTMLDocument, Element, and the others, overriding their methods with a server-side DOM implementation that can be converted to a plain HTML string. So I added the scalajs-dom_sjs dependency to the server module and tried to do that. But HTMLDocument, Element, and most likely the other classes have calls to js.native inside their constructors, which throw exceptions saying "use the JVM version of the library", which obviously doesn't exist. I could go the other way and implement my own DOM library, but that is twice as much work, since I'd have to implement it on both server and client, whereas with the first approach I'd implement it only once, on the server.
So my question is: why is using the Scala.js versions of libraries on the server forbidden so strictly, and is there a workaround?
The reason this is forbidden is that, as you noticed, the DOM API is full of js.natives. These classes are not implemented in Scala. They are part of the browser's DOM API, which does not have an equivalent on the JVM. You cannot use the types defined in scalajs-dom on the JVM and expect them to do anything useful. Where would the implementations of the methods come from?
You will indeed need to implement your own DOM-like library for the JVM side. If you do not want to "reimplement" it on the client side, you could reuse the org.scalajs.dom namespace for your classes, and give them exactly the same structure and types as in scalajs-dom (except they won't extend js.Any, obviously).
Note that this is semantically dubious. Types extending js.Any do not have the same semantics as normal Scala types. You might be able to come up with some "compatible enough" API for normal use, but it's still dubious.
Usually, to enable so-called isomorphic DOM manipulation on server and client, one writes a DOM-agnostic, cross-compiling library. On the client side, it offers a "rendering" function that produces actual DOM nodes; on the server side, it renders to strings that are sent to the client in the HTML.
This is precisely what Scalatags does.
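To make the pattern concrete, here is a rough sketch of the idea in plain Java (in the Scala.js setting this would be shared, cross-compiled code; all names here are made up for illustration). The server serializes the tree to a string, while a client-side renderer would walk the same tree calling document.createElement:

import java.util.List;

// A DOM-agnostic node tree: components build this structure instead of
// touching the real DOM, so the same component code can back both renderers.
final class VNode {
    final String tag;           // element name, or null for a text node
    final String text;          // text content, non-null only for text nodes
    final List<VNode> children;

    VNode(String tag, List<VNode> children) {
        this.tag = tag;
        this.text = null;
        this.children = children;
    }

    VNode(String text) {
        this.tag = null;
        this.text = text;
        this.children = List.of();
    }

    // Server-side rendering: serialize the tree to an HTML string.
    String renderToHtml() {
        if (text != null) {
            return text; // a real implementation would escape HTML entities
        }
        StringBuilder sb = new StringBuilder().append('<').append(tag).append('>');
        for (VNode child : children) {
            sb.append(child.renderToHtml());
        }
        return sb.append("</").append(tag).append('>').toString();
    }
}

For example, new VNode("div", List.of(new VNode("hello"))).renderToHtml() yields <div>hello</div>; a client-side counterpart would map the same tree onto real DOM nodes.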
I am working on a GWT app that needs to serve a different layout to mobile device users. I can easily determine if a user is using a mobile browser; however, I'm not sure about the best pattern for handling them.
I am currently using the MVP pattern - would it be best to simply pass a browser-specific view to the Presenter or is there a more appropriate method?
You could set up GWT to detect the web browser used, as described in this question. Then, via Deferred Binding, let the compiler "slip" the correct view into place for the, say, mobilesafari user agent. That way, you won't have to litter your Java code with browser detection, etc.
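A minimal sketch of what that looks like on the Java side (AppView, its implementations, and the greeting method are hypothetical; the actual substitution rule lives in the module's .gwt.xml as a <replace-with> element keyed on a user-agent property, possibly a custom one for mobile detection):

import com.google.gwt.core.client.EntryPoint;
import com.google.gwt.core.client.GWT;
import com.google.gwt.user.client.ui.IsWidget;
import com.google.gwt.user.client.ui.RootPanel;

public class AppEntryPoint implements EntryPoint {

    // View contract shared by the desktop and mobile implementations.
    interface AppView extends IsWidget {
        void setGreeting(String text);
    }

    @Override
    public void onModuleLoad() {
        // Deferred binding: the compiler substitutes the desktop or mobile
        // view here according to the <replace-with> rules, so the presenter
        // logic below never has to sniff the browser itself.
        AppView view = GWT.create(AppView.class);
        view.setGreeting("Hello from the same presenter code");
        RootPanel.get().add(view.asWidget());
    }
}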
The way I've done it is to have different GWT modules (each with its own entry point, Gin modules, even different CssResources). Then, on the myapp.html page, you check which browser is requesting the content, and based on that (via JavaScript checks) the appropriate module
<script src="myapp/myapp.nocache.js"/>
or
<script src="mymobileapp/mymobileapp.nocache.js"/>
is loaded.
If you are working with GIN and an MVP framework (gwt-platform is my platform of choice) you can then reuse the code that was already written for the presenters and only implement different views.
I am evaluating whether there is a performance difference between calls made using GWT-RPC and plain HTTP calls.
My application services are hosted as Java servlets, and I am currently using HTTPProxy connections to fetch data from them. I am looking to convert them to GWT-RPC calls if that brings a performance improvement.
I would like to know about the pros and cons of each...
Also, any suggestions on tools to measure the performance of async calls...
A good article on the various server communication strategies that can be employed with GWT.
GWT-RPC is generally preferred when the backend is also written in Java because it means not having to encode and decode the object at each end -- you can just transmit a regular Java object to the client, and use it there.
JSON (using RequestBuilder) is generally used when the backend is written in some other language, and requires the server to JSON-encode the response object and the client to JSON-decode it into a JavaScriptObject for use in the GWT code.
If I had to guess I'd say that GWT-RPC also results in smaller transport objects because the GWT team optimizes for this case, but either will work, and JSON can still be pretty small. It just comes down to a matter of developer convenience in most cases.
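To make the contrast concrete, here is a minimal sketch of both styles as GWT client code; ProductService, Product, and the "products" URL are made-up names, not a real API:

import com.google.gwt.core.client.GWT;
import com.google.gwt.http.client.Request;
import com.google.gwt.http.client.RequestBuilder;
import com.google.gwt.http.client.RequestCallback;
import com.google.gwt.http.client.RequestException;
import com.google.gwt.http.client.Response;
import com.google.gwt.json.client.JSONParser;
import com.google.gwt.json.client.JSONValue;
import com.google.gwt.user.client.rpc.RemoteService;
import com.google.gwt.user.client.rpc.RemoteServiceRelativePath;

// Hypothetical transfer object; GWT-RPC requires it to be serializable.
class Product implements java.io.Serializable {
    int id;
    String name;
}

// GWT-RPC style: declare the service once and ship the Product POJO as-is.
// GWT generates the async counterpart with
// fetchProduct(int id, AsyncCallback<Product> callback) for the client.
@RemoteServiceRelativePath("products")
interface ProductService extends RemoteService {
    Product fetchProduct(int id);
}

// JSON-over-HTTP style: build the request by hand and decode the body.
class JsonProductClient {
    void fetchProduct(int id) throws RequestException {
        String url = GWT.getHostPageBaseURL() + "products/" + id;
        new RequestBuilder(RequestBuilder.GET, url).sendRequest(null, new RequestCallback() {
            @Override
            public void onResponseReceived(Request request, Response response) {
                JSONValue json = JSONParser.parseStrict(response.getText());
                // ...map the JSON onto an overlay type or read fields directly
            }

            @Override
            public void onError(Request request, Throwable exception) {
                // handle transport failure
            }
        });
    }
}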
As for tools to measure request time, you can either use Chrome/Webkit's developer tools, or Firefox's Firebug extension, or measure request time in your app and send that metrics data back to your server in a deferred request for collection and analysis.
I wrote that article mentioned in the question (thanks for the link!).
As always, the answer is 'it depends'. I've used both GWT-RPC and JSON.
As outlined above, GWT-RPC allows for some serious productivity in shipping Java objects (with some limits) over the wire. Some logic can be shared, and GWT takes care of marshalling/unmarshalling your objects.
JSON allows for cross-domain access and consumption by other, non-GWT clients. You can get by with overlay types, but no behavior (like validation) can be shared. JSON can also be easily compressed and cached, unlike GWT-RPC (last time I looked).
Since we have no idea what the payload is, performance recommendations are hard to give. I'd recommend (again, as suggested above) testing it yourself.
Just an addition to the other answers, there's one point to consider which could influence your decision towards JSON, even if you're using Java on the back-end:
Maybe sometime in the future, you want to allow non-GWT clients to talk to your server. Many modern sites offer some kind of API access, and if you're using JSON, you basically already have a comparatively open API.
In general I agree with Jason - if your server side uses Java, go with GWT-RPC. You'll be able to reuse the POJOs, validation logic, etc. RPC also tends to "play" better with MVP and code-splitting.
However, if your server side uses anything else, use JSON - but don't fret: with JavaScript Overlay Types, using JSON is a breeze. You won't be able to reuse the code from the client side on the server, though (YMMV).
From a performance point of view, I'd say that JSON has the edge here. Modern browsers have some seriously good methods for fast encoding/decoding of JSON. I'm not sure what GWT-RPC does behind the scenes, but I doubt it can beat JSON when it comes to speed. As for the payload, that depends on the developer (the names of the objects in JSON, etc.), but I'd say that in general JSON is also (marginally) smaller. Enable compression on your server (for example, mod_deflate on the Apache HTTP Server) to squeeze the bits even more ;)
How would I model a system that needs to provide content in a format consumable by iPhone, Android, or a web browser (or whatever)? All a new consumer would have to do is build a UI with rules on how to handle the data. I'm thinking something RESTful returning JSON.
I'm really looking for suggestions on the kinds of things I'd need to learn in order to implement a system on this scale.
As an ASP.NET MVC developer, would that be the best framework/architecture to go with?
Thanks
I think you're on the right track with REST returning JSON. This is a format that's consumable by pretty much any language on any platform.
As an ASP.NET MVC developer, you should have no problems making a web service that's RESTful and passes data via JSON.
iPhone, Android, and modern web browsers such as Firefox, Opera, Safari, and Chrome have excellent JavaScript implementations, splendid CSS, and reasonable subsets of HTML5 -- but you can't rely on any of that if you also want to support Internet Explorer or other old browsers. Fortunately, JavaScript frameworks such as jQuery and Dojo can compensate in good part for such issues (I personally prefer Dojo, but jQuery's more popular, and the choice between two such good frameworks is mostly a matter of taste -- plus, there are advantages to going with the popular choice, such as probably getting better support on SO;-).
For REST returning JSON, just about any decent server-side arrangement will be fine, so you may as well stick with what you know best, in your case ASP.NET MVC (just as I'd stick with Python and Werkzeug on App Engine, and people with other server-side preferences would stick with theirs -- it isn't going to matter much;-). Client-side, pick one of the two most popular frameworks, jQuery or Dojo, and go with it -- both have good books if that's your favorite style of study, but also good online resources. (Less popular frameworks surely have a lot going for them as well, but there are risks in straying far from popular choices;-).
As a general/philosophical approach, Thin Server Architecture is well worth a look (except for one detail: they used to recommend XML rather than JSON -- dunno if they've seen the light since, but JSON's clearly the right approach so ignore any suggestion to the contrary;-).
I am working on a project now that has to do this very thing. While searching the net I found Aleem Bawany's article on how it can be done in ASP.NET MVC. I really like the fact that it uses an action filter to handle the response. I modified the code in his article to look at the extension of the request instead of the content type.
For example, /products/1.xml would return the XML representation of the product whose id is 1 from the database.
Likewise, /products/1.json would return the JSON representation of the product whose id is 1 from the database.
And /products/1 would return the HTML representation of the product whose id is 1 from the database.
The nice thing about returning data this way is that it lets the consumer decide how they want to consume the data.
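The linked code is ASP.NET MVC, but the dispatch idea is framework-agnostic; here is a rough sketch of the same extension-based routing as a Java servlet (ProductServlet and its placeholder renderers are hypothetical, not from the article):

import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Serves /products/1, /products/1.json and /products/1.xml, letting the
// consumer pick the representation via the URL extension.
public class ProductServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        String path = req.getPathInfo(); // e.g. "/1.json"
        String format = "html";
        if (path.endsWith(".json")) {
            format = "json";
            path = path.substring(0, path.length() - ".json".length());
        } else if (path.endsWith(".xml")) {
            format = "xml";
            path = path.substring(0, path.length() - ".xml".length());
        }
        int id = Integer.parseInt(path.substring(1));

        if ("json".equals(format)) {
            resp.setContentType("application/json");
            resp.getWriter().write(productAsJson(id));
        } else if ("xml".equals(format)) {
            resp.setContentType("application/xml");
            resp.getWriter().write(productAsXml(id));
        } else {
            resp.setContentType("text/html");
            resp.getWriter().write(productAsHtml(id));
        }
    }

    // Placeholder renderers; a real app would load the product and serialize it.
    private String productAsJson(int id) { return "{\"id\":" + id + "}"; }
    private String productAsXml(int id)  { return "<product id=\"" + id + "\"/>"; }
    private String productAsHtml(int id) { return "<h1>Product " + id + "</h1>"; }
}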
This is a multi-part question. I just watched a very interesting presentation on YQL by its lead developer (a graduate of my MS program). While it was very compelling, and I am looking forward to trying it out, I am wondering if anyone knows of alternative frameworks for querying multiple web service APIs to make them appear seamless, which seems to be the purpose of YQL.
Yahoo's strategy has been to create XML schema definitions that bind a given web service's parameters to their YQL Open Table query parameters, which I think is very clever. Is there any tool that attempts (perhaps I am naive here) to automate the discovery of parameters in, say, a REST API? I am aware that with SOAP APIs the published WSDL makes automation easier, but is there still no way to do this with REST? Is anyone trying?
Yes, people are trying to produce description languages for REST. The most popular effort is WADL. There are lots of questions about WADL here on SO. Is it a good idea? In my opinion, no.
REST does not need a discovery model beyond what it already has with hypermedia, because it is trying to solve a problem at a different architectural layer than web services. Web services deliver data to an application's business logic/domain model. REST is about delivering content and behaviour to a presentation layer.
How about an analogy? Think of the difference between an object and a struct in C++. A struct is just simple data that some client process is going to manipulate. That's what a web service does: it returns a chunk of data, a struct. Sure, maybe it did a bunch of server-side processing to produce the result, but the end result is a lump of data. A REST interface delivers an object, i.e. it contains both data and the methods that can be used to manipulate that object. By definition, if you understand the uniform interface and you understand the returned media type, you already know what you can do with the response. Discovery mechanisms are redundant.
If you find this hard to believe, then think about the web. How does a web browser discover web pages? The web has no formalized discovery mechanism, and yet there is a world of information out there that we can discover with a web browser.
There is this little website http://zachgrav.es/yql/tablesaw/ which indeed auto-discovers parameters in a REST api and turns it into a YQL compatible table.
There are two ways to find information. Either you use a 100% unambiguous language or you use a natural language. Anything in between like YQL is doomed to fail because it delivers neither and works well only with the examples its authors tout.
I blogged about this at http://zscraper.wordpress.com/2012/05/30/enough-with-crawling-2. My personal stance is that you'll always get the most accurate results if you do your homework first, i.e. study the target domain and figure out how to query it unambiguously.
To answer your question and give you an alternative, try Bobik. This is a cloud-backed scraping service that you control via a REST API. Compose your "queries" in traditional syntax (Bobik supports JavaScript, jQuery, XPath, and CSS) and call Bobik to run them from any client-side environment (web pages, mobile apps, or your server).
Hope this helps.