Two questions about client-side and server-side scripting languages

These are questions about client-side and server-side scripting languages, and I need to know the difference between them.
1 – What are the client-side scripting languages?
A. HTML5
B. CSS3
C. jQuery
D. ASP
E. Ajax
F. PHP
I think the answer is A, B, C, E.
2 - What are the server-side scripting languages?
A. JavaScript
B. jQuery
C. Ajax
D. PHP
E. XHTML
F. XML
I think the answer is D.
Are my answers correct?

Client-side scripting languages are executed on the client side, in the browser. So for the first question it would be A, B, C, E. I'm not sure about A or B, though, because technically HTML and CSS aren't scripting languages: HTML is a markup language and CSS is a style sheet language. HTML and CSS are used to render the page, jQuery is a JavaScript library, JavaScript is used to make pages interactive, and Ajax is used on the client side to request pages in the background.
If we want to be really technical, none of the answers for #1 are correct, as jQuery is a library written in JavaScript (which is a language) and Ajax is a JavaScript technique rather than a language.
Server-side scripting languages are executed on the server. Your answer for the second question is correct: PHP is used on the server to create dynamic pages.
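To make the "runs in the browser" point concrete, here is a minimal sketch (the endpoint name and element id are made up) of the kind of thing Ajax is used for: JavaScript in the page asks the server for data in the background, while the PHP script it calls executes on the server.

// Client side: runs in the browser, fetches data in the background (Ajax)
// and updates the page without a full reload. "/user-count.php" is hypothetical;
// that script would execute on the server and return JSON.
async function refreshUserCount(): Promise<void> {
  const response = await fetch("/user-count.php");
  const data: { count: number } = await response.json();
  document.getElementById("user-count")!.textContent = String(data.count);
}

refreshUserCount();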

Related

Consuming a HATEOAS RESTful web service with JavaScript (framework)

Is it possible to consume a HATEOAS-style RESTful web service via (a) JavaScript (framework, e.g. AngularJS)? I imagine that the client needs to implement quite a lot of logic to reach the actual endpoint. Any feedback would be very much appreciated. Thanks!
At least part of the issue here is that your API needs to return a media type that supports structured linking (which the usual "REST" API defaults, application/json and application/xml, do not). To get this support, check out the HAL or JSONAPI projects.
With a structured linking definition, it becomes much easier to consume. HAL has several libraries to work with it, including a JavaScript library:
https://github.com/mikekelly/backbone.hal
For an interesting client, check out the HAL Talk demo.
Yes, it is possible. JavaScript is just another user agent.
Yes, there is work to do. No, I am not aware of any frameworks that do this for you. I have written tooling to support hypermedia-driven applications on the desktop, and I don't consider it a significant amount of work to produce the infrastructure needed to support hypermedia-based applications.
The challenge is less about the tooling and more about the fact that it is a very different approach to building applications. It takes some getting used to.
On a related note, there is some ongoing work in the browser/JS space that will make doing hypermedia-driven applications on the client much easier. See NavigatingController.
Currently a JS user agent can only manage JavaScript links. With NavigatingController it becomes possible to intercept HTML links as well, making JS-driven applications much more seamless in the browser.
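As a rough illustration of the client side of this, here is a minimal sketch (not backbone.hal itself; the entry-point URL and the "orders" relation are invented) of following a HAL-style _links entry discovered at runtime rather than hard-coding the endpoint:

// Fetch the API entry point, then follow a link relation advertised in the
// HAL response. The URL and the "orders" relation are hypothetical.
interface HalResource {
  _links: { [rel: string]: { href: string } };
  [key: string]: unknown;
}

async function followRel(entryPoint: string, rel: string): Promise<HalResource> {
  const headers = { Accept: "application/hal+json" };
  const root: HalResource = await (await fetch(entryPoint, { headers })).json();
  const link = root._links[rel];
  if (!link) throw new Error(`Link relation "${rel}" not advertised by the server`);
  return (await fetch(link.href, { headers })).json();
}

// The client only needs the entry point and the relation name.
followRel("http://localhost:8080/api/", "orders").then(orders => console.log(orders));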

GWT and Search Engines

Are GWT apps indexed by search engines? If yes, how do I accomplish that?
Thanks.
GWT apps, and Ajax apps more generally, can't be fully indexed by search engines... yet. But work is being done to make Ajax applications crawlable. The most common alternative used by developers to get their GWT app indexed is to publish an HTML version.
Search engines don't favor HTML that is generated on the client side by JavaScript (Ajax); they prefer static HTML generated on the server. That is why Ajax applications are difficult to index: their content is produced dynamically on the client.
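For reference, the crawlable-Ajax scheme Google proposed works roughly as in this simplified sketch (an illustration of the URL convention only, not a full implementation): hashbang URLs are rewritten to an _escaped_fragment_ query, for which the server is expected to return a static HTML snapshot.

// Sketch of the "crawlable Ajax" convention: a crawler rewrites a hashbang
// URL into an _escaped_fragment_ query and expects static HTML at that URL.
function toCrawlerUrl(appUrl: string): string {
  const [base, fragment = ""] = appUrl.split("#!");
  const sep = base.includes("?") ? "&" : "?";
  return `${base}${sep}_escaped_fragment_=${encodeURIComponent(fragment)}`;
}

// "http://example.com/app#!stock=GOOG"
//   -> "http://example.com/app?_escaped_fragment_=stock%3DGOOG"
console.log(toCrawlerUrl("http://example.com/app#!stock=GOOG"));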

Comparing GWT and TurboGears

Anyone know of any tutorials implemented across multiple web application frameworks?
For example, I'm starting to implement GWT's StockWatcher tutorial in TurboGears 2 to see how difficult it will be.
Likewise, I'll be looking for a TurboGears 2 tutorial to implement in GWT.
But I hate to reinvent the wheel, so I was wondering if anyone is familiar with such projects and/or would be interested in helping me work on one.
Thanks,
--Spencer
While it is possible to combine the two frameworks, I hope to convince you not to do so.
Most web frameworks, including TurboGears, have server-side page-flow management. A page is served to the user by generating HTML; the user interacts by clicking links or posting a form; the browser sends a fresh request to the server; and finally the server responds with entirely new HTML. You can Ajax-ify the page with a JS library, or the framework may have some support for it, but in general the transition from one view to another is handled on the server side.
GWT is totally different. There is only a single HTML page in the system. Once this page is downloaded, everything happens in the browser through JavaScript. When the user clicks a link, it's essentially just a JavaScript function call. History management is done through fragment URLs (the portion after the #).
These two philosophies are poles apart. So far apart that I daresay GWT doesn't work well with any server-side web technology. See this discussion on GWT vis-à-vis jBPM/Struts/Spring Web Flow, and this discussion on GWT vs. jQuery.
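For readers unfamiliar with the fragment-URL technique, here is a minimal hand-rolled sketch of the idea (GWT's History class handles this for you; the element id and view names here are invented): the part after "#" selects a view, so back/forward work without any request going to the server.

// Hash-based history: changing the fragment switches views entirely in the
// browser; no request is sent to the server. Ids and view names are hypothetical.
function renderView(): void {
  const view = window.location.hash.replace("#", "") || "home";
  document.getElementById("content")!.textContent = `Now showing: ${view}`;
}

window.addEventListener("hashchange", renderView); // back/forward buttons
renderView();                                      // initial render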

Looking for Suggestion on Multi-Consumer Service Development

How would I model a system that needs to provide content in a format consumable by iPhone, Android, or a web browser (or whatever)? All a new consumer would have to do is build a UI with rules on how to handle the data. I'm thinking of something RESTful returning JSON.
I'm really looking for suggestions on the kinds of things I'd need to learn in order to implement a system on this scale.
As an ASP.NET MVC developer, would that be the best framework/architecture to go with?
Thanks
I think you're on the right track with REST returning JSON. This is a format that's consumable by pretty much any language on any platform.
As an ASP.NET MVC developer, you should have no problems making a web service that's RESTful and passes data via JSON.
iPhone, Android, and modern web browsers such as Firefox, Opera, Safari, and Chrome have excellent JavaScript implementations, splendid CSS, and reasonable subsets of HTML5 -- but you can't rely on either fact if you also want to support Internet Explorer or other old browsers. Fortunately, JavaScript frameworks such as jQuery and Dojo can compensate in good part for such issues (I personally prefer Dojo, but jQuery's more popular, and the choice between two such good frameworks is mostly a matter of taste -- plus, there are advantages to going with the popular choice, such as probably getting better support on SO;-).
For REST returning JSON, just about any decent server-side arrangement will be fine, so you may as well stick with what you know best, in your case ASP.NET MVC (just as I'd stick with Python and Werkzeug on App Engine, and people with other server-side preferences would stick with theirs -- it ain't gonna matter much;-). Client-side, pick one of the two most popular frameworks, jQuery and Dojo, and go with it -- both have good books if that's your favorite style of study, but also good online resources. (Less-popular frameworks surely have a lot going for them as well, but there are risks in getting far away from popular choices;-).
As a general/philosophical approach, Thin Server Architecture is well worth a look (except for one detail: they used to recommend XML rather than JSON -- dunno if they've seen the light since, but JSON's clearly the right approach, so ignore any suggestion to the contrary;-).
I am working on a project right now that has to do this very thing. While searching the net I found Aleem Bawany's article on how it could be done in ASP.NET MVC. I really like the fact that it uses an action filter to handle the response. I modified the code in his article to look at the extension of the request instead of the content type.
For example, /products/1.xml returns the XML representation of the product whose id is 1 from the database, /products/1.json returns the JSON representation, and /products/1 returns the HTML representation.
The nice thing about returning data this way is that it lets the consumer decide how they want to consume the data.
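From the consumer's point of view, each representation is just a different URL; a minimal sketch (the host and product id are made up):

// The same resource in three representations; the server chooses the format
// from the extension. Host and product id are hypothetical.
async function loadProduct(): Promise<void> {
  const base = "http://example.com/products/1";
  const asJson = await (await fetch(`${base}.json`)).json(); // e.g. { id: 1, name: "..." }
  const asXml  = await (await fetch(`${base}.xml`)).text();  // raw XML string
  const asHtml = await (await fetch(base)).text();           // rendered HTML page
  console.log(asJson, asXml.length, asHtml.length);
}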

REST Client Implementation Embracing HATEOAS Constraint?

Does anybody know of an implementation of a REST client that embraces the constraint of Hypermedia as the Engine of Application State (HATEOAS)?
The Sun Cloud API seems to be a good candidate, judging from the way it's documented and a statement by the author to the effect that Ruby, Java, and Python implementations were in the works. But so far I've found no trace of the code.
I'm looking for anything - even a partial implementation would be helpful.
The very first thing you should look at is the common web browser. It is the standard example of a client that embraces HATEOAS (at least to some degree).
This is how hypermedia works. It's so simple that it's almost painful:
you point your browser to http://pigs-are-cool.org/
the browser loads the HTML page, images, CSS and so on.
At this point, the application (your browsing experience) is at a specific URI.
The browser is showing the content of that URI
you see a link in the application
you click the link
the browser follows the link
at this point, the application is at a different URI
The browser is showing the content of the new URI
Now for a short explanation of how the two terms relate to the web browsing experience:
Hypermedia = HTML pages with the embedded links
Application state = What you're seeing in the browser at any point in time.
So HATEOAS actually describes what happens in a web browser when you go from web page to web page:
HTML pages with embedded links drive what you see in the browser at any point in time
The term HATEOAS is just an abstraction of this browsing experience.
Other examples of RESTful client applications include:
RSS and feed readers. They traverse links given to them by users.
Most AtomPub blog clients. They need merely a URI to a services document, and from there they find out where to upload images and blog posts, how to search, and so on.
Probably Google Gadgets (and similar), but they're merely browsers in a different skin.
Web crawlers are also RESTful clients, but they're a niche market.
Some characteristics of RESTful client software:
The client works with any server, provided that it is primed with some URI and the server responds with an expected result (e.g., for an Atom blog client, an Atom services document).
The client knows nothing about how the server designs its URIs other than what it can find out at runtime
The client knows enough media types and link relations to understand what the server is saying (e.g. Atom or RSS)
The client uses embedded links to find other resources; some automatically (like <img src=>), some manually (like <a href=>).
Very often they are driven by a user and can correctly be termed "user agents", except for, say, GoogleBot.
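As a small sketch of those characteristics (the feed URL is invented, and this is only an illustration, not a full feed reader): the client below knows the Atom media type and the "next" link relation, and nothing about how the server structures its URIs.

// Follow the "next" link relation across Atom feed pages, collecting entry titles.
// Only the entry-point URL is known up front; everything else is discovered.
async function collectEntryTitles(feedUrl: string): Promise<string[]> {
  const titles: string[] = [];
  let url: string | null = feedUrl;
  while (url) {
    const xml = await (await fetch(url, { headers: { Accept: "application/atom+xml" } })).text();
    const doc = new DOMParser().parseFromString(xml, "application/xml");
    doc.querySelectorAll("entry > title").forEach(t => titles.push(t.textContent ?? ""));
    const next = doc.querySelector('feed > link[rel="next"]');
    url = next?.getAttribute("href") ?? null;
  }
  return titles;
}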
Restfulie is a Ruby, Java, and C# framework which aims to enable building clients and servers which employ HATEOAS. I haven't used it, but it does look interesting.
Here's some example code from their java project:
Order order = new Order();
// place the order
order = service("http://www.caelum.com.br/order").post(order);
// cancels it
resource(order).getTransition("cancel").execute();
Again, I'm not sure exactly what this does, or how well it works in practice, but it does seem intriguing.
The problem with REST over HTTP and HATEOAS is that there is no common approach to specifying links, so it is hard to follow them: their structure might change from one service provider to another. Some use <link href="..." />, others use a proprietary structure for links, e.g. <book href="..." />. It is not like HTML or Atom, where links are part of a defined standard.
A client can't know what a link is in your representation if it doesn't know your media type, unless there is a standard or conventional representation of a link.
The HATEOAS design principle (REST is, after all, a set of design principles) means that each resource should have at most a single fixed URL.
Everything else related should be discoverable dynamically from that URL through "hypermedia" links.
I just started a Wikipedia stub here.
In the meantime, there is the Spring HATEOAS project. It also has a client implementation:
https://docs.spring.io/spring-hateoas/docs/current/reference/html/#client
// Traverson starts at the API entry point and follows the named link
// relations ("movies", then "movie", then "actor") discovered at runtime.
Map<String, Object> parameters = new HashMap<>();
parameters.put("user", 27);

Traverson traverson = new Traverson(URI.create("http://localhost:8080/api/"), MediaTypes.HAL_JSON);

String name = traverson
        .follow("movies", "movie", "actor").withTemplateParameters(parameters)
        .toObject("$.name"); // extract the actor's name from the final HAL resource via JSONPath