I am able to achieve friendly URLs in Liferay 6.2 CE as follows: localhost:8080/pagename/mapping/dynamic-id
But I want it as below:
localhost:8080/pagename/dynamic-id
OR
localhost:8080/mapping/dynamic-id
Is there a way to do this?
There is no official or simple way to achieve what you want, and I wouldn't recommend it for common requests. The portal architecture defines URLs this way because each request must clearly carry information such as:
page destination (which one?)
portlet (which one?)
portlet instance id (when instanceable=true)
some specified portlet context information
As you can see, removing any of this information can lead to a situation where the request can be understood in more than one way (which page? which portlet on the page? which instance? which action?)
Unless there is a serious reason, I wouldn't try to fight this, since it can lead to other problems in the future. If there is no other way, provide more information, and maybe we can find a workaround for your case.
I know the question could probably be solved by a different approach, but let me specify in detail what I want to ask and how much I understand about it.
We have two MVC approaches used in ATG (and many other frameworks): pull-based and push-based.
As I understand it, form handlers and droplets both play the part of the controller for different needs, repositories are our model, and JSPs provide the views.
If I am right up to this point, then what purpose does the servlet pipeline serve? How does it fit into this picture of MVC?
If possible, please explain with the help of a flow diagram from request to response (end to end). Thanks a lot in advance; I could not find this kind of explanation anywhere.
The first thing to remember with ATG is that it is an old platform. So when trying to understand the different mechanisms through an MVC lens, remember that nothing was designed with MVC in mind; the platform predates widespread knowledge and acceptance of the MVC pattern in web development.
Droplets are a generalized mechanism for invoking Java code from a JSP (and previously, JHTML) template. The closest analogue in J2EE-land would be tag libraries. So you can use them to accomplish many different tasks. You can also use them as a poor man's MVC controller. This is done by coding a custom droplet class dedicated to handling the business logic of your page, and setting relevant page state as request parameters. Then you invoke the droplet as the first step in your JSP page. This is not very different from J2EE model 2 with the odd quirk that first your compiled JSP begins executing, then invokes your "controller" code and then resumes with processing the "view."
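To make the "poor man's controller" idea concrete, here is a minimal sketch, assuming a hypothetical ProductLookupDroplet component; the class name, parameter names and lookup logic are illustrative, but DynamoServlet, setParameter() and serviceLocalParameter() are standard ATG API:

    import java.io.IOException;
    import javax.servlet.ServletException;
    import atg.servlet.DynamoHttpServletRequest;
    import atg.servlet.DynamoHttpServletResponse;
    import atg.servlet.DynamoServlet;

    // Hypothetical droplet acting as a page "controller": it resolves page
    // state up front and exposes it to the JSP as request parameters.
    public class ProductLookupDroplet extends DynamoServlet {
        @Override
        public void service(DynamoHttpServletRequest request,
                            DynamoHttpServletResponse response)
                throws ServletException, IOException {
            String productId = request.getParameter("productId");
            Object product = lookupProduct(productId); // business logic goes here
            request.setParameter("product", product);
            // Render the droplet's "output" open parameter with the state set above
            request.serviceLocalParameter("output", request, response);
        }

        private Object lookupProduct(String id) {
            return null; // placeholder for repository access
        }
    }

The JSP would then invoke it as its first step, e.g. <dsp:droplet name="/mysite/droplet/ProductLookupDroplet">, rendering the rest of the page inside its output oparam.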
Form handlers, as the name indicates, are designed for processing form submissions. These are executed by a link in the servlet pipeline (DAFDropletEventServlet), before the page which was posted to begins rendering. Typical usage with form handlers is that they will send a redirect after executing, and cancel rendering of the page, which will abort any further processing by the servlet pipeline. Form handlers are the closest thing to a "controller" ATG provides. But they are awful for handling GET requests and result in some very unfortunate URLs when used for this purpose.
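As a sketch of that redirect-and-cancel flow (the handler name, form logic and success URL are made up; GenericFormHandler, getFormError() and sendLocalRedirect() are standard ATG API):

    import java.io.IOException;
    import javax.servlet.ServletException;
    import atg.droplet.GenericFormHandler;
    import atg.servlet.DynamoHttpServletRequest;
    import atg.servlet.DynamoHttpServletResponse;

    // Hypothetical handler for a contact-form POST.
    public class ContactFormHandler extends GenericFormHandler {
        private String successURL = "/contact/thanks.jsp"; // illustrative target

        public boolean handleSubmit(DynamoHttpServletRequest request,
                                    DynamoHttpServletResponse response)
                throws ServletException, IOException {
            if (getFormError()) {
                // Returning true resumes rendering of the posted page,
                // so validation errors can be displayed in place.
                return true;
            }
            // ... process the submission here ...
            // Redirect-after-post, then stop the servlet pipeline from
            // rendering the page that was posted to.
            response.sendLocalRedirect(successURL, request);
            return false;
        }
    }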
Why did they create two different mechanisms? Why have they not subsequently introduced an updated MVC model as part of the platform? These questions I cannot answer.
ATG, at the page rendering level, is not MVC. Don't look at it as MVC. ATG, at its core, is an extension of the basic Servlets API.
Forms and form processing, with the successUrl and errorUrl, are kind of MVC. Droplets certainly are not.
It is perfectly acceptable in an ATG application to fetch data as the page is rendering (i.e. via droplets).
I need to programmatically interact with a WebObjects website and extract data from the responses. The particular WebObjects site I am scraping uses component actions and stores sessions in cookies (not URLs). This means that all URLs look something like this:
http://example.com/WOApp/WebObjects/WOApp.woa/wo/7.0.0.0.29.1.1.1
My first questions are:
Don't URLs like this completely destroy local and shared caching opportunities (the cacheable constraint in REST)? I imagine the only effective caching with such URLs is within the WebObjects server itself.
Isn't addressability broken as well? Each resource does have a unique endpoint, but it changes constantly. Furthermore, I think WebObjects also invalidates URLs that are too old, since they time out after a period of time. I'm not sure whether this applies only to URLs with sessions, though.
Regarding the scraping, I am not sure whether it's possible to extract any meaningful endpoints from the website. For example, with a normal website I would look through the HTML and extract the POST URLs, then use them in my scraper by posting directly to them instead of going through the normal request-response cycle.
In this case I obviously cannot use any URLs extracted from the HTML since they are dynamically generated on each request, but I read something about being able to access WebObjects components directly if the security settings have not been set to disallow this (see https://developer.apple.com/legacy/library/documentation/LegacyTechnologies/WebObjects/WebObjects_3.5/PDF/WebObjectsDevGuide.pdf, p. 53 "Limitations on Direct requests"). I don't understand exactly how to do this though or if it's even possible.
If it's not possible, what would be a good approach then? The only options I can think of are:
Using a full-blown browser client to interact with the website (e.g. Watir or Selenium) and extracting & processing the HTML from its responses
Manually extracting the dynamic endpoints by first requesting the page they appear on and finding where they sit in the HTML, then using them afterwards as if they were "static".
I am interested in opinions on how to approach this scenario since I don't believe any of the solutions above are particularly good.
You've asked a number of questions, and I'll see if I can cover each in turn.
Don't URLs like this completely destroy local and shared caching opportunities (the cacheable constraint in REST)? I imagine the only effective caching with such URLs is within the WebObjects server itself.
There is, indeed, a page cache within the WebObjects application server, and you're right to observe that these component action URLs probably thwart any other kind of caching. Additionally, even though the session ID is not present in the URL, you'd need the session ID in the cookie to re-create the same page, so having just that URL would get you a session restoration error from the application server.
Isn't addressability broken as well? Each resource does have a unique endpoint, but it changes constantly.
Well, yes, on the face of it this is true. You've given a component action URL as an example, and they're tied to the session.
Furthermore, I think WebObjects also invalidates URLs that are too old, since they time out after a period of time. I'm not sure whether this applies only to URLs with sessions, though.
Again, all true. Component action URLs generate sessions, and sessions time out.
At this point, let me take a quick diversion. I'm assuming you're not the owner of the WebObjects application—you're talking about having to scrape a WebObjects app, and you've identified some ways in which this particular app doesn't conform to REST principles. You're completely right—a fully component-action-based WebObjects application won't be RESTful. WebObjects pre-dates REST by a few years. Having said that, there are ways in which a WebObjects application can be completely RESTful:
Using session-less direct actions gives a degree of REST-like behaviour, and would certainly solve the problems you identify with caching, addressability and expiry (see the sketch after this list).
Using the ERRest framework to create a 100% RESTful application.
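To make the first option concrete, here is a minimal sketch of a session-less direct action, assuming a hypothetical productAction and an id form value; WODirectAction, stringFormValueForKey() and pageWithName() are standard WebObjects API:

    import com.webobjects.appserver.WOActionResults;
    import com.webobjects.appserver.WOComponent;
    import com.webobjects.appserver.WODirectAction;
    import com.webobjects.appserver.WORequest;

    public class DirectAction extends WODirectAction {
        public DirectAction(WORequest request) {
            super(request);
        }

        // Served at a stable, bookmarkable URL such as
        // http://example.com/WOApp/WebObjects/WOApp.woa/wa/product?id=42
        public WOActionResults productAction() {
            String id = request().stringFormValueForKey("id"); // hypothetical parameter
            WOComponent page = pageWithName("ProductPage");    // hypothetical component
            page.takeValueForKey(id, "productId");
            return page;
        }
    }

Because nothing in such a URL is tied to a session, these endpoints are cacheable, addressable and stable in exactly the way the component action URLs above are not.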
Of course, none of this will help you if you're just trying to scrape a legacy application.
Regarding the scraping, I am not sure whether it's possible to extract any meaningful endpoints from the website. For example, with a normal website I would look through the HTML and extract the POST URLs, then use them in my scraper by posting directly to them instead of going through the normal request-response cycle.
Again, if it's a fully component action-based application, you're right—all those URLs will be dynamically generated and useless to you.
In this case I obviously cannot use any URLs extracted from the HTML since they are dynamically generated on each request, but I read something about being able to access WebObjects components directly if the security settings have not been set to disallow this…
That's talking about getting a component to render directly from its template with some restrictions:
As you note, the application can easily prevent it from happening at all.
As mentioned on p.53, the user input and action-invocation phases of rendering the component are skipped, which probably means this approach would be limited to rendering a component that didn't have any dynamic content anyway. This might be of some very limited use to you, though you'd need to know the component names you were interested in, and they wouldn't normally be exposed anywhere.
I'm not sure you're going to find anything better than the types of high-level functional approaches you've already suggested above, such as automating at the browser level with Selenium. If what you need is REST-style direct addressability of resources within the application, you're not going to get that unless you can re-write the application to use direct actions or ERRest where you need them.
A little late, but this could help.
I use Apache's mod_ext_filter (slightly modified) to pre- and post-filter the requests and responses of our WebObjects application. The filter calls PHP scripts that can read the dynamically generated hrefs and other things from the HTML pages. The scripts can also modify the HTTP requests, so we can programmatically add or remove request parameters to implement new workflows in front of the legacy app and clean up requests before they reach WebObjects. It is also possible to maintain an additional database within the scripts and store state across multiple requests.
So you can capture the dynamically created links (for example, a button's name or an HTML form destination) and recognize those names within later requests.
It is also possible to "remote control" such applications with little scripts like "click the third button on the page". The only thing you need is a DOM parser to get the structure of the HTML pages and then rebuild the actions the browser would perform (i.e. create the HTTP request manually and send it as a POST to the extracted form destination href). The only problem is the JavaScript code, which we analyze and reproduce within PHP (e.g. enabling or disabling input elements so they are not transmitted in the requests).
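The same extract-and-replay flow can be sketched in Java with the jsoup library instead of PHP; the URL, selector and form field below are placeholders for whatever the scraped page actually contains:

    import org.jsoup.Connection;
    import org.jsoup.Jsoup;
    import org.jsoup.nodes.Document;
    import org.jsoup.nodes.Element;

    public class WOFormReplay {
        public static void main(String[] args) throws Exception {
            // 1. Fetch the page once to obtain the current, dynamically
            //    generated component action URL and the session cookie.
            Connection.Response page = Jsoup
                    .connect("http://example.com/WOApp/WebObjects/WOApp.woa") // placeholder URL
                    .method(Connection.Method.GET)
                    .execute();
            Document doc = page.parse();

            // 2. Extract the form destination; the selector depends on the page.
            Element form = doc.select("form").first();
            String action = form.absUrl("action"); // only valid for this session

            // 3. Replay the POST against the extracted action, carrying the
            //    session cookie so WebObjects can restore the page state.
            Document result = Jsoup.connect(action)
                    .cookies(page.cookies())
                    .data("someField", "someValue") // hypothetical form field
                    .post();
            System.out.println(result.title());
        }
    }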
There were some problems with the WebObjects Adapter Module for Apache: it still sets Content-Length in the HTTP header, which you cannot change from mod_ext_filter. If you change the HTML or the parameters within a request, the declared content length will no longer match. But it is possible to fix that.
Theoretically it could also be possible to control such a closed-source legacy application from a new UI on a tablet or smartphone, which delegates the user interaction to the backend WebObjects app.
The scripts depend on the page structure, so if your WebObjects app changes, you have to adjust the scripts accordingly (e.g. the third button could now be the fourth button).
It should also be possible to put a RESTful interface in front of the application and query the legacy app's data through the filter scripts.
Is it possible to get the Portal base URL (like http://www.thisismyportal.com) from a Portlet using Portlet 2.0 API?
Right now I'm planning to build it manually by concatenating PortletRequest.getServerName(), PortletRequest.getServerPort() and PortletRequest.getContextPath(); but it seems kind of clumsy (and there's no PortletRequest.getProtocol()).
While it is clumsy, it is the safest way to construct the URL; and while there is no PortletRequest.getProtocol() method, you can deduce the protocol using the PortletRequest.isSecure() method.
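A minimal sketch of that construction, with default-port handling; the helper class is mine, but every call on PortletRequest is standard Portlet 2.0 API:

    import javax.portlet.PortletRequest;

    public final class PortalUrls {
        private PortalUrls() {}

        // Rebuilds the base URL from the current request.
        public static String baseUrl(PortletRequest request) {
            String scheme = request.isSecure() ? "https" : "http";
            int port = request.getServerPort();
            boolean defaultPort = ("http".equals(scheme) && port == 80)
                    || ("https".equals(scheme) && port == 443);
            StringBuilder url = new StringBuilder(scheme).append("://")
                    .append(request.getServerName());
            if (!defaultPort) {
                url.append(':').append(port);
            }
            return url.append(request.getContextPath()).toString();
        }
    }

Note that deriving the scheme from isSecure() can be wrong behind an SSL-terminating proxy, which is one more reason to treat this as a best-effort reconstruction.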
I would advise against using an external configuration for the base URL, for a couple of reasons.
First, it would be yet another configuration item for you to maintain across environments (test, integration, production and so forth). There's very little justification to hold, in configuration, something that is fully reproducible using the current request.
Second, under certain circumstances, it might be impossible to designate a particular URL as a "base URL" for the portal. An example would be the case in which the portal server is associated with multiple hosts, or multiple host aliases.
We kept those configuration properties in a Resource Environment Provider for the purpose of generating external URLs to send in emails. It was a specific solution, and it wasn't a problem for us since we had other properties stored there as well, so we knew it would be available at runtime. I don't know if that suits your needs; it depends on your scenario.
Also, we used HTTPS only during login, so we always generated HTTP URLs.
Hope this helps.
I see a lot of articles and posts on how to create a custom MembershipProvider, but haven't found any explanation as to why I must/should use it in my MVC2 web app. Apart from "Hey, security is hard!", what are critical parts of the whole MembershipProvider subsystem that I should know about that I don't, because I've only read about how to override parts of it? Is there some "behind the scenes magic" that I don't see and will have to implement myself? Is there some attribute or other piece of functionality that will trip over itself without a properly setup MembershipProvider?
I am building a web app using a DDD approach, so the way I see it, I have a User entity and a Group entity. I don't need to customize ValidateUser() under the provider; I can just have it as a method on my User entity. I have to have a User object anyway to implement the things the MembershipProvider doesn't cover, don't I?
So, what gives? :)
No, you don't need it. I have sites that use it and sites that don't. One reason to use it is that the plumbing is already there in ASP.NET, so you can easily implement authentication by simply providing the proper configuration items (and setting up the DB or AD or whatever).
A RoleProvider, on the other hand, comes in very handy when using the built-in AuthorizeAttributes and derivatives. Implementing a RoleProvider will save you a fair amount of custom programming on the authorization side.
I'm in the middle of converting an existing app built on top of zend framework to work as a plugin within wordpress as opposed to the standalone application it currently is.
I've never really used Zend, so I've had to learn about it in order to know where to begin. I must say that at first I didn't think much of Zend, but it's funny: the more I understand how it works, the more I keep questioning why I'd want to remove the dependency when it's clearly a well-thought-out framework. Then I'm reminded that it's because of WordPress.
Now, I already know there are WP plugins to make Zend play nice with WP. In fact, I'm already using a Zend Framework plugin just to get the app functional within the WP admin area, which is allowing me to review code, modify code, refresh the browser, review changes, and debug code, again and again.
Anyway, I don't really have a specific question; instead, I'm looking for advice from any Zend masters out there on how to best go about a task like this one... so any comments, advice, examples or suggestions would be super.
One area I'm a little stuck on is converting parts of the Zend_Db calls to work as wpdb calls instead... specifically Zend_Db_Select... not sure what to do with that one.
Also, how to handle all the URL routing with automatic calls to "whateverAction" within their respective controller files.
Any help would be great! Thanks
You're probably facing an uphill battle trying to get some of the more major components of ZF to work in harmony with Wordpress. It sounds like you've got a full MVC app that you're trying to integrate into a second app that has very different architecture.
You probably want to think about which components handle which responsibilities. WordPress has its own routing and controller system that revolves around posts, pages and 'The Loop'. This is entirely different from Zend's Action Controllers and routing system.
It's possible you could write a WP hook to evaluate every incoming request and decide if it should be handled by WP or a ZF controller. However, it is doubtful you would be able to replace WP's routing system outright with ZF's or vice versa.
Same idea where Zend_Db is concerned. There's nothing stopping you from using Zend_Db to access WordPress's database, but trying to somehow convert or adapt Zend_Db calls into wpdb calls sounds painful. If you have a large model layer, you probably want to hang on to it, and find a way to translate data from those models into the posts/pages conventions that WordPress uses.
Personally, I would use ZF to build a robust business layer that can be queried through an object model via a WordPress plugin, and then rely on WordPress to do the routing and handle the views.
Zend_Db_Select is a plain SQL query (but built using objects) that can be used like any other query. Just cast it to a string. Ex.:

    $sql = (string) $zendDbSelectObject;            // works because Zend_Db_Select implements __toString()
    $results = $GLOBALS['wpdb']->get_results($sql); // or mysql_query($sql) outside WordPress