Why do we need public render parameters in JSR 286 event handling?

I am trying to understand the concept of public render parameters in JSR 286 portlets.
http://publib.boulder.ibm.com/infocenter/wpexpdoc/v6r1/index.jsp?topic=/com.ibm.wp.exp.doc_v6101/dev/pltcom_pubrndrprm.html
Now inter-portlet communication can happen like this: Portlet 1 publishes an event; Portlet 2 processes it, generates a response, and puts it in session scope. Portlet 1 can then see it too, since both portlets share the same session object. So what is the purpose of public render parameters as a way of sharing information between portlets?

Both have their advantages. Generally, public render parameters are a lightweight communication mechanism. Here are some of the important features of each.
Public render parameters:
They are limited to simple string values.
They do not require explicit administration to set up coordination.
They cause no performance overhead as the number of portlets sharing information grows.
They can be set from links encoded in portal themes and skins.
Portlet events:
They can contain complex information.
They allow fine-grained control by setting up different sorts of wires between portlets (on-page or cross-page, public or private).
They can trigger cascaded updates with different information. For example, portlet A can send event X to portlet B, which in turn sends a different event Y to portlet C.
They cause increasing processing overhead as the number of communication links grows.
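To make this concrete, here is what wiring a public render parameter looks like in portlet.xml; the parameter name, namespace, and class are invented for illustration:

```xml
<portlet-app xmlns="http://java.sun.com/xml/ns/portlet/portlet-app_2_0.xsd"
             version="2.0">
  <portlet>
    <portlet-name>CityListPortlet</portlet-name>
    <portlet-class>com.example.CityListPortlet</portlet-class>
    <!-- every portlet that lists this identifier can read and write it -->
    <supported-public-render-parameter>selectedCity</supported-public-render-parameter>
  </portlet>
  <!-- declared once at the application level -->
  <public-render-parameter>
    <identifier>selectedCity</identifier>
    <qname xmlns:x="http://example.com/params">x:selectedCity</qname>
  </public-render-parameter>
</portlet-app>
```

In processAction, a portlet calls response.setRenderParameter("selectedCity", value); any other portlet declaring the same identifier then reads it from its render request — no shared session objects and no event wiring required.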

Related

One to One vs One to Many dispatcher configuration in AEM

The mapping between the dispatcher and the publisher is very important when designing the application. There are two ways:
One to One -> one publisher is connected to one dispatcher
One to Many -> one publisher is connected to three or more dispatchers
I could not understand which one should be selected when. Can anyone tell me the pros and cons of each option?
In general, publisher and dispatcher have different roles in your setup. You need as many of each as your load demands. In theory you can start with two of them; whenever they cannot handle the load (CPU or disk over 100%), you add another one. (AEMaaCS actually does this dynamically.)
With some experience you can forecast the number of required dispatchers and publishers.
The following scenarios will cause a high load on the dispatchers:
many static pages (which seldom change) and a lot of static assets (images, PDFs, ...)
few pages and extremely high traffic for those
In these cases your site is highly cacheable, and the dispatcher is a cache in front of the "CMS". You then probably need several dispatchers for each publisher = one to many. (Good caching is great, because a dispatcher is cheaper and can handle more load than a publisher.)
The following scenarios will cause a higher load on the publisher; then you will have a one-to-one scenario:
There is a CDN in front of the CMS. The CDN does a lot of static caching, so cache ratio of the dispatcher will go down
A lot of static content is already handled outside of the CMS (e.g. images are served elsewhere, e.g. Adobe Dynamic Media)
You have many dynamic pages (rendered separately for each user, e.g. a banking application)
PS: you will have at least one dispatcher for each publisher. As a reverse proxy it has an important security function. It is also a major backstop to avoid downtime: I know a customer that runs only the dispatchers for up to 24 hours during maintenance, during which they simply serve the static content like a normal Apache web server.
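The caching behavior discussed above is controlled in the dispatcher farm's configuration file (dispatcher.any). A minimal sketch — the docroot path and globs are invented:

```
/cache {
  /docroot "/var/www/html"
  /rules {
    # cache everything by default ...
    /0000 { /glob "*" /type "allow" }
    # ... but never cache per-user dynamic pages
    /0001 { /glob "/content/banking/*" /type "deny" }
  }
  /invalidate {
    # which cached files are flushed when the publisher sends an invalidation
    /0000 { /glob "*.html" /type "allow" }
  }
}
```

The more of your traffic the /rules section lets the dispatcher cache, the stronger the case for one-to-many; the more paths you must deny, the more load lands on the publisher.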

REST on non-CRUD operations

I have a resource called “subscriptions”
I need to update a subscription’s send date. When a request is sent to my endpoint, my server will call a third-party system to update the passed subscription.
“subscriptions” have other types of updates. For instance, you can change a subscription’s frequency. This operation also involves calling a third-party system from my server.
To be truly “RESTful,” must I force these different types of updates to share an endpoint?
PATCH subscriptions/:id
I can hypothetically use my controller behind the endpoint to fire different functions depending on the query string... But what if I need to add a third or fourth “update” type action? Should they ALL run through this single PATCH route?
To be truly “RESTful,” must I force these different types of updates to share an endpoint?
No - but you will often want to.
Consider how you would support this on the web: you might have a number of different HTML forms, each accepting a slightly different set of inputs from the user. When the form is submitted, the browser will use the input controls and form metadata to construct an HTTP (POST) request. The target URI of the request is copied from the form action.
So your question is analogous to: should we use the same action for all of our different forms?
And the answer is yes, if you want the general purpose HTTP application to understand which resource is expected to change in response to the message. One reason that you might want that is cache invalidation; using the right target URI allows all of the caches to understand which previously cached responses should not be reused.
Is that choice free? No - it adds some ambiguity to your access logs, and routing the request to the appropriate handler in your code takes a bit more work.
Trying to use PATCH while distinguishing actions some other way than by target URI is a little bit weird, and suggests that maybe you are trying to stretch PATCH beyond the standard constraints.
PATCH (and PUT) have remote authoring semantics; what they mean is "make your copy of the target resource look like my copy". These are methods we would use if we were trying to fix a spelling error on a web page.
Trying to change the representation of one resource by sending a remote authoring request to a different resource makes it harder for the general purpose HTTP application components to add value. You are coloring outside of the lines, and that means accepting the liability if anything goes wrong because you are using standardized messages in a non standard way.
That said, it is reasonable to have many different resources that present representations of the same domain entity. Instead of putting everything you know about a user into one web page, you can spread it out among several that are linked together.
You might have, for example, a web page for an invoice, and then another web page for shipping information, and another web page for billing information. You now have a resource model with clearer separation of concerns, and can combine the standardized meanings of PUT/PATCH with this resource model to further your business goals.
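Under that resource model, a remote-authoring update to the shipping information targets the shipping resource directly; the URI and payload here are illustrative:

```http
PATCH /invoice/12345/shipping-address HTTP/1.1
Content-Type: application/merge-patch+json

{ "city": "Springfield" }
```

Caches holding a copy of /invoice/12345/shipping-address know to invalidate exactly that representation; the PATCH says nothing about the billing resource, and no query-string dispatching is needed.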
We can create as many resources as we need (in the web level; at the REST level) to get a job done. -- Webber, 2011
So, in your example, would I do one endpoint like this user/:id/invoice/:id and then another like this user/:id/billing/:id
Resources, not endpoints.
GET /invoice/12345
GET /invoice/12345/shipping-address
GET /invoice/12345/billing-address
Or
GET /invoice/12345
GET /shipping-address/12345
GET /billing-address/12345
The spelling conventions that you use for resource identifiers don't actually matter very much.
So if it makes life easier for you to stick all of these into a hierarchy that includes both users and invoices, that's also fine.

Fetching potentially needed data from repository - DDD

We have (roughly) following architecture:
1. The application service does the infrastructure job: it fetches data from repositories, which are hidden behind interfaces.
2. An object graph is created and passed to the appropriate domain service.
3. The domain service does its thing and raises appropriate events.
4. The events are handled in different application services, which perform some persistent operations (altering repositories, sending e-mails, etc.).
However, the domain service (3) has become so complex that it requires data from different external APIs, but only if particular conditions are satisfied. For example: if product X is of type Car, we need to know the price of that car model from some external CatalogService (invented example) hidden behind ICatalogService. This is a potentially expensive operation (a REST call).
How do we go about this?
A. Do we pre-fetch all data in the application service listed as (1), even though we might not need it? B. Do we inject the interface ICatalogService into the domain service and fetch data only when needed? The latter solution might create performance issues if some other client of the domain service calls it repeatedly without knowing there is a REST call hidden inside.
Or did we simply get the domain model wrong?
This question is related to Domain Driven Design.
How do we go about this?
There are two common patterns.
One is to pass the capability to make the query into the domain model, allowing the model to fetch the information itself when it is needed. What this will usually look like is defining an interface / a contract that will be consumed by the domain model, but implemented in the application/infrastructure layers.
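A sketch of that first pattern in Java — the names (CatalogService, PricingService, Product) are invented for illustration; the point is that the domain service depends only on an interface defined in the domain layer, and the expensive call happens only on the branch that needs it:

```java
import java.math.BigDecimal;

// Contract defined in the domain layer; the infrastructure layer implements
// it with the real (expensive) REST client.
interface CatalogService {
    BigDecimal priceFor(String modelId);
}

enum ProductType { CAR, OTHER }

record Product(ProductType type, String modelId, BigDecimal basePrice) {}

// Domain service: it never knows whether the catalog lookup is a REST call,
// a cache hit, or a test stub.
class PricingService {
    private final CatalogService catalog;

    PricingService(CatalogService catalog) { this.catalog = catalog; }

    BigDecimal quote(Product product) {
        if (product.type() == ProductType.CAR) {
            // the potentially expensive external lookup happens only here
            return catalog.priceFor(product.modelId());
        }
        return product.basePrice();
    }
}
```

In tests (or in a client that must not trigger the REST call), you pass a stub implementation, which also addresses the "hidden REST call" worry: the cost is visible in the constructor signature.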
The other is to extend the protocol between the domain model and the application, so that we can signal to the application layer what information is needed, and then the application code can decide how to provide it. You end up with something like a state machine for the processes, with the application code coordinating the exchange of information between the external api and the domain model.
If you use a bit of imagination, you've already got something like this state machine, since your application code is already coordinating the movement of inputs between the repository and the domain model. The difference, of course, is that the existing "state machine" is simple and linear enough that it may not be obvious there is a state machine present at all.
how exactly would you signal application layer?
Simple queries; which is to say, the application code pulls the information it needs out of the domain model and uses that information to compute the next action. When the action is completed, the application code pushes information to the domain model.
There isn't enough information to give you targeted advice. I suspect you need to refactor your domain into further subdomains: it sounds like your domain service has far more than one responsibility. Keep the service simple.
In addition, if you have a long-running task, like a service call that takes a long time, you need to architect it away. The most supple design will not keep the consumer waiting; it will return immediately with some sort of result, even if that is simply a periodic status update.
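A framework-free Java sketch of that "return immediately, report status later" idea — TaskHandle is a hypothetical name, and a real system would persist the status and expose it via a status endpoint rather than keep it in memory:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.atomic.AtomicReference;

// Wraps slow work so callers get a cheap, non-blocking status probe back
// immediately instead of waiting on the external call.
final class TaskHandle {
    private final AtomicReference<String> status = new AtomicReference<>("PENDING");
    private final CompletableFuture<String> result;

    TaskHandle(CompletableFuture<String> work) {
        // flip the status once the background work finishes (or fails)
        this.result = work.whenComplete((r, e) ->
                status.set(e == null ? "DONE" : "FAILED"));
    }

    String status() { return status.get(); }   // cheap poll for the consumer
    String await()  { return result.join(); }  // for callers that must block
}
```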

Maximum Form count in Application

Is there any limitation on the number of forms in Delphi applications?
I developed an application with 40 or more Forms (with Delphi XE4), and I'm concerned about its performance!
Is it a good idea to create Forms on demand instead of creating all of them at application startup?
No, there is no limitation on the number of Forms other than available system memory. Forms (and their child components) are kept in TList descendants. Theoretically, a TList has an upper bound, but you will hit the limits of system memory, window handles, or GDI resources long before that, guaranteed.
Yes, it is preferable to create Forms on demand. Creating all Forms at application startup unnecessarily slows down the startup and consumes unnecessary memory, because most likely many Forms will never be used in an application session. Therefore you should always disable automatic form creation in the Form Designer options of the environment. A related issue concerns the global form variables that the IDE adds to form units by default: delete them immediately. Instead, use your own reference-holding mechanism for the Forms you create.
On existing projects where that option wasn't disabled, you should remove all forms - besides the main Form - from the auto-create-forms listbox in the Form options of the project. This is equivalent to removing all Application.CreateForm(...) lines from the project file.
Of course, there can be exceptions to this guideline of creating Forms on demand. Some Forms may be used often enough (and may be very expensive to create) to justify creating them once at startup and keeping them alive. Users are more accustomed to a somewhat long application startup than to a long-running action when the application is already active. In this case, keeping the global Form variable could make sense to express its never-ending existence.
I have a project with 450 forms and 500 FastReports. I create forms on demand and release them on form close. Application startup is 3 seconds.

State management in GWT?

How does one manage state in a GWT application? I am much more experienced in JSF development and every bean is scoped to either request, session, application, conversation, page etc etc. How does that work in GWT? Any reading tips on state management in GWT?
It depends on whether or not you're presenting your site as a browser-based application or a series of pages. In the application style, the user rarely navigates away from the app's URL, so the GWT module is long-lived and the server is relatively stateless. In the sequence-of-pages style, the GWT module would be restarted each time the user browses to a new URL, so the server has to maintain state to send back to the client on each page load.
Writing state-management for the application style uses the same patterns as any kind of desktop or server app. You usually have some service object that brokers data exchange with the server (GWT-RPC or RequestFactory) and the broker is made available to the various objects in your module that require state. Objects store their state in fields and have a lifetime corresponding to their usefulness (e.g. Widgets vs. caches vs. ephemeralia).
Well, in general I view it this way: usually, your GWT app is one website with a lot of JavaScript code. In that code, all fields (member variables) of all the client-side Java classes are your state. Additionally, you can embed IDs or variable values in the DOM of the dynamic website (e.g. an attribute "xyz" as part of a tag); they also contribute to your state. So all the "data" plus the DOM is the state.
On a coarser level, you can encode state in the URL after a "#" sign. These are called "Places" or "History" tokens, depending on which implementation you choose (GWT's History, mvp4g, ...).
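A minimal sketch of what such a history token might carry — the "place;key=value" format is invented for illustration; GWT itself only hands you the raw string after the "#":

```java
import java.util.HashMap;
import java.util.Map;

// Parses and rebuilds a hypothetical history token of the form
// "place;key=value;key=value", i.e. coarse-grained state in the URL fragment.
final class HistoryToken {
    final String place;
    final Map<String, String> params = new HashMap<>();

    private HistoryToken(String place) { this.place = place; }

    static HistoryToken parse(String token) {
        String[] parts = token.split(";");
        HistoryToken t = new HistoryToken(parts[0]);
        for (int i = 1; i < parts.length; i++) {
            String[] kv = parts[i].split("=", 2);
            if (kv.length == 2) t.params.put(kv[0], kv[1]);
        }
        return t;
    }

    String encode() {
        StringBuilder sb = new StringBuilder(place);
        params.forEach((k, v) -> sb.append(';').append(k).append('=').append(v));
        return sb.toString();
    }
}
```

In a GWT app you would pass encode()'s result to History.newItem(...) and call parse(...) in the history value-change handler, so bookmarking and the back button restore this slice of state.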
If you need some user management, you can request a token from the server after successful authentication, store it locally in the client (changing its state), and then include it with each server request.