I am using Grails version 2.4.4
I have many templates in my project that I need to send as e-mail.
The trigger point for these mails is either a background process or a web request.
groovyPagesTemplateEngine does not work with jobs, background processes,
or anything else without a web request.
Can I use PageRenderer instead of groovyPagesTemplateEngine?
Are there any limitations of PageRenderer?
A closed Grails issue has this comment: https://jira.grails.org/browse/GRAILS-3818
"There are some limitations as to what you can do since there is no request. And all links to others templates, controllers etc. have to be fully qualified. "
I tried this from a background job with a link to a controller, and it seems to be working.
I do not understand what will not work with the PageRenderer bean.
Yes, PageRenderer is what you should use. GroovyPagesTemplateEngine is more low-level; although you can use it, it is a more difficult API to work with.
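For illustration, here is a minimal sketch of rendering a GSP without a web request (written in Java to match the other examples in this document; in a Grails 2.x app this would normally be a Groovy service, and the class name, view path and model keys are made up):

import grails.gsp.PageRenderer;
import org.springframework.beans.factory.annotation.Autowired;
import java.util.HashMap;
import java.util.Map;

public class EmailBodyBuilder {

    @Autowired
    private PageRenderer groovyPageRenderer;  // the bean Grails registers for PageRenderer

    public String buildWelcomeBody(String userName) {
        Map<String, Object> model = new HashMap<String, Object>();
        model.put("userName", userName);          // illustrative model entry

        Map<String, Object> args = new HashMap<String, Object>();
        args.put("view", "/email/welcome");       // illustrative GSP under grails-app/views/email
        args.put("model", model);

        // Unlike groovyPagesTemplateEngine, this works without a bound request,
        // e.g. from a Quartz job or any other background thread.
        return groovyPageRenderer.render(args);
    }
}

The main limitation is the one quoted from the issue: since there is no request, tags that read request or session state will not work, and link generation (g:link, g:resource and friends) needs fully qualified URLs, e.g. with grails.serverURL configured.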
I'm writing a Java-backed webscript to deploy in Alfresco and call via REST. This webscript must perform a set of 3 operations (find a path, create a folder and upload a document).
I read about this and found similar examples doing these operations through the native Alfresco API, with methods like getFileFolderService, getContentService, etc. of the Repository or ServiceRegistry classes. All in Java, without JavaScript.
But I would rather use REST calls instead of the Alfresco API inside my webscript. I think that if webscripts already exist to do these operations, it is easier to call them than to use Alfresco API methods. And if the API changes in future versions, the REST calls would remain the same. But I'm new to this and I don't know if I'm wrong.
In summary: to do these 3 operations, one after another, in my Java-backed webscript, which is better and why? Using native API methods, or making REST calls to existing webscripts?
And if I go for the second option, is it even possible? Would using the HttpClient class and GetMethod/PostMethod for the REST calls inside my Java webscript be the best option, or could this give me problems? After all, I would be making a REST call to my backed webscript, which in turn makes more REST calls to other webscripts.
Thanks a lot!
I think it's bad practice to do it like this. Across a lot of Alfresco versions the default services didn't change a bit, and even when they did change, the old methods were still available as deprecated.
The REST API, on the other hand, has changed. If you want to make an upgrade-proof system, I guess it's better to stick with the web services (which haven't changed since version 2.x) or go with CMIS.
But then it doesn't make sense to have your code within Alfresco, so putting it behind a separate interface is better.
I'd personally just stick with the JavaScript API, which hasn't changed much. Yes, more functions have been added, but the default actions for search & CRUD have remained the same.
You could even do both: have your Java-backed webscript do whatever fancy stuff is needed and send the result to the JavaScript controller, which does the default stuff.
Executing HTTP calls against the process you are already in is a very, very bad idea in general. It is slower, much more complex and error-prone, hogs more resources (two threads per call), and in your case you will even lose transaction safety. Just imagine the last call failing for some reason. Besides, you will most likely have to handle security context propagation yourself. Use the native public API and it will be easy, safe and stable.
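To make that concrete, here is a rough sketch of the three operations done with the public Java API inside a Java-backed webscript (the class name, the Spring bean wiring, the path elements, file names and content are all illustrative, and error handling is kept minimal):

import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

import org.alfresco.model.ContentModel;
import org.alfresco.repo.model.Repository;
import org.alfresco.service.ServiceRegistry;
import org.alfresco.service.cmr.model.FileFolderService;
import org.alfresco.service.cmr.model.FileNotFoundException;
import org.alfresco.service.cmr.repository.ContentWriter;
import org.alfresco.service.cmr.repository.NodeRef;
import org.springframework.extensions.webscripts.Cache;
import org.springframework.extensions.webscripts.DeclarativeWebScript;
import org.springframework.extensions.webscripts.Status;
import org.springframework.extensions.webscripts.WebScriptException;
import org.springframework.extensions.webscripts.WebScriptRequest;

public class CreateFolderAndUploadWebScript extends DeclarativeWebScript {

    private ServiceRegistry serviceRegistry;  // injected via Spring in the webscript bean definition
    private Repository repositoryHelper;      // the "repositoryHelper" bean, gives access to Company Home

    @Override
    protected Map<String, Object> executeImpl(WebScriptRequest req, Status status, Cache cache) {
        FileFolderService fileFolderService = serviceRegistry.getFileFolderService();
        try {
            // 1. find a path below Company Home (path elements are illustrative)
            NodeRef parent = fileFolderService.resolveNamePath(repositoryHelper.getCompanyHome(),
                    Arrays.asList("Sites", "my-site", "documentLibrary")).getNodeRef();

            // 2. create a folder
            NodeRef folder = fileFolderService.create(parent, "Uploads", ContentModel.TYPE_FOLDER).getNodeRef();

            // 3. create a document and write its content
            NodeRef doc = fileFolderService.create(folder, "report.txt", ContentModel.TYPE_CONTENT).getNodeRef();
            ContentWriter writer = serviceRegistry.getContentService()
                    .getWriter(doc, ContentModel.PROP_CONTENT, true);
            writer.setMimetype("text/plain");
            writer.putContent("hello, world");  // illustrative; a real upload would stream the request content

            Map<String, Object> model = new HashMap<String, Object>();
            model.put("nodeRef", doc.toString());
            return model;  // handed to the webscript's FreeMarker response template
        } catch (FileNotFoundException e) {
            throw new WebScriptException(Status.STATUS_NOT_FOUND, "Path not found", e);
        }
    }

    public void setServiceRegistry(ServiceRegistry serviceRegistry) { this.serviceRegistry = serviceRegistry; }
    public void setRepository(Repository repositoryHelper) { this.repositoryHelper = repositoryHelper; }
}

The class is then registered as a Spring bean alongside the usual webscript descriptor, which is the standard Java-backed webscript setup, and the whole thing runs inside one transaction and one security context.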
Alright, so a better title here may have been "Progressive Enhancement with REST in CakePHP", but at least now I'll know you didn't read the question if your answer just refers to the difference between the two ;)
I'm pretty familiar with REST and how to integrate it with CakePHP, but I'm not 100% clear on how to still maintain a conventionally functioning website alongside it. Using Router::mapResources sounds like a great idea, but it creates a problem for maintaining the gracefully degrading version of the site, because both POST requests to /resource/ AND GET requests for /resource/add will route to the same action (add). Clearly I'll want this action to return a JSON object if the client is using the REST API, but if it's the degraded version of the site (no JS perhaps), it should be an add form, right?
What's the best way to deal with this? Do you route your REST requests to other action names using Router::resourceMap()? Do you use that crazy hack I saw where the /api/ prefix is part of the resourceMap so you can use api_action functions? Do you have the actions handle both REST and conventional requests by checking isAjax()? If so, how do you ensure that you can rely on the browser to properly support the other two request types?
I've searched around quite a bit but haven't found anything about how to keep conventional requests available in Cake alongside REST, so if anyone has any advice or experience, I'd love to hear it!
CakePHP supports extension routing as well, via Router::parseExtensions(), so:
/test/action will render views/test/action.ctp
/test/action.html will render the same view
/test/action.json will render views/test/json/action.ctp
/test/action.xml will render views/test/xml/action.ctp
If all views are designed to handle the same data as set by your controller, you'll be able to show a regular HTML form and handle the posted data the same way as you'd handle the AJAX request.
You'll probably have to add checks inside the /add, /edit and /delete actions to verify that data was actually posted/submitted, to prevent items being deleted without a form being posted (I haven't tested that though; it might be that Cake blocks these URLs if mapResources is set for the controller).
REST in CakePHP:
http://book.cakephp.org/2.0/en/development/rest.html
(Extension) Routing
http://book.cakephp.org/2.0/en/development/routing.html#file-extensions
Ok...
I'm writing an ASP.NET MVC 2 application, and one of the requirements is that I log the headers on the requests we receive, and also on the responses we send...
My approach to do this has been to create a controller that overrides OnActionExecuting and OnActionExecuted, and then create our actual "live" controllers by inheriting from this rather than from the usual base class. This way, I basically get the logging functionality for free.
While this approach works fine for handling the requests, responses seem to be another matter. I am getting an error telling me that the Headers property of the HttpResponseBase class requires IIS to be using the Integrated Pipeline. I therefore have two questions.
Question 1.
Can anyone suggest a way to get the headers other than through HttpResponseBase.Headers? I have considered, for example, simply parsing the entire response and extracting them that way myself, but I was hoping someone might have a better way...
Question 2.
What is this Integrated Pipeline? What does it do? How do I enable it?
Cheers in anticipation...
Martin.
In response to Question 2:
Integrated Pipeline is a feature introduced in IIS 7; you can change the application pool in IIS 7 or higher to use this new pipeline.
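For example, you can switch an application pool to Integrated mode in IIS Manager (Application Pools > select the pool > Basic Settings > Managed pipeline mode), or from an elevated command prompt (the pool name "MyAppPool" is just a placeholder):

%windir%\system32\inetsrv\appcmd.exe set apppool "MyAppPool" /managedPipelineMode:Integrated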
I can't find any documentation on how to update Vaadin objects asynchronously. Can anyone help me? What I need is to render a table and then update the values of a column with a rather slow call, so I want to make that update asynchronous.
This has been discussed a lot on this thread on the Vaadin forum. You might want to read it, it contains a lot of useful information.
Just do the updates in another thread. UI modifications from background threads must be synchronized on the application object. Add ICEPush, Refresher or a polling ProgressIndicator to get the changes from the server to the client.
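For example, something along these lines (a Vaadin 6-style sketch assuming com.vaadin.Application and com.vaadin.ui.Table; slowLookup and the "price" column are illustrative):

void updatePriceAsync(final Table table, final Object itemId) {
    final Application app = table.getApplication();
    new Thread(new Runnable() {
        public void run() {
            final Object value = slowLookup(itemId);  // the slow call, done off the UI thread
            synchronized (app) {                      // background UI changes must lock the Application
                table.getContainerProperty(itemId, "price").setValue(value);
            }
        }
    }).start();
    // a Refresher, ICEPush or a polling ProgressIndicator still has to be on the
    // page so the browser actually fetches the change made on the server
}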
As far as I know Vaadin provides two add-ons for solving this problem: ServerPush and DontPush. Both add-ons can be imported via maven and both support WebSockets as well as fallback solutions for browsers without WebSocket support. Although ServerPush provides seemingly more features than DontPush, it is lower rated than DontPush, probably because it is more complicated.
For pushing updates to the client DontPush provides a very simple solution that does not require any changes to the web application. Only the servlet-class in web.xml needs to be replaced by org.vaadin.dontpush.server.impl.jetty.DontPushServlet and the widget set has to be updated afterwards via mvn vaadin:update-widgetset. That's all. Any changes on the server will be automatically pushed to the client. I successfully tested this add-on with Chrome 14. Unfortunately, I could not get it working with Firefox 7.
According to its web page, the ServerPush add-on should provide this functionality too. However, I could not figure out how to set up ServerPush to work with Jetty. Moreover, it seems to be more complicated to use. It requires several changes to the web.xml as well as additional configuration files for the Atmosphere server.
In contrast to DontPush, ServerPush also provides an explicit pushing mechanism, which allows you to update the GUI manually by calling the push() method of a pusher component that needs to be added to the main window beforehand. However, I also failed to get this working.
Hey guys. We're using OSGi services in an Eclipse RCP application. To track them, we're using the org.osgi.util.tracker.ServiceTracker class. A code sample from the application looks like this:
mailServiceTracker = new ServiceTracker(context, MailService.class.getName(), null);
mailServiceTracker.open();
MailService service = (MailService) mailServiceTracker.getService();
Now my problem is that the getService() method frequently returns null when I have created a new service. The code works very well for services that have existed in the application for a long time, but each time I create a new service, I have to try many things until the service is finally found and tracked. I regularly try, for example:
'Clean...' in Eclipse
'Refresh' all projects in Eclipse
Rebuild the project on the command line
Sometimes those things help, and sometimes they don't. Does anyone have experience with these trackers and can tell me how to avoid this behavior and how to get services tracked immediately upon creation?
Thanks
The problem is that the services you want may not have been created yet (especially in a bundle activator, as some bundles may not yet have started). If you still want to use the service tracker, you will need to provide a ServiceTrackerCustomizer, and keep track (sorry, no pun intended) of the services as they come and go.
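A rough sketch of what that could look like, reusing the MailService and context from your snippet (the callback bodies are illustrative):

mailServiceTracker = new ServiceTracker(context, MailService.class.getName(),
        new ServiceTrackerCustomizer() {
            public Object addingService(ServiceReference reference) {
                MailService service = (MailService) context.getService(reference);
                // the service has just been registered: start using it here
                return service;
            }
            public void modifiedService(ServiceReference reference, Object service) {
                // the service's registration properties changed
            }
            public void removedService(ServiceReference reference, Object service) {
                // the service is going away: stop using it and release it
                context.ungetService(reference);
            }
        });
mailServiceTracker.open();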
Or, you could just switch over to Declarative Services, which handle this for you.
There is nothing wrong with using ServiceTrackers other than the fact that it's a fairly low-level way of tracking services. Whilst I agree that declarative services are a nice mechanism, simply dismissing ServiceTrackers because of "all sorts of issues" sounds like bad advice.
Back to the question.
As soon as a service tracker is created and opened, it gives you access to all services that match the filter condition you specified upon creation. There is no delay there. The only thing I can think of is that somehow your bundles are not correctly resolved, so services that are registered from a bundle A are simply not visible to a bundle B using a ServiceTracker. To check this, first locate the bundle that exports the package containing the service interface, and then make sure both A and B are actually wired to it.
Explaining the update/refresh mechanism in OSGi a bit more:
Whenever you update something in OSGi, it's a two step process.
Let's assume you update a bundle that contains a new version of an exported package. Let's also assume there is some consumer that imports it. As long as you only update the bundle but do not explicitly refresh the wiring (which import links to which export), the consumer will still be wired to the old version of the package. As soon as you do a package refresh (something you can do in OSGi via the PackageAdmin service), your consumer will be resolved again and wired to the new version.
The reason this is decoupled is that you might want to do updates of several bundles and not "refresh" after each one but instead defer such a refresh until all of them are updated.
It's quite possible that this is the effect you're seeing. Initially you only do an update, and only after the refresh will the tracker actually see the new version of the service.
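In code, the two steps look roughly like this (PackageAdmin was later deprecated in favour of FrameworkWiring, but it matches the mechanism described above; bundleToUpdate and newBundleStream are illustrative):

ServiceReference ref = context.getServiceReference(PackageAdmin.class.getName());
PackageAdmin packageAdmin = (PackageAdmin) context.getService(ref);

bundleToUpdate.update(newBundleStream);  // step 1: install the new revision of the bundle
packageAdmin.refreshPackages(null);      // step 2: rewire importers; null means "everything pending"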
Not being flippant at all: don't use service trackers. They appear to make your life simple, but there are all sorts of issues with them. I'd recommend that you look into using Declarative Services instead. The support for DS in Eclipse has been very good from 3.5 onward.
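For reference, a DS component consuming the MailService from the earlier snippet could look roughly like this (the class name and bind/unbind method names are made up; the reference itself is declared in an OSGI-INF/component.xml that is not shown here):

public class MailClient {

    private MailService mailService;

    // called by the SCR runtime when a matching MailService is registered
    protected void setMailService(MailService mailService) {
        this.mailService = mailService;
    }

    // called by the SCR runtime when that MailService goes away
    protected void unsetMailService(MailService mailService) {
        this.mailService = null;
    }

    protected void activate() {
        // with a mandatory (1..1) reference the component is only activated once
        // the service is available, so mailService is guaranteed to be set here
    }
}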
You might want to check out this book and the associated presentations for more information on why using Service Trackers is a bad idea.
http://equinoxosgi.org/