Looking at some of the sample code for hybrid mobile apps that speak to Node.js on Bluemix (http://mbaas-gettingstarted.ng.bluemix.net/hybrid), you will see various examples that demonstrate how to use a logger on the client side:
var config = {
    applicationId:'<applicationId>',
    applicationRoute:'<applicationRoute>',
    applicationSecret:'<applicationSecret>'
};
IBMBluemix.initialize(config).done(function(status){
    // Initialize the Services
}).catch(function(err){
    IBMBluemix.getLogger().error("Error intializing SDK");
});
I've confirmed this works fine in a Cordova app. My question is - why does this exist? As far as I can see, it does nothing more than wrap calls to console.log. It does not ever send logs to the Bluemix server app as far as I can tell.
There is documentation here, https://www.ng.bluemix.net/docs/starters/mobile/mobilecloud/nodejsmobile.html#log, that talks about the feature both server-side and client-side, but unless I'm missing it, there's no persistence for the client-side version.
If so, what exactly is the point of this abstraction? I have to imagine it was built for some reason, but I'm not seeing it.
This wrapper is used to "wrap" and standardize the console log API, mainly because that JavaScript API isn't available in all browsers (especially older ones). By wrapping it, the library can check the browser and the API's availability in order to avoid an execution error.
Another reason is to wrap some configuration utilities, such as letting you plug in a different underlying library (e.g. log4js) or other configuration options, and so on.
Last but not least, it probably provides a singleton interface for performance optimization.
I have to build a component that runs in a JVM, uses MongoDB as its database, and doesn't need a UI. It will be integrated into other products. I'm planning to build this using Scala and related tools.
My first thoughts are to just let it expose a REST API and let other products integrate using that API. While this is acceptable for some products, it isn't for others due to performance reasons. So I have to enable other components to communicate with this one using either HTTP, IPC, or message queues. How can I achieve this without much duplication of business logic?
Would the Play framework be the right choice for this, even though there is no UI involved and there is a need to accept messages via HTTP, IPC, or message queues?
Using Play for that is OK, but there are frameworks that better fit what you are planning to do; as you already said, Play has a lot of support for frontend features you don't need.
It will not affect runtime speed so much as the time you will need for programming, compiling, building, and deployment.
There are some frameworks that might fit your needs better:
Scalatra: nice, easy to use, integrates well with the Java EE stack. http://www.scalatra.org/
Finatra: cool if you have the Twitter stack running; metrics and other stuff almost for free. http://finatra.info/
Skinny Framework: looks nice, never tried it myself.
Spray: cool features to come, a little elitist.
I can't find any documentation on how to update Vaadin objects asynchronously. Can anyone help me? What I need is to render a table and then update the values of one column with a rather slow call, so I want to make that call asynchronous.
This has been discussed a lot on this thread on the Vaadin forum. You might want to read it, it contains a lot of useful information.
Just do the updates in another thread. UI modifications from background threads must be synchronized on the application object. Add ICEPush, Refresher, or a ProgressIndicator to get the changes from the server to the client.
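As a rough sketch of that approach (Vaadin 6 style; the table, rowId, and slowService names are made up, and the method is assumed to live in a component or window so getApplication() is available):

// Run the slow call off the UI thread, then synchronize on the Application
// object before touching any Vaadin components.
void updateColumnAsync(final Table table, final Object rowId) {
    new Thread(new Runnable() {
        public void run() {
            final Object value = slowService.fetchValue(rowId); // the slow call (hypothetical)
            synchronized (getApplication()) {
                table.getItem(rowId).getItemProperty("price").setValue(value);
            }
            // A Refresher/ICEPush/ProgressIndicator add-on is still needed so the
            // browser actually polls for (or is pushed) the change.
        }
    }).start();
}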
As far as I know Vaadin provides two add-ons for solving this problem: ServerPush and DontPush. Both add-ons can be imported via maven and both support WebSockets as well as fallback solutions for browsers without WebSocket support. Although ServerPush provides seemingly more features than DontPush, it is lower rated than DontPush, probably because it is more complicated.
For pushing updates to the client DontPush provides a very simple solution that does not require any changes to the web application. Only the servlet-class in web.xml needs to be replaced by org.vaadin.dontpush.server.impl.jetty.DontPushServlet and the widget set has to be updated afterwards via mvn vaadin:update-widgetset. That's all. Any changes on the server will be automatically pushed to the client. I successfully tested this add-on with Chrome 14. Unfortunately, I could not get it working with Firefox 7.
According to the ServerPush web page, the ServerPush add-on should provide this functionality, too. However, I could not figure out how to set up ServerPush to work with Jetty. Moreover, it seems to be more complicated to use: it requires several changes to web.xml as well as additional configuration files for the Atmosphere server.
In contrast to DontPush, ServerPush also provides an explicit pushing mechanism which allows you to update the GUI manually by calling the push() method of a pusher component which needs to be added to the main window beforehand. However, I also failed to get this working.
We're planning on creating a feed reader as a Windows desktop and iPad application. As we want to be able to show websites AND run (our own) JavaScript in this application, we thought about delivering the application as HTML/CSS/JavaScript, just wrapped by some .NET control or a Cocoa Touch web browser component. So the task at hand is to find out which framework to use to create the HTML/CSS/JS files to embed in the application.
For the development of the HTML/CSS/JavaScript we would be happy to use Vaadin, GWT, or some other framework, as we're a lot better with Java than with JS. We favor Vaadin after a short brainstorming, as the UI components are very nice, but I fear that most of the heavy lifting will be on the server and not in the client (and that wouldn't be too nice). We would also like GWT, but the Java-to-JS compiling takes a lot of time and an extra step, and slowed down development time in the past when using it.
The question is: which development framework would you choose (given you wanted to implement this project and you mostly did Java so far) and why? If there are better framework options (List of Rich Client Frameworks), please let me know.
Edit: The application will need to talk to our server from time to time (to sync what has been read, for example), but should mainly fetch the XML feeds itself. Therefore I hope that most of the generated code can be embedded in the application and there won't need to be heavy traffic with our server.
Edit2: We realistically (even if you doubt it) expect at least 10,000 users.
Based on my experience with Vaadin, I'd go for that, but your requirements are somewhat favoring pure-GWT instead.
Vaadin needs a server and a server connection. If you're building a mostly offline desktop application, this can be solved with an embedded Jetty, for example (synchronizing with an online service only when needed), but on the iPad you would need to be online right away to start the Vaadin application.
GWT runs completely at the client-side and you can build a JavaScript browser application that only connects when needed.
Because Vaadin is much quicker to develop with, you could build a small Vaadin version first and see whether that is actually a problem on the iPad.
On the other hand, if you can assume going online right away, you can skip the local server installation altogether. Just run the application online and implement the desktop version using operating systems default browser control (i.e. the .NET control you suggested). Then Vaadin is easier.
Vaadin is just a framework based on GWT, but it has two very important features:
You don't need to run the GWT compiler; it is pure Java. (Unless you add add-ons, in which case the GWT compiler must run.)
You don't need to write communication code, so you don't have to solve DTO problems or non-serializable object mappings, and you don't need to write servlets.
I have used Vaadin at work for a year and we haven't had performance problems yet (a desktop-like application with ~500 users). IMO a very good solution is to use Vaadin just for the UI, move the logic into independent beans, and connect the two using Spring or Guice.
In this case you should use the MVP pattern and Domain-Driven Design.
Business beans are the domain objects and logic, and they use view interfaces to send responses.
Custom Vaadin components (which could extend standard components) implement the view interfaces.
This approach is also good if you later decide to change the UI engine because Vaadin is not for you: just rewrite the Guice/Spring mappings and write new implementations of the view interfaces.
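A tiny sketch of that separation (all names are made up for illustration, assuming Vaadin 6-era components):

// View interface owned by the presenter/business layer; it knows nothing about Vaadin.
public interface FeedListView {
    void showFeeds(java.util.List<String> feedTitles);
}

// Presenter/business bean wired to whatever implements the view interface (via Spring or Guice).
public class FeedListPresenter {
    private final FeedListView view;

    public FeedListPresenter(FeedListView view) {
        this.view = view;
    }

    public void load() {
        view.showFeeds(java.util.Arrays.asList("Feed A", "Feed B")); // from your service layer
    }
}

// Vaadin implementation of the view; swapping the UI engine later only means
// writing a new implementation and rewiring the Spring/Guice mappings.
public class VaadinFeedListView extends com.vaadin.ui.Table implements FeedListView {
    public VaadinFeedListView() {
        addContainerProperty("Title", String.class, "");
    }

    public void showFeeds(java.util.List<String> feedTitles) {
        removeAllItems();
        for (String title : feedTitles) {
            addItem(new Object[] { title }, title); // itemId == title, purely illustrative
        }
    }
}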
My 3 cents:
If you decide to use Vaadin, you will benefit from components that are already GOOD LOOKING. Since you don't want to write (a lot of) CSS, Vaadin looks good out of the box. However, Vaadin is a SERVER-SIDE framework: user interface interactions will hit the back end even if they don't involve fetching any data (e.g. moving from one tab to another).
If you decide to use GWT, you will at least have to style the application (this is not hard). There is also the problem of long compilation times (but you can test and debug in hosted mode, which lets you run the application without compiling; you then compile only when deploying). The main advantage of GWT is that you control what gets sent over the wire: UI interactions that don't require data from the backend run purely on the client side, and you, the developer, decide when to send a request to the back end. (Doing RPC requests in GWT is very easy.)
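To illustrate that last point, a plain GWT-RPC call from the client looks roughly like this (FeedService and FeedDTO are invented names, not part of the question):

import com.google.gwt.core.client.GWT;
import com.google.gwt.user.client.Window;
import com.google.gwt.user.client.rpc.AsyncCallback;
import java.util.List;

public class FeedLoader {
    private final FeedServiceAsync feedService = GWT.create(FeedService.class);

    public void loadFeeds() {
        feedService.getFeeds(new AsyncCallback<List<FeedDTO>>() {
            public void onSuccess(List<FeedDTO> feeds) {
                renderFeeds(feeds); // purely client-side from here on
            }
            public void onFailure(Throwable caught) {
                Window.alert("Could not load feeds: " + caught.getMessage());
            }
        });
    }

    private void renderFeeds(List<FeedDTO> feeds) {
        // update widgets here
    }
}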
Hope this will help you make the decision.
I am trying to figure out whether I should migrate my GWT-RPC calls to the new GWT 2.1 RequestFactory calls.
Google documentation vaguely mentions that RequestFactory is a better client-server communication method for "data-oriented services"
What I can distill from the documentation is that there is a new Proxy class that simplifies the communication (you don't pass the actual entity back and forth, just the proxy, so it is lighter weight and easier to manage).
Is that the whole point or am I missing something else in the big picture?
The big difference between GWT RPC and RequestFactory is that the RPC system is "RPC-by-concrete-type" while RequestFactory is "RPC-by-interface".
RPC is more convenient to get started with, because you write fewer lines of code and use the same class on both the client and the server. You might create a Person class with a bunch of getters and setters and maybe some simple business logic for further slicing-and-dicing of the data in the Person object. This works quite well until you wind up wanting to have server-specific, non-GWT-compatible, code inside your class. Because the RPC system is based on having the same concrete type on both the client and the server, you can hit a complexity wall based on the capabilities of your GWT client.
To get around the use of incompatible code, many users wind up creating a peer PersonDTO that shadows the real Person object used on the server. The PersonDTO just has a subset of the getters and setters of the server-side, "domain", Person object. Now you have to write code that marshalls data between the Person and PersonDTO object and all other object types that you want to pass to the client.
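For illustration, the shape being described might look like this (Person, PersonDTO, and the service names are hypothetical):

import com.google.gwt.user.client.rpc.IsSerializable;
import com.google.gwt.user.client.rpc.RemoteService;
import com.google.gwt.user.client.rpc.RemoteServiceRelativePath;

// The client-safe peer of the server-side "domain" Person: only a subset of its
// getters/setters, and no Hibernate/JPA or other server-only code.
public class PersonDTO implements IsSerializable {
    private String name;
    private String email;
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
    public String getEmail() { return email; }
    public void setEmail(String email) { this.email = email; }
}

// The same concrete types are compiled into the client and used on the server,
// so somewhere you hand-write the Person <-> PersonDTO marshalling code.
@RemoteServiceRelativePath("person")
public interface PersonService extends RemoteService {
    PersonDTO findPerson(Long id);
    void savePerson(PersonDTO person);
}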
RequestFactory starts off by assuming that your domain objects aren't going to be GWT-compatible. You simply declare the properties that should be read and written by the client code in a Proxy interface, and the RequestFactory server components take care of marshaling the data and invoking your service methods. For applications that have a well-defined concept of "Entities" or "Objects with identity and version", the EntityProxy type is used to expose the persistent identity semantics of your data to the client code. Simple objects are mapped using the ValueProxy type.
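As a small illustration of that declaration style (the Person entity and PersonService are assumed names; in GWT 2.1 the package is com.google.gwt.requestfactory.shared, in later releases com.google.web.bindery.requestfactory.shared):

import com.google.web.bindery.requestfactory.shared.EntityProxy;
import com.google.web.bindery.requestfactory.shared.ProxyFor;
import com.google.web.bindery.requestfactory.shared.Request;
import com.google.web.bindery.requestfactory.shared.RequestContext;
import com.google.web.bindery.requestfactory.shared.RequestFactory;
import com.google.web.bindery.requestfactory.shared.Service;

// Only the properties the client should read/write are declared; the server-side
// Person class can contain whatever non-GWT-compatible code it needs.
@ProxyFor(Person.class)
public interface PersonProxy extends EntityProxy {
    String getName();
    void setName(String name);
    String getEmail();
    void setEmail(String email);
}

// Service methods are invoked through a RequestContext rather than a concrete servlet.
@Service(PersonService.class)
public interface PersonRequest extends RequestContext {
    Request<PersonProxy> findPerson(Long id);
    Request<Void> persist(PersonProxy person);
}

public interface AppRequestFactory extends RequestFactory {
    PersonRequest personRequest();
}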
With RequestFactory, you pay an up-front startup cost to accommodate more complicated systems than GWT RPC easily supports. RequestFactory's ServiceLayer provides significantly more hooks to customize its behavior by adding ServiceLayerDecorator instances.
I went through a transition from RPC to RF. First I have to say my experience is limited in that I used exactly zero EntityProxies.
Advantages of GWT RPC:
It's very easy to set up, understand, and LEARN!
The same class-based objects are used on the client and on the server.
This approach saves tons of code.
Ideal when the same model objects (POJOs) are used on both the client and the server: POJOs == MODEL OBJECTS == DTOs.
Easy to move stuff from the server to the client.
Easy to share the implementation of common logic between client and server (which can turn out to be a critical disadvantage when you need different logic).
Disadvantages of GWT RPC:
Impossible to have different implementations of some methods for the server and the client; e.g. you might need to use a different logging framework on the client and the server, or a different equals method.
REALLY BAD implementation that is not further extensible: most of the server functionality is implemented as static methods on an RPC class. THAT REALLY SUCKS.
E.g. it is impossible to add server-side error obfuscation.
Some XSS security concerns that are not quite elegantly solvable; see the docs (I am not sure whether this is handled more elegantly by RequestFactory).
Disadvantages of RequestFactory:
REALLY HARD to understand from the official doc what the merit of it actually is! It starts right with the completely misleading term PROXIES; these are actually the DTOs of RF, created by RF automatically. Proxies are defined by interfaces, e.g. @ProxyFor(Journal.class). The IDE checks whether corresponding methods exist on Journal. So much for the mapping.
RF will not do much for you in terms of sharing code between client and server, because:
On the client you need to convert "PROXIES" to your client domain objects and vice versa (see the sketch after this list). This is completely ridiculous. It could be done declaratively in a few lines of code, but there's NO SUPPORT FOR THAT! If only we could map our domain objects to proxies more elegantly; something like the JavaScript method JSON.stringify(...) is MISSING from the RF toolbox.
Don't forget you are also responsible for copying the transferable properties of your domain objects to the proxies, and so on recursively.
POOR ERROR HANDLING on the server: stack traces are omitted by default on the server and you get empty, useless exceptions on the client. Even when I set a custom error handler, I was not able to get at the low-level stack traces! Terrible.
Some minor bugs in the IDE support and elsewhere. I filed two bug reports that were accepted; it didn't take an Einstein to figure out that they really were bugs.
THE DOCUMENTATION SUCKS. As I mentioned, proxies should be better explained; the term is MISLEADING. For the basic, common problems I was solving, the DOCS ARE USELESS. Another example of confusion in the docs is the connection of JPA annotations to RF: from the succinct docs it looks like they kind of play together, and yes, there is a corresponding question on StackOverflow. I recommend forgetting about any JPA 'connection' until you understand RF.
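This is roughly the hand-written conversion boilerplate referred to in the "proxies" point above (JournalProxy/Journal follow the @ProxyFor(Journal.class) example; the field names are invented):

// One direction of the manual mapping; the reverse (domain -> proxy, inside a
// RequestContext) has to be written by hand as well, recursively for nested objects.
Journal toClientDomain(JournalProxy proxy) {
    Journal journal = new Journal();
    journal.setTitle(proxy.getTitle());
    journal.setIssn(proxy.getIssn());
    journal.setPublisher(proxy.getPublisher());
    return journal;
}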
Advantages of RequestFactory
Excellent forum support.
IDE support is pretty good (but it is not an advantage compared with RPC).
Flexibility of your client and server implementation (loose coupling).
Fancy stuff connected to EntityProxies, beyond simple DTOs: caching, partial updates, very useful for mobile.
You can use ValueProxies as the simplest replacement for DTOs (but you have to do all the not-so-fancy conversions yourself).
Support for Bean Validation (JSR-303).
Considering other disadvantages of GWT in general:
Impossible to run integration tests (GWT client code + remote server) with the provided JUnit support <= all JSNI has to be mocked (e.g. localStorage); SOP is an issue.
No support for a headless-browser + remote-server testing setup <= no simple headless testing for GWT, again because of SOP.
Yes, it is possible to run Selenium integration tests (but that's not what I want).
JSNI is very powerful, but at those shiny talks they give at conferences they don't mention that writing JSNI code also has its own rules. Again, figuring out how to write a simple callback was a task worthy of a true researcher.
In summary, the transition from GWT RPC to RequestFactory is far from a WIN-WIN situation when RPC mostly fits your needs. You end up writing tons of conversions from client domain objects to proxies and vice versa. But you do gain some flexibility and robustness in your solution. And support on the forum is excellent, on Saturdays as well!
Considering all the advantages and disadvantages I just mentioned, it pays really well to think in advance about whether either of these approaches actually brings an improvement to your solution and to your development set-up without big trade-offs.
I find the idea of creating Proxy classes for all my entities quite annoying. My Hibernate/JPA POJOs are auto-generated from the database model. Why do I now need to create a second mirror of those for RPC? We have a nice "estivation" framework that takes care of "de-hibernating" the POJOs.
Also, the idea of defining service interfaces that don't quite implement the server-side service as a Java contract but do implement the methods sounds very J2EE 1.x/2.x to me.
Unlike RequestFactory, which has poor error-handling and testing capabilities (since it processes most of the stuff under the hood of GWT), RPC allows you to use a more service-oriented approach. RequestFactory implements a more modern, dependency-injection-styled approach that can be useful if you need to invoke complex polymorphic data structures. When using RPC your data structures will need to be flatter, as this allows your marshaling utilities to translate between your JSON/XML and Java models. Using RPC also allows you to implement a more robust architecture, as quoted from the GWT dev section on Google's website:
"Simple Client/Server Deployment
The first and most straightforward way to think of service definitions is to treat them as your application's entire back end. From this perspective, client-side code is your "front end" and all service code that runs on the server is "back end." If you take this approach, your service implementations would tend to be more general-purpose APIs that are not tightly coupled to one specific application. Your service definitions would likely directly access databases through JDBC or Hibernate or even files in the server's file system. For many applications, this view is appropriate, and it can be very efficient because it reduces the number of tiers.
Multi-Tier Deployment
In more complex, multi-tiered architectures, your GWT service definitions could simply be lightweight gateways that call through to back-end server environments such as J2EE servers. From this perspective, your services can be viewed as the "server half" of your application's user interface. Instead of being general-purpose, services are created for the specific needs of your user interface. Your services become the "front end" to the "back end" classes that are written by stitching together calls to a more general-purpose back-end layer of services, implemented, for example, as a cluster of J2EE servers. This kind of architecture is appropriate if you require your back-end services to run on a physically separate computer from your HTTP server."
Also note that setting up a single RequestFactory service requires creating around 6 or so Java classes, whereas RPC only requires 3. More code == more errors and complexity in my book.
RequestFactory also has a little more overhead during request processing, as it has to marshal data between the proxies and the actual Java models. This added indirection costs extra processing cycles, which can really add up in an enterprise or production environment.
I also do not believe that RequestFactory services are serializable in the way RPC services are.
All in all, after using both for some time now, I always go with RPC as it's more lightweight, easier to test and debug, and faster than using RequestFactory. Although RequestFactory might be more elegant and extensible than its RPC counterpart, the added complexity does not necessarily make it a better tool.
My opinion is that the best architecture is to use two web apps, one client and one server. The server is a simple, lightweight, generic Java webapp that uses the servlet.jar library. The client is GWT. You make RESTful requests via GWT-RPC into the server side of the client web application. The server side of the client is just a pass-through to Apache HttpClient, which uses a persistent tunnel into the request handler you have running as a single servlet in your server webapp.
The server webapp should contain your database application layer (Hibernate, Cayenne, SQL, etc.). This allows you to fully divorce the database object models from the actual client, providing a much more extensible and robust way to develop and unit test your application. Granted, it requires a bit of initial setup time, but in the end it allows you to create a dynamic request factory sitting outside of GWT. This lets you leverage the best of both worlds, not to mention being able to test and make changes to your server side without having the GWT client compiled or built.
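A rough sketch of what that pass-through could look like (the URL, class names, and JSON mapping are placeholders; assumes Apache HttpClient 4.x):

import java.io.IOException;

import org.apache.http.HttpResponse;
import org.apache.http.client.HttpClient;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.DefaultHttpClient;
import org.apache.http.util.EntityUtils;

import com.google.gwt.user.server.rpc.RemoteServiceServlet;

// GWT-RPC servlet in the client webapp; it owns no business logic and simply
// forwards the call to the separate server webapp over HTTP.
public class PersonServiceImpl extends RemoteServiceServlet implements PersonService {

    private final HttpClient httpClient = new DefaultHttpClient();

    public PersonDTO findPerson(Long id) {
        try {
            HttpGet get = new HttpGet("http://backend.example.com/api/person/" + id);
            HttpResponse response = httpClient.execute(get);
            String json = EntityUtils.toString(response.getEntity());
            return parsePerson(json);
        } catch (IOException e) {
            throw new RuntimeException("Backend call failed", e);
        }
    }

    private PersonDTO parsePerson(String json) {
        // map JSON to the DTO with whatever library the server webapp exposes
        throw new UnsupportedOperationException("JSON mapping left to your library of choice");
    }
}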
I think it's really helpful if you have heavy POJOs on the client side, for example if you use Hibernate or JPA entities.
We adopted another solution, using a Django-style persistence framework with very light entities.
The only caveat I would put in is that RequestFactory uses the binary data transport (deRPC maybe?) and not the normal GWT-RPC.
This only matters if you are doing heavy testing with SyncProxy, JMeter, Fiddler, or any similar tool that reads/evaluates the contents of the HTTP request/response: that works well with GWT-RPC, but would be more challenging with deRPC or RequestFactory.
We have a very large implementation of GWT-RPC in our project.
Actually, we have 50 service interfaces with many methods each, and we have problems with the size of the TypeSerializers generated by the compiler, which makes our JS code huge.
So we are looking into moving towards RequestFactory.
I have been reading for a couple of days, digging around the web and trying to find out what other people are doing.
The most important drawback I saw, and maybe I'm wrong, is that with RequestFactory you are no longer in control of the communication between your server domain objects and your client-side ones.
What we need is to apply the load/save pattern in a controlled way. I mean, for example, the client receives the whole object graph belonging to a specific transaction, makes its updates, and then sends the whole thing back to the server. The server is responsible for doing validation, comparing old with new values, and persisting the result. If two users from different sites get the same transaction and make updates, the resulting transaction shouldn't be a merge of the two; one of the updates should fail in my scenario.
I don't see how RequestFactory helps to support this kind of processing.
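For reference, the server-side check we'd want looks roughly like this (hypothetical names, just to illustrate that one of two concurrent updates must fail rather than be merged):

// The client sends back the whole graph together with the version it originally loaded;
// the server validates, compares versions, and rejects stale updates instead of merging.
public void save(TransactionGraph incoming) {
    TransactionGraph current = repository.load(incoming.getId());
    if (current.getVersion() != incoming.getVersion()) {
        throw new IllegalStateException("Transaction was modified by another user; please reload");
    }
    validate(incoming); // server-side validation of the new values
    incoming.setVersion(current.getVersion() + 1);
    repository.persist(incoming);
}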
Regards
Daniel
Is it fair to say that when considering a limited MIS application, say with 10-20 CRUD'able business objects, each with ~1-10 properties, it really comes down to personal preference which route to go with?
If so, then perhaps projecting how your application is going to scale could be the key to choosing between GWT RPC and RequestFactory:
My application is expected to stay with that relatively limited number of entities but will increase massively in terms of record counts: 10-20 objects * 100,000 records.
My application is going to increase significantly in the breadth of entities, but the relative numbers of each will remain low: 5,000 objects * 100 records.
My application is expected to stay with that relatively limited number of entities AND with relatively low record counts, e.g. 10-20 objects * 100 records.
In my case, I'm at the very start of trying to make this decision, further complicated by having to change the client-side UI architecture as well as making the transport choice. My previous (significantly) large-scale GWT UI used the Hmvc4Gwt library, which has been superseded by the GWT MVP facilities.