Structure for a big GWT project

Context
A big project with either a multi-Maven-module or a single-Maven-module structure.
Question
In the end, did you use a multi-Maven-module or a single-Maven-module structure?
Details
If you've worked on a big project with a long development duration and lots of functionality (i.e. not a trivial project), did you choose to split the project into multiple Maven modules, or did you go with the single-module approach?
For example, a multi-module structure crashes when running Maven commands like mvn gwt:eclipse (see http://bit.ly/gs4Rmo). I guess this would have worked fine with a single-module GWT project, and there could be other commands like this one that have issues with a multi-module structure.
However, the multi-module structure could bring the benefit of faster development: if you separate the "server" module from the "client" module, you can compile the business logic (server) separately and package it into the resulting web archive. Compiling the GWT code takes about 20 seconds, so if you only modify the server module, this could save you a lot of time in the long run.
What other cases like the ones above did you encounter when working with a multi-module or single-module project?
Thank you!

A few notes:
In development mode you don't have to compile the code "by hand": the dev server compiles the code automatically and reloads it. Just keep the dev server running, change some code, and then reload the page in the browser. (This only holds if you change existing classes and don't change the project structure.)
Multiple Maven modules have nothing to do with multiple GWT modules.
You would want to have multiple GWT modules (= multiple entry points) if you have code that executes in different environments: for example, you have web and mobile sites with quite different code bases. Then you would split the project into three modules: web, mobile, and common, and reference common from both web and mobile.
Another case for multiple GWT modules would be if you, for some reason, want to have multiple host (entry) HTML pages. There are rare cases when you'd want this, for example when you need to do redirects while integrating OpenID. The other case would be that you already have existing web pages and are only adding GWT for some extra functionality.
Don't split the GWT project into multiple modules just to reduce download size: use Code Splitting instead (see the sketch after these notes).
If your main gripe is long GWT compile times, then read: How do I speed up the GWT compiler?
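For reference, Code Splitting boils down to wrapping a rarely used part of the UI in a GWT.runAsync call: everything reachable only from inside the callback is compiled into a separate JavaScript fragment that the browser downloads on demand. A minimal sketch (ReportsLoader and the placeholder body are hypothetical):

import com.google.gwt.core.client.GWT;
import com.google.gwt.core.client.RunAsyncCallback;
import com.google.gwt.user.client.Window;

public class ReportsLoader {

    // Everything reachable only from inside onSuccess() ends up in a separate
    // JavaScript fragment that the browser fetches the first time it is needed.
    public void openReports() {
        GWT.runAsync(new RunAsyncCallback() {
            public void onSuccess() {
                // Construct the heavyweight "reports" view here; its code is
                // excluded from the initial download.
                Window.alert("Reports code loaded");
            }

            public void onFailure(Throwable reason) {
                Window.alert("Could not load the reports code: " + reason);
            }
        });
    }
}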

We started with multiple modules and eventually merged into a single module. The main reason for this was that maintaining modules is a huge overhead. Each module had a pom for the UI, one for the RPC layer, and one for the backend services, so with 30 modules we had 90 Maven projects to manage.
Merging the modules turned 90 Maven projects into 3 with one parent-level pom.
This considerably lowers maintenance overhead and improves build times. GWT's compiler is notoriously slow, so having a single compile that parses the source files once instead of multiple times makes things much faster.
On the flip side, a single module means the compiler slurps everything up into memory at once. This could make split points intolerably slow to find if you insert any in your code. Therefore, if you intend to split, it may be worth considering where you're putting those points and arranging your project accordingly.

Related

Assemble transitive closure of a main file in scala sbt project

I'm working on a Scala sbt project, and I am at a point where I want to assemble the whole thing to share a .jar file with others so that they can use it on their side. For my local testing, I'm doing so using the sbt-assembly plugin, which works nicely.
When sharing, though, I would prefer to only share the parts that are important for the other party (the project has huge components that are irrelevant at this point, and I'd prefer not to share them for various reasons). Concretely, they will be executing one particular main file, so it would be enough to pack everything that this file depends on.
Is there a way to accomplish this? I'd also be interested in doing this at the code level (i.e., creating a copy of the project that only contains the dependencies of that main file), but doing it while assembling, or even modifying the jar file after assembly, would also be okay. I did not find the tools to achieve any of these.
As I said, I'm dealing with a Scala sbt project, and I'm working with IntelliJ IDEA; I'd also be happy with an IDE tool that does the job.

Cross-cutting "logical" scopes in multi-project sbt builds

Let's say I have a build with a handful of subprojects that are related but have some logical classification above the sorts of things that the build is normally aware of.
For example, I might have a collection of subprojects
Foo
Bar
Baz
Quux
Woof
Oink
I know that Woof and Bar are part of what I'd call the server component. Baz is a common dependency of both, and Foo, Quux, and Oink are on the client side. The whole build is an aggregate of the various subprojects, but sometimes I'd just like to "focus" on the server side, or the client side, or whatever.
On one hand, I've considered nested aggregated projects, but I'm not sure how well that works with sbt's other functionality.
On the other, I was thinking of making custom scopes that cut across the subprojects. I'd like to be able to configure related projects with similar keys, so it is handy to be able to say that I'd like to update a certain key for the group of related projects.
What's a good approach for this sort of thing? Am I thinking about it wrong?
I dealt with a similar situation.
My solution was to break things out into sub-projects, all in sbt, and use dependsOn to link the sub-projects. The nice unintended consequence here is that sub-projects with dependsOn mimic Maven dependencies perfectly. If you publish one of these, it will magically have Maven dependencies on the others. Tools like Eclipse and IntelliJ will generally also see these dependency chains rather easily.
This gives you the option of keeping things in the same sbt project or not, with almost no difference in the end. They all just tie together at the 'Maven-style' dependency. Using a local Maven repo here or a remote one works great. At this point, you're free to decide whether or not to break things into different sbt projects based on code management rather than dependency management.
EDIT: So in the end, if you want to move a module out of the multi-project sbt build, you just need to add regular Maven dependencies to whatever code base requires it.
My opinion: skip the sub-projects. Splitting a project is nice when you publish the parts as independent projects. But multiple projects also cause trouble: it is quite hard to get good and stable interfaces between multiple projects.
I would split a project into multiple projects as late as possible. Dependency management (especially cyclical dependencies), refactoring (especially moving classes), and tool support in general (especially IDE integration) are much better with only one big project.
Things like measuring code coverage can also become surprisingly difficult, when the coverage spans multiple projects.
Custom scopes and six sub-projects: in my opinion this is totally over-engineered. Stay with one project, get the whole thing running, and split it later (if necessary).

How best to structure and build Clojure apps with plugins?

I think (see below) that I would like to structure a Clojure project as multiple modules with ordered dependencies - just like Maven lets me do with multi-module projects.
But I can't see how to do this with Leiningen - all I can see is the checkouts fix described in the FAQ, which doesn't seem to be as powerful.
Will lein do this? Should I be using Gradle instead? Or is this kind of thing not needed?
Some more context: I am wondering how to architect a modular application that supports plugins (which I imagine means jars dumped on the classpath), and I am wondering to what extent I can structure that as a core + plugins (I am thinking I should be able to do something with Clojure's dynamic code loading and not have to go with Java/OSGi). So I guess the driving motivation for a single project comes from wanting some way of packaging everything (the core + default plugins) as a single blob that is easy for the end user, but which can also be divided up (and which is built and tested in fragments, testing the logical independence of each module). More general advice about this is welcome.
Update
A possible solution that wasn't mentioned below is using a Maven plugin - I assume that supports everything Maven does, but compiles Clojure, so it will work with nested modules, etc.
First, it does not seem like Leiningen supports a module hierarchy like Maven does. The checkouts are the next closest thing it has. That should be sufficient, though, to develop a modular application in Clojure.
For the project structure, I would have an API project, a "core" project, the plugins themselves, and a separate packaging project. The core and the plugins should only depend on the API. Which build tool you use to create the packaging project is up to you. Gradle would probably be more effective at handling the packaging; however, having the "checkout" functionality Leiningen offers could make development of the system as a whole easier.
I would take a look at the code for Leiningen and Noir to figure out how to effectively handle this.
For dynamically loading the plugins, I would start by looking at how Noir handles it in two of their files:
server.clj has namespace loading for all files under a particular namespace. Under the hood it uses tools.namespace, but you can easily see how it's used to require every namespace under a particular base. This is how Leiningen handles custom tasks as well - the base definition for the task should be in the leiningen.$task namespace.
core.clj has what I would use for plugin registration. In summary, use a map under an atom and add plugins to that map. I would advise wrapping the registration with a macro to keep your code cleaner.
What I listed above should be sufficient if you don't need to handle adding plugins at run time. If you don't have every plugin on the classpath during start-up, I would recommend using pomegranate to add entries to the classpath. You can see an example in classpath.clj.

How to filter resources during build in an Eclipse project?

I have an application that uses several configuration files (let's just consider appli.properties here).
These files contain several values that depend on the environment. We can find some information such as:
server.port=${envi.server.port}
On the other side, I have a set of properties files, one per environment (dev.properties, homolo.properties, etc.).
They contain the values for some of the properties in the configuration files. Here we find this kind of property:
envi.server.port=4242
My build is handled by Maven2. Everything is working fine.
However, I now need to import my project into Eclipse.
My main concern is about the filtering of the configuration files. Indeed, if I do not modify anything in my Eclipse parameters for the project (after a mvn eclipse:eclipse command), then all my configuration files will keep the property keys (i.e. ${envi.server.port}) instead of their values. And with such configuration files, my application will not run inside Eclipse...
So I tried two solutions:
A full-Maven solution, using the m2eclipse plugin. I add a Maven builder to the project configuration, and then, each time a build is made, the filtering is done on the files.
Ant (which is only used inside Eclipse). I defined a task that simulates Maven 2's filtering of files in Ant. This task is dedicated only to the filtering, with no compilation.
The common problem with these two solutions is that the filtering runs on every operation (essentially on saves after editing Java classes), and thus takes time. The second solution is, however, quicker (3 seconds) than the first one (more than 10 seconds).
What do you think of my approach?
How would you do that, in a better way?
If the resources are not changed that often, you can set the Maven build to only run after a clean build; then it won't interfere so much. This doesn't do anything to speed up the build itself, however.
As far as making the filtering quicker goes, I don't know of any other simple mechanism that will help: as you've said, you need either Ant or Maven to run the filtering, and they both take some time to set up before building, resulting in the slowdown.
If this is causing you a lot of problems, you can write a custom incremental Eclipse builder that performs the filtering on the deltas (a rough sketch follows). This should be considerably quicker, but is obviously a lot more effort to write.
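As a rough sketch (not a definitive implementation), such a builder could extend IncrementalProjectBuilder from org.eclipse.core.resources and only touch the files that changed; PropertyFilterBuilder and the filterAll/filterFile helpers are hypothetical names, and the actual ${...} substitution is left out:

import java.util.Map;

import org.eclipse.core.resources.IFile;
import org.eclipse.core.resources.IProject;
import org.eclipse.core.resources.IResourceDelta;
import org.eclipse.core.resources.IResourceDeltaVisitor;
import org.eclipse.core.resources.IncrementalProjectBuilder;
import org.eclipse.core.runtime.CoreException;
import org.eclipse.core.runtime.IProgressMonitor;

public class PropertyFilterBuilder extends IncrementalProjectBuilder {

    @SuppressWarnings("rawtypes")
    protected IProject[] build(int kind, Map args, final IProgressMonitor monitor)
            throws CoreException {
        IResourceDelta delta = getDelta(getProject());
        if (kind == FULL_BUILD || delta == null) {
            filterAll(monitor); // no delta available: filter everything
        } else {
            // Only re-filter the configuration files that actually changed.
            delta.accept(new IResourceDeltaVisitor() {
                public boolean visit(IResourceDelta d) throws CoreException {
                    if (d.getResource() instanceof IFile
                            && d.getResource().getName().endsWith(".properties")) {
                        filterFile((IFile) d.getResource(), monitor);
                    }
                    return true; // keep visiting children
                }
            });
        }
        return null;
    }

    private void filterAll(IProgressMonitor monitor) throws CoreException {
        // Walk the project and call filterFile() for each configuration file.
    }

    private void filterFile(IFile file, IProgressMonitor monitor) {
        // Hypothetical helper: copy the file to the output folder, replacing
        // ${...} placeholders with values from the environment-specific
        // properties file (dev.properties, homolo.properties, ...).
    }
}

The builder would still need to be contributed through the org.eclipse.core.resources.builders extension point and added to the project's build spec.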

Large apps in GWT: one module, or several?

In order to provide nice URLs between parts of our app we split everything up into several modules which are compiled independently. For example, there is a "manager" portion and an "editor" portion. The editor launches in a new window. By doing this we can link to the editor directly:
/com.example.EditorApp?id=1
The EditorApp module just gets the value for id and loads up the document.
The problem with this is ALL of the code which is common between the two modules is duplicated in the output. This includes any static content (graphics), stylesheets, etc.
And another problem is that the compile time to generate the JavaScript is nearly doubled, because we have some complex code shared between both modules which has to be processed twice.
Has anyone dealt with this? I'm considering scrapping the separate modules and merging it all back into one compile target. The only drawback is the URLs between our "apps" become something like:
/com.example.MainApp?mode=editor&id=1
Every window loads the main module, checks the value of the mode parameter, and calls the appropriate module init code.
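For illustration, the dispatch in the merged module's entry point could look roughly like this (MainApp, initEditor, and initManager are hypothetical names):

import com.google.gwt.core.client.EntryPoint;
import com.google.gwt.user.client.Window;

public class MainApp implements EntryPoint {

    public void onModuleLoad() {
        // Read ?mode=...&id=... from the host page URL; default to the manager.
        String mode = Window.Location.getParameter("mode");
        if ("editor".equals(mode)) {
            initEditor(Window.Location.getParameter("id"));
        } else {
            initManager();
        }
    }

    private void initEditor(String id) {
        // Hypothetical: build the editor UI and load the document with this id.
    }

    private void initManager() {
        // Hypothetical: build the manager UI.
    }
}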
I have built a few very large applications in GWT, and I find it best to split things up into modules and move the common code into its own area, like you've done. The reason in our case was simple: we had some parts of our application that were very different from the rest, so it made sense from a compile-size point of view. Our application compiled down to 300KB for the main section, and about 25-40KB for the other sections. Had we just put them all in one, the user would have been left with a 600KB download, which for us was not acceptable.
It also makes more sense from a design and re-usability point of view to separate things out as much as possible, as we have since re-used a lot of the modules that we built on this project.
Compile time is not something you should generally worry about, because you can actually make it faster if you have separate modules. We use Ant to build our project, and we set it to only compile the GWT that has changed and, during development, to only build for one browser. Typical compile times on our project are 20 seconds, and we have a lot of code. You can see an example of this here.
One other minor thing: I assume you know that you don't have to use the default GWT paths that it generates? So instead of com.MyPackage.Package you could just put it into a folder with a nice name like 'ui' or something. Once compiled, GWT doesn't care where you put it and is not sensitive to path changes, because it all runs from the same directory.
From my experience building GWT apps, there are a few things to consider when deciding whether you want multiple modules (with or without entry points) or all in one: download time (JavaScript bundle size), compile time, navigation/URLs, and maintainability/re-usability.
Per download time: code splitting pretty much obviates the need to break into different modules for performance reasons.
Per compile time: even big apps are pretty quick to compile, but breaking things up might help for huge apps.
Per navigation/URLs: it can be a pain to navigate from one module to another (assuming different EntryPoints), since each module has its own client-side state... and navigation isn't seamless across modules.
Per maintainability/re-usability: it can be helpful from an organization/structure perspective to split into separate modules (even if there's only one EntryPoint).
I wrote a blog post about using GWT Modules, in case it helps.
OK, I really get the sense there is no "right" answer, because projects vary so much. It's very much dependent on the nature of the application.
Our main build is composed of a number of in-house modules and 3rd-party modules. They are all managed in separate projects. That makes sense, since they are used in different places.
But having more than one module in a single project designed to operate as one complete application seems to have overcomplicated things. The original reason for the two modules was to keep the URL simple when opening different screens in a new window. Even though we had multiple build targets, they all used a very large common subset of code (including a custom XML/POJO marshalling library).
About size... for us, one module was 280KB and the other was just over 300KB.
I just finished merging everything back into one single module. The new combined module is around 380KB, so it's actually a bit less to download, since almost everyone would use both screens.
Also remember there is perfect caching, so that 380KB should only ever be downloaded once, unless the app is changed.