GWT Module Design

I have an app with two components:
A customer-facing one for submitting restaurant orders.
A vendor-facing one for viewing restaurant orders.
Should I have two modules with different entry points, given that there is no shared code (except for the domain model objects) between the components?

There is one reason I can think of for why you may want to do this: to reduce download size, since some screens/logic may not be used by the customer (and you want the customer pages to load as fast as possible). However, you can also achieve this with code splitting: https://developers.google.com/web-toolkit/doc/latest/DevGuideCodeSplitting
I think having two modules is fine as well. No big deal there.
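For reference, a split point in GWT is just a GWT.runAsync call placed, e.g., inside a click handler; a minimal sketch (VendorScreen is a hypothetical vendor-only view, everything else is standard GWT API):

    import com.google.gwt.core.client.GWT;
    import com.google.gwt.core.client.RunAsyncCallback;
    import com.google.gwt.user.client.Window;

    // Code reachable only through onSuccess() is compiled into a separate
    // JS fragment that is downloaded on demand, keeping the initial page light.
    GWT.runAsync(new RunAsyncCallback() {
        public void onFailure(Throwable reason) {
            Window.alert("Failed to load this part of the app: " + reason.getMessage());
        }

        public void onSuccess() {
            new VendorScreen().show(); // hypothetical vendor-only view
        }
    });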

If you are not going to deploy them on two separate nodes, I would go with one module, because then you have to maintain only one set of i18n files, fewer static files (HTML), and just one module descriptor (no duplication).
If you decide to use just one module, code splitting is worth considering to reduce the size of the JavaScript users have to download.

There is no 100% correct answer; it really depends on your project.
Separating into two compiled modules might be a good idea when the common logic shared between the two modules is quite small compared to the customer/vendor-specific logic, and most of the time you are writing code for only one of them. In that case you get faster refresh times in development mode and faster compilation of the individual modules, compared to the case when everything is merged together.
But there is a catch: at some point there might be a requirement to create a merged customer/vendor mode, because there are users who are customers and vendors at the same time.
I personally prefer the approach where different logical parts of the application get their own GWT module, and then there is a root module which links all of them together, plus a couple of dev-only modules which allow you to start just one specific part of the application. Example module structure:
Customer module - not compiled separately, depends on Common module
Vendor module - not compiled separately, depends on Common module
Common module - not compiled separately
App module - compiled separately, depends on Customer and Vendor modules
VendorStandalone module - compiled separately, depends on Vendor module, used only for development
CustomerStandalone module - compiled separately, depends on Customer module, used only for development
Such a structure gives you a fast development mode (where that is possible at all) while keeping you prepared for the case when Vendor & Customer functionality have to be provided together.
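For illustration, the linking modules above are plain module descriptors; a minimal sketch of what the root and standalone descriptors could look like (package and class names are hypothetical):

    <!-- App.gwt.xml: the production module linking everything together -->
    <module rename-to="app">
        <inherits name="com.example.customer.Customer"/>
        <inherits name="com.example.vendor.Vendor"/>
        <!-- Customer and Vendor each inherit com.example.common.Common -->
        <entry-point class="com.example.app.client.AppEntryPoint"/>
    </module>

    <!-- VendorStandalone.gwt.xml: dev-only, starts just the vendor part -->
    <module rename-to="vendor">
        <inherits name="com.example.vendor.Vendor"/>
        <entry-point class="com.example.vendor.client.VendorEntryPoint"/>
    </module>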

My choice of design (using MVP):
1) Single module
2) Same login page (the User POJO must have a type, i.e. vendor or customer).
3) In onModuleLoad, based on that type, I'll open the corresponding vendor or customer presenter.
Why?
1) Code reusability.
2) Less maintenance than with two modules.
Well, I am also waiting to see more design options.
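A minimal sketch of point 3 (User, UserService, and the presenter classes are hypothetical names; only the GWT types and the GWT-RPC idiom are real):

    import com.google.gwt.core.client.EntryPoint;
    import com.google.gwt.core.client.GWT;
    import com.google.gwt.user.client.rpc.AsyncCallback;
    import com.google.gwt.user.client.ui.RootPanel;

    public class AppEntryPoint implements EntryPoint {
        // Hypothetical GWT-RPC service that returns the logged-in user.
        private final UserServiceAsync userService = GWT.create(UserService.class);

        @Override
        public void onModuleLoad() {
            userService.getCurrentUser(new AsyncCallback<User>() {
                @Override
                public void onSuccess(User user) {
                    // Open the presenter matching the user's type.
                    if (user.getType() == User.Type.VENDOR) {
                        new VendorPresenter().go(RootPanel.get());
                    } else {
                        new CustomerPresenter().go(RootPanel.get());
                    }
                }

                @Override
                public void onFailure(Throwable caught) {
                    // Not logged in yet: fall back to the shared login page.
                    new LoginPresenter().go(RootPanel.get());
                }
            });
        }
    }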

Can you share models between features with clean architecture?

I have the following folder structure for my application
app
    core
    features
        feature1
            domain
                entities
                    entity1
                    entity2
                    entity3
                    entity4
                    entity5
                    entity6
            data
                models
                    model1
                    model2
                    model3
                    model4
                    model5
                    model6
            presentation
        feature2
            domain
                entities
                    entity1
                    entity2
                    entity3
                    entity4
                    entity5
                    entity6
            data
                models
                    model1
                    model2
                    model3
                    model4
                    model5
                    model6
            presentation
Models 1 to 6 are exactly the same for both features, and more are coming as the application scales. This is becoming hard to maintain. Does clean architecture allow for sharing models and entities across multiple features? Would that be done through the core folder?
Yes, you can, and maybe you should not do it.
Yes, because clean architecture doesn't make a statement about this. As long as the dependency rule is honored, you are compliant in terms of clean architecture.
But maybe you should not do it, because there are other principles and considerations you should take into account.
First, you should ask yourself if it would be a violation of the single responsibility principle. Sometimes things look the same but are not really duplicated code; it's more of an accident that they look the same. The question is: "Who is the change trigger?" Usually the features change for different reasons, and thus the features' use cases and entities change for different reasons. If so, you should not share the models between them.
Second, from a DDD (domain-driven design) perspective, the features can be in different bounded contexts. In this case you can have two entities with the same name, but they have different meanings in the different contexts. Therefore, the models have different properties and different methods.
If you decide to share the models, you should take a close look at them at intervals and scan the properties and methods. Are there properties and methods that are exclusively dedicated to one feature? If so, you are maybe mixing concerns.
Finally, if you are facing problems like "Uhh, we changed something here and broke something there," you should rethink whether sharing some code is a good way to go. Sharing code always couples code, because all clients depend on the shared code. It's a trade-off between duplication (maintenance costs) and dependencies. Principles like the SRP help you make an educated guess.
Does clean architecture allow for sharing models and entities across the multiple features?
I think the DRY principle should be applied whether you use clean architecture or not.
As for the answer:
I think you could abstract your shared models and entities into separate modules or packages. If it's all Dart code, I suggest choosing packages. You can place them inside the root project (monorepo) or in a separate repository; this way you can achieve modularity by abstracting the whole shared-dependency layer (abstract classes, interfaces, clients, or maybe repositories) out of the main application.
There is a good video about this topic from Google I/O '19. It is about Android, but you can get great insights that apply to mobile development in general. I suggest you give it a try.
I'm assuming you're following the ResoCoder course. I did too. I used that design in our team for a few weeks before realizing problems soon arise with it (which ResoCoder himself acknowledges if you check the GitHub issues and responses for his repo):
Sharing between features doesn't work well.
It's hard to have functionality split between multiple features.
It's not well organized once there is a lot of code.
Some more which I forget off the top of my head (he wrote an article on its failings somewhere).
Hence, for my team's app (getting quite complex now), we've adopted the following structure, which seems to work: top-level directories for each layer, plus a single /core directory.
Then, inside all of these top-level directories (excluding /core - that's specifically for things like API clients, routers, etc.), there are folders for each feature, e.g. authentication, settings, posting, etc.
Then, here's the important bit: each of these layer directories (e.g. /domain, /presentation) also gets a sub-directory called /shared, which looks just like a feature folder except that it contains the functionality of that layer (for example, domain or data) that is shared between all features.
For example, if I have an app that allows users to post content, I'm going to create the post entity (using ResoCoder terminology) inside the /posts feature. Except, UH OH, I need to have it displayed inside the /feed feature as well! This is then a perfect case for /shared inside the general /domain directory, as sketched below.
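A sketch of the resulting layout (the feature names here are made up; lib/ is the usual Flutter source root):

    lib/
        core/               (API clients, routers, etc. - not feature-split)
        domain/
            feed/
            posts/
            shared/         (e.g. the post entity used by both posts and feed)
        data/
            feed/
            posts/
            shared/
        presentation/
            feed/
            posts/
            shared/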
Let me know if this helps, or if you have any further questions!

gwt multiple modules without redundant code

I'm trying to find a way to get rid of redundant compilation and redundant JS in a client's GWT code. The problem is that they have a site with multiple EntryPoints and a massive model that gets compiled for every module. We're talking about 30 GWT modules and entry points, each compiling the entire model package of the app separately. It takes about 15 minutes on my 8-core monster just to GWT-compile this beast. And yes, compilation is parallelized and uses all cores (I can hardly move my mouse in Ubuntu :)).
Changing the architecture to a single module is not really an option, I think. Is there no way to share inherits between modules? Not all of the modules are necessarily that big, but the problem, again, is that all inherits are compiled redundantly for each module. This of course has negative effects for the end user as well, since every page basically has to load the entire model JS again and again.
According to
http://www.gwtproject.org/doc/latest/DevGuideOrganizingProjects.html#DevGuideModuleXml
the suggestion still seems to be to just make one great monolithic module. Isn't there any better way?
Any tips highly appreciated!
As is said in the GWT documentation you refer to, GWT's mechanism for avoiding redundant code is to merge all modules into a single super GWT module which inherits all the sub-modules you have in your application.
I suppose you are producing a module per page or feature of your website, so using a single module implies that you will need a mechanism to run the appropriate application code per page, based on the URL or something similar.
You can take advantage of code splitting, so your modules become RunAsyncCallbacks instead of EntryPoints, and each module will be compiled into one JS fragment which will be loaded asynchronously.
Note that you will include the same JavaScript bootstrap on all pages, and it will load the other fragments depending on the page.
The advantages of this solution are many:
You only have one compilation process. It could take a long time, but it will certainly take much less than compiling all modules individually, because redundant code is compiled only once.
You can maintain different .gwt.xml files: one per module to continue developing the individual modules with their own EntryPoints, and another without an EntryPoint to be inherited by your super-module (see the sketch after this list).
Once compiled, the first fragment loaded (shared by all apps) would be very small and cached just once, so all apps would load very fast.
Much of the code shared by the modules (gwt-core, JRE emulation, etc.) can go into that first fragment, shared by all the modules, decreasing the final download size of each app.
This is an out-of-the-box solution: the GWT compiler does a good job splitting the code, merging shared code into intermediate fragments, and adding the methods to load fragments asynchronously on demand.
The Java ecosystem facilitates modular apps (dependencies, Maven, etc.).
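A minimal sketch of the two-descriptor idea from the list above (module and class names are hypothetical; the elements are standard GWT module XML):

    <!-- ModuleA.gwt.xml: no entry point, inherited by the super-module -->
    <module>
        <source path="client"/>
    </module>

    <!-- ModuleAStandalone.gwt.xml: adds an entry point for individual development -->
    <module>
        <inherits name="com.example.modulea.ModuleA"/>
        <entry-point class="com.example.modulea.client.ModuleAEntryPoint"/>
    </module>

    <!-- Super.gwt.xml: the one module actually compiled for production -->
    <module rename-to="super">
        <inherits name="com.example.modulea.ModuleA"/>
        <inherits name="com.example.moduleb.ModuleB"/>
        <entry-point class="com.example.client.SuperEntryPoint"/>
    </module>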
Otherwise, if you still want individual modules, the way to compile all of them is what you are doing now: running the GWT compiler once per module (and permutation). You can improve your compilation time, though, by having a continuous integration cluster like Jenkins run the jobs in parallel, or by using more brute force (memory, CPU, ...).
As you probably know, GWT compiles each module into one big JavaScript file and optimizes everything based on all available information about everything in the whole module. This is why you need to compile everything for each module.
One solution might be to create one big module but use code splitting along the lines of the existing module structure. Then you don't get one very large monolithic JavaScript file; instead, the 'modules' are loaded as needed.
Did you try compiling with fewer local workers instead of using all available cores? I've had the best results with -localWorkers set to 4 (even on a 6-core machine).
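For reference, the worker count is the -localWorkers flag of the GWT compiler; a hypothetical invocation (classpath abbreviated, module name made up):

    java -cp gwt-dev.jar:gwt-user.jar:src com.google.gwt.dev.Compiler \
        -localWorkers 4 com.example.App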

Wicket Application Structure Best Practice

I am working on an application that has some Wicket pages, divided into several Applications. We are expanding the Wicket development to replace other legacy content. Right now there is no clear path as to whether we should write a new Wicket Application for each workflow, or have one big Application with many URL mappings. I did not find any information about this either.
So far, we see the following issues:
Many Wicket Applications pattern:
Each Application (workflow) can be easily mounted without much hassle.
Even if it's not more time consuming, you end up writing more Java classes (for each Application you need at least some basic structure).
Each Application's default URL is accessed via its home page, so no further config is necessary.
One big Application pattern:
Each workflow needs a Page, which has to be mapped in the Application class. As far as I've seen, there is no way to configure this in XML files, but it should be possible to develop some schema that allows structuring this in an XML file. Disadvantage: more time consuming the first time.
Further additions should be somewhat easier than with the many-Applications pattern, but not by enough to make a real difference, considering that workflow development is always far bigger than the initial config.
Each workflow's default URL can be accessed via the URL mapping and can be changed easily; this seems a little easier than with the Application approach, but doesn't make a big difference either.
Now, what I'm looking for:
Opinions based on experience, perhaps arguments for deciding one way or the other.
Is there any documentation from Apache or some other source on this? If so, a reference would be greatly appreciated.
As I understand it, you would still deploy all of your Wicket Applications within one single web archive.
Doing that, in my opinion, you lose the only real advantage of separating your code into different Wicket Applications. If you separate your code into multiple Wicket Application classes:
you have to configure each Wicket Application the same way and not forget a single one (include it in the web.xml, apply the same settings in the init() method, ...)
you are writing more boilerplate code, as you already said yourself
The configuration and code would be more complex than with the "single application" approach. With a single application,
you only have to mount the start page of each workflow in your single Application class, which is one line of code (see the sketch below) compared to a new class and some lines of web.xml config with the multiple-applications approach.
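A minimal sketch of that single Application class (the page classes and paths are hypothetical; mountPage is the standard Wicket 1.5+ API):

    import org.apache.wicket.Page;
    import org.apache.wicket.protocol.http.WebApplication;

    public class WorkflowApplication extends WebApplication {
        @Override
        public Class<? extends Page> getHomePage() {
            return HomePage.class; // hypothetical landing page
        }

        @Override
        protected void init() {
            super.init();
            // One line per workflow: mount its start page under its own URL.
            mountPage("/orders", OrdersWorkflowPage.class);
            mountPage("/billing", BillingWorkflowPage.class);
        }
    }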
So, if you don't want to deploy your workflows separately, I'd go with a single application. It makes things so much easier; especially once you have accumulated more than a couple of workflows, the single-application approach will probably be much easier to maintain.
How much shared code are you likely to have?
Are there different performance/load tolerance/availability requirements for the different workflows?
These are the questions I use in general to decide whether two things should go in one application or not, and that's pretty much independent from Wicket.
Obviously, a lot of shared code points towards a single application. Of course you can still use separate applications that all depend on a set of shared modules, but in practice you'll spend a lot of time trying to keep the modules in sync.
Similarly, wildly different availability requirements might steer you in the direction of separate applications as you'd probably want to deploy them separately.
The most difficult scenario is if you have a lot of shared code AND you still want to deploy separately; in that case, a multi-tiered approach (multiple frontends connecting to a common backend) might be worth considering.

One big executable or many small DLL's?

Over the years my application has grown from 1 MB to 25 MB, and I expect it to grow further to 40 or 50 MB. I don't use DLLs; I put everything in this one big executable.
Having one big executable has certain advantages:
Installing my application at the customer is really: copy and run.
Upgrades can be easily zipped and sent to the customer
There is no risk of conflicting DLLs (where the customer has version X of the EXE but version Y of the DLL)
The big disadvantage of the big EXE is that linking times seem to grow exponentially.
An additional problem is that part of the code (say about 40%) is shared with another application. Again, the advantages are that:
There is no risk on having a mix of incorrect DLL versions
Every developer can make changes to the common code, which speeds up development.
But again, this has a serious impact on compile times (everyone recompiles the common code on his own PC) and on link times.
The question Grouping DLL's for use in Executable mentions the possibility of merging DLLs into one executable, but it looks like this still requires you to bind all functions manually in your application (using LoadLibrary, GetProcAddress, ...).
What is your opinion on executable sizes, the use of DLL's and the best 'balance' between easy deployment and easy/fast development?
A single executable has a huge positive impact on maintainability. It is easier to debug, deploy (size issues aside) and diagnose in the field. As you point out, it completely sidesteps DLL hell.
The most straightforward solution to your problem is to have two compilation modes, one that builds a single exe for production and one that builds lots of little DLLs for development.
The tenet is: reduce the number of your .NET assemblies to the strict minimum. Having a single assembly is the ideal; this is, for example, the case for Reflector and NHibernate, which both ship as very few assemblies. My company published two free white papers on the topic:
Partitioning code base through .NET assemblies and Visual Studio projects (8 pages)
Defining .NET Components with Namespaces (7 pages)
The arguments developed in these white papers come with valid and invalid reasons to create an assembly, plus a case study on the code base of the tool NDepend.
The problem is that Microsoft fostered (and is still fostering) the idea that assemblies are components, whereas assemblies are just physical artifacts for packaging code. The notion of a component is a logical one, and typically an assembly should contain several components. It is a good idea to partition components with namespaces, although it is not always practicable (especially in the case of a framework with a public API, where namespaces are used to partition the API and not necessarily the components).
One big executable is definitely beneficial: you get whole-program optimization and less overhead, and maintenance is much simpler.
As for the link time: you can have both the "many DLLs" and the "one big executable" setups at the same time. For each DLL, add a project configuration that builds a static library; when you debug, you compile the "DLL" configurations of your projects, and when you need to ship, you compile the "static library" configurations. Sometimes behavior will differ between the configurations, but that will have to be addressed per incident.
An easier way to maintain large programs is to compose them from smaller, manageable parts. A program can be composed of a shell plus modules that add features to the shell. Large programs like Visual Studio and Outlook use the same concept. Try this approach to build more maintainable and robust programs.

Common Libraries at a Company

I've noticed that pretty much every company I've worked at has a common library that is shared across a number of projects. More often than not this has been a single companyx-commons project that ends up as a dumping ground for common code, including:
Command Line Parsers
File Utilities
Framework Helpers
etc...
Some of these are well thought out and some duplicate functionality found in Apache commons-lang, commons-io etc..
What are the things you have in your common library and more importantly how do you structure the common libraries to make them easy to improve and incorporate across other projects?
In my experience, the single biggest factor in the success of a common library is user buy-in (the users in this case being other developers), and the culture of your workplace/team(s) will be a big factor in that.
Separate libraries (projects/assemblies if you're in .NET) for different application tiers are essential (e.g. there's obviously no point in putting UI and data-access code together).
Keep things as simple as possible; what you don't put in a common library is often at least as important as what you do. Users of the library won't want to have to think, so usage needs to be super easy.
The golden rule we stuck to was keeping individual functions focused on a single task: do one thing and do it well (or very, very well). Don't try to provide something that takes every possibility into account; the more reusable you think you're making it, the less likely it is to be used. Code Complete (the book) has some excellent content on common libraries.
A good approach to setting up and improving a library is to hold regular code reviews and retrospectives: find good candidates you've already come up with and consider refactoring them into a library for future projects. A good candidate is, for example, something that more than one developer has had to write on more than one project.
Set up some sort of simple and clear governance of the libraries: someone who can 'own' a specific library and ensure its overall quality (such as a senior dev or team lead).
I have so far written most of the common libraries we use at our office.
We have certain button classes that are just slightly more useful to us than the standard buttons
A database management class that does some internal caching and can connect to ODBC, OLEDB, SQL, and Access databases without even the flip of a parameter
Some grid and list controls that are multithreaded, so we can add large amounts of data to them without the program slowing down and without having to write all the multithreading code every time there is a performance issue with a list box/combo box.
These classes make it easier for all of us to work on each other's code and know how exactly they work since we all use the exact same interfaces throughout our products.
As far as organization goes, all of the DLL's are stored along with their source code on a shared development drive in the office that we all have access to. (We're a pretty small shop)
We split our libraries by function.
Common.Ui.dll has base classes for UI elements.
Common.Data.dll is sort of a wrapper around the Enterprise Library data access classes.
Common.Business is a dumping ground for other common classes that don't fit into either of those.
We create other specialized dlls as needs arise.