The project I'm currently involved in requires the development of about 60 Portlets. I'm aware that a portlet WAR can include any number of them, so I'm not sure how many Portlets I should include in each WAR.
One extreme would be having only one Portlet per WAR. With that approach I'd gain Portlet independence, but the number of deployment artifacts would be very difficult to manage. At the other extreme, if I have only one or two WARs for the 60 Portlets, there would be just one or two deployment artifacts, but even a minor change to one Portlet would imply the re-deployment of many of them.
Is there any best practice or recommendation for this?
For efficiency you want to bundle many Portlets per WAR. What really matters is the number of EARs, but I'm assuming you are building a separate EAR for each WAR. So the original statement still stands.
As you point out, bundling all 60 into one WAR is extreme, and will result in other problems with your deployment and re-testing requirements. I'd recommend packaging your Portlets into groups of logically similar functions. For instance, Portlets that collaborate to provide a single function should be packaged together, as they tend to change and need to be re-deployed together anyway. Certainly 10 Portlets per WAR is manageable, and having your app split into 6 EARs is also quite a manageable number. Remember that each EAR has a certain overhead, so just thinking about your development lifecycle, once you have too many EARs the restart times on your dev servers start getting ridiculous; a big EAR doesn't take much longer to start than a small one. The same holds true for deployment times.
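To make the "many Portlets per WAR" point concrete, here's a rough sketch of a portlet.xml declaring two related Portlets in the same WAR; the names and classes are made up for the example:
<?xml version="1.0" encoding="UTF-8"?>
<portlet-app xmlns="http://java.sun.com/xml/ns/portlet/portlet-app_2_0.xsd"
             version="2.0">
    <!-- Two Portlets that collaborate on the same function, packaged in one WAR -->
    <portlet>
        <portlet-name>AccountSummaryPortlet</portlet-name>
        <portlet-class>com.example.accounts.AccountSummaryPortlet</portlet-class>
        <supports>
            <mime-type>text/html</mime-type>
            <portlet-mode>view</portlet-mode>
        </supports>
        <portlet-info>
            <title>Account Summary</title>
        </portlet-info>
    </portlet>
    <portlet>
        <portlet-name>AccountHistoryPortlet</portlet-name>
        <portlet-class>com.example.accounts.AccountHistoryPortlet</portlet-class>
        <supports>
            <mime-type>text/html</mime-type>
            <portlet-mode>view</portlet-mode>
        </supports>
        <portlet-info>
            <title>Account History</title>
        </portlet-info>
    </portlet>
</portlet-app>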
It's all about striking a happy medium between build/deployment flexibility (many EARs) and runtime memory use, deployment times, and restart times (fewer EARs).
We are about to embark on a large programme of work to migrate a small number of hugely monolithic 3-tier frameworks into an SOA/microservice architecture. However, there is one thing that I haven't really managed to nail down: version management (note the use of the word management, not control).
One of the core principles of this programme is that each component is absolutely independent, and is therefore designed, developed, built, versioned, deployed, operated, monitored and deprecated independently of all other Consumers and Services. This is the right principle, but it means that the future holds 15+ clients and 50+ services. In operation we need to know all the dependencies quickly and very reliably. In a world where a service may have 3 or 4 versions of its API in production and a consumer may use 20+ services, the dependency tree very quickly becomes large and complex.
So my question is how do you guys manage this? How do you maintain your "enterprise version matrix" (if that is even the correct terminology)?
I'm testing out Azure Service Fabric and started adding a lot of actors and services to the same project - is this okay to do, or will I lose any of the Service Fabric features such as failover, scalability, etc.?
My preference here is clearly 1 actor / 1 service = 1 project. The big win with a platform like this is that it allows you to write proper microservice-oriented applications at close to no cost, at least compared to the implementation overhead you'd face when building something similar on other, comparable platforms.
I think it defeats the point of an architecture like this to build services or actors that span multiple concerns. It makes sense (to me at least) to use these imaginary constraints to force you to keep the area of responsibility of each service as small as possible - and rather depend on/call other services in order to provide functionality outside the responsibility of the project you are currently implementing.
With regard to scaling, it seems you'll still be able to scale your services/actors independently even though they are part of the same project - at least that's what the application manifest format implies. What you will not be able to do, though, is update services/actors within your project independently. As an example: if your project has two different actors and you make a change to one of them, you will still need to deploy an update to both, since they are part of the same code package and share a version number.
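For what it's worth, here's a rough sketch of the kind of application manifest I'm referring to: two services imported into the same Service Fabric application, each with its own default service definition and instance count, so they can be scaled independently. All the type names, package names and versions here are placeholders made up for the example.
<ApplicationManifest ApplicationTypeName="MyAppType"
                     ApplicationTypeVersion="1.0.0"
                     xmlns="http://schemas.microsoft.com/2011/01/fabric">
  <!-- Both services ship (and are versioned) together as one application... -->
  <ServiceManifestImport>
    <ServiceManifestRef ServiceManifestName="OrderServicePkg" ServiceManifestVersion="1.0.0" />
  </ServiceManifestImport>
  <ServiceManifestImport>
    <ServiceManifestRef ServiceManifestName="PricingServicePkg" ServiceManifestVersion="1.0.0" />
  </ServiceManifestImport>
  <DefaultServices>
    <!-- ...but each default service gets its own InstanceCount, so scaling stays independent. -->
    <Service Name="OrderService">
      <StatelessService ServiceTypeName="OrderServiceType" InstanceCount="5">
        <SingletonPartition />
      </StatelessService>
    </Service>
    <Service Name="PricingService">
      <StatelessService ServiceTypeName="PricingServiceType" InstanceCount="1">
        <SingletonPartition />
      </StatelessService>
    </Service>
  </DefaultServices>
</ApplicationManifest>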
I have a Play 2 Scala application, and my customer wants to add a blog solution in a subfolder of this application. I came across a Java blog solution called Apache Roller.
The issue is that I am not able to find it as a JAR on a Maven repo to download with sbt, as we do with other libs, because it comes as a WAR. Is there any way to use a WAR INSIDE a Play 2 app? If yes, where do I put it?
I'm on the Apache Roller team and thanks for considering our product. Roller is meant to be a stand-alone web application, just configure your database, drop the WAR into Tomcat and you're set. If desired, Roller offers an LDAP authentication option so users won't need a second set of passwords. [Incidentally, while not yet released, our 5.1-SNAPSHOT is already considerably ahead of our current production 5.0.4 and is expected to be released "soon", so you may wish to consider that option.] Trying to merge WARs will take an exceedingly long time and probably result in a buggy solution, so I would first confirm that your customers will not approve a separate application before trying to integrate blogging software. The Roller User's Mailing List is available if you have any questions.
There is another Java solution, JBake; as it's not a standalone blog server like Roller, you may find it easier to integrate into your web application. (I have not worked with the product, so I am unsure.) You may end up needing to create the blog-entry edit screens yourself, however, prior to feeding the results to JBake.
A common goal of software design seems to be to structure an application so that changes impact a minimal number of components (i.e. compiled assemblies), which can then be published individually. The Dependency Inversion Principle is applied so that stable components don't depend on volatile ones, and classes are packaged in a way that, again, limits a deployment to a minimal set of components.
So my question is, what is wrong with publishing an entire application in its entirety for each change? Especially if a publish is a completely automated 1-click solution. Surely deploying components individually means I then have to version and manage them individually as mini projects with their own dependencies? A large chunk of 'good design' seems to hang on this one principle that publishing each component separately is a good thing, but why?
What is wrong with publishing an entire application in its entirety for each change?
I don't think there is anything wrong if you can manage it. It seems this is the approach that Facebook takes:
Because Facebook's entire code base is compiled down to a single binary executable, the company's deployment process is quite different from what you'd normally expect in a PHP environment. Rossi told me that the binary, which represents the entire Facebook application, is approximately 1.5GB in size. When Facebook updates its code and generates a new build, the new binary has to be pushed to all of the company's servers.
Partial deployments can sometimes be useful to maintain a degree of autonomy between component teams but they can result in unexpected behaviours if the particular version-combination of components hasn't been tested.
However, the motivation for modular design is more around ease of change (evolvability, maintainability, low coupling, high cohesion) than the ability to do partial deployments.
I'm sure that this is a common issue that others have solved before, so I'm calling on the collective wisdom of other developers/project managers out there for some help.
I've got a number of projects:
Application
WebApp
ServerApp
Dev Utils
ORM
All of the apps/utils depend on the ORM: When the ORM changes, it needs to be compiled and all of the apps need to be re-compiled against it and then deployed. Right now my VCS structure is kind of a jumble:
AppName
Trunk
Application
WebApp
ServerApp
Dev Utils (around 4 folders right now, but growing)
ORM
Releases
ProjectName (be it Application or WebApp) version.number
Branches
ExperimentName_DevName
Ideally, I'd like to have a root folder per application (Application/WebApp/ORM etc.), each with its own Trunk/Branches/Releases etc. to logically and physically separate them. My reasoning is that because lots of work gets done on the Application, and it gets released far more often, each release branch has identical copies of the same utils etc. Also checking out the Trunk to work on it always means that all of the other projects come along for the ride.
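To illustrate, the kind of per-project layout I have in mind would look roughly like this (just using the project names from the list above):
Application
    Trunk
    Branches
    Releases
WebApp
    Trunk
    Branches
    Releases
ORM
    Trunk
    Branches
    Releases
(and likewise for ServerApp and the Dev Utils)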
However, separating them would mean ripping certain projects out of solutions and would make modifying several projects simultaneously a pain - jumping between 2-3 IDEs (especially when making changes to the ORM).
I'm thinking about this because very soon I'm putting together a CI machine (be ready for questions about that in a week or so), and I'm trying to figure out a way to have the releases automatically created/deployed. Normally just the Application gets deployed, via a script copy to the server that all of the workstations pull from on startup, but like I said before, if the ORM changes and is released, all of the other apps should be rebuilt and deployed.
(Today I broke our website and 3 utilities because I changed the ORM and deployed it with an updated version of the Application, but forgot to rebuild/deploy the other apps with the new ORM - oops.)
Put yourself in your developers' shoes. This shouldn't be hard, because it sounds like you're one of them :-) Look at your version control layout not from an architectural point of view, but from a usability point of view. Ask yourself: on a daily basis, what are the most common things that we as developers do with our source code? Think specifically about your interactions with your VCS system and the project layouts. You want to make the common things brain-dead easy. It's OK for the less common use cases to be harder to get to, as long as you have a record of how to do them, so that when (not if!) people forget, they'll know where to look to remind themselves how.
I've seen a lot of VCS layouts that try to be architecturally "perfect" but end up causing no end of hassles and headaches from a day-to-day usage point of view. I'm not saying the two can't coincide and mesh well together; I'm just saying think about it from the user's point of view, and let that be your guide.
From a CI point of view, I'd still take this approach. Even if your setup ends up becoming more complex to define in your CI of choice, you only have to do the setup there once. If the layout is easy to use during development, most of your CI setup should also be relatively easy. You then just focus on the last bits that will take more time.
Splitting up your code base into different "projects" can be very difficult. We have done this at each separately "deployable" boundary, including a "platform" layer that is common to and used by the others, as a separate project. But this isn't really perfect either.
The one thing that I cannot stress enough is that you need some form of continuous regression/testing that runs after check-ins and before you actually deploy anything. In addition, a "release" process that might even involve some manual testing can be helpful and has definitely prevented a few egg-on-the-face situations. (Better to release it a couple of days late than broken.)
(Sorry to not directly address your problem)