Run multiple sites on the same GWT application

Can someone please point me in the right direction?
I need to be able to host my GWT application in a way that allows multiple clients to use the same application, separated by URLs but internally running the same code.
The different sites would probably be separated by different configurations, e.g. a different database, a different log path, and so on.
Any ideas?

You could arrange your projects the following way:
- my.application.core.project: holds all the business logic and views for the application except for the entry point
- my.application.customerX.project: holds only the entry point and the property files used for the connection to the db, plus a customerX-specific theme
- my.application.customerY.project: holds only the entry point and the property files used for the connection to the db, plus a customerY-specific theme
Such an organization of the projects would allow you to have a common core that is distributed to each of the customers, and also the ability to build customer-specific implementations on top of the core.
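As a rough sketch of what a customer-specific project ends up containing (the class and method names below are made up for illustration): the customerX module's .gwt.xml inherits the core module, and its entry point just boots the shared application with the customer-specific settings.

package my.application.customerX.client;

import com.google.gwt.core.client.EntryPoint;
import my.application.core.client.CoreApp; // hypothetical bootstrap class exposed by the core project

// Entry point of the customerX module: all real logic lives in the core project;
// this class only wires in the customerX configuration and theme.
public class CustomerXEntryPoint implements EntryPoint {
    @Override
    public void onModuleLoad() {
        new CoreApp().start(); // start() is assumed to exist on the core bootstrap class
    }
}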

The URLs per client can be handled with URL rewriting, either with an Apache server in front of your application and/or in combination with a Filter in your web application.
As for the configuration, logging, and/or database per client, you want a solution that doesn't store a file per client on the file system next to your application. Preferably you store client-specific settings in one database and have an admin interface to manage them. For the clients' data you also don't want a separate database per client, because that doesn't scale well and would be a maintenance mess if you need to upgrade your application and databases to a newer version. Look for a multitenant architecture.
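For the Filter variant, a minimal sketch could look like the following (TenantSettings and TenantRegistry are hypothetical names for your client-settings lookup, which would read from the shared settings database):

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;

// Resolves the client from the request's host name (e.g. clientx.example.com)
// and exposes that client's settings to the rest of the request pipeline.
public class TenantFilter implements Filter {

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        String host = ((HttpServletRequest) request).getServerName();
        String tenant = host.split("\\.")[0];                      // "clientx" from "clientx.example.com"
        TenantSettings settings = TenantRegistry.lookup(tenant);   // hypothetical DB-backed lookup
        request.setAttribute("tenantSettings", settings);
        chain.doFilter(request, response);
    }

    @Override public void init(FilterConfig config) { }
    @Override public void destroy() { }
}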
I admit this is a vague answer, but without specific system and software descriptions it's kind of hard to give a concrete answer. Nevertheless I hope this answer does give you some direction.

I have successfully achieved this by setting up separate directories in Tomcat for different clients and then creating soft links to the main application within each folder. As for the database connection properties and other configuration properties, instead of pointing them to the main application I just created them separately per client.


How to build microservice applications right? (backend, frontend, database)

I have been developing monolithic applications for many years and want to try microservices and containers now.
For learning microservices and containers I am planning a small calendar application (as a web application), where you can create dates and invite others to them. I identified three services that I have to implement:
Authentication service -> handles login, gives back a JWT
UserData service -> handles registration, shows user data, handles edits of user data (profile picture, name, short description)
Calendar service -> for creating, editing and deleting dates, inviting others to dates, viewing dates and so on.
This is the part I feel confident about, but if I did already something wrong, please correct me.
Where I am not sure what to do is how to implement the database and frontend part.
Database
I have never used Neo4j or any graph-based database before, but they seem to work well for this use case and I want to learn something new. Maybe my database choice is not relevant here, but my question is:
Should every microservice have its own database or should they access the same database?
Some data sets are connected, like every date has a creator and multiple participants.
And should the database run in a container or should it be hosted on the server directly? (I want to self-host the database)
For storing profile pictures I decided to use a shared container volume, so multiple instances of a service can access the same files. But I never used containers before, so if you have a better idea, I am open to hear it.
Frontend
Should I build a single (monolithic) frontend application, or is it useful to build some kind of micro frontend, which contains only the navigation and loads other parts, like the calendar view or the user data view, from the services defined above? And if yes, should I pack the UserData frontend and the UserData service into one container?
MVP
I want the whole application to be usable by others, so they can install it on their own servers/cloud. Is it possible to pack the whole application (all services and the frontend) into one package and make it installable in Kubernetes in a single step, but in a way that each service still has its own container? In my head this sounds necessary, because the calendar service won't work without the UserData service, since every date needs a user who creates it.
Additional optional services
Imagine I want to add some additional but optional features like real time chat. What is the best way to check if these features are installed? Should I add a management service which checks which services exist and tells the frontend which navigation links it should show, or should the frontend application ping all possible services to check if they are installed?
I know these are many questions, but I think they are tied together, because choices in one part can influence others.
I am looking forward to your answers.
I'm further along than you, but far from a "microservice expert". Still, I'll try my best:
Should every microservice have its own database or should they access the same database?
The rule of thumb is that each microservice should have its own database. This decouples them, so that the contract between services is just the API. Furthermore, it keeps the logic within a service simpler, in that there are fewer kinds of data the service is responsible for handling.
And should the database run in a container or should it be hosted on the server directly?
I'm not a fan of running a database in a container. In general, a database:
can consume a lot of resources (depending on the query)
sustains long-lived connections
is something you vertically scale, rather than horizontally
These qualities reflect a poor case for containerization, imho.
For storing profile pictures I decided to use a shared container volume
Nothing wrong with this, but using cloud storage like Amazon S3 is a popular move here, just so you don't have to manage the state of the volume, i.e. if the volume goes away, do you still want your pictures around?
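If you do go the S3 route, the upload itself is small; here's a sketch with the AWS SDK for Java v2 (the bucket name and key layout are assumptions):

import java.nio.file.Paths;
import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;

// Uploads a profile picture to an assumed bucket, keyed by user id,
// so any service instance can read it later without a shared volume.
public class ProfilePictureStore {
    public static void main(String[] args) {
        try (S3Client s3 = S3Client.create()) {
            s3.putObject(
                PutObjectRequest.builder()
                    .bucket("calendar-app-profile-pictures")   // assumed bucket name
                    .key("users/42/avatar.png")                // assumed key layout
                    .build(),
                RequestBody.fromFile(Paths.get("avatar.png")));
        }
    }
}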
Should I build a single (monolithic) frontend application?
I would say yes: this would be the simplest approach. The other big motivation for microservices is to allow separate development teams to work independently. If that's not the case here, I think you should spare yourself the headache of micro-frontends.
And if yes, should I pack the UserData frontend and the UserData service into one container?
I'm not sure I perfectly understand here. I think it's fine to have the frontend and backend service as part of the same codebase, which can get "deployed" together. How you want to serve the frontend is completely up to you and how it's implemented. Some like to serve it through a CDN to minimize latency, but I've never seen the need.
Is it possible to pack the whole application (all services and frontend) into one package and make it installable in kubernetes with a single step?
This is a use case for Helm charts: a package manager for Kubernetes.
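As a sketch, an umbrella chart can declare each service as a subchart, so one install brings everything up while each service keeps its own container (all chart names, versions, and paths below are assumptions):

# Chart.yaml of a hypothetical umbrella chart
apiVersion: v2
name: calendar-app
version: 0.1.0
dependencies:
  - name: auth-service
    version: 0.1.0
    repository: "file://charts/auth-service"
  - name: userdata-service
    version: 0.1.0
    repository: "file://charts/userdata-service"
  - name: calendar-service
    version: 0.1.0
    repository: "file://charts/calendar-service"
  - name: chat-service                 # optional feature, see below
    version: 0.1.0
    repository: "file://charts/chat-service"
    condition: chat.enabled

Installing then becomes a single step, e.g. helm install calendar ./calendar-app.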
Imagine I want to add some additional but optional features like real time chat. What is the best way to check if these features are installed? Should I add a management service which checks which services exist and tells the frontend which navigation links it should show, or should the frontend application ping all possible services to check if they are installed?
There's a thin line between what you're describing and monitoring. If it were me, I'd pick the service that radiates this information and just set some booleans in a config file or something. That's because this state isn't going to change until a reinstall, so there's no need to check it on an interval.
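Building on the Helm sketch above, the same flag that switches the optional subchart on can also be rendered into a small config file the frontend reads once at startup, instead of pinging every possible service (names are assumptions):

# templates/frontend-features.yaml (sketch)
apiVersion: v1
kind: ConfigMap
metadata:
  name: frontend-features
data:
  features.json: |
    { "chat": {{ .Values.chat.enabled }} }

The frontend fetches features.json and shows or hides the chat navigation link accordingly.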
--
Those were my thoughts, but, as with all things architecture, there's no universal truth. Sometimes exceptions are warranted depending on your actual circumstances.

How can I deploy form / subform (i.e. display only) changes on Notes databases?

I have been asked by a client to assist in making the web frontends of a number of Lotus / IBM Notes databases, used for critical LOB functions, compatible with modern browsers.
As it stands, the web frontends of these databases only work in IE7, and even then they're temperamental at best. The JS uses IE-specific extensions, everything is in tables, and they render poorly on pretty much every browser available today. With IE7 no longer in support, they want to modernise these interfaces.
I have very little experience with Notes, but as an exploratory exercise I've managed to open up the databases in Domino Designer, add a few Stylesheet / Script resources, include them in the $$HTMLHead variable and reworked one Form to use a frontend framework, which looks good.
Obviously working on live applications is out of the question, so my thinking is to take a copy of the NSF files, and make the changes on the copies. My question is: how can I then deploy only the form / subform / resource changes to the 'live' NSF files?
Deployment:
In your new, modified database:
- In the Database properties, define that the database is a master template (give it a name).
In the production database:
- First make a backup! (e.g. a design-only copy of the production database)
- In the Database properties, define that it inherits its design from the master template (same name).
- On the production database, run a design refresh.
More details: https://www.ibm.com/support/knowledgecenter/SSVRGU_9.0.1/com.ibm.designer.domino.main.doc/H_ABOUT_REFRESHING_A_DESIGN.html
Sorry to state the obvious, but since you have a Notes client and a Domino server, you have a quite extensive documentation at your disposal in the form of databases located in the /help/ directory. Make sure they are full-text-indexed.
And since we are on the subject of templates, Domino comes with a host of ready-made, ready-to-use apps that you can customize and cannibalize. Look at discussion9.ntf for starters.
You may want to start here, then go there, and finally that will give you the keys to build world-class web apps on Domino.
Last thing: if you are on V9, the Designer help is crap. Grab a copy of the 8.5 version. Seriously.
If you want to build a modern web based front-end to existing Domino data, take a look at the following presentations:
http://www.slideshare.net/TexasSwede/ad102-break-out-of-the-box
and
http://www.slideshare.net/TexasSwede/break-out-of-the-box-part-2
As others have already said, you should create a template and then just refresh/replace the design of the production database using that template.
You may want to consider working with an experienced Notes/Domino developer for that project; there are quite a few caveats and workarounds you need to know about...

Web development, protecting application code

I'm looking at some (PHP) Frameworks, and I just noticed this in the Laravel documentation:
Like most web-development frameworks, Laravel is designed to protect your application code, bundles, and local storage by placing only files that are necessarily public in the web server's DocumentRoot. This prevents some types of server misconfiguration from making your code (including database passwords and other configuration data) accessible through the web server. It's best to be safe.
I'm familiar with CodeIgniter and CakePHP; as far as I know, these two frameworks don't do this. Should you really split it up and place your core logic outside of the webroot? In my experience, most clients use shared hosting and are not able to change their VirtualHost settings.
What kind of misconfiguration could you possibly do that would output your passwords? When developing, should you really do this?
Yes, keeping only those files which should be publicly accessible in DocumentRoot is a best practice for web application security. Consider:
Every file which is private would need a rule configured with the web server to explicitly block it.
Anyone adding files to the project needs to consider web server security settings. Simply keeping the files in separate directories makes it obvious what's public. And developers don't need to change security configurations.
Separating executable code and static files is a good practice anyway.
Not blocking access to PHP scripts can have unintended consequences. For example, you may have a script meant to be run manually at the command line to update some DB records; if it sits under the webroot, someone simply guessing the script name can run it over the internet.
Monitoring for and cleaning up malicious code written to the public directory is much easier if the real application logic lives elsewhere. See WordPress break-ins for an example.
CakePHP supports this - see the deployment documentation:
CakePHP applications should have the document root set to the application’s app/webroot. This makes the application and configuration files inaccessible through a URL.
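For illustration, a hypothetical Apache virtual host that applies this: only the framework's public directory becomes the DocumentRoot, so everything above it (application code, config, credentials) is simply not reachable by URL. The paths and server name are assumptions.

<VirtualHost *:80>
    ServerName example.com
    # Only the public directory is exposed (app/webroot for CakePHP, public/ for Laravel)
    DocumentRoot /var/www/myapp/public

    <Directory /var/www/myapp/public>
        AllowOverride All
        Require all granted
    </Directory>
</VirtualHost>
# /var/www/myapp/app, /var/www/myapp/config etc. sit outside DocumentRoot
# and cannot be requested even if script handling is misconfigured.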

How can I share MongoDB collections between Meteor apps?

I'd like to be able to have an admin app and a client app for my project. Ideally, I'd like to be able to have a shared MongoDB collection. How would I be able to accomplish this?
I tried creating collections with the same name in two different apps, but found that Meteor will keep the data separate. Any idea what I can do? Thanks.
export MONGO_URL=mongodb://localhost:3002/meteor
Then run the Meteor app; this changes the default database Meteor uses, so sharing databases or collections won't be a problem!
For administrative reasons, I would use a separate MongoDB server managed by myself rather than Meteor's internal MongoDB.
A reasonable question and probably worth a discussion in excess of this answer:
The MongoDB connection is handled by the Meteor application process itself, and this is - as far as I have read and understood - part of Meteor's philosophy, which could be described as: one data source serves the one application it belongs to, while many clients subscribe to that application.
This in mind, combining "admin" and "client" clients in one application (i.e. your Meteor app) is probably the preferred way.
From a server-administration point of view, however, connections are handled by Meteor in such a way that there is always a default local data source residing in your project directory (.meteor/local/db; try meteor mongo --url to obtain the mongo connection string while the Meteor application process is running). Nevertheless, one may specify an optional data source string for deployment purposes, as described in these deployment instructions.
So you would need to choose a somewhat creepy "local development deployment" setup to get your intended arrangement working. Or you go and hack the sources and... no, forget it. You probably want your application and clients to take advantage of things like realtime UI updates (publish), and that is why the Meteor application is tied to an "application data source" and vice versa for now. When connecting from another app, events that trigger changes in the model would not be transported across those applications. The MongoDB instance itself, of course, isn't aware of that.
I'm sure the core team won't expose the data source connection to a configuration section for considered reasons unless they extend their architecture with some kind of module concept which provides a common service layer of core Model/Collections abstraction across Meteor instances - at least supporting awareness of publish/subscribe events.
Try this DDP test I hacked together for a way to bridge two apps (server A and B).
Both servers can manipulate data, but data is only stored in one collection on server A.
See this link as well

How do we share data between two different services

I am currently working on a web service which is periodically polled. It does not store its state and is instantiated every time it is queried. Essentially, it retrieves the state of other external entities, e.g. databases, and delivers it back to the requester.
Recently, the need to store state has arisen, in that:
There is the need to continuously collect data from a particular source and store the bits that are important/relevant
There is the need to collect the aggregate of a particular data source over a period of time
I came up with the following idea:
My main concern here is the fact that I am using a static class (essentially a global) to share data between the two services. Is there a better way of doing this?
edit: Thanks for the responses thus far. Apologies for the vagueness of this question: I am just trying to work out the best way to share data across different services and am unsure of the specifics (i.e. what is required). The platform I am developing on is the .NET Framework, and both services are simply WCF services hosted in a Windows service.
The database route sounds like the most conventional way to go - however, I am reluctant to go down that path for now (mainly because of deployment/setup issues; it introduces the need to create new tables, etc., in addition to simply installing the software), given that at this point only relatively small amounts of data are transferred. This may of course change in the future, and going the database route might be the way to go at that point.
Is there any other way besides adding a database persistence layer?
If you need to collect and aggregate data, you might want to consider using a database between the two layers. Or have I misunderstood something?
You should consider enhancing your question with more requirements: pretty much all options are open here.
Sure - how about data binding? I don't have a lot of information to go on here about your platform, but most sufficiently advanced systems offer it in some form.
You could replace your static shared data with some database representation, with a caching layer (like memcached) between the database and the webservice, so that most of the time the data is available very quickly from the cache, but can be retrieved from the database as needed.
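The read-through pattern that implies looks roughly like this (sketched in Java with made-up CacheClient/Database interfaces; the same shape applies to a WCF service in front of memcached):

// Hypothetical read-through repository: ask the cache first, hit the database only on a miss.
interface CacheClient { Object get(String key); void set(String key, int ttlSeconds, Object value); }
interface Database   { Reading loadLatestReading(String sourceId); }
class Reading { String sourceId; double value; long timestamp; }

public class ReadingRepository {
    private final CacheClient cache;
    private final Database database;

    public ReadingRepository(CacheClient cache, Database database) {
        this.cache = cache;
        this.database = database;
    }

    public Reading latestReading(String sourceId) {
        String key = "reading:" + sourceId;
        Reading cached = (Reading) cache.get(key);
        if (cached != null) {
            return cached;                          // fast path: served from the cache
        }
        Reading fresh = database.loadLatestReading(sourceId);
        cache.set(key, 60, fresh);                  // keep it for 60 seconds
        return fresh;
    }
}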
I appreciate that you want to keep the architecture simple. Depending on the number of items you have to look up and their permanence, you might just consider leveraging your file system or a message queue. It sounds like you want a file system, because that seems to have the least impact on your design.
If you start dealing with tens of thousands of small files, your directories can get hard to navigate and slow to do file lookups in. I typically shoot for about 1,000 - 10,000 files per directory and concoct a routine that generates a path to the file based on the file name. Keeping the fan-out of subdirectories even and bounded is important; some file systems have a limit on the number of subdirectories in a parent directory.
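A sketch of such a routine (the base directory and fan-out are arbitrary choices): hash the file name and use the first hex characters as two directory levels, so files spread evenly over a fixed number of subdirectories.

import java.nio.file.Path;
import java.nio.file.Paths;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Maps a file name to a nested path such as /var/data/store/ab/3f/<fileName>,
// keeping any single directory from accumulating too many entries.
public final class FileStorePaths {

    private static final Path ROOT = Paths.get("/var/data/store"); // assumed base directory

    public static Path pathFor(String fileName) {
        String hex = sha1Hex(fileName);
        return ROOT.resolve(hex.substring(0, 2))   // first level: 256 buckets
                   .resolve(hex.substring(2, 4))   // second level: 256 buckets
                   .resolve(fileName);
    }

    private static String sha1Hex(String input) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-1");
            StringBuilder sb = new StringBuilder();
            for (byte b : md.digest(input.getBytes())) {
                sb.append(String.format("%02x", b));
            }
            return sb.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }
}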