WAR application and RDBMS for high-volume transactions - PostgreSQL

Can you suggest some industry best practices (of the "it depends" variety) for connecting a WAR (web application archive) deployed in a J2EE container to an RDBMS?
In the current scenario, we deploy the .war file to an Apache Tomcat instance that connects to a PostgreSQL database instance. We are required to scale both in terms of data storage and transaction volume (multiple concurrent reads and writes).
It seems we have two choices.
Build a load-balancing system
First, we start with Apache Tomcat clustering as described here, with a short graphic (from their website) below.
             DNS Round Robin
                    |
              Load Balancer
              /            \
        Cluster1          Cluster2
        /      \          /      \
    Tomcat1  Tomcat2  Tomcat3  Tomcat4
Then, build the database cluster as described here.
Both clusters (Tomcat and PostgreSQL) are front-ended by HAProxy.
Use an IaaS provider
Or, perhaps, just set up the servers in AWS and let AWS manage the load balancing while we pay per use; for example, as described here for a typical setup and here for PostgreSQL on AWS.
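In either case, on the application side we could let the PostgreSQL JDBC driver spread connections over several hosts. A minimal sketch, assuming the standard org.postgresql driver; the hostnames, database, and credentials below are placeholders, and older driver versions use master/slave instead of primary/secondary names for targetServerType:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class MultiHostExample {
        public static void main(String[] args) throws Exception {
            // PgJDBC accepts a comma-separated host list; targetServerType picks
            // which role to connect to and loadBalanceHosts shuffles candidates.
            String url = "jdbc:postgresql://pg-node1:5432,pg-node2:5432/appdb"
                    + "?targetServerType=preferSecondary&loadBalanceHosts=true";
            try (Connection conn = DriverManager.getConnection(url, "app_user", "secret");
                 Statement st = conn.createStatement();
                 ResultSet rs = st.executeQuery("SELECT 1")) {
                rs.next();
                System.out.println("Connected, got: " + rs.getInt(1));
            }
        }
    }

Read-only queries could go through a second DataSource pointed at the replicas, while writes use targetServerType=primary.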

Related

Possible to deploy or use several containers as one service in Google Cloud Run?

I am testing Google Cloud Run by following the official instructions:
https://cloud.google.com/run/docs/quickstarts/build-and-deploy
Is it possible to deploy or use several containers as one service in Google Cloud Run? For example: DB server container, Web server container, etc.
Short answer: no. You can't deploy several containers in the same service (as you can with a Pod on Kubernetes).
However, you can run several binaries in parallel in the same container; this article was written by a Googler who works on Cloud Run.
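As an illustration of that idea (not the article's own code), a minimal Java sketch: a wrapper starts a hypothetical sidecar binary baked into the same image, then serves HTTP on the port Cloud Run injects through $PORT:

    import com.sun.net.httpserver.HttpServer;
    import java.net.InetSocketAddress;

    public class Main {
        public static void main(String[] args) throws Exception {
            // Hypothetical sidecar binary packaged in the same container image.
            Process sidecar = new ProcessBuilder("/app/sidecar")
                    .inheritIO()   // share the container's stdout/stderr for logging
                    .start();

            // Cloud Run tells the container which port to listen on via $PORT.
            int port = Integer.parseInt(System.getenv().getOrDefault("PORT", "8080"));
            HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
            server.createContext("/", exchange -> {
                byte[] body = "ok".getBytes();
                exchange.sendResponseHeaders(200, body.length);
                exchange.getResponseBody().write(body);
                exchange.close();
            });
            server.start();   // serves requests while the sidecar runs alongside
        }
    }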
In addition, keep in mind:
Cloud Run is a serverless product. It scales up and down (to zero) as it sees fit, mainly according to traffic. If your service's startup is slow and a new instance has to be created, the request will take longer to serve (and your user will wait).
You pay as you go; that is, you are billed only while HTTP requests are being processed. Outside of request processing, the CPU allocated to the instance is throttled to close to 0.
This implies that Cloud Run serves containers that handle HTTP requests. You can't run batch processing in the background, outside of an HTTP request.
Cloud Run is stateless. You have an ephemeral, in-memory writable directory (/tmp), but when the instance goes down, all its data goes with it. You can't run a DB server container that stores data. You can interact with external services (Cloud SQL, Cloud Storage, ...) but should store only transient files locally.
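A hedged sketch of that last point in Java: scratch work goes to /tmp, while anything durable is handed to an external service. The bucket and object names are placeholders, and this assumes the google-cloud-storage client library and default credentials:

    import com.google.cloud.storage.BlobId;
    import com.google.cloud.storage.BlobInfo;
    import com.google.cloud.storage.Storage;
    import com.google.cloud.storage.StorageOptions;
    import java.nio.file.Files;
    import java.nio.file.Path;

    public class StateExample {
        public static void main(String[] args) throws Exception {
            // /tmp is writable but in-memory and ephemeral: fine for scratch work.
            Path scratch = Files.createTempFile("work", ".tmp");
            Files.writeString(scratch, "intermediate result");

            // Anything that must survive the instance goes to an external service.
            Storage storage = StorageOptions.getDefaultInstance().getService();
            BlobId blobId = BlobId.of("my-app-bucket", "results/run-1.txt");
            storage.create(BlobInfo.newBuilder(blobId).build(),
                    Files.readAllBytes(scratch));
        }
    }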
To answer your question directly, I do not think it is possible to deploy a service that has two different containers, such as a DB server container and a web server container. (This is separate from scaling, where a service is automatically scaled to a number of instances of the same container.)
However, you can deploy a container (a service) that runs multiple processes, although it might not be considered best practice, as mentioned in this article.
Cloud Run takes a user's container and executes it on Google infrastructure, handling the instantiation of instances (scaling) of that container seamlessly, based on parameters specified by the user.
To deploy to Cloud Run, you need to provide a container image. As the documentation points out:
A container image is a packaging format that includes your code, its packages, any needed binary dependencies, the operating system to use, and anything else needed to run your service.
In response to incoming requests, a service is automatically scaled to a certain number of container instances, each of which runs the deployed container image. Services are the main resources of Cloud Run.
Each service has a unique and permanent URL that will not change over time as you deploy new revisions to it. You can refer to the documentation for more details about the container runtime contract.
As a result of the above, Cloud Run is primarily designed to run web applications. If you are after a microservice architecture, which consists of different servers each running in its own container, you will need to deploy multiple services. I understand that you want to use Cloud Run as a database server, but you may be interested in Google's database solutions instead, such as Cloud SQL, Datastore, Bigtable, or Spanner.
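If you go that route, a hedged sketch of a Java service on Cloud Run connecting to Cloud SQL for PostgreSQL over JDBC; the instance name, database, and credentials are placeholders, and it assumes the com.google.cloud.sql:postgres-socket-factory dependency:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.util.Properties;

    public class CloudSqlExample {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.setProperty("user", "app_user");          // placeholder
            props.setProperty("password", "secret");        // placeholder
            // The socket factory opens a secure tunnel to the named instance,
            // so no IP allow-listing is needed from Cloud Run.
            props.setProperty("socketFactory", "com.google.cloud.sql.postgres.SocketFactory");
            props.setProperty("cloudSqlInstance", "my-project:us-central1:my-instance");

            try (Connection conn = DriverManager.getConnection("jdbc:postgresql:///appdb", props)) {
                System.out.println("Connected to Cloud SQL: " + conn.isValid(5));
            }
        }
    }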

How to sync data between MongoDBs in a cluster?

I'm a high school student. I implemented a Spring Boot application. It's deployed on AWS EC2. I'm using Nginx as my load balancer. My application is running on two different EC2 instances behind the load balancer, each using its own MongoDB. How do I sync the data between the two DBs? Please pardon me for any mistakes; I'm doing this for the first time, and please feel free to point out flaws in my design.
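One common pattern (an assumption on my part, not from this thread) is to avoid syncing two independent databases and instead run the MongoDB nodes as members of a single replica set (usually three members for elections), with both application instances pointing at the set; replication then keeps the data consistent. A sketch with the MongoDB Java driver, where the hostnames and the set name "rs0" are placeholders:

    import com.mongodb.client.MongoClient;
    import com.mongodb.client.MongoClients;
    import com.mongodb.client.MongoDatabase;
    import org.bson.Document;

    public class ReplicaSetExample {
        public static void main(String[] args) {
            // Both app servers use the same URI; the driver discovers the
            // primary and fails over automatically when it changes.
            String uri = "mongodb://mongo1:27017,mongo2:27017,mongo3:27017/?replicaSet=rs0";
            try (MongoClient client = MongoClients.create(uri)) {
                MongoDatabase db = client.getDatabase("appdb");
                db.getCollection("orders").insertOne(new Document("status", "created"));
            }
        }
    }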

Best way to set up ELK in production

Is it good practice to set up Elasticsearch, Logstash, and Kibana on 3 different servers, with each server having 8 GB of RAM?
Or
Set up ELK on 1 single machine with more memory, 16 GB?
The setup needs to be highly available.
Can anyone suggest or share inputs?
It depends on your task and situation. Normally it is good practice to set up Elasticsearch, Logstash, and Kibana on 3 different servers. If you have more data, you may have to build an Elasticsearch cluster, or run more than one Logstash server.
Filebeat will run on all the data (log) servers.
There is an example of handling 25,000 logs per second here:
https://engineering.viki.com/blog/2015/log-processing-at-scale-elk-cluster-at-25k-events-per-second/
It's slightly more complicated than explained above.
Any distributed component tends to offer its features in a sharded or partitioned way. Elasticsearch, the storage layer of ELK, follows a master/slave model and keeps the data on the ES data nodes. This means you need to set up a cluster for Elasticsearch itself, with its various node roles: ES master, ES data, and ES client.
The next level, if the system is required at production grade, is a multi-master setup with a minimum of 3 master nodes.
That would be just the beginning of ELK.
If you need to run such a complex system on limited resources, then containerizing the ELK components and running them under a container orchestration framework is the recommended option. Kubernetes and Docker Swarm are options for running an ELK cluster based on Dockerized instances of its components. These orchestration frameworks also require a multi-master setup of their own, but that is a fair trade, as you will have many more components in a cloud environment and all of them can be controlled under the orchestration framework.
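As a small illustration of how an application talks to such a cluster rather than to a single box, a hedged sketch using the Elasticsearch low-level Java REST client; the node hostnames are placeholders for your client or data nodes:

    import org.apache.http.HttpHost;
    import org.elasticsearch.client.Request;
    import org.elasticsearch.client.Response;
    import org.elasticsearch.client.RestClient;

    public class ClusterHealthExample {
        public static void main(String[] args) throws Exception {
            // List several nodes so the client can round-robin requests and
            // survive the loss of any single one.
            try (RestClient client = RestClient.builder(
                    new HttpHost("es-node1", 9200, "http"),
                    new HttpHost("es-node2", 9200, "http"),
                    new HttpHost("es-node3", 9200, "http")).build()) {
                Response response = client.performRequest(new Request("GET", "/_cluster/health"));
                System.out.println(response.getStatusLine());
            }
        }
    }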

Spring Cloud Data Flow deployment

I want to deploy Spring Cloud Data Flow on several hosts.
I will deploy the Spring Cloud Data Flow server on one host, host-A, and deploy the agents on the other hosts (these hosts are in charge of executing the tasks).
Except for host-A, all the other hosts run the same tasks.
Should I build on the Spring Cloud Data Flow Local Server, the Spring Cloud Data Flow Apache YARN Server, or some better choice?
Do you mean how the apps are deployed on several hosts? If so, the apps are deployed by the underlying deployer implementation. For instance, with the local deployer, each app is deployed by spawning a new process. You can scale out the number of app instances with the count deployer property during stream deployment, e.g. deployer.<appName>.count=3. I am not sure what you mean by the agents here.

AppFabric setup in a domain

So I am a little confused after reading the documents.
I want to set up AppFabric caching and hosting.
Can I do the following?
DC
SQL Server
AppFabric1
AppFabric2
All these computers are joined to the DC.
I want AppFabric1 to be the main host but also part of the cache cluster.
What about AppFabric2, or AppFabricX? How can I make them part of the cache cluster?
Do I have to configure AppFabric1 and AppFabric2 in Windows as part of a cluster (i.e., set up the entire environment as a cluster)?
Can I install AppFabric independently on AppFabric1 and 2 and have them cluster together and "make it work"? If so, how?
I see documentation about setting it up in a web farm and also in a workgroup... and that's it. Nothing about computers joined to a domain.
I want to set up AppFabric caching and hosting.
Caching and hosting are two totally different things and generally don't share the same use cases.
AppFabric Caching provides an in-memory, distributed cache platform for Windows Server, previously named Velocity. The cache cluster is a collection of one or more instances of the Caching Service working together. The cluster configuration lives in a "storage location" (an XML file share or SQL Server), and you can easily add a new cache host without restarting the cluster.
Can I install AppFabric independently on AppFabric1 and 2 and have them cluster together and "make it work"? If so - how?
Don't worry... this can be done easily during installation. In addition, there are powerful PowerShell modules to do the same thing.
AppFabric Hosting enhances the hosting of WCF and Workflow Foundation services in WAS (auto-start, monitoring of hosted services, workflow persistence, ...). There is no cluster here; basically, you just have to configure the monitoring/persistence database for each server.
Just try it!
When you are adding the second node to the AppFabric cluster, make sure to choose the option Join Cluster (instead of New Cluster) and point to the path of the share where you stored the configuration (assuming that you used FILE SHARE to store the cluster configuration). The share that you used should be accessible from AppFabric2.