Endeca cluster setup - ATG

I have configured an Endeca Application Controller (EAC) application on multiple servers. I have two machines, A and B, with the following configurations.
Machine A: Oracle Endeca MDEX Engine, Oracle Endeca Platform Services (Endeca Application Controller Server and agent), Oracle Endeca Tools and Frameworks, Content Administration System (CAS).
Machine B: Oracle Endeca MDEX Engine, Oracle Endeca Platform Services (EAC agent only instance).
I have a Dgraph cluster (1 MDEX and 1 Dgraph on each host).
I need to know whether there is any need to set up an Endeca Server cluster once my website is up and running. I have an ATG-Endeca integration environment and my indexed data is quite large.
I also need to know whether there are any criteria for determining the number of servers, the server topology, and the load-balancer topology.

You only need a cluster if you are sharing disk between the MDEX instances.
In terms of the number of servers and layout, it depends mostly on the number of queries you expect and the amount of effort (computation time) it takes to run them.
I am sure the Oracle Endeca sales/support staff who set you up can provide some baselines for these numbers.
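As a rough illustration of how those two numbers drive the layout, here is a back-of-envelope sketch (every figure in it is a made-up assumption, not something from the question):

    import math

    # Illustrative assumptions only
    peak_qps = 200          # expected peak queries per second
    avg_query_sec = 0.05    # average computation time per query (50 ms)

    # Little's law: queries in flight = arrival rate * average service time
    concurrent_queries = peak_qps * avg_query_sec   # = 10

    threads_per_dgraph = 4  # worker threads one Dgraph can keep busy
    dgraphs_needed = math.ceil(concurrent_queries / threads_per_dgraph)

    # Keep one extra Dgraph behind the load balancer for failover headroom
    print(f"{dgraphs_needed + 1} Dgraphs (= {dgraphs_needed} for load + 1 spare)")

Run your own expected peak rate and measured query times through the same arithmetic to get a starting point, then validate it with load testing.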


Best way to set up ELK in production

Is it good practice to set up Elasticsearch, Logstash and Kibana on 3 different servers, each with 8 GB of RAM?
Or
Set up ELK on 1 single machine with more memory, say 16 GB?
The machine needs to be highly available.
Can anyone suggest or share inputs?
It depends on your task and situation. Normally it is good practice to set up Elasticsearch, Logstash and Kibana on three different servers. If you have more data, you will have to build an Elasticsearch cluster, and you may need more than one Logstash server.
Filebeat runs on all of the data (log) servers.
Here is an example of a setup handling 25,000 log events per second:
https://engineering.viki.com/blog/2015/log-processing-at-scale-elk-cluster-at-25k-events-per-second/
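To make the "Filebeat on all the log servers" part concrete, here is a minimal filebeat.yml sketch (Filebeat 6+ syntax; the log paths and Logstash host names are assumptions):

    filebeat.inputs:
      - type: log
        paths:
          - /var/log/app/*.log   # assumed log location

    # Ship to the Logstash tier; add hosts as Logstash scales out
    output.logstash:
      hosts: ["logstash1.example.com:5044", "logstash2.example.com:5044"]
      loadbalance: true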
It's slightly more complicated than explained here.
Any distributed component tries to offer its features in a sharded or partitioned way. Likewise, the Elasticsearch part of ELK is based on a master/slave model and keeps the data on its data nodes, which means you need to set up a cluster of nodes for Elasticsearch itself, covering its various roles: ES master, ES data, and ES client nodes.
The next step up, if the system has to be production grade, is a multi-master setup with a minimum of three master-eligible nodes.
And that is only the beginning of ELK.
If you need to run such a complex system on limited resources, then containerizing the ELK components and running them on a container orchestration framework is the recommended option. Kubernetes and Docker Swarm are the options for running an ELK cluster on dockerized instances of its components. These orchestration frameworks also require a multi-master setup, but that is a fair trade-off: in a cloud environment you will have many more components, and all of them can be controlled by the same orchestration framework.
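To make the node-role split concrete, here is an elasticsearch.yml sketch for one of the three dedicated master-eligible nodes (6.x-era settings; the cluster and host names are assumptions, and Elasticsearch 7.x replaced these discovery options):

    cluster.name: elk-prod
    node.name: es-master1
    node.master: true    # eligible to be elected master
    node.data: false     # holds no shard data
    node.ingest: false   # runs no ingest pipelines

    # Quorum of master-eligible nodes: floor(3 / 2) + 1 = 2, prevents split brain
    discovery.zen.minimum_master_nodes: 2
    discovery.zen.ping.unicast.hosts: ["es-master1", "es-master2", "es-master3"]

Data nodes invert node.master/node.data, and a coordinating-only ("client") node sets all three role flags to false.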

Server Configuration for WSO2 IoT Server with PostgreSQL

I have an AWS Ubuntu server with 4 GB of RAM and 2 GB of internal memory. I want to run WSO2 IoT Server with a PostgreSQL configuration. What kind of configuration does the AWS Ubuntu server need for this requirement? As per the WSO2 IoT documentation (4 GB RAM and 1 GB), I have configured it that way, but it is not performing well right now. Can anyone please tell me what kind of server optimisation my requirement needs?
When dealing with WSO2 modules, I have found that they only work for me when deployed as individual servers. I was using local VirtualBox VMs, so Data Services ran on one VM, Enterprise Service Bus on another, and so on. Any attempt to combine them in the installer resulted in Java dependency hell.
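If you do have to squeeze a WSO2 product and PostgreSQL onto one 4 GB box, the first knob is usually the JVM heap set in the product's startup script. A hedged sketch (the script name and the variable name differ between products and Carbon versions, so treat both as assumptions):

    # <PRODUCT_HOME>/bin/wso2server.sh (IoT Server ships per-profile scripts)
    # Cap the heap so the JVM and PostgreSQL both fit into 4 GB of RAM:
    JVM_MEM_OPTS="-Xms512m -Xmx2g"

Leaving the remaining ~2 GB to PostgreSQL and the OS is a starting point, not a tuned configuration.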

Azure Service Fabric-based Services: Prerequisite is always a prepared cluster?

If I've understood the docs properly, Azure Service Fabric-based apps/microservices cannot be installed together with their Service Fabric operational environment in one "packaged installer" step. For example, if I want to deploy a set of microservices on premises at a company that runs a typical Windows Server 2012 or VMware IT center, then I'm out of luck? I'd have to require the company to first commit to (and execute) an installation of an Azure Service Fabric cluster on several machines.
If this is the case, then Azure Service Fabric is only an option for pure cloud operations, where the Service Fabric cluster can be created on demand by the provider, or for companies that have already committed to Azure Service Fabric. That means a provider of classical "installer-based" software cannot evolve toward the Azure Service Fabric advantages, since the datacenter policies of potential customers are unknown.
What have I missed?
Yes, you always have to have a cluster to run Service Fabric applications and microservices. However, it is no longer limited to a pure cloud environment: as of September last year, the on-premises version of Azure Service Fabric for Windows Server went GA (https://azure.microsoft.com/en-us/blog/azure-service-fabric-for-windows-server-now-ga/), and it lets you run your own cluster on your own machines (physical or virtual, it doesn't matter) or in another data center (or even at another cloud provider).
Of course, as you say, this requires your customer company either to have their own cluster or to let you set one up for them (https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-cluster-creation-for-windows-server). They will also need the competence to manage that cluster over time. It could be argued, though, that this shouldn't be much more difficult than managing a VMware farm or setting up and managing, say, Docker container hosts.
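For reference, with the standalone package from that link, cluster creation boils down to a few PowerShell steps (a sketch; the JSON template name varies by package version, and the node names/IPs inside it are yours to fill in):

    # Run from the unzipped standalone package on one of the target machines.
    # 1. Edit a ClusterConfig template to list your nodes (names, IPs, node types).
    # 2. Validate the configuration against those machines:
    .\TestConfiguration.ps1 -ClusterConfigFilePath .\ClusterConfig.Unsecure.MultiMachine.json
    # 3. Create the cluster:
    .\CreateServiceFabricCluster.ps1 -ClusterConfigFilePath .\ClusterConfig.Unsecure.MultiMachine.json -AcceptEULA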
For the traditional 'shrink-wrapped-DVD-installer' type of software vendor this might not be as easy as just supplying an .exe and some system requirements; I agree with you on that. If the customer can't or doesn't want to run their own cluster, and cloud is not an option, then it definitely adds complexity to selling and delivering your solution.
The fact that you can run your own cluster on any Windows Server environment means that there is no real lock-in to Azure as a cloud platform, and I think that is a big pro for SF as a framework. Once you have a cluster to receive your applications, you can focus on developing those; the same cannot be said of most other cloud-based PaaS frameworks/services.

WAR application and RDBMS for high volume transaction

Can you please suggest some industry best practices (of the "it depends" type) for connecting a WAR (web application archive) deployed in a J2EE container to an RDBMS?
In the current scenario, we deploy the .war file to an Apache Tomcat instance that connects to a PostgreSQL database instance. We are required to scale both in terms of data storage and transactions (multiple concurrent reads and writes).
It seems we have two choices.
Build a load-balancing system
First, we start with Apache Tomcat clustering, as described here, with a short graphic (from their website) below.
DNS Round Robin
|
Load Balancer
/ \
Cluster1 Cluster2
/ \ / \
Tomcat1 Tomcat2 Tomcat3 Tomcat4
Then, build the database cluster as described here.
Both clusters (Tomcat and PostgreSQL) are front-ended by HAProxy.
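A minimal haproxy.cfg sketch of that front end (every host name, address and port below is a made-up assumption):

    defaults
        timeout connect 5s
        timeout client  30s
        timeout server  30s

    # HTTP tier: round robin across the Tomcats, sticky on Tomcat's JSESSIONID
    frontend www
        bind *:80
        mode http
        default_backend tomcats

    backend tomcats
        mode http
        balance roundrobin
        cookie JSESSIONID prefix nocache
        server tomcat1 10.0.0.11:8080 check cookie tomcat1
        server tomcat2 10.0.0.12:8080 check cookie tomcat2

    # Database tier: PostgreSQL speaks its own protocol, so use TCP mode
    listen postgres
        bind *:5432
        mode tcp
        option pgsql-check user haproxy_check
        server pg_primary 10.0.0.21:5432 check
        server pg_replica 10.0.0.22:5432 check backup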
Use an IaaS provider
Or, perhaps, just set up the servers in AWS and let AWS manage all the balancing while we pay per use; for example, as described here for a typical setup and here for AWS PostgreSQL.

AppFabric setup in a domain

So I am a little confused by reading the documents.
I want to set up AppFabric caching and hosting.
Can I do the following?
DC
SQL Server
AppFabric1
AppFabric2
All these computers are joined to the domain.
Can AppFabric1 be the main host but also part of the cache cluster?
What about AppFabric2? Or AppFabricX? How can I make them part of the cache cluster?
Do I have to configure AppFabric1 and AppFabric2 in Windows as part of a cluster (i.e., set up the entire environment as a cluster)?
Can I install AppFabric independently on AppFabric1 and 2 and have them cluster together and "make it work"? If so, how?
I see documentation about setting it up in a web farm and also in a workgroup... and that's it; nothing about computers joined to a domain.
I want to set up AppFabric caching and hosting.
Caching and hosting are two totally different things and generally don't share the same use cases.
AppFabric Caching provides an in-memory, distributed cache platform for Windows Server, previously code-named Velocity. The cache cluster is a collection of one or more instances of the Caching Service working together. You can easily add a new cache host without restarting the cluster, wherever the cluster configuration is stored (XML file or SQL Server).
Can I install AppFabric independently on AppFabric1 and 2 and have them cluster together and "make it work"? If so, how?
Don't worry... this can be done easily during installation. In addition, there are powerful PowerShell modules to do the same thing.
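A sketch of the PowerShell route, assuming the cluster configuration is kept on a file share (the share path and cache name are made up):

    # On any cache host, once the AppFabric Caching features are installed:
    Import-Module DistributedCacheAdministration

    # Point the session at the cluster via its shared XML configuration store
    Use-CacheCluster -Provider XML -ConnectionString "\\DC\CacheConfigShare"

    Get-CacheHost        # should list AppFabric1 and AppFabric2
    Start-CacheCluster   # brings all registered hosts online

    # Create a named cache; -Secondaries 1 enables high availability (Enterprise edition)
    New-Cache -CacheName "MyCache" -Secondaries 1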
AppFabric Hosting enhances the hosting of WCF and Workflow Foundation services in WAS (auto-start, monitoring of hosted services, workflow persistence, ...). There is no cluster here; basically, you just have to configure the monitoring/persistence databases for each server.
Just try it!
When you add the second node to the AppFabric cluster, make sure to choose the Join Cluster option (instead of New Cluster) and point it to the path of the share where you stored the configuration (assuming you used FILE SHARE to store the cluster configuration). The share you used must be accessible from AppFabric2.