Hazelcast Lite member vs member vs client in cluster - cluster-analysis

I am new to Hazelcast, so any help would be appreciated. I am trying to achieve this in the Java framework Vert.x.
I tried to figure out the differences between a lite member, a member, and a client in Hazelcast, so I tried to understand whether discovery is possible in the scenarios below:
Only having lite members in the cluster, and if that works, which node is going to perform discovery
Having hazelcast/hazelcast as the prime manager, where it handles discovery for all lite members
So basically, what I want is to run the hazelcast/hazelcast image on Kubernetes on a separate node, where it is going to manage all my other data members running as lite members on separate nodes.
Below is just a rough diagram of what I am trying to achieve: the hazelcast/hazelcast image as a separate node running on Kubernetes, and if that goes down, another one will be there as standby.
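For reference, a lite member joins the cluster and takes part in discovery and gossip like any other member, but it owns no data partitions (a client, by contrast, never becomes part of the cluster membership at all). A minimal sketch of how this could be declared in hazelcast.yaml, assuming Hazelcast 4.x with the built-in Kubernetes discovery; the service name is a hypothetical placeholder:

```yaml
hazelcast:
  # Lite members join the cluster and participate in discovery,
  # but they store no data partitions.
  lite-member:
    enabled: true
  network:
    join:
      multicast:
        enabled: false
      kubernetes:
        enabled: true
        service-name: hazelcast-service   # hypothetical k8s service name
```

Note that a cluster made only of lite members can form (discovery still works), but it cannot hold any data: at least one full data member is needed to own partitions.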

Related

Unable to understand BigchainDB behaviour?

I started to implement BigchainDB. I have followed the tutorial from here.
I have set up two nodes running the BigchainDB server and also MongoDB. I have added the node ID and address of the nodes to each configuration so that they can connect. I am able to create transactions on each node. So my questions are as follows:
How do the two nodes communicate and sync data with each other?
How is consensus achieved?
Why is this tutorial created for setting up a cluster?
BigchainDB nodes communicate with each other using the Tendermint P2P protocol, and consensus is Tendermint consensus. To understand these better, here are some starting points:
The BigchainDB 2.0 Whitepaper
The Tendermint website and docs
Also, please ignore the old docs for versions of BigchainDB Server older than 2.0.0x.

Azure Service Fabric - one app different named instances

I have an Azure Service Fabric application which consumes a RabbitMQ queue and makes some calculations using data from a SQL database.
Connection strings for RabbitMQ and SQL are stored in ApplicationManifest.xml via Parameters and are then changed by different publishing profiles (I have different XML files for cloud and local deployment).
Now I want to deploy another instance of my application for another db/rabbitmq.
I suppose I must create another publishing profile, change the config package version (e.g. 1.1.0), and register the new application type in the cluster. But I must not upgrade the existing app; instead, I should create another app with version 1.1.0.
So there will be two apps in my cluster
App for db2/rabbit2 ver 1.1.0
App for db1/rabbit1 ver 1.0.0
Is it appropriate scenario for having 2 apps with different connection strings?
One approach would be to only have one Application Type and then instantiate multiple Application instances of that type; each of those applications can consume a different db/rabbitmq. During application creation, you can pass different connection strings (db/rabbitmq) as parameters.
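For example, with the Service Fabric PowerShell module, both named instances could be created from the same registered application type; the application names, parameter keys, and connection-string values below are hypothetical and would have to match the Parameters declared in your ApplicationManifest.xml:

```powershell
# Both instances come from the same application type and version;
# only the per-instance parameters differ.
New-ServiceFabricApplication -ApplicationName fabric:/MyApp.Instance1 `
    -ApplicationTypeName MyAppType -ApplicationTypeVersion 1.0.0 `
    -ApplicationParameter @{ SqlConnection = "Server=db1;..."; RabbitConnection = "host=rabbit1" }

New-ServiceFabricApplication -ApplicationName fabric:/MyApp.Instance2 `
    -ApplicationTypeName MyAppType -ApplicationTypeVersion 1.0.0 `
    -ApplicationParameter @{ SqlConnection = "Server=db2;..."; RabbitConnection = "host=rabbit2" }
```

With this approach you avoid registering a second application type or bumping the version just to change configuration.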

OrientDB: how to create a database in distributed mode

I am new to OrientDB. I use OrientDB version 2.1.11.
I configured and deployed five nodes on the same machine in distributed mode. I used the console to create a database; the command is (port 2425 is the second node):
create database remote:192.168.12.37:2425/fuwu_test root 1234 plocal graph
Every node created the database "fuwu_test", but the cluster did not create the synchronization relationship.
I see in Studio that every class has one cluster, not five. I also created a class Person, and the class was not synchronized to the other nodes either.
Why doesn't it work, and how do I create a new database in a running cluster? Do I need to restart all the nodes?
Thanks a lot.
There is a known issue with this in the v2.1 and v2.2 releases. The workaround is to create the database before going distributed. Anyway, it will be resolved soon, sorry.
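In other words, the workaround is to create the database locally on one node before enabling distributed mode, then start the nodes with the distributed launcher so the database is replicated when they join; the paths below are illustrative:

```
# 1. Create the database locally, with the server NOT yet in distributed mode
$ ./console.sh
orientdb> create database plocal:../databases/fuwu_test root 1234 plocal graph

# 2. Shut down, then start every node with the distributed launcher;
#    joining nodes receive a copy of fuwu_test from the cluster
$ ./dserver.sh
```

This avoids the known issue because the database already exists before any distributed coordination starts.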

Akka cluster and OpenShift

I'm new to Akka clusters; however, as I understand the documentation, I need to know at least one "seed node" to join an existing cluster.
So when using clusters with OpenShift, I would need to know whether the current gear is the first node (then I would create a new cluster) or whether there are already some other gears around (then I would need to know at least one of their IPs to join them).
Is this possible with the OpenShift cloud? (I'm using the DIY cartridge, so customizing the startup script wouldn't be a problem. However, I can't find any environment variable which provides the relevant data.)
DIY gears on OpenShift Online do not scale. And if you are spinning up separate applications for each of the nodes in your cluster, you will probably run into inter-gear communication issues. You might need to create your own Akka cartridge (http://docs.openshift.org/origin-m4/oo_cartridge_developers_guide.html); then you can set your own scaling options. You might check out this cartridge (https://github.com/smarterclayton/openshift-redis-cart), which supports scaling and might give you some ideas about how to implement yours.
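For context, seed-node joining in classic Akka (the 2.3/2.4-era API matching this OpenShift setup) is configured in application.conf; the hostnames, ports, and actor-system name below are placeholders you would have to fill from whatever addresses the platform actually exposes:

```hocon
akka {
  actor.provider = "akka.cluster.ClusterActorRefProvider"
  remote.netty.tcp {
    hostname = "10.0.0.1"   # placeholder: this gear's internal IP
    port     = 2551
  }
  cluster {
    # The first node in the list is allowed to join itself, which is
    # how a brand-new cluster bootstraps when no other node is up yet.
    seed-nodes = [
      "akka.tcp://MySystem@10.0.0.1:2551",
      "akka.tcp://MySystem@10.0.0.2:2551"
    ]
  }
}
```

So the "am I the first node?" decision is handled by Akka itself: every node lists the same seed addresses, and only the first seed node will self-join if nobody else is reachable.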

Automatic provisioning of xen in private cloud

I am setting up a private cloud for some experiments using Xen as the hosting system, but I am faced with a problem for which I can't seem to find a solution.
I have to do some kind of automatic provisioning of VMs based on server load. E.g., if a server of type A reaches, let's say, 60% load, the cloud should spawn another VM instance of the same type to distribute the load (using the NetScaler).
Is there an open-source system that can help me, or how do I go about developing scripts to do the same?
If I understand you correctly, you want to live-migrate the VMs depending on the load of the host. You can use OpenNebula to help you with this, together with its advanced scheduler, named Haizea.
While I've never tried this, you could also use OpenNebula's APIs to create more VMs if a VM gets too much load.
Take a look at http://openstack.org/. It's open source.
OpenStack and OpenNebula have already been mentioned; here are two more open-source IaaS projects:
Eucalyptus
Nimbus
Use Apache CloudStack: it is open source and has tight integration with NetScaler and F5 load balancers. Check the link below for NetScaler LB creation and VM creation. Rules can be set on these, and new VMs can be spawned based on load.
https://cloudstack.apache.org/docs/api/apidocs-4.5/TOC_Root_Admin.html
There is a cloud platform called Nimbo that lets you do this and more out of the box: http://www.hcltech.com/cloud-computing/Nimbo/