Best practices on Hazelcast persistence and multiple members

I went through several related topics here and it seems the question is still open; the official documentation does not cover it, so here we are.
There's a cluster with N members in one group
There's one distributed map
The map has a persistent store backed by MongoDB
Number of backups is 1
Write-through is enabled
The map is supposed to be populated at startup
The data in the DB is too big to be held by a single member
The questions are:
To make write operations work correctly, do all members have to have the MapStore enabled, given that each member writes only the partitions it owns?
When the cluster is starting, should the loadKeys() implementation load only a subset of the data, or should it try to load the entire data set and let Hazelcast partitioning take care of distributing the keys?
If both scenarios from the previous question are valid, which one is more efficient?

Yes, all members MUST have the MapStore implementation enabled.
No, it has to load all the keys (at least those you want to make available eagerly), and the loading node distributes them based on the standard distributed-hashing pattern.
I guess this question is not necessary anymore, or maybe I misunderstood it.
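For reference, here is a minimal sketch of what such a MongoDB-backed MapStore might look like, assuming Hazelcast 5.x and the MongoDB sync Java driver; the connection string, database/collection names and the String key/value types are placeholders, not part of the original setup.

    // A minimal sketch of a MongoDB-backed MapStore, assuming Hazelcast 5.x and the
    // MongoDB sync Java driver. Names and types below are illustrative assumptions.
    import com.hazelcast.map.MapStore;
    import com.mongodb.client.MongoClients;
    import com.mongodb.client.MongoCollection;
    import com.mongodb.client.model.Filters;
    import com.mongodb.client.model.ReplaceOptions;
    import org.bson.Document;

    import java.util.ArrayList;
    import java.util.Collection;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public class MongoMapStore implements MapStore<String, String> {

        private final MongoCollection<Document> col =
                MongoClients.create("mongodb://localhost:27017")   // assumed connection string
                        .getDatabase("appdb")                      // assumed database name
                        .getCollection("mapstore");                // assumed collection name

        @Override
        public void store(String key, String value) {
            // write-through: persist the entry the moment map.put() returns
            col.replaceOne(Filters.eq("_id", key),
                    new Document("_id", key).append("value", value),
                    new ReplaceOptions().upsert(true));
        }

        @Override
        public void storeAll(Map<String, String> entries) {
            entries.forEach(this::store);
        }

        @Override
        public void delete(String key) {
            col.deleteOne(Filters.eq("_id", key));
        }

        @Override
        public void deleteAll(Collection<String> keys) {
            keys.forEach(this::delete);
        }

        @Override
        public String load(String key) {
            Document d = col.find(Filters.eq("_id", key)).first();
            return d == null ? null : d.getString("value");
        }

        @Override
        public Map<String, String> loadAll(Collection<String> keys) {
            Map<String, String> result = new HashMap<>();
            for (String k : keys) {
                String v = load(k);
                if (v != null) result.put(k, v);
            }
            return result;
        }

        @Override
        public Iterable<String> loadAllKeys() {
            // return every key; Hazelcast then asks each member to loadAll() only
            // the keys whose partitions that member owns
            List<String> keys = new ArrayList<>();
            for (Document d : col.find().projection(new Document("_id", 1))) {
                keys.add(d.getString("_id"));
            }
            return keys;
        }
    }

Registering this class in the map's MapStoreConfig with write-delay-seconds left at 0 gives you the write-through behaviour described in the question; a value greater than 0 would switch it to write-behind.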

Related

GridFS: what it gives us

I'm reading "Seven Databases in Seven Weeks". Could you please explain me the text below:
One downside of a distributed system can be the lack of a single
coherent filesystem. Say you operate a website where users can upload
images of themselves. If you run several web servers on several
different nodes, you must manually replicate the uploaded image to
each web server’s disk or create some alternative central system.
Mongo handles this scenario by its own distributed filesystem called
GridFS.
Why do you need to replicate the uploaded images manually? Do they mean that some of the servers will run Linux and some of them Windows?
Do all replicated data stores tend to implement their own filesystem?
On the need for data distribution and its intricacies
Let us dissect the example in a bit more detail. Say you have a web application where people can upload images. You fire up your server, save the images to the local machine in /home/server/app/uploads, the users use the application. So far, so good.
Now, your application becomes the next big thing, you have tens of thousands of concurrent users and your single server simply can not handle that load any more. Luckily, aside from the fact that you store the images in the local file system, you implemented the application in a way that you could easily put up another instance and distribute the load between them. But now here comes the problem: the second instance of your application would not have access to the images stored on the first instance – bad thing.
There are various ways to overcome that. Let us take NFS as an example. Now your second instance can access the images, and even store new ones, but that puts all the images on one machine, which sooner or later will run out of disk space.
Scaling storage capacity can easily become a very expensive part of an application. And this is where GridFS comes to help. It uses MongoDB's rather simple means of distributing data across many machines, a process called sharding. Basically, it works like this: instead of accessing the local filesystem, you access GridFS (and the files contained within it) via the MongoDB database driver.
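To make that concrete, here is a minimal sketch of storing and reading a file through GridFS with the MongoDB Java driver; the connection string, database name, bucket name and file paths are assumptions for illustration only.

    // A minimal sketch of accessing GridFS through the MongoDB Java (sync) driver.
    import com.mongodb.client.MongoClients;
    import com.mongodb.client.MongoDatabase;
    import com.mongodb.client.gridfs.GridFSBucket;
    import com.mongodb.client.gridfs.GridFSBuckets;
    import org.bson.types.ObjectId;

    import java.io.FileInputStream;
    import java.io.FileOutputStream;
    import java.io.InputStream;
    import java.io.OutputStream;

    public class GridFsExample {
        public static void main(String[] args) throws Exception {
            MongoDatabase db = MongoClients.create("mongodb://localhost:27017") // assumed URI
                    .getDatabase("appdb");                                      // assumed database
            GridFSBucket uploads = GridFSBuckets.create(db, "uploads");         // arbitrary bucket name

            // store an image: the driver splits it into chunks behind the scenes
            ObjectId fileId;
            try (InputStream in = new FileInputStream("/home/server/app/uploads/avatar.png")) {
                fileId = uploads.uploadFromStream("avatar.png", in);
            }

            // any other application instance pointing at the same database
            // (or sharded cluster) can read the file back by its id
            try (OutputStream out = new FileOutputStream("/tmp/avatar-copy.png")) {
                uploads.downloadToStream(fileId, out);
            }
        }
    }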
As for the OS: usually, I would avoid mixing different OSes within a deployment, if at all possible. Nowadays, there is little to no reason for the average project to do so. I assume you are referring to the "different nodes" part of that text. This only refers to the fact that you have multiple machines involved. But they can perfectly well run the same OS.
Sharding vs. replication
Note: The following is vastly simplified, because going into detail would well exceed the scope of one or more books.
The excerpt you quoted mixes two concepts a bit and is not clear enough on how GridFS works.
Let's first make the two concepts involved a bit clearer.
Replication is roughly comparable to a RAID1: The data is stored on two or more machines, and each machine holds all data.
Sharding (also known as "data partitioning") is roughly comparable to a RAID0: each machine holds only a subset of the data, although you can access the whole data set (the files, in this case) transparently, and the distributed storage system takes care of finding the data you requested (and decides where to store the data when you save a file).
Now, MongoDB allows you to have a mixed form, roughly comparable to RAID10: the data is distributed ("partitioned" or "sharded") between two or more shards, but each shard may (and almost always should) consist of a replica set, which is an odd number of MongoDB instances which all hold the same data. This mixed form is called a "sharded cluster with a replication factor of X", where X denotes the number of non-hidden members per replica set.
The advantage of a sharded cluster is that there is no single point of failure any more:
Depending on your replication factor, one or more replica set members can fail, and the cluster is still working
There are servers which hold the metadata (which part of the data is stored on which shard, for example). Those are called config servers. As of MongoDB version 3.0.x (iirc), they form a replica set themselves – not much of a problem if a node fails.
You access a sharded cluster via the mongos sharded cluster query router, of which you usually have one per instance of your application (and most often on the same server as your application instance). But: most drivers can be given multiple mongos instances to connect to. So if one of those mongos instances fails, the driver will happily use the next one you configured.
Another advantage is that in case you need to add additional storage or have more IOPS than your current system can handle, you can add another shard: MongoDB will take care of distributing the existing data between the old shards and the new shard automagically. The details on how this is done are covered in the introduction to Sharding in the MongoDB docs.
The third advantage – and the one that has the most impact, imho – is that you can distribute (and replicate) data on relatively cheap commodity hardware, whereas most other technologies offering the benefits of GridFS on a sharded cluster require you to have specialized and expensive hardware.
A disadvantage is of course that this setup only is feasible if you have a lot of data, since many machines are necessary to set up a sharded cluster:
At least 3 config servers
At least a single shard, which should consist of a replica set. The minimal setup would be two data-bearing nodes plus an arbiter
But: in order to use GridFS in general, you do not even need a replica set ;).
To stay within our above example: Both instances of your application could well access the same MongoDB instance holding a GridFS.
Do all replicated data stores tend to implement their own filesystem?
Replicated? Not necessarily. There is DRBD for example, which could be described as "RAID1 over ethernet".
Assuming we have the same mixup of concepts here as we had above: Distributed file systems by their very definition implement a file system.
In this case, IMHO, the author was stating that each web server has its own disk storage, not shared with the others. Given that, the upload path could be /home/server/app/uploads, and since it is part of the server's local filesystem it is not shared at all (a kind of isolation on the service provider's side). To populate the other servers, we would need a script/job that syncs the data to them behind the scenes.
This scenario could be a case for using GridFS with Mongo.
How GridFS works:
GridFS divides the file into parts, or chunks, and stores each
chunk as a separate document. By default, GridFS uses a chunk size of
255 kB; that is, GridFS divides a file into chunks of 255 kB with the
exception of the last chunk. The last chunk is only as large as
necessary. Similarly, files that are no larger than the chunk size
only have a final chunk, using only as much space as needed plus some
additional metadata.
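You can see this chunking directly by looking at the two collections GridFS creates under its bucket prefix: one for file metadata and one for the chunk documents. A small sketch using the Java driver, assuming the default bucket name fs and a fileId returned by an earlier uploadFromStream() call:

    import com.mongodb.client.MongoDatabase;
    import org.bson.Document;
    import org.bson.types.ObjectId;

    public class GridFsInspection {
        // prints how a previously uploaded file was chunked; assumes the default
        // bucket prefix "fs" and a valid fileId from an earlier upload
        static void describe(MongoDatabase db, ObjectId fileId) {
            Document meta = db.getCollection("fs.files")                 // filename, length, chunkSize, ...
                    .find(new Document("_id", fileId)).first();
            long chunks = db.getCollection("fs.chunks")                  // one document per chunk (255 kB by default)
                    .countDocuments(new Document("files_id", fileId));
            System.out.println(meta.getLong("length") + " bytes stored as " + chunks + " chunk(s)");
        }
    }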
In reply to comment:
BSON is a binary format, and Mongo has a special replication mechanism for replicating collection data (GridFS is just a special set of 2 collections). It uses the oplog to send diffs to the other servers in the replica set. More here
Any comments welcome!

Consistency for read from distributed databases

I have a set of databases distributed across multiple locations in the network and, for example, one client that needs to store some data in those databases.
I need to make sure my data will always be stored.
I can't organize a replica set with sync/async replication, as it would force me to connect to one master, which is a single point of failure, so I send data from the client to all the databases I know. Of course, any single database can fail to store a write, so I rely on the writes to the other databases. In the end I get different, though overlapping, data sets stored in the DBs (e.g. DB1 -> [1, 2, 3], DB2 -> [1, 3], DB3 -> [2, 3, 4]).
How can I get consistent data when reading from these DBs? What techniques should I apply on the client that writes and on the client that reads to be able to merge the data sets successfully (so the reader gets [1, 2, 3, 4])?
What you're asking is basically an entire branch of computer science. It is very much a non-trivial problem and you will find that a surprising number of things are impossible.
Also note that simply saying "consistent" data is not a sufficient definition. There are all sorts of levels of consistency (read-your-own-writes, reads-follow-writes, monotonic read, linearizable, causal, etc.) I think you likely mean (in a very loose sense): consistency similar to what you get when you use just one database.
To answer your question directly, you want to decide on a read quorum size and a write quorum size. These sizes must be selected such that reads and writes will overlap by at least one database instance. If you want to optimize for write latency, use a smaller write quorum and do the opposite if you want to optimize for read latency.
A more detailed exposition of overlapping read/write quorums can be found in Weighted Voting for Replicated Data. This is considered a seminal work in the field of replication.
Also be careful around the behavior of your overlapping quorums when adding or removing a database instance. It sounds like you have a relatively static topology, but if that is not the case, then an entirely different set of choices need to be made.
Lastly - and here's the real kick in the teeth - what I have described doesn't actually give you consistency (by any definition) in some cases (I like Daniel Abadi's explanation of when and why), but for many systems it gives you good enough consistency. It's up to you to decide exactly what level of consistency you need.
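To make the quorum idea concrete, here is a toy sketch with N in-memory replicas and a client-supplied version per write. It only illustrates the overlap rule (W + R > N); a real system would also need failure handling, conflict resolution and read repair, and the replica maps would of course be remote databases.

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public class QuorumClient {

        // a versioned value: the version lets the reader pick the newest copy
        record Versioned(long version, String value) {}

        // stand-ins for the individual databases
        private final List<Map<String, Versioned>> replicas = new ArrayList<>();
        private final int writeQuorum;
        private final int readQuorum;

        QuorumClient(int n, int w, int r) {
            if (w + r <= n) throw new IllegalArgumentException("quorums must overlap: W + R > N");
            for (int i = 0; i < n; i++) replicas.add(new HashMap<>());
            this.writeQuorum = w;
            this.readQuorum = r;
        }

        void write(String key, String value, long version) {
            int acks = 0;
            for (Map<String, Versioned> replica : replicas) {
                replica.put(key, new Versioned(version, value));  // may fail or time out in real life
                if (++acks == writeQuorum) return;                // stop once W replicas acknowledged
            }
        }

        String read(String key) {
            Versioned newest = null;
            for (int i = 0; i < readQuorum; i++) {                // ask R replicas
                Versioned v = replicas.get(i).get(key);
                if (v != null && (newest == null || v.version() > newest.version())) newest = v;
            }
            // because W + R > N, the R replicas asked always include at least
            // one replica that acknowledged the latest successful write
            return newest == null ? null : newest.value();
        }

        public static void main(String[] args) {
            QuorumClient c = new QuorumClient(3, 2, 2);           // N = 3, W = 2, R = 2
            c.write("answer", "42", 1);
            System.out.println(c.read("answer"));                 // prints 42
        }
    }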
There is two-way/three-way replication software that does not require a "master".
You can also use transaction log based replications.
What and how you can use will depend on the database product you use.
HTH

Is MongoDB usable as shared memory for a parallell processing / multiple-instances application?

I'm planning a product that will process updates from multiple data feeds. The input data is guesstimated to be a 100 Mbps stream in total, containing 100-byte messages. These messages contain several data fields that need to be checked for correlation with the existing data set within the application. If an input message correlates with an existing data record, it will update that record; if not, it will create a new record. It is assumed that data are updated every 3 seconds on average.
The correlation process is assumed to be a bottleneck, and thus I intend to make our product able to run balanced across multiple processes if needed (most likely on separate hardware or VMs), somewhat in the vicinity of space-based architecture. I'd then like shared storage between my processes so that all existing data records are visible to all the running processes. The shared storage will have to fetch possible candidates for correlation through a query/search based on some attributes (e.g. elevation). It will have to offer configurable warm redundancy and the possibility to store snapshots every 5 minutes for logging.
Everything seems to be pointing towards MongoDB, but I'd like a confirmation from you that MongoDB will meet my needs. So do you think it is a go?
-Thank you
NB: I am not considering a relational database because we want to focus all coding in our application, instead of having to write 'stored procedures'/'functions' in a separate environment to optimize the performance of our system. Further, the data is diverse and I don't want to try to normalize it into a schema.
Yes, MongoDB will meet your needs. I think the following aspects of your description are particularly relevant in your DB selection decision:
1. An update happens every 3 seconds
MongoDB has a database-level write lock (usually short-lived) that blocks read operations. This means that you will want to ensure that you have enough memory to fit your working set, and you will generally not run into any write-lock issues. Note that bulk inserts will hold the write lock for longer.
If you are sharding, you will want to consider shard keys that allow for write scaling, i.e. distribute writes across different shards.
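For example, a hashed shard key spreads inserts across the shards. Here is a sketch of enabling that from the Java driver; the database, collection and field names are made up for illustration, and the command has to be sent through a mongos router:

    import com.mongodb.client.MongoClients;
    import com.mongodb.client.MongoDatabase;
    import org.bson.Document;

    public class ShardingSetup {
        public static void main(String[] args) {
            // must be run against a mongos router, not a plain mongod
            MongoDatabase admin = MongoClients.create("mongodb://mongos-host:27017") // assumed host
                    .getDatabase("admin");

            admin.runCommand(new Document("enableSharding", "feeds"));               // assumed database
            admin.runCommand(new Document("shardCollection", "feeds.messages")       // assumed collection
                    .append("key", new Document("sourceId", "hashed")));             // hashed key spreads writes
        }
    }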
2. Shared storage for multiple processes
This is a pretty common scenario; in fact, many MongoDB deployments are expected to be accessed from multiple processes concurrently. Unlike the write lock, the read lock does not block other reads.
3. Warm redundancy
Supported through MongoDB replication. If you'd like to read from secondary server(s) you will need to set the Read Preference to secondaryPreferred in your driver.
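For instance, with the Java driver the read preference can be set on the connection string; the host names, replica set name, database and collection below are placeholders:

    import com.mongodb.client.MongoClient;
    import com.mongodb.client.MongoClients;

    public class SecondaryReads {
        public static void main(String[] args) {
            // reads go to a secondary when one is available, otherwise to the primary
            MongoClient client = MongoClients.create(
                    "mongodb://db1.example.com:27017,db2.example.com:27017,db3.example.com:27017/"
                    + "?replicaSet=rs0&readPreference=secondaryPreferred");
            System.out.println(client.getDatabase("appdb")
                    .getCollection("records").countDocuments());
        }
    }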

Data Synchronization in a Distributed system

We have a REST-based application built on the Restlet framework which supports CRUD operations. It uses a local file to store the data.
Now the requirement is to deploy this application on multiple VMs, and any update operation in one VM needs to be propagated to the other application instances running on the other VMs.
Our idea to solve this was to send multiple POST messages (to all the other applications) when an update operation happens in a given VM.
The assumption here is that each application has a list of the URLs of all the other applications.
Is there a better way to solve this?
Consistency is a deep topic, and a hard thing to get right. The trouble comes when two nearly-simultaneous changes occur to the same data: conflicting updates can arrive in one order on one server, and in another order on another. This is a problem, since the two servers no longer agree on what the data is, and it isn't clear who is "right".
The short story: get your favorite RDBMS (for example, MySQL is popular) and have your app servers connect to it in what is called the three-tier model. Be sure to perform complex updates in transactions, which will provide an acceptable consistency model.
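As a minimal sketch of the short story, assuming JDBC against MySQL with invented connection details, table and column names, a compound update wrapped in a transaction looks like this:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    public class TransferExample {
        // moves one unit between two records atomically: either both statements
        // take effect or neither does, no matter which app server runs it
        static void move(String from, String to) throws SQLException {
            try (Connection con = DriverManager.getConnection(
                    "jdbc:mysql://db-host:3306/app", "app", "secret")) {   // assumed URL and credentials
                con.setAutoCommit(false);
                try (PreparedStatement take = con.prepareStatement(
                             "UPDATE inventory SET quantity = quantity - 1 WHERE location = ?");
                     PreparedStatement give = con.prepareStatement(
                             "UPDATE inventory SET quantity = quantity + 1 WHERE location = ?")) {
                    take.setString(1, from);
                    give.setString(1, to);
                    take.executeUpdate();
                    give.executeUpdate();
                    con.commit();
                } catch (SQLException e) {
                    con.rollback();
                    throw e;
                }
            }
        }
    }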
The long story: the three-tier model serves well for small-to-medium scale web sites/services. You will eventually find that the single database becomes the bottleneck. For services whose read traffic is substantially larger than their write traffic, a common optimization is to create a single-master, many-slave database replication arrangement, where all writes go to the single master (required for consistency with non-distributed transactions), but the more common reads can go to any of the read slaves.
For services with evenly mixed read/write traffic, you may be better served by dropping some of the conveniences (and accompanying restrictions) that formal SQL provides and instead using one of the various "NoSQL" data stores that have emerged recently. Their relative merits and fitness for various problems is a deep topic in itself.
I can see 7 major options for now. You should find out more details and decide whether the facilities / trade-offs are appropriate for your purpose
Perform the CRUD operation on a common RDBMS. Simplest and most consistent
Perform the CRUD operations on a common RDBMS which runs as a fast in-memory RDBMS, e.g. TimesTen from Oracle.
Perform the CRUD on a distributed cache, or your own home-cooked distributed hash table which can guarantee synchronization, e.g. Hazelcast/Ehcache and others (see the sketch after this list).
Use a fast common state server like Redis/memcached, perform your updates on it in a synchronized manner, and write out the successful operations to a DB in a lazy manner if required.
Distribute your REST servers such that the CRUD operations on a single entity are only performed by a single master. Once this is done, the details of the changes can be communicated to everyone else using a reliable message bus or a distributed database (e.g. Postgres) that runs underneath and syncs all of your updates fairly fast.
Target eventual consistency and use a distributed data store like Cassandra which lets you target the consistency you require
Use a distributed consensus algorithm like Paxos or Raft, or (recommended) an implementation of one such as ZooKeeper or etcd respectively, and take ownership of the item you want to change from each REST server before you perform the CRUD operation. This might be a bit slow, though, and is much the same as what Cassandra can give you.
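As a minimal sketch of option 3 above, assuming Hazelcast 5.x: every REST instance joins (or connects to) the same cluster and performs its CRUD against a shared map, taking a per-key lock around compound updates. The map name and entity key are illustrative only.

    import com.hazelcast.core.Hazelcast;
    import com.hazelcast.core.HazelcastInstance;
    import com.hazelcast.map.IMap;

    public class SharedState {
        public static void main(String[] args) {
            // each REST server starts (or connects to) a member of the same cluster;
            // the map contents are partitioned and backed up across all of them
            HazelcastInstance hz = Hazelcast.newHazelcastInstance();
            IMap<String, String> entities = hz.getMap("entities");   // assumed map name

            String id = "customer-42";                               // assumed entity key
            entities.lock(id);                                       // per-key lock for the compound update
            try {
                String current = entities.get(id);
                entities.put(id, current == null ? "v1" : current + "-updated");
            } finally {
                entities.unlock(id);
            }
        }
    }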

What is a cluster in a RDBMS?

Please explain what a cluster is in an RDBMS.
In SQL, a cluster can also refer to a specific physical ordering of rows.
For example, consider a database with two tables: INVOICES and INVOICE_ITEMS. If many INVOICE_ITEMs are inserted concurrently, chances are that items of the same invoice end up on multiple physical blocks of the underlying storage. When reading such an invoice, unneeded data will be read together with the interesting rows. Clustering INVOICE_ITEMS over the foreign key to INVOICES groups the item rows of the same invoice together in the same block, thus reducing the number of read operations needed when accessing the invoice.
Read about clustered index on wikipedia.
In system administration, a "cluster" is a number of servers configured to provide the same service, but look like one server to the users.
This can be done for performance reasons (two servers can answer more requests than a single one) or redundancy (if one server crashes, the others still work).
Such configurations often need special software or setup to work. Some services, like serving static web content, can be clustered very easily. Others, like RDBMS, need complicated replication schemes to coordinate.
Read about computer clusters on wikipedia.
In statistics, a cluster is a "group of items so that objects from the same cluster are more similar to each other than objects from different clusters."
Read about Cluster analysis on wikipedia.
From here:
High-availability clusters (also known
as HA Clusters or Failover Clusters)
are computer clusters that are
implemented primarily for the purpose
of providing high availability of
services which the cluster provides.
They operate by having redundant
computers or nodes which are then used
to provide service when system
components fail. Normally, if a server
with a particular application crashes,
the application will be unavailable
until someone fixes the crashed
server. HA clustering remedies this
situation by detecting
hardware/software faults, and
immediately restarting the application
on another system without requiring
administrative intervention, a process
known as Failover
In a database context it can have two completely different meanings:
it may either mean data clustering or index clustering, which is the grouping of similar rows. This is useful for data mining; some databases (e.g. Oracle) also use it to optimize physical data organization;
or a cluster as a database running on many closely linked servers.
Clustering, in the context of databases, refers to the ability of several servers or instances to connect to a single database.
An instance is the collection of memory and processes that interacts with a database, which is the set of physical files that actually store data.