What is the largest Couchbase cluster (number of nodes)? - nosql

Does anybody know what the biggest Couchbase cluster deployed so far is? Since there is a lot of information broadcast from each node, I have doubts about the scalability.
Thanks

I would like to answer this question with something other than a simple number of nodes. In your question you are talking about scalability and some "doubts" about it... and as you can guess, working at Couchbase, I have no doubts about the scalability.
When people are using Couchbase, like any NoSQL solution, they have a specific use case in mind for their data, and each use case has a specific data "life cycle" (volume, throughput, expiration, ...). So what do you have in mind when you are talking about scalability?
For example, I have been working on a project where we have a 20-node cluster processing 650,000 ops/s with 30% of the operations mutating data. For this specific use case, there is no need to go bigger. You can also look at other use cases, like Draw Something, with 80-90 nodes, ~50 million total users, 16 million daily users, and 2 billion documents...
So instead of talking about a "hypothetical" cluster size, I would like to understand your use case and type of deployment (available hardware/VMs) to determine what a good topology would be.

Check out this article; it covers the growth of the game 'Draw Something'. They went from a 6-node cluster to a 90-node cluster in 8 weeks due to rapid growth. They also had zero downtime while adding nodes to the cluster, and at week 6 they were processing 3,000 drawings a second.
http://www.couchbase.com/customer-stories/couchbase-helps-omgpop-scale-draw-something-50-million-users-50-days
Edit
Check slide 16 at this link: a cluster size of 100+ nodes for Viber.
http://www.couchbase.com/presentations/couchbase-tlv-2014-couchbase-at-viber

Related

MongoDB Atlas - In general, what affects my write rate here?

I am using an Atlas cluster on the M60 tier, configured with 3099 IOPS.
I am trying to write 116,550,000 documents, weighing in at around 12 KB per doc on average, as fast as possible (optimally in less than 6 hours).
I have 6 compound indexes made of multiple fields.
My question is: I would like to know what, in principle, affects my write speed and essentially keeps it from getting any faster.
Is it the IOPS? If I increase the IOPS, say to 9000, will I see a drastic change in write speed?
Is sharding the answer here? And if so, can I use multiple shards on, let's say, the M30 tier, but with more IOPS, and achieve my goal?
Thanks to all the responders :)
The best write speed I managed to achieve was 1,200 docs per second, using Node. Optionally, I have a Spark cluster with the mongodb-spark-connector as a sending option, but I think the bottleneck is the Mongo server and not the client.
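To put concrete numbers on that goal, here is a quick back-of-the-envelope sketch using only the figures from the question; it says nothing about where the bottleneck actually is:

```python
# Throughput needed to load 116,550,000 ~12 KB documents in 6 hours
# (numbers taken from the question above).
total_docs = 116_550_000
doc_kb = 12
window_hours = 6

docs_per_sec = total_docs / (window_hours * 3600)
mb_per_sec = docs_per_sec * doc_kb / 1024

print(f"required rate: {docs_per_sec:,.0f} docs/s (~{mb_per_sec:,.0f} MB/s)")
# -> roughly 5,400 docs/s and ~63 MB/s sustained, i.e. about 4-5x the
#    1,200 docs/s reported from a single client.
```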

Multiple node pools vs single pool with many machines vs big machines

We're moving all of our infrastructure to Google Kubernetes Engine (GKE) - we currently have 50+ AWS machines with lots of APIs, Services, Webapps, Database servers and more.
As we have already dockerized everything, it's time to start moving everything to GKE.
I have a question that may sound too basic, but I've been searching the Internet for a week and have not found any reasonable post about this.
Straight to the point, which of the following approaches is better and why:
Having multiple node pools with multiple machine types and always specifying in which pool each deployment should go; or
Having a single pool with lots of machines and letting the Kubernetes scheduler do the job, without worrying about where my deployments end up; or
Having BIG machines (in multiple zones to improve the cluster's availability and resilience) and letting Kubernetes deploy everything there.
A list of considerations to be taken merely as hints; I do not pretend to describe best practice.
Each node you add brings some overhead with it, but you gain flexibility and availability, making node failures and maintenance less impactful on production.
Nodes that are too small would cause a big waste of resources, since sometimes it will not be possible to schedule a pod even if the total amount of free RAM or CPU across the nodes would be enough; you can think of this issue as similar to memory fragmentation (see the sketch just below).
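A toy illustration of that fragmentation effect, with hypothetical numbers:

```python
# Total free CPU across the cluster is enough for the pod, but no single
# small node can host it, so the pod stays Pending.
free_millicores_per_node = [600, 700, 500, 800]   # free CPU on each small node
pod_request = 1500                                # millicores requested by one pod

total_free = sum(free_millicores_per_node)
fits_somewhere = any(free >= pod_request for free in free_millicores_per_node)

print(f"total free CPU: {total_free}m")            # 2600m free overall
print(f"pod fits on some node: {fits_somewhere}")  # False
```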
I guess that the sizes of your pods and their memory and CPU requests are not all similar, but I do not see this, in principle, as a big issue or as a reason to go for 1). I do not see why a big pod should run only on big machines and a small one should be scheduled on small nodes. I would rather use 1) if you need a different memory-GB/CPU-cores ratio to support different workloads.
I would advise you to run some tests in the initial phase to understand the size of the biggest pod and the average size of the workload, in order to properly choose the machine types. Consider that having one pod that exactly fits one node, and dedicating the node to it, is not the right way to proceed (virtual machines exist for this kind of scenario), since fragmentation of resources would easily make it impossible to schedule such a large pod.
Consider that pod sizes will likely increase in the future, and that scaling vertically is not always immediate, since you need to switch off machines and terminate pods. I would oversize a bit to take this into account, and because scaling horizontally is much easier.
Talking about the machine type, you can decide to go for a machine 5x the size of the biggest pod you have (or 3x? or 10x?). Also oversize the number of nodes in the cluster a bit, to account for overhead and fragmentation and in order to still have free resources; a rough sizing sketch follows below.
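For example, a rough sizing calculation under that heuristic (all numbers hypothetical):

```python
import math

# Heuristic from above: nodes ~5x the biggest pod, plus some headroom.
biggest_pod_gb = 4            # memory request of the largest pod
node_multiplier = 5           # "machine ~5x the biggest pod"
total_workload_gb = 180       # sum of memory requests across all workloads
headroom = 1.3                # ~30% extra for overhead, fragmentation, spikes

node_gb = biggest_pod_gb * node_multiplier                    # 20 GB nodes
node_count = math.ceil(total_workload_gb * headroom / node_gb)

print(f"node size: {node_gb} GB, node count: {node_count}")   # 20 GB x 12 nodes
```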
Remember that you have a hard limit of 100 pods per node and 5,000 nodes per cluster.
Remember that in GCP the network egress throughput cap depends on the number of vCPUs a virtual machine instance has. Each vCPU has a 2 Gbps egress cap for peak performance; however, each additional vCPU increases the network cap, up to a theoretical maximum of 16 Gbps per virtual machine.
Regarding the prices of the virtual machines, note that there is no difference in price between buying two machines of size x or one of size 2x. Avoid customising the size of machines, because it is rarely convenient; if you feel your workload needs more CPU or memory, go for a highmem or highcpu machine type.
P.S. Since you are going to build a pretty big cluster, check the sizing of the cluster DNS.
I will add any further considerations that come to mind; in the future, consider updating your question with a description of the path you chose and the issues you faced.
1) makes a lot of sense: if you want, you can still let your deployments treat it as one large pool (by not adding a nodeSelector/nodeAffinity), but you can also have machines of different sizes, you can think about having a pool of spot instances, and so on. And, after all, you can have pools that are tainted and therefore excluded from normal scheduling and available only to a particular set of workloads (see the sketch after this list). In my opinion it is preferable to gain some proficiency with this approach from the very beginning, yet with many provisioners it should be very easy to migrate from 2) to 1) anyway.
2) As explained above, it's effectively a subset of 1), so it is better to build up experience with the 1) approach from day one; but if you make sure your provisioning solution supports easy extension to the 1) model, then you can get away with starting with this simplified approach.
3) Big is nice, but "big" is relative. It depends on the requirements and the amount of your workloads. Remember that while you need to plan for the loss of a whole AZ anyway, it will be much more common to lose single nodes (reboots, decommissioning of underlying hardware, updates, etc.), so if you have more hosts, the impact of losing one will be smaller. The bottom line is that you need to find your own balance that makes sense for your particular scale. Maybe 50 nodes is too many; would 15 cut it? Who knows but you :)
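To illustrate the nodeSelector/taint idea from 1), here is a minimal sketch using the official Kubernetes Python client. The pool name high-mem-pool, the dedicated=spot:NoSchedule taint, and the image are hypothetical; cloud.google.com/gke-nodepool is the label GKE attaches to nodes to identify their pool.

```python
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside the cluster

# Pod spec pinned to a specific (hypothetical) GKE node pool, tolerating a
# (hypothetical) taint used to reserve that pool for particular workloads.
pod_spec = client.V1PodSpec(
    node_selector={"cloud.google.com/gke-nodepool": "high-mem-pool"},
    tolerations=[client.V1Toleration(
        key="dedicated", operator="Equal", value="spot", effect="NoSchedule")],
    containers=[client.V1Container(
        name="api",
        image="gcr.io/my-project/my-api:latest",   # hypothetical image
        resources=client.V1ResourceRequirements(
            requests={"cpu": "500m", "memory": "1Gi"}),
    )],
)

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="my-api"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "my-api"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "my-api"}),
            spec=pod_spec,
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```

Dropping the node_selector and tolerations fields gives you the 2) behaviour, which is why starting with pools costs little extra.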

Ideal number of replicas for database

I am looking for some best practices, and enough reading material to gauge how to decide on the number of replicas needed for Mongo. I am aware of the Mongo docs that talk about things like having an odd number of nodes, when the need for an arbiter arises, etc.
In our case the read load won't be so high that reads become a bottleneck, nor are we targeting sharding at this moment. However, we are going to run Mongo in a Docker Swarm, and there could be multiple instances of certain services trying to write. Our Swarm cluster most likely won't be very big either.
So how do I find logical answers to these:
Why not create one local mongo instance per physical node and tie it to that?
For any number of physical nodes, as long as read/write is not a bottleneck, 3 or 5 replicas are always going to be ideal for fault recovery and high availability. But why is 3 or 5 a good number? Why not 7 if I have, say, 10 physical nodes?
I am trying to find some good reads to be able to decide on how to arrive at a number. Any pointers?
To give you an answer: it all depends on many criteria, for example:
What is your budget
How big is your data
What do you want to use your replica sets for
etc...
As an example, in my case:
We have 3 Data Centers across the country
One of them is very Small
We found our sweet spot in terms of number of nodes to be 5:
1 Primary + 1 Secondary in DC1
1 Arbiter in DC2
2 secondaries in DC3
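To make the majority math behind such a topology explicit, here is a small sketch; the data center split mirrors the example above, and the general rule is that a primary can be elected only while a strict majority of voting members is reachable:

```python
# Voting members in the example topology: 2 in DC1 (primary + secondary),
# 1 arbiter in DC2, 2 secondaries in DC3. The arbiter votes but holds no data.
dc_votes = {"DC1": 2, "DC2": 1, "DC3": 2}
total_votes = sum(dc_votes.values())  # 5

def fault_tolerance(voters: int) -> int:
    majority = voters // 2 + 1        # votes needed to elect a primary
    return voters - majority          # members you can lose and still elect

# Even member counts buy no extra tolerance, which is why odd sizes are used:
for n in (3, 4, 5, 6, 7):
    print(f"{n} voters -> tolerates {fault_tolerance(n)} member failures")

# Losing any single data center still leaves a majority of the 5 voters:
majority = total_votes // 2 + 1
for down, votes in dc_votes.items():
    print(f"{down} down -> {total_votes - votes} voters left, "
          f"primary electable: {total_votes - votes >= majority}")
```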

Azure Service Fabric reliable collections and memory

Let's say I'm running a Service Fabric cluster on 5 D1-class (1 core, 3.5 GB RAM, 50 GB SSD) VMs, and that I'm running 2 reliable services on this cluster, one stateless and one stateful. Let's assume that the replica target is 3.
How do I calculate how much my reliable collections can hold?
Let's say I add one or more stateful services. Since I don't really know how the framework distributes services, do I need to take the most conservative approach and assume that all of my stateful services may end up on a single node, and that their cumulative memory needs to stay below the RAM available on a single machine?
TLDR - Estimating the expected capacity of a cluster is part art, part science. You can likely get a good lower bound which you may be able to push higher, but for the most part deploying things, running them, and collecting data under your workload's conditions is the best way to answer this question.
1) In general, the collections on a given machine are bounded by the amount of available memory or the amount of available disk space on a node, whichever is lower. Today we keep all data in the collections in memory and also persist it to disk. So the maximum amount that your collections across the cluster can hold is generally (Amount of available memory in the cluster) / (Target Replica Set Size).
Note that "Available Memory" is whatever is left over from other code running on the machines, including the OS. In your above example though you're not running across all of the nodes - you'll only be able to get 3 of them. So, (unrealistically) assuming 0 overhead from these other factors, you could expect to be able to put about 3.5 GB of data into that stateful service replica before you ran out of memory on the nodes on which it was running. There would still be 2 nodes in the cluster left empty.
Let's take another example. Let's say that it is about the same as your example above, except in this case you set up the stateful service to be partitioned. Let's say you picked a partition count of 5. So now on each node, you have a primary replica and 2 secondary replicas from other partitions. In this case, each partition would only be able to hold a maximum of around 1.16 GB of state, but now overall you can pack 5.83 GB of state into the cluster (since all nodes can now be utilized fully). Incidentally, just to prove out the math works, that's (3.5 GB of memory per node * 5 nodes in the cluster) [17.5] / (target replica set size of 3) = 5.83.
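Reproducing that arithmetic as a quick sketch, with the node sizes and counts from the examples above:

```python
# Capacity arithmetic from the examples: 5 nodes x 3.5 GB of memory,
# target replica set size 3, ignoring OS and other overhead.
nodes, gb_per_node, replica_set_size = 5, 3.5, 3

# Single partition: primary + 2 secondaries occupy only 3 of the 5 nodes,
# so the collection is capped by a single node's memory.
single_partition_cap_gb = gb_per_node                       # ~3.5 GB

# 5 partitions: 15 replicas spread across all 5 nodes, 3 replicas per node.
cluster_cap_gb = nodes * gb_per_node / replica_set_size     # ~5.83 GB in total
per_partition_cap_gb = gb_per_node / replica_set_size       # ~1.17 GB each

print(single_partition_cap_gb, round(cluster_cap_gb, 2), round(per_partition_cap_gb, 2))
```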
In all of these examples, we've also assumed that memory consumption for all partitions and all replicas is the same. A lot of the time that turns out to not be true (at least temporarily) - some partitions can end up with more or less work to do and hence have uneven resource consumption. We also assumed that the secondaries were always the same as the primaries. In the case of the amount of state, it's probably fair to assume that these will track fairly evenly, though for other resource consumption it may not (just something to keep in mind). In the case of uneven consumption, this is really where the rest of Service Fabric's Cluster Resource Management will help, since we can come to know about the consumption of different replicas and pack them efficiently into the cluster to make use of the available space. Automatic reporting of consumption of resources related to state in the collections is on our radar and something we want to do, so in the future, this would be automatic but today you'd have to report this consumption on your own.
2) By default, we will balance the services according to the default metrics (more about metrics is here). So by default, the different replicas of those two different services could end up on the same machine, but in your example you'll end up with 4 nodes with 1 replica from a service on them and then 1 node with two replicas from the two different services. This means that each service (each with 1 partition, as per your example) would only be able to consume 1.75 GB of memory, for a total of 3.5 GB in the cluster. This is again less than the total available memory of the cluster, since there are some portions of nodes that you're not utilizing.
Note that this is the maximum possible consumption, presuming no consumption outside the service itself. Taking this as your maximum is not advisable. You'll want to reduce it for several reasons, but the most practical reason is to ensure that in the presence of upgrades and failures there's sufficient available capacity in the cluster. As an example, let's say that you have 5 upgrade domains and 5 fault domains. Now let's say that a fault domain's worth of nodes fails while you have an upgrade going on in an upgrade domain. This means that (a little less than) 40% of your cluster capacity can be gone at any time, and you probably want enough room left over on the remaining nodes to continue. This means that if your cluster previously could hold 5.83 GB of state (from our prior calculations), in reality you probably don't want to put more than about 3.5 GB of state in it, since with more than that the service may not be able to get back to 100% healthy (note also that we don't build replacement replicas immediately, so the nodes would have to be down for your ReplicaRestartWaitDuration before you ran into this case). There's a bunch more information about metrics, capacity, buffered capacity (which you can use to ensure that room is left on the nodes for failure cases), and fault and upgrade domains in this article.
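A rough sketch of that headroom reasoning, with the 5 fault domains and 5 upgrade domains from the example:

```python
# With 5 fault domains and 5 upgrade domains, a fault domain failing during
# an upgrade can take out roughly 2 of the 5 "slices" of the cluster at once.
nodes, gb_per_node, replica_set_size = 5, 3.5, 3
fault_domains, upgrade_domains = 5, 5

ideal_capacity_gb = nodes * gb_per_node / replica_set_size           # ~5.83 GB
worst_case_unavailable = 1 / fault_domains + 1 / upgrade_domains     # ~40% of nodes
usable_capacity_gb = ideal_capacity_gb * (1 - worst_case_unavailable)

print(round(ideal_capacity_gb, 2), round(usable_capacity_gb, 2))     # 5.83, 3.5
```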
There are some other things that practically will limit the amount of state you'll be able to store. You'll want to do several things:
Estimate the size of your data. You can make a reasonable estimate up-front of how big your data is by calculating the size of each field your objects hold. Be sure to take into consideration 64-bit references. This will give you a lower-bound starting point.
Storage overhead. Each object you store in a collection will come with some overhead for storing that object. In the reliable collections depending on the collection and the operations currently in flight (copy, enumerations, updates, etc.) this overhead can range from between 100 and around 700 bytes per item (row) stored in the collections. Do know also that we're always looking for ways to reduce the amount of overhead we introduce.
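As a small sketch of that kind of up-front estimate (the object layout and item count are hypothetical; the 100-700 bytes/item overhead range comes from the point above):

```python
# Up-front size estimate for one reliable collection. Hypothetical object:
# an 8-byte key, two 8-byte (64-bit) references, and a ~64-byte payload.
items = 2_000_000
object_bytes = 8 + 2 * 8 + 64                 # rough per-object payload
overhead_low, overhead_high = 100, 700        # per-item collection overhead

low_gb = items * (object_bytes + overhead_low) / 1024**3
high_gb = items * (object_bytes + overhead_high) / 1024**3

print(f"estimated state: {low_gb:.2f} - {high_gb:.2f} GB")   # ~0.35 - 1.47 GB
```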
We also strongly recommend running your service over some period of time and measuring actual resource consumption via performance counters. Simulating some sort of real workload and then measuring the actual usage of the metrics you care about will serve you pretty well. The reason we recommend this in particular is that you will be able to see consumption from things like which CLR object heap your objects end up placed in, how often GC is running, if there's leaks, or other things like this which will impact the amount of memory you can actually utilize.
I know that this has been a long answer but I hope you find it helpful and complete.

Why is the maximum number of nodes in a MongoDB Replica Set exactly 12?

Sorry for the dumb question, but why exactly that number? At first I thought it just had to be an odd number, but actually the maximum number of voting nodes is 7, with an overall maximum of 12 nodes. If you wish to run more than this, you must use the deprecated master/slave configuration.
So how to explain this number?
All members of a replica set maintain knowledge of the current state of each of the other members. This is the rationale for limiting the total number of nodes to twelve - more than that would introduce too much overhead in heartbeats between each pair of nodes.
The maximum of seven voting members is there to avoid slowing down elections: since you need a consensus election to select a new primary, limiting the number of nodes that have to coordinate amongst themselves keeps things fast.
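To see why per-pair heartbeats get expensive, here is a tiny sketch of how the number of heartbeat links grows with the member count:

```python
# Every member heartbeats every other member, so the number of heartbeat
# links grows quadratically with the size of the replica set.
def heartbeat_links(members: int) -> int:
    return members * (members - 1)  # directed pairs

for n in (3, 7, 12, 50):
    print(f"{n:>2} members -> {heartbeat_links(n):>4} heartbeat links")
# 3 -> 6, 7 -> 42, 12 -> 132, 50 -> 2450
```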
For anyone still wondering about this, the latest dev branch of MongoDB (2.7.8 as of writing this answer) has increased the overall limit to 50 as part of SERVER-15060. This change, assuming it is not reverted due to irreconcilable problems during testing, will be available in the 2.8 production release. Note: the max number of voting nodes remains at 7 but this change should prevent the need to use the legacy master/slave configuration for those niche cases that require more than 12 nodes.