Erasure Coded Pool suggested PG count - ceph

I'm messing around with the PG calculator to figure out the best PG count for my cluster. I have an erasure coded FS pool which will most likely use half the space of the cluster in the foreseeable future. But the PG calculator only has options for replicated pools. Should I just fill in the replica # according to the erasure-code ratio, or is there another way around this?

From Ceph Nautilus onwards there's a pg-autoscaler that does the scaling for you; you just need to create the pool with an initial (maybe low) value. As for the calculation itself, your assumption is correct: you take the number of chunks (K+M) into account when planning the PG count.

From the Red Hat docs:
3.3.4. Calculating PG Count
If you have more than 50 OSDs, we recommend approximately 50-100 placement groups per OSD to balance out resource usage, data durability and distribution. If you have less than 50 OSDs, choosing among the PG Count for Small Clusters is ideal. For a single pool of objects, you can use the following formula to get a baseline:
Total PGs = (OSDs * 100) / pool size
Where pool size is either the number of replicas for replicated pools or the K+M sum for erasure coded pools (as returned by ceph osd erasure-code-profile get).
You should then check if the result makes sense with the way you designed your Ceph cluster to maximize data durability, data distribution and minimize resource usage.
The result should be rounded up to the nearest power of two. Rounding up is optional, but recommended for CRUSH to evenly balance the number of objects among placement groups.
For a cluster with 200 OSDs and a pool size of 3 replicas, you would estimate your number of PGs as follows:
(200 * 100) / 3 = 6667. Nearest power of 2: 8192
With 8192 placement groups distributed across 200 OSDs, that evaluates to approximately 41 placement groups per OSD. You also need to consider the number of pools you are likely to use in your cluster, since each pool will create placement groups too. Ensure that you have a reasonable maximum PG count.
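The formula above is easy to script. Here is a minimal sketch of the baseline calculation with power-of-two rounding; the function name and defaults are my own, and `pool_size` is the replica count for replicated pools or K+M for erasure coded pools, as the docs describe:

```python
import math

def suggested_pg_count(num_osds: int, pool_size: int,
                       target_pgs_per_osd: int = 100) -> int:
    """Baseline PG count per the formula above, rounded up to a power of two.

    pool_size: replica count for replicated pools, or K+M for EC pools
    (as returned by `ceph osd erasure-code-profile get`).
    """
    baseline = (num_osds * target_pgs_per_osd) / pool_size
    # Round up to the nearest power of two, as recommended for CRUSH balance.
    return 2 ** math.ceil(math.log2(baseline))

# 200 OSDs, 3 replicas: 6667, rounded up to 8192 (matches the worked example)
print(suggested_pg_count(200, 3))   # 8192
# Same OSD count with a hypothetical EC profile of k=4, m=2 (pool size 6)
print(suggested_pg_count(200, 6))   # 4096
```

Note how the larger effective pool size of an EC profile lowers the suggested PG count for the same OSD count.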

Related

Opensearch: Data node costs

I don't understand the costs of having 1 data node vs having 2 or more data nodes.
Will I have the same cost regardless of the number of nodes?
If I have 2 data nodes, that means that I will have double the cost of the instances?
Thanks
Depends on the instance size: i3.2xlarge would be ~2x more expensive than i3.xlarge.
If you use one instance size then yes, 2 nodes would be 2x more expensive than 1 node but you'll get more resilience (if one node goes down your cluster can still get updates and serve data) and rolling restarts.
Note, though, that OpenSearch needs a quorum of master-eligible nodes for elections to work reliably, so an odd number of nodes (e.g. 3 smaller nodes) might be better than 2 larger ones.

Redshift free storage doesn't increase after adding 2 nodes

My 4-node (dc2.large, 160 GB storage per node) Redshift cluster had around 75% of its storage full, so I added 2 more nodes to make a total of 6 nodes. I was expecting the disk usage to drop to around 50%, but after making the said change, the disk usage still remains at 75% (even after a few days and after VACUUM).
75% of 4*160 = 480 GB of data
6*160 = 960 GB of available storage in the new configuration, which means it should have dropped to 480/960, i.e. somewhere close to 50% disk usage.
The image shows the disk space percentage before and after adding two nodes.
I also checked if there are any large tables using DISTSTYLE ALL, which causes data replication across the nodes, but the tables I have with that style are very small compared to the total storage capacity, so I don't think they'd have any significant impact on the storage.
What can I do here to reduce the storage usage, as I don't want to add more nodes and then end up in the same situation again?
It sounds like your tables are affected by the minimum table size. It may be counter-intuitive but you can often reduce the size of small tables by converting them to DISTSTYLE ALL.
https://aws.amazon.com/premiumsupport/knowledge-center/redshift-cluster-storage-space/
Can you clarify what distribution style you are using for some of the bigger tables?
If you are not specifying a distribution style then Redshift will automatically pick one (see here), and it's possible that it will choose ALL distribution at first and only switch to EVEN or KEY distribution once you reach a certain disk usage %.
Also, have you run the ANALYZE command to make sure the table stats are up to date?
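To see why adding nodes can fail to free space, here is a sketch of the minimum-table-size arithmetic from the linked AWS article. The formula multiplies the 1 MB block size by user columns plus 3 hidden metadata columns, populated slices, and table segments (2 when a sort key is defined); the function name is mine, and the example assumes dc2.large's 2 slices per node:

```python
def min_table_size_mb(user_columns: int, slices: int, segments: int = 2) -> int:
    """Approximate minimum on-disk footprint of a Redshift table.

    Per the AWS knowledge-center formula: 1 MB block
    * (user columns + 3 hidden columns) * populated slices * segments.
    """
    return 1 * (user_columns + 3) * slices * segments

# A hypothetical 10-column sorted table with EVEN/KEY distribution
# touches every slice, so its floor grows when you add nodes:
print(min_table_size_mb(user_columns=10, slices=4 * 2))   # 208 MB on 4 nodes
print(min_table_size_mb(user_columns=10, slices=6 * 2))   # 312 MB on 6 nodes
```

With many small tables, this per-table floor can eat most of the capacity the new nodes added, which is consistent with the disk-usage percentage barely moving.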

Can we have more than 1024 nodes in Couchbase?

Disclaimer: just started with NoSQL.
As per my understanding, with multiple nodes the 1024 vBuckets will be divided evenly between the available nodes.
Say, in the case of a 2-node system, 512 vBuckets will reside on each node.
Similarly, in the case of 4 nodes, 256 vBuckets will reside on each node.
Extrapolating the same distribution, how will the system behave when a 1025th node is added to the cluster?
Couchbase has a fixed number of vbuckets; there will always be 1024. This also means that the maximum number of nodes a Couchbase cluster could have is 1024, which is 10x bigger than the biggest clusters we have so far. (Yes, some clients have clusters with ~100 nodes in them.)
The advantage of sharding data into 1024 vbuckets is that you won't ever need to reshard your data (an expensive operation in MongoDB, for instance). It also makes Couchbase easy to scale out (we just need to move some vbuckets to the new node) and easy to recover from a node failure (we just need to guarantee the correct number of replicas of each vbucket).
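The key-to-vbucket-to-node mapping can be sketched as follows. This is a simplification: real Couchbase clients use a CRC32-based hash of the key and a vbucket map published by the cluster, and the exact bit manipulation differs from the plain modulo shown here:

```python
import zlib

NUM_VBUCKETS = 1024  # fixed in Couchbase, regardless of cluster size

def vbucket_for_key(key: bytes) -> int:
    # Simplified: hash the document key and map it onto the 1024 vbuckets.
    return zlib.crc32(key) % NUM_VBUCKETS

def vbucket_map(num_nodes: int) -> list[int]:
    # Simplified even assignment of the 1024 vbuckets across nodes.
    return [vb % num_nodes for vb in range(NUM_VBUCKETS)]

two_nodes = vbucket_map(2)
print(two_nodes.count(0), two_nodes.count(1))   # 512 512
print(vbucket_map(4).count(0))                  # 256
```

Scaling out just reassigns some vbuckets to the new node in the map; the keys themselves never need rehashing, which is why resharding is never required.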

How databricks do auto scaling for a cluster

I have a Databricks cluster set up with autoscaling up to 12 nodes.
I have often observed Databricks scaling the cluster from 6 to 8, then 8 to 11, and then 11 to 14 nodes.
So my queries:
1. Why is it picking 2-3 nodes to add in one go?
2. Why is autoscaling triggered when few jobs are active and there is no heavy processing on the cluster? CPU usage is pretty low.
3. While autoscaling, why is it leaving the notebook in a waiting state?
4. Why is it taking up to 8-10 min to autoscale?
Thanks
I am trying to investigate why Databricks is autoscaling the cluster when it's not needed.
When you create a cluster, you can either provide a fixed number of workers for the cluster or provide a minimum and maximum number of workers for the cluster.
When you provide a fixed size cluster, Databricks ensures that your cluster has the specified number of workers. When you provide a range for the number of workers, Databricks chooses the appropriate number of workers required to run your job. This is referred to as autoscaling.
With autoscaling, Databricks dynamically reallocates workers to account for the characteristics of your job. Certain parts of your pipeline may be more computationally demanding than others, and Databricks automatically adds additional workers during these phases of your job (and removes them when they’re no longer needed).
Autoscaling makes it easier to achieve high cluster utilization, because you don’t need to provision the cluster to match a workload. This applies especially to workloads whose requirements change over time (like exploring a dataset during the course of a day), but it can also apply to a one-time shorter workload whose provisioning requirements are unknown. Autoscaling thus offers two advantages:
Workloads can run faster compared to a constant-sized under-provisioned cluster.
Autoscaling clusters can reduce overall costs compared to a statically-sized cluster.
Databricks offers two types of cluster node autoscaling: standard and optimized.
How autoscaling behaves
Autoscaling behaves differently depending on whether it is optimized or standard and whether applied to an interactive or a job cluster.
Optimized
Scales up from min to max in 2 steps.
Can scale down even if the cluster is not idle by looking at shuffle file state.
Scales down based on a percentage of current nodes.
On job clusters, scales down if the cluster is underutilized over the last 40 seconds.
On interactive clusters, scales down if the cluster is underutilized over the last 150 seconds.
Standard
Starts with adding 4 nodes. Thereafter, scales up exponentially, but can take many steps to reach the max.
Scales down only when the cluster is completely idle and it has been underutilized for the last 10 minutes.
Scales down exponentially, starting with 1 node.
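The standard scale-up pattern described above (add 4 nodes first, then grow exponentially toward the max) can be sketched as follows. The exact increments are my assumption for illustration; Databricks does not document them precisely:

```python
def standard_scale_up_steps(current: int, maximum: int) -> list[int]:
    """Illustrative sketch of standard autoscaling scale-up.

    Assumption: the first step adds 4 nodes, and each later step
    roughly doubles the cluster size, always capped at the maximum.
    """
    sizes = [current]
    step = 4  # first scale-up adds 4 nodes
    while sizes[-1] < maximum:
        sizes.append(min(sizes[-1] + step, maximum))
        step = sizes[-1]  # later steps grow with cluster size ("exponential")
    return sizes

print(standard_scale_up_steps(2, 12))   # [2, 6, 12]
```

This is consistent with seeing 2-3 nodes added at one go: the scheduler jumps in multi-node steps rather than adding one worker at a time.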

Choosing the compute resources of the nodes in the cluster with horizontal scaling

Horizontal scaling means that we scale by adding more machines into the pool of resources. Still, there is a choice of how much power (CPU, RAM) each node in the cluster will have.
When a cluster is managed with Kubernetes it is extremely easy to set any CPU and memory limit for Pods. How do you choose the optimal CPU and memory size for cluster nodes (or Pods in Kubernetes)?
For example, there are 3 nodes in a cluster with 1 vCPU and 1GB RAM each. To handle more load there are 2 options:
Add the 4th node with 1 vCPU and 1GB RAM
Add to each of the 3 nodes more power (e.g. 2 vCPU and 2GB RAM)
A straightforward solution is to calculate the throughput and cost of each option and choose the cheaper one. Are there any more advanced approaches for choosing the compute resources of the nodes in a cluster with horizontal scalability?
For this particular example I would go for 2 vCPUs instead of another 1-vCPU node, but that is mainly because I believe running an OS for anything serious on a single vCPU is just wrong. A system needs 2+ cores available to behave decently; otherwise it's too easy to overwhelm that one vCPU and grind the node to a halt. There is no ideal algorithm for this, though. It will depend on your budget, the characteristics of your workloads, etc.
As a rule of thumb, don't stick to instances that are too small, as there is a bunch of stuff that has to run on every node regardless of its size, and the more nodes, the more overhead. 3x 4 vCPU + 16/32 GB RAM sounds like a nice plan for starters, but again... it depends on what you want, need and can afford.
The answer is related to performance metrics such as latency and throughput:
Latency is the time interval between sending a request and receiving the response.
Throughput is the request processing rate (requests per second).
Latency influences throughput: higher latency = lower throughput.
If a business transaction consists of multiple sequential calls to services that can't be parallelized, then compute resources (CPU and memory) have to be chosen based on the desired latency value. Adding more instances of the services (horizontal scaling) will not have any positive influence on latency in this case.
Adding more instances of a service increases throughput, allowing it to process more requests in parallel (if there are no bottlenecks).
In other words, allocate CPU and memory resources so that the service has the desired response time, and add more service instances (scale horizontally) to handle more requests in parallel.
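The latency/throughput relationship above follows Little's law (throughput = concurrency / latency). A minimal sketch, with function name and example numbers of my own choosing:

```python
def max_throughput_rps(latency_s: float, instances: int,
                       concurrency_per_instance: int = 1) -> float:
    """Little's law sketch: throughput = in-flight requests / latency.

    Vertical scaling (more CPU/RAM per node) lowers latency_s;
    horizontal scaling raises the number of instances. Only the
    former shortens the response time of a sequential call chain.
    """
    return instances * concurrency_per_instance / latency_s

# 3 nodes, 50 ms per request, one request in flight per node:
print(max_throughput_rps(0.05, 3))   # 60.0 requests/second
# Adding a 4th identical node raises throughput, not latency:
print(max_throughput_rps(0.05, 4))   # 80.0 requests/second
```

This makes the trade-off concrete: size each node for the latency target first, then add nodes until the throughput target is met.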