I'm new to Couchbase and NoSQL technologies in general, but I'm working on a web chat application running on Node.js using Express and some other modules.
I've chosen to work with NoSQL to store sessions and all needed data on the server side. But I don't really understand some important features of Couchbase: what is a cluster, a bucket? Where can I find some clear definitions of how the server works?
Couchbase uses the term "cluster" in the same way as many other products: a Couchbase cluster is simply a collection of machines running as a co-ordinated, distributed system of Couchbase nodes.
A Bucket is a Couchbase-specific term that is roughly analogous to a 'database' in traditional RDBMS terms. A Bucket provides a container for grouping your data, both in terms of organising similar data together and of resource allocation. You can configure your buckets separately, providing different quotas, different IO priorities and different security settings on a per-bucket basis. Buckets are also the primary method for namespacing documents in Couchbase.
For further information, the Architecture and Concepts overview in the Couchbase documentation, specifically data storage, is a good starting point. A somewhat outdated, but still relevant, video introduction to Couchbase might also be helpful.
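To make this concrete for a Node.js app, here is a minimal sketch using the Couchbase Node.js SDK; the connection string, credentials and the bucket name ('chat') are my own placeholders, not anything prescribed by Couchbase:

```js
// Minimal sketch with the Couchbase Node.js SDK (connection string, credentials and bucket name are assumptions)
const couchbase = require('couchbase');

async function main() {
  // The cluster is the group of co-ordinated Couchbase nodes you connect to...
  const cluster = await couchbase.connect('couchbase://localhost', {
    username: 'Administrator',
    password: 'password',
  });

  // ...and the bucket is the container your documents live in (roughly a 'database')
  const bucket = cluster.bucket('chat');
  const collection = bucket.defaultCollection();

  // Store and retrieve a session document by its key
  await collection.upsert('session::42', { user: 'alice', createdAt: Date.now() });
  const result = await collection.get('session::42');
  console.log(result.content);
}

main().catch(console.error);
```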
Even though this has already been answered, I hope the following will be helpful to someone.
A Couchbase cluster contains nodes. Nodes contain buckets. Buckets contain documents. Documents can be retrieved in multiple ways: by their keys, by querying with N1QL, and by using Views.
As specified in the Couchbase Documentation,
Node
A single Couchbase Server instance running on a physical server, virtual machine, or a container. All nodes are identical: they consist of the same components and services and provide the same interfaces.
Cluster
A cluster is a collection of nodes that are accessed and managed as a single group. Each node is an equal partner in orchestrating the cluster to provide facilities such as operational information (monitoring) or managing cluster membership of nodes and health of nodes.
Clusters are scalable. You can expand a cluster by adding new nodes and shrink a cluster by removing nodes.
The Cluster Manager is the main component that orchestrates the cluster-level operations. For more information, see Cluster Manager.
Bucket
A bucket is a logical container for a related set of items such as key-value pairs or documents. Buckets are similar to databases in relational databases. They provide a resource management facility for the group of data that they contain. Applications can use one or more buckets to store their data. Through configuration, buckets provide segregation along the following boundaries:
Cache and IO management
Authentication
Replication and Cross Datacenter Replication (XDCR)
Indexing and Views
For further info: Couchbase Terminology
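To illustrate the second retrieval path mentioned above (querying with N1QL rather than fetching by key), here is a small hedged sketch with the Couchbase Node.js SDK; the bucket and field names are assumptions, and it presumes a suitable index already exists:

```js
// Hedged sketch: query documents with N1QL via the Node.js SDK (bucket/field names are assumptions)
const couchbase = require('couchbase');

async function findSessionsForUser(userName) {
  const cluster = await couchbase.connect('couchbase://localhost', {
    username: 'Administrator',
    password: 'password',
  });

  // Requires a primary or matching secondary index on the `chat` bucket
  const result = await cluster.query(
    'SELECT s.* FROM `chat` AS s WHERE s.user = $user',
    { parameters: { user: userName } }
  );
  return result.rows;
}
```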
Related
The reason I am asking is that I have a resource-intensive collection that degrades the performance of its entire database. I need to decide whether to migrate the other collections away to a different database within the same cluster or to a different cluster altogether.
The answer, I think, depends on the under-the-hood implementation. Does a poorly performing collection take resources only from its own database, or from the cluster as a whole?
Hosted on Atlas.
I would suggest first looking at your logical and schema designs and trying to optimize them, but if that does not work, then:
"In MongoDB Atlas, all databases within a cluster share the same set of nodes (servers) and are subject to the same resource limitations. Each database has its own logical namespace and operates independently from the other databases, but they share the same underlying hardware resources, such as CPU, memory, and I/O bandwidth.
So, if you have a resource-intensive collection that is degrading performance for its entire database, migrating other collections to a different database within the same cluster may not significantly improve performance if the resource bottleneck is at the cluster level. In this case, you may need to consider scaling up the cluster or upgrading to a higher-tier plan to increase the available resources and improve overall cluster performance."
Reference: https://www.mongodb.com/community/forums/t/creating-a-new-database-vs-a-new-collection-vs-a-new-cluster/99187/2
The term "cluster" is overloaded. It can refer to a replica set or to a sharded cluster.
A sharded cluster is effectively a group of replica sets with a query router.
If you are using a sharded cluster, you can design a sharding strategy that will put the busy collection on its own shard, the rest of the data on the other shard(s), and still have a common point to query them both.
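As a rough mongosh sketch of that idea (the shard names, database, collections and shard keys below are hypothetical): dedicate one zone to the busy collection and another to everything else, and cover each collection's full key range so the balancer keeps the data on the intended shards.

```js
// Hypothetical shard names, namespaces and shard keys -- a sketch, not a prescription
sh.addShardToZone('shard0000', 'busy');      // the shard reserved for the heavy collection
sh.addShardToZone('shard0001', 'general');   // everything else

sh.enableSharding('appdb');
sh.shardCollection('appdb.heavyCollection', { userId: 'hashed' });
sh.shardCollection('appdb.otherCollection', { _id: 'hashed' });

// Pin each collection's whole key range to its zone
sh.updateZoneKeyRange('appdb.heavyCollection', { userId: MinKey }, { userId: MaxKey }, 'busy');
sh.updateZoneKeyRange('appdb.otherCollection', { _id: MinKey }, { _id: MaxKey }, 'general');
```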
I am re-designing a dotnet backend api using the CQRS approach. This question is about how to handle the Query side in the context of a Kubernetes deployment.
I am thinking of using MongoDb as the Query Database. The app is a dotnet webapi app. So what would be the best approach:
Use the sidecar pattern: containerize the dotnet app AND MongoDb together in one Pod. Scale as needed.
Containerize the MongoDb in its own pod and deploy one MongoDb pod PER REGION. And then have the dotnet containers use the MongoDb pod within its own region. Scale the MongoDb by region. And the dotnet pod as needed within and between Regions.
Some other approach I haven't thought of
I would start with the simplest approach, and that is to place the write and read side together, because they belong to the same bounded context.
Then, in the future, if it is needed, I would consider adding more read sides or scaling out to other regions.
To get started I would also consider adding the ReadSide inside the same VM as the write side. Just to keep it simple, as getting it all up and working in production is always a big task with a lot of pitfalls.
I would consider using a Kafka-like system to transport the data to the read-sides. With queues, if you later add a new read-side or want to rebuild a read-side instance, things get troublesome, because the sender needs to know which read-sides exist. With a Kafka style of integration, each read-side can consume the events at its own pace. You can also more easily add more read-sides later on, and the sender does not need to be aware of the receivers.
Kafka allows you to decouple the producers of data from the consumers of the data. In Kafka you have a set of producers appending data to the Kafka log, and then you can have one or more consumers processing this log of events.
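A minimal sketch of that decoupling, written in Node.js with the kafkajs client for brevity (the broker address, topic and group names are my own placeholders, not anything your stack requires):

```js
// Sketch with kafkajs: producers append to the log, each read-side consumes it at its own pace
const { Kafka } = require('kafkajs');

const kafka = new Kafka({ clientId: 'write-side', brokers: ['localhost:9092'] });

// Write side: publish domain events without knowing who the read-sides are
async function publish(event) {
  const producer = kafka.producer();
  await producer.connect();
  await producer.send({
    topic: 'domain-events',
    messages: [{ key: event.aggregateId, value: JSON.stringify(event) }],
  });
  await producer.disconnect();
}

// Read side: a separate groupId per read model; a new read-side can replay from the beginning
async function runReadSide() {
  const consumer = kafka.consumer({ groupId: 'read-model-1' });
  await consumer.connect();
  await consumer.subscribe({ topic: 'domain-events', fromBeginning: true });
  await consumer.run({
    eachMessage: async ({ message }) => {
      const event = JSON.parse(message.value.toString());
      // project the event into the query database here
      console.log(event);
    },
  });
}
```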
It has been almost 2 years since I posted this question. Now with 20-20 hindsight I thought I would post my solution. I ended up simply provisioning an Azure Cosmos Db in the region where my cluster lives, and hitting the Cosmos Db for all my query-side requirements.
(My cluster already lives in the Azure Cloud)
I maintain one Postgres Db in my original cluster for my write-side requirements. And my app scales nicely in the cluster.
I have not yet needed to deploy clusters to new regions. When that happens, I will provision a replica of the Cosmos Db to that additional region or regions. But still just one postgres db for write-side requirements. Not going to bother to try to maintain/sync replicas of the postgres db.
Additional insight #1. By provisioning the Cosmos Db separately from my cluster (but in the same region), I am taking the load off of my cluster nodes. In effect, the Cosmos Db has its own dedicated compute resources. And backup, etc.
Additional insight #2. It is obvious now, but wasn't back then, that tightly coupling a document db (such as MongoDb) to a particular pod is...a bonkers bad idea. Imagine horizontally scaling your app: with each new instance of your app you would instantiate a new document db. You would quickly bloat up your nodes and crash your cluster. One read-side document db per cluster is an efficient and easy way to roll.
Additional insight #3. The read side of any CQRS app can get a nice jolt of adrenaline with the help of an in-memory cache like Redis. You can first check whether some data is available in the cache before you hit the document db. I use this approach for data such as a checkout cart, where I will leave data in the cache for 24 hours and then let it expire. You could conceivably use Redis for all your read-side requirements, but memory could quickly become bloated. So the idea here is: consider deploying an in-memory cache on your cluster -- only one instance of the cache -- and have all your apps hit it for low latency/high availability, but do not use the cache as a replacement for the document db.
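For what it's worth, here is a minimal cache-aside sketch in Node.js with ioredis and the MongoDB driver; the key naming, database/collection names and the 24-hour TTL are illustrative, not the exact code I run:

```js
// Cache-aside sketch: check Redis first, fall back to the read-side document db, then cache for 24h
const Redis = require('ioredis');
const { MongoClient } = require('mongodb');

const redis = new Redis();                                     // the single cache instance in the cluster
const mongo = new MongoClient('mongodb://localhost:27017');    // read-side document db (placeholder URI)

async function getCart(cartId) {
  const cacheKey = `cart:${cartId}`;

  const cached = await redis.get(cacheKey);
  if (cached) return JSON.parse(cached);                       // cache hit: skip the document db

  await mongo.connect();
  const cart = await mongo.db('readside').collection('carts').findOne({ _id: cartId });
  if (cart) {
    await redis.set(cacheKey, JSON.stringify(cart), 'EX', 60 * 60 * 24); // expire after 24 hours
  }
  return cart;
}
```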
This is a broader question around building the architecture for SolrCloud Time Routed Alias application. I'm using SolrCloud to ingest time-series data on a regular basis and have SolrCloud running in a Kubernetes Cluster. A Solr node gets attached every time we add a new Pod to our cluster. Each pod has a persistent volume claim, so this is how we scale our storage as well.
Since I'm trying to use Time Routed Aliases, new collections are created preemptively and are currently placed across the Solr pods based on how much free disk space is available in a pod, so newly introduced pods get selected for shard placement.
However, I would like to design a solution where we avoid hot-spotting Solr nodes by distributing the shards across older pods as well, while still maintaining a SolrCloud architecture that grows in size as data is ingested every day.
I'm unsure what the best configuration would be at a collection/cluster level based on the available policies in https://solr.apache.org/guide/8_6/solrcloud-autoscaling-policy-preferences.html
I'm currently creating collections at weekly intervals, and my use cases involve searching across data at least 2 weeks old. Because ingested data will be placed on newer pods, my client-facing applications will be bombarding the newer pods every time.
Each collection has a replication factor of 2 and a numShards parameter of 2.
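For reference, the alias is created along these lines; this is only a rough sketch (the alias name, router field, start value and the use of Node 18's built-in fetch are assumptions):

```js
// Rough sketch of creating the weekly Time Routed Alias via the Collections API (names/values are assumptions)
const params = new URLSearchParams({
  action: 'CREATEALIAS',
  name: 'timeseries',
  'router.name': 'time',
  'router.field': 'timestamp_dt',
  'router.start': 'NOW/DAY',
  'router.interval': '+7DAY',                 // weekly collections
  'router.preemptiveCreateMath': '-1DAY',     // pre-create the next collection ahead of time
  'create-collection.numShards': '2',
  'create-collection.replicationFactor': '2',
});

fetch(`http://localhost:8983/solr/admin/collections?${params}`)
  .then((res) => res.json())
  .then(console.log)
  .catch(console.error);
```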
What level of configuration on a collection/alias/cluster level should I use in order to avoid hot-spotting?
Is there a way to deal with different server types in a sharded cluster? According to the MongoDB documentation, the balancer attempts to achieve an even distribution of chunks across all shards in the cluster. So it seems to be based purely on the amount of data.
However, when you add new servers to an existing sharded cluster, the new server typically has more disk space, a faster disk and a more powerful CPU. Especially when you run an application for several years, this situation is likely to arise.
Does the balancer take such topics into account or do you have to ensure that all servers in a sharded cluster have similar performance and resources?
You are correct that the balancer assumes all parts of the cluster are of similar hardware. However, you can use zone sharding to custom-tailor the behaviour of the balancer.
To quote from the zone sharding docs page:
In sharded clusters, you can create zones of sharded data based on the shard key. You can associate each zone with one or more shards in the cluster. A shard can associate with any number of zones. In a balanced cluster, MongoDB migrates chunks covered by a zone only to those shards associated with the zone.
Using zones, you can specify data distribution to be by location, by hardware spec, by application/customer, and others.
To directly answer your question, the use case you'll be most interested in would be Tiered Hardware for Varying SLA or SLO. Please see the link for a tutorial on how to achieve this.
Note that defining the zones is a design decision on your part, and there is currently no automated way for the server to do this for you.
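As a small hedged sketch of that tiered-hardware idea in mongosh (the shard names, namespace, shard key and cut-off date are hypothetical):

```js
// Hypothetical tiered zones: recent data on the newer/faster shard, older data on the older hardware
sh.addShardToZone('shard-fast-0', 'recent');
sh.addShardToZone('shard-old-0', 'archive');

// Route documents by a date component of the shard key
sh.updateZoneKeyRange('appdb.events', { ts: ISODate('2023-01-01') }, { ts: MaxKey }, 'recent');
sh.updateZoneKeyRange('appdb.events', { ts: MinKey }, { ts: ISODate('2023-01-01') }, 'archive');
```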
Small note: the balancer balances the cluster purely by the number of chunks (ranges of the shard key), not by the actual amount of data in them. Thus, with an improperly designed shard key, it is possible to have some shards overflowing with data while others are almost empty. In a pathological mis-design case, some chunks are not divisible, leading to a situation where the cluster is forever unbalanced until an extensive redesign is done.
My app runs a daily job that collects data and feeds it to a MongoDB instance. This data is processed and then exposed via a REST API.
I need to set up a MongoDB cluster in AWS; the requirements:
Data will grow by about the same amount each day (about 50M records), so write throughput doesn't need to scale. Writes would be triggered by a cron job at a certain hour. Objects are immutable (they won't grow).
Read throughput will depend on the number of users / traffic, so it should be scalable. Traffic won't be heavy in the beginning.
Data is mostly simple JSON, need a couple of indices around some of the fields for fast-querying / filtering.
What kind of architecture should I use in terms of replica sets, shards, etc.?
What kind of storage volumes should I use for this architecture (EBS, NVMe)?
Is it preferred to use more instances or to use RAID setups?
I'm looking to spend around 500 a month.
Thanks in advance
To set up the MongoDB cluster in AWS, I would recommend referring to the latest AWS Quick Start for MongoDB, which covers the architectural aspects and also provides CloudFormation templates.
For the storage volumes, you should use EC2 instance types that support EBS rather than NVMe instance storage, since NVMe instance store is ephemeral: if you stop and start the EC2 instance, the data on the NVMe store is lost.
Also, for storage volume throughput, you can start with General Purpose SSD volumes of a reasonable size, and only consider Provisioned IOPS if you run into limitations.
For high availability and fault tolerance, the CloudFormation template will create multiple instances (nodes) in the MongoDB cluster.
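As a small, hedged illustration of how the application side can then scale reads across such a multi-node deployment (the host names, database/collection and field names below are placeholders): connect with a secondary-preferred read preference and create the indexes needed for filtering.

```js
// Sketch with the MongoDB Node.js driver: spread reads over secondaries and index the filter fields
// (host names, db/collection and field names are placeholders)
const { MongoClient } = require('mongodb');

const uri = 'mongodb://mongo-0.internal,mongo-1.internal,mongo-2.internal/?replicaSet=rs0';
const client = new MongoClient(uri, { readPreference: 'secondaryPreferred' });

async function main() {
  await client.connect();
  const records = client.db('appdb').collection('records');

  // A couple of indexes on the fields used for querying / filtering
  await records.createIndex({ createdAt: 1 });
  await records.createIndex({ source: 1, createdAt: -1 });

  const recent = await records.find({ source: 'feed-a' }).sort({ createdAt: -1 }).limit(10).toArray();
  console.log(recent.length);
  await client.close();
}

main().catch(console.error);
```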