Read Model Partition Key Strategy - nosql

I have a collection of documents that looks like the following:
There is one document per VIN/SiteID and our access pattern is showing all documents
at a specific site. I see two potential partition keys we could choose from:
SiteID - We only have 75 sites, so the cardinality is not very high. Also, the documents are not very big, so the 10 GB logical partition limit is probably OK.
SiteID/VIN - The data is now more evenly distributed, but that means each logical partition will only store one item. Is this an anti-pattern? Also, to support our access pattern we will need to use a cross-partition query. Again, the data set is small, so is this a problem?
Based on what I am describing, which partition key makes more sense?
Any other suggestions would be greatly appreciated!

Your first option makes a lot of sense and could be a good partition key, but the words "probably OK" don't really breed confidence. Remember, the only way to change the partition key is to migrate to a new collection. If you can take that risk, then SiteID (which I'm guessing you will always have) is a good partition key.
If you have both VIN and SiteID when you are reading or querying, then the combination is the safer choice. There is no problem per se with each logical partition storing a single item; it only becomes a problem when you have to run cross-partition queries. If you know both VIN and SiteID in your queries, then it's a great plan.
You also have to remember that your provisioned RUs are split evenly across the physical partitions of a collection.
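To make the trade-off concrete, here is a minimal sketch using the Python azure-cosmos SDK; the account, database and container names are placeholders, and the container is assumed to be partitioned on /SiteID:

```python
from azure.cosmos import CosmosClient

# Placeholder endpoint/key and names; the container is assumed to use /SiteID as its partition key.
client = CosmosClient("https://<your-account>.documents.azure.com:443/", credential="<primary-key>")
container = client.get_database_client("fleet").get_container_client("vehicles")

# With SiteID as the partition key, the main access pattern is a cheap
# single-partition query:
site_docs = container.query_items(
    query="SELECT * FROM c WHERE c.SiteID = @site",
    parameters=[{"name": "@site", "value": "SITE-042"}],
    partition_key="SITE-042",
)

# With a SiteID/VIN composite key, listing everything at a site becomes a
# cross-partition (fan-out) query instead:
site_docs_fanout = container.query_items(
    query="SELECT * FROM c WHERE c.SiteID = @site",
    parameters=[{"name": "@site", "value": "SITE-042"}],
    enable_cross_partition_query=True,
)
```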

Related

The point of Cosmos DB value uniqueness only per shard key (partition key)

Microsoft's documentation of Managing indexing in Azure Cosmos DB's API for MongoDB states that:
Azure Cosmos DB's API for MongoDB server version 3.6 automatically
indexes the _id field, which can't be dropped. It automatically
enforces the uniqueness of the _id field per shard key.
I'm confused about the reasoning behind the "per shard key" part. I read it as "your unique field won't be globally unique at all", because if I understand correctly, if I have a GUID field _id as unique and a userId field as the partition key, then I can have 2 elements with the same ID provided they happen to belong to 2 different users.
Is it that I fail to pick the right partition key? Because in my understanding the partition key should be the field most frequently used for filtering the data. But what if I need to select data from the database when I only have the ID field value? Or query the data for all users?
Is this an inherent limit of distributed systems that I need to accept, and should I therefore remodel how I design databases and program access to them? In this case that would mean: ALWAYS query data from this collection not only by the _id field but first by the userId field, and don't treat my _id field alone as an identifier but rather see the identifier as a compound of userId and _id?
TL;DR
Is this an inherent limit of distributed systems that I need to accept, and should I therefore remodel how I design databases and program access to them? In this case that would mean: ALWAYS query data from this collection not only by the _id field but first by the userId field, and don't treat my _id field alone as an identifier but rather see the identifier as a compound of userId and _id?
Yes. Mostly.
Longer version
While the _id field not being globally unique is not intuitive at first sight, it actually makes sense once you consider that Cosmos DB is designed for unlimited scale of pinpoint GET/PUT operations. This requires the partitions to act independently, and that is where a lot of the magic comes from. If uniqueness of _id (or any other unique constraint) were enforced globally, then every document change would have to coordinate with all other partitions, and that would no longer be fast or predictable at endless scale.
I also think this design decision of keeping partitions separate aligns with the schemaless, distributed mindset of Cosmos DB. If you use Cosmos DB, embrace this and avoid trying to force cross-document relational constraints onto it. Manage them in your data/API design and client logic layer instead, for example by using a GUID for _id.
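As a rough illustration (using pymongo; the connection string, database and collection names are placeholders, and the collection is assumed to be sharded on userId):

```python
from pymongo import MongoClient
from pymongo.errors import DuplicateKeyError

SAME_ID = "0f8fad5b-d9cb-469f-a165-70867728950e"  # an arbitrary GUID for illustration

client = MongoClient("<cosmos-mongodb-connection-string>")
orders = client["shop"]["orders"]  # assumed to be sharded on "userId"

# Both inserts can succeed: _id uniqueness is only enforced per shard key
# value (userId), not globally across the collection.
orders.insert_one({"_id": SAME_ID, "userId": "alice", "total": 10})
orders.insert_one({"_id": SAME_ID, "userId": "bob", "total": 25})

# Same _id AND same userId -> this one is rejected.
try:
    orders.insert_one({"_id": SAME_ID, "userId": "alice", "total": 99})
except DuplicateKeyError:
    pass
```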
About the partition key..
Is it that I fail to pick the right partition key? [...] partition key should be the field that is the most frequently used for filtering the data.
It depends ;). You also have to think about worst-case query performance, not only the "most frequently" used queries. Make sure MOST queries can go directly to the correct partition, meaning you MUST know the exact target partition key before making those queries, even for the "get by id" queries. Measure the cost of the remaining cross-partition queries on a realistic data set.
It is difficult to say whether userId is a good key or not. It is most likely known in advance and could be included in get-by-id queries, so it's good in that sense. But you should also consider:
Hot partitions - all queries for a single user would go to a single partition, so there is no scale-out there.
Partition size - a single user's data will most likely keep growing and growing. Partitions have a maximum size limit, and working within those ever-larger partitions will become costlier over time.
So, if possible, I would define smaller partitions to distribute the load further. Maybe consider using a composite partition key or similar tactics to split a user's partition into multiple smaller ones. Or go to the extreme of using id itself as the partition key, which is good for writes and get-by-id but less optimal for everything else.
.. just always make sure to have the chosen partition key at hand.
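A hedged sketch of the composite/synthetic partition key idea above; the bucket count, key format and names are made up for illustration:

```python
import hashlib

BUCKETS = 16  # assumption: spread each user's data over 16 sub-partitions

def partition_key_for(user_id: str, doc_id: str) -> str:
    """Derive a synthetic partition key such as 'user123-07' so that one
    user's documents spread across several logical partitions."""
    bucket = int(hashlib.sha256(doc_id.encode("utf-8")).hexdigest(), 16) % BUCKETS
    return f"{user_id}-{bucket:02d}"

# The key is deterministic, so writes and get-by-id point reads can both
# recompute it: as long as you know userId and _id, you still hit exactly
# one partition.
pk = partition_key_for("user123", "0f8fad5b-d9cb-469f-a165-70867728950e")
```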

How do you scale postgres to billions of rows for this schema?

Consider this scenario.
You're a link shortening service, and you have two tables:
Links
Clicks - predominantly append only, but will need a full scan to produce aggregates, which should be (but probably won't be) quick.
Links is millions of rows, Clicks is billions of rows.
Should you split these onto separate hardware? What's the right approach to getting the most out of postgres for this sort of problem?
With partitioning, it should be scalable enough. Partition links on a hash of the shortened link (the key used for retrieval). Depending on your aggregation and reporting needs, you might partition clicks by date (maybe one partition per day?). When you create a new partition, the old one can be summed up and moved to a history table (or removed, if the summed data is enough for your needs).
In addition to partitioning, I suggest pre-aggregating the data. If you never need the individual rows, but only aggregates per day, then perform the aggregation and materialize it in another table after each day is over. That will reduce the data volume considerably and make it manageable.
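A sketch of the partition-per-day plus daily roll-up idea, expressed as SQL run from Python with psycopg2; the table and column names are assumptions, not taken from the question:

```python
import psycopg2

DDL = """
-- links: hash-partitioned on the lookup key
CREATE TABLE IF NOT EXISTS links (
    short_code text NOT NULL,
    target_url text NOT NULL
) PARTITION BY HASH (short_code);
CREATE TABLE IF NOT EXISTS links_p0 PARTITION OF links
    FOR VALUES WITH (MODULUS 4, REMAINDER 0);  -- plus p1..p3

-- clicks: range-partitioned by day, one partition per day
CREATE TABLE IF NOT EXISTS clicks (
    link_id    bigint      NOT NULL,
    clicked_at timestamptz NOT NULL
) PARTITION BY RANGE (clicked_at);
CREATE TABLE IF NOT EXISTS clicks_2024_06_01 PARTITION OF clicks
    FOR VALUES FROM ('2024-06-01') TO ('2024-06-02');

-- daily pre-aggregated counts
CREATE TABLE IF NOT EXISTS clicks_daily (
    link_id     bigint NOT NULL,
    day         date   NOT NULL,
    click_count bigint NOT NULL
);
"""

ROLLUP = """
-- after a day closes, materialize its aggregate; the raw partition can
-- then be detached and archived or dropped
INSERT INTO clicks_daily (link_id, day, click_count)
SELECT link_id, date_trunc('day', clicked_at)::date, count(*)
FROM clicks_2024_06_01
GROUP BY 1, 2;
"""

with psycopg2.connect("dbname=shortener") as conn, conn.cursor() as cur:
    cur.execute(DDL)
    cur.execute(ROLLUP)
```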

Cloudant partition design recommendations

I am migrating a Cloudant database without partitions to the new Cloudant partition system to reduce the cost in my IBM Cloud account. The context can be summarized like so:
I am dealing with email objects which have a category name.
I might receive more than 100 new entries (emails) per day.
The UI can query the emails from date A to date B and also on categories C1, C2, ... C100, in any possible combination of categories.
The UI displays only 15 emails per page.
The question is about how to partition such a data model while avoiding, as much as possible, global (cross-partition) queries, which are far more costly than partition-based queries.
My first thought was to partition per day, but then I can end up in a situation where the query filters emails on a specific category Cn over 4 months while that category receives only 1 email per day, which means that to display a single UI page (of 15 emails) I would have to run 15 queries, which is not acceptable.
Before partitioning arrived, I was just doing global queries with the Lucene query engine, but that is no longer an option because of the cost.
I also considered putting all the emails in a single partition so that I could keep using the same old query within that partition; since it is a partition query, I would pay the partition query price rather than the global query price. That works in theory, but I suspect it has limits, since the documentation about partitions recommends not putting too much data in a single partition.
Do you have any recommendation for this kind of problem?
Thanks.
Given your design, it doesn't seem to me like there is a partition key that will allow you to avoid global queries completely. As a rule of thumb, pick a partition key that would allow you to retrieve all data that make up a logical grouping. For example, imagine an order system where you have a set of customers with associated orders -- the obvious partition key would be a unique customer id: you then have a logical grouping of all data associated with each customer.
Over on the Cloudant blog, there is a good article series on partitions:
https://blog.cloudant.com/2019/03/05/Partition-Databases-Data-Design.html
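To make the partition mechanics concrete, here is a rough sketch against the standard Cloudant HTTP API (account name, credentials, database and field names are placeholders). In a partitioned database, document _ids take the form "<partition key>:<document key>", and the per-partition _find endpoint is billed at the cheaper partition-query rate:

```python
import requests

BASE = "https://<account>.cloudantnosqldb.appdomain.cloud/emails"
AUTH = ("<username>", "<password>")  # or an IAM token header instead

# Write: with a partition-per-day design, the partition key is the date.
doc = {"_id": "2024-06-01:msg-000123", "category": "C7", "subject": "hello"}
requests.post(BASE, json=doc, auth=AUTH)

# Read: this query only scans the 2024-06-01 partition.
query = {"selector": {"category": {"$in": ["C7", "C12"]}}, "limit": 15}
resp = requests.post(f"{BASE}/_partition/2024-06-01/_find", json=query, auth=AUTH)
print(resp.json().get("docs", []))
```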

Partition key generation for RDBMS sharding

Consider a very large table that needs to be sharded across an RDBMS cluster. I need to decide on the partitioning key on which to shard the table. Obviously this partition key can't be an artificial key (for example, an auto-generated primary key column), because the application needs to hold the logic for working out the shard from the natural key in the request data. Consider the following situation:
If the natural key is not evenly distributed in the system:
a) Is it a good idea to even consider this table for sharding?
b) Is there a way to generate a GUID based on the natural key and distribute it evenly across the cluster?
c) What would be an efficient algorithm to generate a GUID based on the natural key?
If the key is not evenly distributed, it might not make any difference whether the table is partitioned or not: the query will still have to read almost the same number of rows. Remember, partitioning will not always increase performance, and reading across partitions may well be slower. So make sure you analyse all the query needs before selecting the partition key.
I can't recall any function that can generate a partition key for this case. There are functions to generate GUIDs or MD5 hashes from your data, but the result will be worse than the natural key you already have, because the results will skew towards unique values. It will also hurt performance, since every request has to run that additional logic.
Also, please consider purging old or unused data. Once that is done, you might not need partitioning at all.
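For completeness, a small sketch of what (b)/(c) usually look like in practice: hash the natural key and take it modulo the shard count. As noted above, this only evens out data placement; it does not help queries that don't supply the natural key. The shard count and key format here are assumptions:

```python
import hashlib

SHARD_COUNT = 8  # assumption

def shard_for(natural_key: str) -> int:
    """Map a natural key onto a shard by hashing it; the same key always
    routes to the same shard, and skewed key values spread out evenly."""
    digest = hashlib.md5(natural_key.encode("utf-8")).hexdigest()
    return int(digest, 16) % SHARD_COUNT

print(shard_for("customer-90043"))  # deterministic shard number 0..7
```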

MongoDB shard key

I've been thinking about selecting the best shard key (through a compound index) for my data and thought the document creation date combined with a customer no. (or invoice no.) would be a good combination, IF MongoDB would consider the customer no. as a string reversed, i.e.:
90043 => 34009
90044 => 44009
90045 => 54009
etc.
An index on the creation date would ensure that relatively new data is kept in memory, and the reversed customer no. would help MongoDB distribute the data/load across the cluster.
Is this a correct assumption? And if so, would I need to save my customer no. reversed for it to be distributed the way I expect?
Regarding your specific question of "would I need to save my customer no reversed for it to be distributed the way I expect?", no - you would not.
Even with the relatively narrow spread of customer number values you listed, if you use customerNumber in your compound key, MongoDB will break apart the data into chunks and distribute these accordingly. As long as the data associated with customerNumber are relatively evenly distributed (e.g., one user doesn't dominate the system), you will get the shard balancing you desire.
I would consider either your original choice (minus the string reversal) or Dan's choice (using the built-in ObjectId instead of timestamp) as good candidates for your compound key.
From what I have read in the documentation, the MongoDB ObjectId is already time-based.
Therefore you can add the _id to your compound key like this: (_id, customerid). If you don't need the date in your application, you can just drop that field, which would save you some storage.
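A minimal sketch of sharding on that compound key with pymongo against a mongos router (the database and collection names are placeholders):

```python
from pymongo import MongoClient

client = MongoClient("mongodb://<mongos-host>:27017")

client.admin.command("enableSharding", "billing")
client.admin.command(
    "shardCollection",
    "billing.invoices",
    key={"_id": 1, "customerid": 1},  # ObjectId already encodes a timestamp
)
```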
MongoDB keeps recently used data sets in memory, and it will always try to keep a collection's indexes in RAM:
When an index is too large to fit into RAM, MongoDB must read the index from disk, which is a much slower operation than reading from RAM. Keep in mind an index fits into RAM when your server has RAM available for the index combined with the rest of the working set.
Hope this helps.
Cheers dan
I think the issue with your thinking is that, somehow, you feel Node 1 would be faster than Node 2. Unless the hardware is drastically different, Node 1 and Node 2 will be accessed equally fast, so reversing the strings would not help you.
The main issue I see has to do with the number of customers in your system. This can lead to monotonic sharding, wherein the last shard is always the one being hit, which causes excessive chunk splitting and migration. If you have a large number of customers then there is no issue; otherwise you might want to add another key on top of the customer id and date fields to divide up your content more evenly. I have heard of people using random identifiers, hashing the _id, or using a GUID to overcome this issue.
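One hedged sketch of that last suggestion, as an alternative to the compound key above: a hashed shard key spreads monotonically increasing values (ObjectIds, dates, sequential customer numbers) across all shards. The names are placeholders:

```python
from pymongo import MongoClient

client = MongoClient("mongodb://<mongos-host>:27017")
client.admin.command(
    "shardCollection",
    "billing.invoices",
    key={"_id": "hashed"},  # hashed _id avoids always hitting the "last" shard
)
```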