DynamoDB adjacency list primary key - nosql

I am completing an exercise using DynamoDB to model a many to many relationship. I need to allow a many to many relationship between posts and tags. Each post can have many tags and each tag can have many posts.
I have a primary (partition) key on id and a sort key on type, plus a global secondary index on id and data. I also added another global index on id and type, but I think that one is redundant.
Here is what I have so far.
id (Partition key) | type (Sort key) | target | data
-------------------|-----------------|--------|----------
1                  | post            | 1      | cool post
tag                | tag             | tag    | n/a
1                  | tag             | tag    | orange

---- inserting another tag will overwrite: ----

1                  | tag             | tag    | green
I am taking advice from this awesome talk https://www.youtube.com/watch?v=jzeKPKpucS0 and these not so awesome docs https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/bp-adjacency-graphs.html
The issue I am having is that if I try to add another tag with id "1" and type "tag", it will overwrite the existing tag, because both items would have the same composite key. What am I missing here? The suggestion seems to be to make the partition key and sort key be the id and type. Should my type be more like "tag#orange"? In that case I could put a global index on the target with a sort key on the type. That way I could get all posts with a certain tag by querying target = "tag" and type begins with "tag".
Just looking for some advice on handling this sort of adjacency list data with Dynamo as it seems very interesting. Thanks!

Basic guidelines for an adjacency-list
You need a few modifications to the way you're modeling. In an adjacency-list you have two types of items:
Top-level (those are your Posts and Tags)
Association (expresses which Tags are associated with each Post and vice-versa)
To build this adjacency-list, you must follow two simple guidelines (which I think are missing in your example):
Each top-level item (in your case a Post or a Tag) must be represented using the primary-key. Also, those items should have the same value in the sort-key and the primary-key.
For associations, use the primary-key to represent the source (or parent) and the sort-key to represent the target (or child).
From what I see in your examples, you set the primary-key of your Posts and Tags as just the item ID, while you should also use its type; e.g. Post-1 or Tag-3. In items that represent associations, I also don't see you storing the target ID.
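A minimal sketch of these two guidelines in Python (the key attribute and function names here are illustrative, not from the original):

```python
def top_level_key(item_type, item_id):
    """Top-level items use the same value for partition key and sort key."""
    key = f"{item_type}-{item_id}"
    return {"pk": key, "sk": key}

def association_key(source_type, source_id, target_type, target_id):
    """Associations: partition key = source (parent), sort key = target (child)."""
    return {
        "pk": f"{source_type}-{source_id}",
        "sk": f"{target_type}-{target_id}",
    }
```

Note how the type prefix keeps Post-1 and Tag-1 from colliding, and how a second tag on the same post gets a different sort key instead of overwriting.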
Example
Let's say you have:
Three Posts: "hello world", "foo bar" and "Whatever..."
And three tags: "cool", "awesome", "great"
Post "hello world" has one tag: "cool"
Post "foo bar" has two tags: "cool" and "great"
Post "Whatever..." doesn't have any tags
You'd need to model this way in Dynamo:
PRIMARY-KEY | SORT-KEY | SOURCE DATA | TARGET DATA
--------------|-------------|--------------|-------------
Post-1 | Post-1 | hello world |
Post-2 | Post-2 | foo bar |
Post-3 | Post-3 | Whatever... |
Tag-1 | Tag-1 | cool |
Tag-2 | Tag-2 | awesome |
Tag-3 | Tag-3 | great |
Post-1 | Tag-1 | hello world | cool
Post-2 | Tag-1 | foo bar | cool
Post-2 | Tag-3 | foo bar | great
Tag-1 | Post-1 | cool | hello world
Tag-1 | Post-2 | cool | foo bar
Tag-3 | Post-2 | great | foo bar
How you query this adjacency list
1) You need a particular item, say Post-1:
Query primary-key == "Post-1" & sort-key == "Post-1" - returns: only Post-1
2) You need all tags associated with Post-2:
Query by primary-key == "Post-2" & sort-key BEGINS_WITH "Tag-" - returns: Tag-1 and Tag-3 associations.
Check the documentation for the begins_with key condition expression.
3) You need all Posts associated with, say Tag-1:
Query by primary_key == "Tag-1" & sort-key BEGINS_WITH "Post-" - returns: Post-1 and Post-2 associations.
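The three queries above can be simulated in memory to see how the key conditions work. A real table would use a DynamoDB query with an equality condition on the partition key and begins_with on the sort key; here the same logic is sketched over a plain list of dicts (attribute names are illustrative):

```python
# The adjacency-list items from the example table above.
items = [
    {"pk": "Post-1", "sk": "Post-1", "source": "hello world", "target": None},
    {"pk": "Post-2", "sk": "Post-2", "source": "foo bar",     "target": None},
    {"pk": "Tag-1",  "sk": "Tag-1",  "source": "cool",        "target": None},
    {"pk": "Tag-3",  "sk": "Tag-3",  "source": "great",       "target": None},
    {"pk": "Post-1", "sk": "Tag-1",  "source": "hello world", "target": "cool"},
    {"pk": "Post-2", "sk": "Tag-1",  "source": "foo bar",     "target": "cool"},
    {"pk": "Post-2", "sk": "Tag-3",  "source": "foo bar",     "target": "great"},
    {"pk": "Tag-1",  "sk": "Post-1", "source": "cool",        "target": "hello world"},
    {"pk": "Tag-1",  "sk": "Post-2", "source": "cool",        "target": "foo bar"},
    {"pk": "Tag-3",  "sk": "Post-2", "source": "great",       "target": "foo bar"},
]

def query(pk, sk_prefix=""):
    """Simulates: partition key == pk AND sort key BEGINS_WITH sk_prefix."""
    return [i for i in items if i["pk"] == pk and i["sk"].startswith(sk_prefix)]

post1 = query("Post-1", "Post-1")          # 1) one particular item
tags_of_post2 = query("Post-2", "Tag-")    # 2) all tags of Post-2
posts_of_tag1 = query("Tag-1", "Post-")    # 3) all posts with Tag-1
```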
Note that, if you change the contents of a given post, you need to change the value in all association items as well.
You can also choose not to store the post and tag content in association items, which saves storage space. But in that case, queries 2 and 3 above would each need two requests: one to retrieve the associations, another to retrieve each source item's data. Since querying is more expensive than storing data, I prefer to duplicate storage. But it really depends on whether your application is read-intensive or write-intensive. If read-intensive, duplicating content in associations reduces read queries. If write-intensive, not duplicating content saves the write queries needed to update associations whenever the source item is updated.
Hope this helps! ;)

I don't think you are missing anything. The idea is that ID is unique for the type of item. Typically you would generate a long UUID for the ID rather than using sequential numbers. Another alternative is to use the datetime you created the item, probably with an added random number to avoid collisions when items are being created.
This answer I have previously provided may help a little DynamoDB M-M Adjacency List Design Pattern
Don't remove the sort key - removing it won't make your items any more unique.

Related

Query children of One-To-Many Relationship based on date along with parent

I have two entities in my dynamo table: User and Order.
Each user has 0..* orders and each order has exactly one associated user. Every order also has an orderDate attribute that describes when the order was placed.
My current table is structured as follows to make retrieving all orders for a specific user efficient:
+--------------+----------------+--------------------------------------+
| PK | SK | Attributes |
+--------------+----------------+-------------+-----------+------------+
| | | name | firstName | birthDate |
+--------------+----------------+-------------+-----------+------------+
| USER#userid1 | META#userid1 | Foo | Bar | 2000-10-10 |
+--------------+----------------+-------------+-----------+------------+
| | | orderDate | | |
+--------------+----------------+-------------+-----------+------------+
| USER#userid1 | ORDER#orderid1 | 2020-05-10 | | |
+--------------+----------------+-------------+-----------+------------+
I now have a second access pattern where I want to query all orders (regardless of user) that were placed on a specific day (e.g. 2020-05-10), along with the user(s) that placed them.
I'm struggling to handle this access pattern in my table design. Neither GSIs nor different primary keys seem to work here, because I either have to duplicate every user item for each day or I can't query the orders together with the user.
Is there an elegant solution to my problem?
This is a perfect use case for a secondary index. Here's one way to do it:
You could create a secondary index (GSI1) on the Order item with a Partition Key (GSI1PK) of ORDERS#<orderDate> and a Sort Key (GSI1SK) of USER#<user_id>. GSI1 would then support a query of all orders placed on a specific day.
Keep in mind that denormalizing your data model (e.g. repeating user info in the Order item) is a common pattern in DynamoDB data modeling. Remember, space is cheap! More importantly, you are pre-joining your data to support your application's access patterns. In this instance, I'd add whatever User metadata you need to the Order item so it gets projected into the index.
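As a sketch, writing the Order item with its GSI1 keys might look like this (GSI1PK/GSI1SK follow the answer's scheme; the function and the choice of denormalized attributes are illustrative):

```python
def order_item(user_id, order_id, order_date, user_name):
    """Build an Order item carrying both the base-table keys and the GSI1 keys."""
    return {
        "PK": f"USER#{user_id}",
        "SK": f"ORDER#{order_id}",
        "orderDate": order_date,
        # Denormalized user metadata, so it gets projected into the index.
        "name": user_name,
        "GSI1PK": f"ORDERS#{order_date}",  # partition: all orders of one day
        "GSI1SK": f"USER#{user_id}",       # sort: the user who placed it
    }
```

Querying GSI1 with GSI1PK = "ORDERS#2020-05-10" then returns every order placed that day, each already carrying the user metadata you projected.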
Make sense?
Unfortunately, I can't seem to figure out a way to elegantly solve your problem.
You need to either duplicate the user info and store in the order record or use a second getItem to query the user-specific info.
If anyone has better solutions, please let me know.

Nicely managed lookup tables

We have a people table; each person has a gender defined by a gender_id referencing a genders table:
| people |
|-----------|
| id |
| name |
| gender_id |
| genders |
|---------|
| id |
| name |
Now, we want to allow people to create forms by themselves using a nice form builder. One of the elements we want to add is a select list with user-defined options:
| lists |
|-------|
| id |
| name |
| list_options |
|--------------|
| id |
| list_id |
| label |
| value |
However, they can't use the genders as a dropdown list because it's in a different table. They could create a new list with the same options as genders but this isn't very nice and if a new gender is added they'd need to add it in multiple places.
So we want to move the gender options into a list that the user can edit at will and will be reflected when a new person is created too.
What's the best way to move the genders into a list and list_options while still having a gender_id (or similar) column in the people table? Thoughts I've had so far include:
Create a 'magic' list with a known id and always assume that this contains the gender options.
Not a great fan of this because it sounds like using 'magic' numbers. The code will need some kind of 'map' between system level select boxes and what they mean
Instead of having a 'magic' list, move it out into an option that the user can choose so they have a choice which list contains the genders.
This isn't really much different, but the ID wouldn't be hardcoded. It would require more work looking through DB tables though
Have some kind of column(s) on the lists table that would mark it as pulling its options from another table.
Would likely require a lot more (and more complex) code to make this work.
Some kind of polymorphic table that I'm not sure how would work but I've just thought about and wanted to write down before I forget.
No idea how this would work because I've only just had the idea
The easiest solution would be to change your list_options table to a view. If you have multiple tables that need to feed a drop-down list, just UNION their result sets together.
SELECT
    (your list id here) AS list_id, -- make this part of the composite primary key
    id,                             -- and this the other part
    name
FROM dbo.Genders
UNION
SELECT
    (your list id here) AS list_id,
    id,
    name
FROM dbo.SomeOtherTable
This way it's automatically updated any time the data changes. You are going to want to test this, though: if it gets big, it might get slow. You can get around that by only pulling this information once in your application (or caching it for, say, 30 minutes and then refreshing, just in case).
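The view approach can be sketched with SQLite (the dbo. prefix above suggests SQL Server; the table and column names here are illustrative):

```python
import sqlite3

# In-memory database standing in for the real one.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE genders   (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE countries (id INTEGER PRIMARY KEY, name TEXT);
INSERT INTO genders   VALUES (1, 'Female'), (2, 'Male');
INSERT INTO countries VALUES (1, 'France');

-- list_id plus id together act as the composite key.
CREATE VIEW list_options AS
    SELECT 'genders' AS list_id, id, name FROM genders
    UNION
    SELECT 'countries' AS list_id, id, name FROM countries;
""")

# The form builder reads one list's options straight from the view.
rows = db.execute(
    "SELECT id, name FROM list_options WHERE list_id = 'genders' ORDER BY id"
).fetchall()
```

Because list_options is a view, adding a gender to the genders table is immediately reflected in the drop-down with no second place to update.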
Your second option is to create a real list_options table and then a procedure (or similar) which goes through all the other lookup tables and compiles their information into it. This will be faster for application performance, but it requires you to keep everything in sync. The easiest way to handle that is a series of triggers which rebuild portions of (or the entire) list_options table when something in the lookup tables changes.

In this case, I would suggest moving away from an automatically generated primary key toward a composite key, as mentioned with the views. Since the table is going to be rebuilt, an auto-generated id would change, so it's best that nothing treats that value as stable. With the composite (list_id, lookup_id), it will always be the same no matter how many times the row is re-inserted into the table.
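The rebuild procedure with a stable composite key can be sketched like this (table contents and names are illustrative):

```python
# Source-of-truth lookup tables, keyed by their own ids.
lookup_tables = {
    "genders":   {1: "Female", 2: "Male"},
    "countries": {1: "France"},
}

def rebuild_list_options(tables):
    """Recompile list_options, keyed by the composite (list_id, lookup_id).

    The composite key is stable across rebuilds, unlike an auto-generated id.
    """
    return {
        (list_id, row_id): label
        for list_id, rows in tables.items()
        for row_id, label in rows.items()
    }

options = rebuild_list_options(lookup_tables)
```

Rebuilding twice yields identical keys, so nothing referencing (list_id, lookup_id) breaks when a trigger recompiles the table.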

REST API database structure

I'm building a simple REST API, where I have "book" and "bookCategory".
Their properties are very simple and identical:
book {id, name, created_at, modified_at }
bookCategory {id, name, created_at, modified_at }
If I had only these tables I would leave it like this, but I have the same logic and structure for "movie", "painting", "video games", etc.
Is it good practice to split them into different tables, even if they have the same structure, when they are logically different?
I could instead do this, which saves me a lot of tables, controllers and forms (keeping it DRY):
things {id, **parent_id**, name, created_at, modified_at, **type** }
Some example rows:
id | parent_id | name                | type
---|-----------|---------------------|--------
1  | 0         | "Comedy"            | "movie"
2  | 1         | "Dumb and Dumber"   | "movie"
3  | 1         | "Ace Ventura"       | "movie"
4  | 0         | "Fantasy"           | "book"
5  | 4         | "Lord of the Rings" | "book"
It is very compact, but what would an endpoint for "all movie categories" or "all categories" look like?
domain/api/things/???
Or its better to lay down a flexible ground structure (maybe new properties will come)?
Given the problem space as you've defined it, it's reasonable to use a vertical table - assuming you expect there will be no new properties which are unique to one entity (such as 'writer', which might be on book and movie, but not bookCategory or movieCategory). If you anticipate new unique properties (or aren't sure), I would suggest separate tables for each. While you're violating DRY now, the cost to change later is going to be large.
As far as endpoints,
GET /api/movieCategories
<- an entity with all movie categories
Again, if all categories have and will always have the same properties, it would be reasonable to instead do
GET /api/categories?type=movie
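A minimal sketch of serving that endpoint from the single vertical table (rows from the question's example; the function and the parent_id == 0 convention for categories are illustrative):

```python
# The "things" table from the question, as a list of records.
things = [
    {"id": 1, "parent_id": 0, "name": "Comedy",          "type": "movie"},
    {"id": 2, "parent_id": 1, "name": "Dumb and Dumber", "type": "movie"},
    {"id": 3, "parent_id": 1, "name": "Ace Ventura",     "type": "movie"},
    {"id": 4, "parent_id": 0, "name": "Fantasy",         "type": "book"},
    {"id": 5, "parent_id": 4, "name": "Lord of the Rings", "type": "book"},
]

def get_categories(thing_type=None):
    """Backs GET /api/categories and GET /api/categories?type=...

    Categories are the rows with parent_id == 0; the optional type
    filter narrows them to one kind of thing.
    """
    return [
        t for t in things
        if t["parent_id"] == 0
        and (thing_type is None or t["type"] == thing_type)
    ]

movie_categories = get_categories("movie")
all_categories = get_categories()
```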

cassandra schema data design for many-to-many array relationship

So I need a DB that can store info for about 300 million users. Each user will have two vectors: their 5 favorite items, and their 5 most similar users (these users also contained in the user set)
ex:
preferences            users
user  | item           user  | user
--------------         --------------
user1 | item1          user1 | user2
user1 | item2          user1 | user4
user1 | item3          user2 | user8
user2 | item3          ...
user2 | item4
...
So basically I need two tables, both many-many relationships, and both relatively big.
I've been exploring Cassandra (but I'm open to other solutions) and I was wondering how I would define the schema, and what type of indexing I need for this to be optimized and working properly.
I will need to query in two ways:
1. By user, of course, and
2. by whatever item is in their list
(so I can get a list of users with the same favorite item).
I've already set up Cassandra and started messing with it, but I can't even get lists to work because I need 'composite' primary keys? I don't understand why.
Any help or a push in the right direction is greatly appreciated.
Thanks!
I am not sure you've adequately described your use case. It is the access patterns that first and foremost define your key design, which is ultimately what defines your workload characteristics with NoSQL databases. For example, are you going to have to do searches for users based on a certain geography or something along those lines, or is this just a simple "grab 1 user and his favorite items and/or his similar users"?
Based on what you've described, you should probably just create a keyspace for user_ids and then your value can be the denormalized copies of "favorite items" and a list of "similar user id's". Assuming your next action is to do something with those similar users, you can quickly get them from the list of id's.
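That denormalized layout can be sketched as follows: one record per user_id holding both vectors, so a single key lookup answers query 1, and a second, inverted structure keyed by item answers query 2 (all names are illustrative; in a key-value store these would be two keyspaces or tables):

```python
# One record per user: the two denormalized vectors.
users = {
    "user1": {
        "favorite_items": ["item1", "item2", "item3", "item4", "item5"],
        "similar_users":  ["user2", "user4", "user8", "user9", "user11"],
    },
    "user2": {
        "favorite_items": ["item3", "item4", "item6", "item7", "item8"],
        "similar_users":  ["user8", "user1", "user3", "user5", "user7"],
    },
}

# Query 2 (users sharing a favorite item) needs an inverted index keyed
# by item, maintained alongside the first table on every write.
users_by_item = {}
for user_id, record in users.items():
    for item in record["favorite_items"]:
        users_by_item.setdefault(item, []).append(user_id)
```

Both lookups are then single-key reads, which is what makes the key size and memory fit the deciding factor at 300M users.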
The important point is how big your key is (I mean in characters/bytes) and whether you will be able to fit the keys into memory so you get really fast performance. If your machines have limited memory for your key size, then you need to plan for a number of nodes which can accommodate a given number of keys, and let those nodes run on separate servers. At least, that is the most important part for Oracle NoSQL Database (ONDB)... I am part of that team. Good news is that 300M is still very small.
Hope it helps,
-Robert

Join custom table with product attributes on custom attribute

It's fairly simple - I have a table with a field that corresponds to certain custom product attribute.
+----+-----------+---------+---------------------+
| id | from_user | to_user | first_contact |
+----+-----------+---------+---------------------+
| 2 | 2 | 2 | 2012-10-26 18:24:30 |
+----+-----------+---------+---------------------+
to_user corresponds to the profile_id product attribute, which is unique. Now, I need to join product attributes such as status and url_key of that product to my custom table. I get stuck very early on, since I don't know how to reference the profile_id field in my join syntax. I could always rely on the flat product table, but I'd like to make the script more robust so that it works even if flat product is off. I presume I should use joinAttribute() but I'm having trouble understanding how.
Thanks