Is there a way to set up MongoDB TTL indexes using Prisma?

I am trying to build a hotel website backend that lets users reserve rooms, using GraphQL and Prisma. I want to make sure a reservation is cancelled after the check-in time if no one has checked in. To do this I wanted to set up a TTL index that will delete the reservation.
If there is an alternative way of doing this, I'd appreciate the help.

Take a look at this: https://www.prisma.io/docs/concepts/components/prisma-schema/indexes
The docs' full-text example, for instance, looks like this:
model Post {
  id      Int    @id
  title   String @db.VarChar(255)
  content String @db.Text

  @@fulltext([title, content])
}
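Prisma's schema language currently has no TTL attribute for MongoDB, so one possible workaround (a sketch, assuming the MongoDB connector's $runCommandRaw escape hatch and a Reservation model with a checkInTime date field; those names are assumptions, not from the docs) is to create the TTL index with a raw command:

import { PrismaClient } from '@prisma/client'

const prisma = new PrismaClient()

// Ask MongoDB to delete each reservation once its checkInTime has passed.
async function ensureTtlIndex() {
  await prisma.$runCommandRaw({
    createIndexes: 'Reservation', // assumed collection name
    indexes: [
      {
        key: { checkInTime: 1 }, // assumed date field
        name: 'checkInTime_ttl',
        expireAfterSeconds: 0, // expire as soon as checkInTime is in the past
      },
    ],
  })
}

Note that MongoDB's TTL monitor only runs about once a minute, so deletion isn't instantaneous; if cancelling a reservation needs to trigger application logic (notifications, etc.), a scheduled job may fit better than a TTL index.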

Related

Is there a way to create public/private fields for a mongodb schema in golang?

I'm creating a backend for a moderately large-scale application, and I came across a difficulty in restricting which fields users can access. For instance, a user should not be able to modify their follower count with a PUT request to an update endpoint, yet the only way to really remove the followerCount field from the Go struct representing the user schema is to create an entirely new schema specifically for updates. I've been doing this, and my backend code base is far more complex than it needs to be, to the point where it's nearly unmanageable.
Here's an example of the schemas I have:
type User struct {
    ID primitive.ObjectID `bson:"_id" json:"_id"`
    // CHANGEABLE
    Username string `bson:"username" json:"username"`
    Email    string `bson:"email" json:"email"`
    Password string `bson:"password" json:"password"`
    Archived bool   `bson:"archived" json:"archived"`
    // UNCHANGEABLE
    Sessions      []primitive.ObjectID `bson:"sessions" json:"sessions"`
    IsCreator     bool                 `bson:"isCreator" json:"isCreator"`
    FollowerCount int                  `bson:"followerCount" json:"followerCount"`
}
and for updates specifically
type UserUpdate struct {
    Username string `bson:"username,omitempty" json:"username,omitempty"`
    Email    string `bson:"email,omitempty" json:"email,omitempty"`
    Password string `bson:"password,omitempty" json:"password,omitempty"`
    Archived bool   `bson:"archived,omitempty" json:"archived,omitempty"`
}
Is there a way to make public/private fields within a mongo schema so I can simplify this process? And if not, can you advise me on a better solution? Nothing is coming to mind for me.
I've continued creating new "sub-schemas" built off the same schema for specific purposes (i.e. creation, updating, getting, etc.). Changing one field name across all the schemas takes nearly 30 minutes, which is not ideal.

What should I use as a validation tool with NestJS and MongoDB?

Let's say I am creating an application using NestJS. I use MongoDB as the database and Mongoose as the ODM. NestJS has its own way to validate data: ValidationPipes. On the other side, Mongoose has built-in ways to validate the data being pushed into the DB.
The question is: can I use only NestJS validation, or do I also need to second-check my data using Mongoose's validation tools?
Personally, I can't see why I should use an additional layer of validation if I already have NestJS validation. But maybe there is a best practice here?
Every database validates input data and emits an error when the data is invalid. However, you must take your schema file into account. For instance, if you are using Prisma (it doesn't actually matter which ORM/ODM) and you have a User model like the one below:
model User {
  id        String   @id @default(auto()) @map("_id") @db.ObjectId
  email     String   @unique
  password  String
  name      String
  image     String?
  createdAt DateTime @default(now())
  updatedAt DateTime @updatedAt

  @@map("user")
}
As you can see, there is only one optional property, "image". Whether you pass a value for the image property or not, the database will insert the data. On the other hand, the properties without a "?" after their type are required: if you don't pass them, the database will emit an error instead of storing the record.
Therefore, if you modeled the schema according to your business logic, you don't need to validate twice; just add exception handling like the example below.
const user = await this.usersService.findOne('id...').catch(() => {
  // throw (not return) so NestJS turns this into a proper 404 response
  throw new NotFoundException({ message: 'No user found.' })
})
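For completeness, the NestJS-side validation layer mentioned in the question would typically be a DTO checked by a global ValidationPipe. A minimal sketch, with the DTO name and field rules assumed to mirror the User model above:

// DTO validated by NestJS before anything reaches the database
import { IsEmail, IsOptional, IsString, MinLength } from 'class-validator'

export class CreateUserDto {
  @IsEmail()
  email: string

  @IsString()
  @MinLength(8) // assumed password policy
  password: string

  @IsString()
  name: string

  @IsOptional()
  @IsString()
  image?: string
}

// in main.ts: strip unknown fields and enforce the rules above
// app.useGlobalPipes(new ValidationPipe({ whitelist: true }))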

Is it possible to access metadata about the prisma model?

Let's say I have a model in my schema.prisma file:
model Post {
  id       Int  @id @default(autoincrement())
  author   User @relation(fields: [authorId], references: [id])
  authorId Int
}
Having a variable called model in my server containing a model name
const model: string = [model name generated dynamically]
Using this string I want to know everything about this model. For example, if the variable model happens to be Post, I want to know that it has the fields id, author and authorId, and also information about each field separately; in the case of author, which field in which model it references (in this example, the field id on the model User).
I'm aware that Prisma generates a type for each model, and maybe I can access the fields that way, but that's not enough for me; I want information about each field as well.
I searched the Prisma docs and also googled something like 'get meta information about model in prisma2', but I didn't find any solution. Is there a way to achieve this?
Yes, you can get all the metadata for your entire schema in the following manner:
const prisma = new PrismaClient()
// @ts-ignore
console.log(prisma._dmmf)
This will give you all the details for the models and relations. The reason this is not documented is that this is highly experimental and will change with releases, which is why it's used for internal purposes only.
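A closely related sketch (equally experimental, and the exact shape of this object can change between Prisma versions): the generated client also exposes the DMMF through the Prisma namespace, which avoids reaching into a private property on the client instance:

import { Prisma } from '@prisma/client'

// Find the metadata for one model by name and inspect its fields.
const post = Prisma.dmmf.datamodel.models.find((m) => m.name === 'Post')
for (const field of post?.fields ?? []) {
  // kind ('scalar' | 'object' | ...), type, and relation info per field
  console.log(field.name, field.kind, field.type, field.relationFromFields)
}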

Data modelling for DynamoDB where an entity has one-to-many and many-to-many relationships

I am new to the NoSQL world. I am building a serverless app with DynamoDB. In a relational DB, when I had 3 entities like post, post_likes and post_tags, I would have a few tables and use joins to fetch data. But I wonder how one should design a NoSQL structure for a scenario where a post has a one-to-many relationship with likes and a many-to-many relationship with tags.
Post model:
user_id <string>
attachment_url <string>
description <string>
public <boolean>
Like model:
user_id <string>
post_id <string>
type <string>
Tag model:
name <string>
I have a few access patterns:
Get all public posts
Get all posts filtered by a single tag and public status
Get all posts by user id
Get a single post by post id
Each time, a post should be fetched together with its tags data and likes data, including the user data attached to each like.
In a relational DB I would create a post_tags table and fetch all posts by tag. But how can I do this with DynamoDB?
I am struggling to figure out what my table should look like, and what to set as partition and sort keys among the post_id, user_id, tag_name or public fields for this case.
My initial thought was to build a table with an entity that would look like this:
Partition key | Sort key | Data attributes
tag_name      | post_id  | public, user_id, likes[], other post attributes...
I have also set up 2 global secondary indexes.
First Global secondary index:
partition key set to public and sort key to post_id
Second Global secondary index:
partition key set to user_id and sort key to post_id
That way, for each tag a post has, I would have a duplicate of that post in the table. I thought that by having the tag as the first filter, I could query posts by tag efficiently.
But if I query by just the public status or user_id, I get all the duplicates of the posts for each tag they belong to.
Or should I have 3 separate entities in the table (tags, posts and likes), so that to fetch a post by tag I would first do one query to find all post_ids for a tag, then a second query to fetch the posts and their like ids, and then a third query to fetch the likes array?
I don't know what the best practice is when it comes to these things, since I only just started using DynamoDB.
What should this DB structure look like, then?
You're off to a great start by thinking deeply about your access patterns and defining your entities (Posts, Users, Likes, etc). As you know, having a thorough understanding of your access patterns is critical to storing your data in DynamoDB.
While reviewing my answer, keep in mind that this is only one solution. DynamoDB gives you a ton of flexibility when defining your data model, which can be both a blessing and a curse! This answer is not meant to be the way to model these access patterns. Instead, it's one way that these access patterns can be implemented. Let's get into it!
I like to start by listing the entities we need to model, as well as the Primary key for each. Throughout this post, I'll be using composite primary keys, which are keys made up of a Partition Key (PK) and a Sort Key (SK). Let's start out with a blank table and fill it out as we go.
Entity | Partition Key | Sort Key
User   |               |
Post   |               |
Tag    |               |
Users
Users are central to your application, so I'll start there.
Let's start by defining a User model that lets us identify a User by ID. I'll use the pattern USER#<user_id> for the PK and SK of the User entity.
This supports the following access patterns (examples in pseudocode for simplicity):
Fetch User by ID
ddbClient.query(PK = USER#1, SK = USER#1)
I'll update the table with the new PK/SK pattern for Users
Entity | Partition Key  | Sort Key
User   | USER#<user_id> | USER#<user_id>
Post   |                |
Tag    |                |
Posts
I'll start modeling Posts by focusing on the one-to-many relationship between Users and their Posts.
You have an access pattern to fetch All Posts by UserId, so I'll start by adding the Post model to the User partition. I'll do this by defining a PK of USER#<user_id> and an SK of POST#<post_id>.
This supports the following access patterns:
Fetch User and all Posts
ddbClient.query(PK = USER#<user_id>)
Fetch User Posts
ddbClient.query(PK = USER#<user_id>, SK begins_with "POST#")
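To make the pseudocode concrete, the "Fetch User Posts" pattern might look like this with the AWS SDK for JavaScript v3 (the table name is an assumption):

import { DynamoDBClient } from '@aws-sdk/client-dynamodb'
import { DynamoDBDocumentClient, QueryCommand } from '@aws-sdk/lib-dynamodb'

const ddb = DynamoDBDocumentClient.from(new DynamoDBClient({}))

// Fetch all Post items in a single User's partition
async function fetchUserPosts(userId: string) {
  const { Items } = await ddb.send(
    new QueryCommand({
      TableName: 'AppTable', // assumed table name
      KeyConditionExpression: 'PK = :pk AND begins_with(SK, :sk)',
      ExpressionAttributeValues: { ':pk': `USER#${userId}`, ':sk': 'POST#' },
    })
  )
  return Items ?? []
}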
A note on Post IDs: when fetching Posts, you'll probably want to get the most recent Posts first, and you also want to be able to uniquely identify Posts by ID. When you have this sort of requirement, you can use a KSUID as your unique identifier. Explaining KSUIDs is a bit out of scope for your question, but know that they are unique and sortable by the time they were created. Since DynamoDB sorts results by the sort key, your query for a user's posts will automatically be sorted by creation date!
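For illustration, generating such an ID could look like this, assuming the ksuid npm package (one of several KSUID implementations):

import KSUID from 'ksuid'

// A KSUID embeds its creation timestamp, so lexicographic order matches
// creation order, which is exactly what the sort key needs.
const postId = KSUID.randomSync().string
const sortKey = `POST#${postId}` // e.g. POST#0ujsswThIGTUYm2K8FjOOfXtY1K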
Updating the PK/SK patterns for your application, we now have
Entity | Partition Key  | Sort Key
User   | USER#<user_id> | USER#<user_id>
Post   | USER#<user_id> | POST#<post_id>
Tag    |                |
Tags
We have a few options for modeling the many-to-many relationship between Posts and Tags. You could include a list attribute on your Post item, which simply lists the tags on the item. This approach is perfectly fine. However, looking at your other access patterns, I'm going to take a different approach for now (it will be apparent why later).
I will model tags with a PK of POST#<post_id> and an SK of TAG#<tag_name>
Since Primary Keys are unique, modeling tags in this way will ensure that no Post is tagged with the same Tag twice. Additionally, it allows us to have an unbounded number of Tags on a Post.
Updating our PK/SK table for Tag, we have
Entity | Partition Key  | Sort Key
User   | USER#<user_id> | USER#<user_id>
Post   | USER#<user_id> | POST#<post_id>
Tag    | POST#<post_id> | TAG#<tag_name>
At this point we've modeled Users, Posts and Tags. However, we've only addressed one of your four access patterns. Let's see how we can use secondary indexes to support the rest.
Note: You could also model Likes in the exact same way.
Defining A Secondary Index
Secondary indexes allow you to support additional access patterns within your data. Let's define a very simple secondary index and see how it supports your various access patterns.
I'm going to create a secondary index that swaps the PK/SK patterns of your base table. This pattern is called an inverted index.
All we've done here is swap the PK/SK pattern of the base table, which gives us access to two additional access patterns:
Fetch Post by ID
ddbClient.query(IndexName = InvertedIndex, PK = POST#<post_id>)
Fetch Posts by Tag
ddbClient.query(IndexName = InvertedIndex, PK = TAG#<tag_name>)
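Reusing the assumed client from the earlier sketch, "Fetch Posts by Tag" would query the inverted index like this (the index name is an assumption):

// Query the inverted index: Tag items have PK = TAG#<tag_name> there,
// and each returned item identifies one tagged Post.
async function fetchPostIdsByTag(tagName: string) {
  const { Items } = await ddb.send(
    new QueryCommand({
      TableName: 'AppTable',
      IndexName: 'InvertedIndex', // assumed index name
      KeyConditionExpression: 'SK = :sk', // the SK attribute is this index's PK
      ExpressionAttributeValues: { ':sk': `TAG#${tagName}` },
    })
  )
  return (Items ?? []).map((item) => item.PK) // PK holds POST#<post_id>
}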
Fetch All Posts by Public/Private status
You wanted to fetch posts by public/private status, as well as fetching all Posts. One way to fetch all Posts is to put them in a single partition. We can put the public/private status in the sort key to separate the public and private Posts.
To do this, I'll create two new attributes on the Post item: _type and publicPostId. These fields will serve as the PK/SK patterns for the secondary index I'm calling PostByStatus.
After doing this, every Post item in the base table carries these two extra attributes, and the new PostByStatus index uses _type as its partition key and publicPostId as its sort key (with values like POST and PUBLIC#<post_id> respectively).
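Put differently, a single Post item might look roughly like this, with all attribute values assumed for illustration:

// One Post item: base-table keys plus the PostByStatus index attributes
const examplePost = {
  PK: 'USER#123',
  SK: 'POST#0ujsswThIGTUYm2K8FjOOfXtY1K',
  _type: 'POST', // PostByStatus partition key
  publicPostId: 'PUBLIC#0ujsswThIGTUYm2K8FjOOfXtY1K', // PostByStatus sort key
  description: '...',
  attachment_url: '...',
}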
This secondary index would enable the following access patterns
Fetch All Posts
ddbClient.query(IndexName = PostByStatus, PK = POST)
Fetch All Private Posts
ddbClient.query(IndexName = PostByStatus, PK = POST, SK begins_with "PRIVATE#")
Fetch All Public Posts
ddbClient.query(IndexName = PostByStatus, PK = POST, SK begins_with "PUBLIC#")
Remember, Post IDs are KSUIDs, so results will naturally be sorted by the date the Post was made.
A Word on Hot Partitions
Storing all your Posts in a single partition will likely result in a hot partition as your application scales. One way to address this is by distributing your Post items across multiple partitions. How you do that is entirely up to you and specific to your application.
One strategy to avoid the single POST partition is to group Posts by creation day/week/month/etc. For example, instead of using POST as your PK in the PostByStatus secondary index, you could use POSTS#<month>-<year> instead.
Your application would need to take this pattern into account when fetching Posts (e.g. start at the current month and go backwards until enough results are fetched), but you'd be spreading the load across multiple partitions.
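A sketch of that read path, again reusing the assumed client, table and index names from the earlier examples:

// Walk month partitions backwards until we've collected enough public posts.
// Reuses ddb and QueryCommand from the earlier sketch.
async function fetchRecentPublicPosts(limit: number) {
  const posts: Record<string, unknown>[] = []
  const now = new Date()
  for (let i = 0; posts.length < limit && i < 24; i++) {
    const d = new Date(now.getFullYear(), now.getMonth() - i, 1)
    const { Items = [] } = await ddb.send(
      new QueryCommand({
        TableName: 'AppTable',
        IndexName: 'PostByStatus',
        KeyConditionExpression: '#t = :pk AND begins_with(publicPostId, :pub)',
        ExpressionAttributeNames: { '#t': '_type' },
        ExpressionAttributeValues: {
          ':pk': `POSTS#${d.getMonth() + 1}-${d.getFullYear()}`,
          ':pub': 'PUBLIC#',
        },
        ScanIndexForward: false, // newest first within each month
        Limit: limit - posts.length,
      })
    )
    posts.push(...Items)
  }
  return posts
}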
Wrapping Up
I hope this exercise gives you some ideas on how to model your data to support specific access patterns. Data modeling in DynamoDB takes time to get right, and will likely require multiple iterations to make work for your specific application. It can be a steep learning curve, but the payoff is a solution that brings scale and speed to your application.

Storing and updating data in MongoDB v4

I'm a MongoDB beginner and a former MySQL user. I was wondering what the correct way is to store and update data that relates to other data in MongoDB.
Say I have 2 entities, User and Item. Every time I display an item, I want to show all the user information together with it, so I copy the user information into my Item object when creating an Item:
Item
------
itemID
itemName
userName
userID
userLocation
Say I update the userName of the user related to that item. That means I need to target all the Items that have that userID to update their userName. Is that the right way of doing it? Or would you just reference the user inside the Item object, like this:
Item
------
itemID
itemName
userID
and perform a separate query for the user data? What's the MongoDB way of storing and updating this data?
So far I've just been doing it the "bulk" update way, but I'm not sure what the best practice is in this case.
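For reference, the "bulk" propagation described here usually boils down to a single updateMany. A minimal sketch with the Node.js MongoDB driver, using the field names from the question (connection string, database and collection names are assumptions):

import { MongoClient } from 'mongodb'

const client = new MongoClient('mongodb://localhost:27017') // assumed URI

// When a user's name changes, rewrite the copy embedded in every Item
async function renameUserEverywhere(userID: string, newUserName: string) {
  await client.connect() // no-op if already connected
  const items = client.db('app').collection('items') // assumed db/collection
  await items.updateMany({ userID }, { $set: { userName: newUserName } })
}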