Advantage of using connect over updating foreign keys directly - Prisma

Why use connect?

data: {
  userId: 1
}

Isn't the above enough? Why use

user: {
  connect: {
    id: 1
  }
}

Isn't the result the same? I wonder.

To answer your question, why connect exists at all:
connect (and disconnect) provide an alternative interface to relations whose effect you can also achieve by updating the respective foreign key fields directly. However, there are many cases where this API is much more convenient, e.g. (a short sketch follows below):
updating from the side of the relation that does not store the foreign key itself
updating a many-to-many relation
more advanced interfaces like connectOrCreate
connecting entities on unique attributes other than the primary key
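A minimal sketch of a few of these cases with Prisma Client. The User/Post models, the authorId foreign key and the unique email field are assumptions for illustration, not taken from the question:

import { PrismaClient } from '@prisma/client'

const prisma = new PrismaClient()

async function main() {
  // Direct foreign key update: only possible from the side that stores the FK,
  // and only against the value of that key.
  await prisma.post.update({
    where: { id: 10 },
    data: { authorId: 1 },
  })

  // Same result via connect, but it also works for relations without a scalar FK
  // on this model, for many-to-many relations, and against any unique field.
  await prisma.post.update({
    where: { id: 10 },
    data: { author: { connect: { email: 'jane@example.com' } } },
  })

  // connectOrCreate: connect to an existing user, or create one if none exists.
  await prisma.post.update({
    where: { id: 10 },
    data: {
      author: {
        connectOrCreate: {
          where: { email: 'jane@example.com' },
          create: { email: 'jane@example.com' },
        },
      },
    },
  })
}

main().finally(() => prisma.$disconnect())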

Related

Null values in relational database

I'm new to PostgreSQL (still learning).
I'm trying to create a relational database for a venue.
In my table (still in UNF) I have attributes to store the client's name, phone and email.
The problem is that a client may only give 1 or 2 of these, so I will always have null values.
Sometimes I do get all three values for a client.
How am I supposed to deal with this in the normalization process?
Do I need to split the table into separate relations? If so, aren't 3 relations too much?
For every attribute that should be there once, use a column in the main table. "Should" indicates it might be missing / unknown, too. That's a NULL value then. If the attribute must be there, define the column NOT NULL.
For attributes where there can be multiple distinct instances, especially if the maximum number is uncertain, create a separate table in a one-to-many relationship.
Store (non-trivial) attributes that can be used in many rows of the main table in a separate table, in a many-to-one relationship.
And attributes that can be linked multiple times on either side are best implemented in a many-to-many relationship.
Referential integrity is enforced with foreign key constraints.
It's not nearly as complex as reality, but the point is to establish a logically valid model that can keep up with reality.
Read basics about database normalization.
Detailed code example with explanation and links for n:m relationship:
How to implement a many-to-many relationship in PostgreSQL?
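For illustration, a minimal sketch of these rules from TypeScript with node-postgres; the client, event and booking tables are invented for the venue example (the linked question covers the n:m part in more depth):

import { Client } from 'pg'

async function createSchema() {
  const db = new Client()  // connection settings taken from the PG* environment variables
  await db.connect()

  // Main table: attributes that occur at most once are plain columns,
  // nullable when they may be unknown.
  await db.query(`
    CREATE TABLE client (
      client_id serial PRIMARY KEY,
      name      text NOT NULL,
      phone     text,          -- NULL when the client didn't give it
      email     text           -- NULL when the client didn't give it
    )`)

  await db.query(`
    CREATE TABLE event (
      event_id serial PRIMARY KEY,
      title    text NOT NULL
    )`)

  // Junction table for the n:m relationship; the foreign key constraints
  // enforce referential integrity.
  await db.query(`
    CREATE TABLE booking (
      client_id int REFERENCES client ON DELETE CASCADE,
      event_id  int REFERENCES event  ON DELETE CASCADE,
      PRIMARY KEY (client_id, event_id)
    )`)

  await db.end()
}

createSchema().catch(console.error)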

How can we ensure data integrity in MongoDB?

I am trying to migrate data from a relational database (MySQL) to NoSQL (MongoDB). But how can I ensure data integrity in MongoDB? What I have found is that we cannot do it on the server side. What should I use on the application side to handle data integrity?
For example: I have two tables, user and task, which both have a userId field in common. If I add a new entry to the task table, it should check whether the userId is present in the user table.
This is one of the requirements; others include adding constraints, updating values, etc.
Ultimately, you're screwed. There's no way (in MongoDB) to guarantee data integrity in such a scenario, since it's lacking relations in general and foreign keys in particular. And there's little point in building application-level checks. No matter how elaborate they are, they can still fail (hence "no guarantee").
So it's either embedding (so that related data is always there, right in the document) or abandoning the hope of consistent data.
MongoDB is NoSQL and hence has no joins.
Data is stored as BSON documents, and hence there are no foreign key constraints.
Steps to ensure data integrity:
Check in the application, before adding the task document, whether it refers to a valid user (see the sketch below).
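A minimal sketch of that check with the Node.js MongoDB driver; the connection string, collection and field names are assumptions. Note that an application-side check like this is still no guarantee: the user could be removed between the check and the insert.

import { MongoClient, ObjectId } from 'mongodb'

async function addTask(userId: ObjectId, description: string) {
  const client = await MongoClient.connect('mongodb://localhost:27017')
  const db = client.db('app')
  try {
    // Verify the referenced user exists before inserting the task.
    const user = await db.collection('users').findOne({ _id: userId })
    if (!user) {
      throw new Error(`No user with _id ${userId}, refusing to create the task`)
    }
    await db.collection('tasks').insertOne({ userId, description })
  } finally {
    await client.close()
  }
}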
MongoDB doesn't support FOREIGN KEY constraints; it is designed around avoiding JOINs.
MongoDB doesn't support server-side foreign key relationships. But sometimes we still need to relate documents, so MongoDB applications use one of two methods for relating documents:
Manual references where you save the _id field of one document in another document as a reference. Then your application can run a second query to return the related data. These references are simple and sufficient for most use cases.
DBRefs are references from one document to another using the value of the first document's _id field, its collection name and, optionally, its database name. By including these names, DBRefs allow documents located in multiple collections to be more easily linked with documents from a single collection. This may not be as fast, because the DB has to make additional queries to read the objects, but it allows a kind of foreign key reference. You will still have to handle your references manually: only when looking up your DBRef will you see whether it exists; the DB will not go through all the documents looking for references and remove them when the target of a reference no longer exists. But I think removing all the references after deleting the referenced document would require a single query per collection, no more, so it's not that difficult really.
Refer to documentation for more info: Database References.
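A minimal sketch of a manual reference with the Node.js MongoDB driver; the collection names and sample documents are assumptions. The task stores the user's _id, and the application resolves it later with a second query:

import { MongoClient } from 'mongodb'

async function demo() {
  const client = await MongoClient.connect('mongodb://localhost:27017')
  const db = client.db('app')
  try {
    // Save a manual reference: the user's _id goes into the task document.
    const user = await db.collection('users').findOne({ name: 'Jane' })
    if (!user) throw new Error('user not found')
    await db.collection('tasks').insertOne({ userId: user._id, description: 'Ship report' })

    // Resolve the reference with a second query when the related data is needed.
    const task = await db.collection('tasks').findOne({ description: 'Ship report' })
    const owner = await db.collection('users').findOne({ _id: task?.userId })
    console.log(owner?.name)
  } finally {
    await client.close()
  }
}

demo().catch(console.error)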
How can I solve this task?
To be clear, MongoDB is not relational. There is no standard "normal form". You should model your database appropriate to the data you store and the queries you intend to run.
For example:
student
{
  _id: ObjectId(...),
  name: 'Jane',
  courses: [
    { course: 'bio101', mark: 85 },
    { course: 'chem101', mark: 89 }
  ]
}

course
{
  _id: 'bio101',
  name: 'Biology 101',
  description: 'Introduction to biology'
}

Try to resolve it to something like this, embedding the course data the student actually needs:

student
{
  _id: ObjectId(...),
  name: 'Jane',
  courses: [
    {
      id: 'bio101',
      name: 'Biology 101',
      mark: 85
    }
  ]
}
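A minimal sketch of the payoff, assuming the documents above live in a students collection on a local server: the embedded form answers "Jane and her marks" in one query, with no join or $lookup. The trade-off is that a change to a course name has to be written to every student document that embeds it.

import { MongoClient } from 'mongodb'

async function printJanesCourses() {
  const client = await MongoClient.connect('mongodb://localhost:27017')
  try {
    // One round trip returns the student together with course names and marks.
    const jane = await client.db('school').collection('students').findOne({ name: 'Jane' })
    for (const c of jane?.courses ?? []) {
      console.log(c.name, c.mark)
    }
  } finally {
    await client.close()
  }
}

printJanesCourses().catch(console.error)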

Entity Framework Code First unique constraint across multiple tables

So I'm creating a database model using Entity Framework's Code First paradigm and I'm trying to create two tables (Players and Teams) that must share a uniqueness constraint regarding their primary key.
For example, I have 3 Players with Ids "1", "2" and "3" and when I try to create a Team with Id "2", the system should validate uniqueness and fail because there already exists a Player with Id "2".
Is this possible with data annotations? Both these entities share a common interface called IParticipant, if that helps!
Thanks in advance, lads!
The scenario you are describing here isn't really ideal. This isn't really a restriction on Entity Framework; it's more a restriction on the database stack. By default, the Id primary key is an Identity column, and SQL itself isn't really supportive of the idea of "shared" Identity columns. You can disable Identity and manage the Id properties yourself, but then Entity Framework cannot automatically build navigation properties for your entities.
The best option here is to use one single participant table, in a technique called "Table Per Hierarchy", or TPH. Entity Framework can manage the single table using an internal discriminator column. Shared properties can be put into the base class, and non-shared properties can be put on the individual classes, which Entity Framework will composite into a single large table in the DB. The main drawback to this strategy is that columns for non-shared properties will automatically be nullable in the database. This article describes this scenario very well.
The more I try to come up with a solution, I realize that this is an example of the XY Problem. There is not really a good solution to this question, because this question is already a proposed solution. There is a problem here that has led you to create an Interface which you suggest requires the entities which are using the interface to have a unique Id. This really sounds like an issue with the design of the Interface itself, as Interfaces should be agnostic to the entity they are applied to. Perhaps providing some code and showing what your problem actually is would be helpful, since the proposed solution you are asking how to implement here isn't really practical.

Updating foreign keys in the DB, or having a model that is mapped to a DB that does not have foreign keys (for lazy loading)

We have an application that uses a database that is used by different clients (each client has its own DB). Over the years, some of our clients' databases have lost some of their foreign key definitions.
We would like to use Code First Entity Framework, but since not all the DBs have the relationships defined, we have a lot of problems (especially if we want to use lazy loading).
We were thinking of reverse engineering a DB that has the relations defined and then updating only the foreign key definitions. Is that possible?
We only want to fix the foreign key definitions and nothing else, because there is critical data in the DB and we don't want to take the risk of updating the DB from the model on the production environment.
Thank you in advance!
I'm having a hard time following your question, mainly because I think you have left a lot of information out. I would think that you could reverse engineer the database, then use automatic migrations to add any foreign keys. You may want to look into:
Automatic-Migrations AND Data Annotations or Fluent API
Without more information, or an example of your DB schema or any Code First code, I don't think anyone will be able to do more than point you in the right direction, as I have tried to do.

DDD and MongoDB: Is it okay to let Mongo create ObjectIDs?

According to DDD (Blue book, Evans), a Factory has the responsibility to create an Aggregate Root in a valid state. Does this mean it should be able to create the technical id (ObjectId in the MongoDB world) as well as the domain id?
On the one hand, this seems like a technical detail and it would seem okay to let Mongo handle the creation of the ID.
On the other, enabling querying by id (by having getById in a DDD repository) exposes the technical id to the domain, which in turn would make it the responsibility of the Factory to create it.
Perhaps I can't get my head around the different use cases / overlap, etc. of technical IDs vs domain IDs, or perhaps I'm being overzealous, but I'd appreciate your opinion anyway.
In short:
In DDD: Should a factory be able to create the technical Id as well as the domain Id?
possible implementation: Hi/Lo ( How to set the hilo sequence starting value in MongoDB Norm?)
EDIT: although the hi/lo way exposes the Factory to the persistence layer, which is something only the Repository should know. hmmm
Thanks
Factories don't have to concern themselves with the ID, because the validity of an aggregate is orthogonal to identity. Identity can be assigned in a few different ways: either as an incremental ID from a relational database, in which case the repository has to manage it, or as a UUID/GUID, in which case it can be assigned by the factory, the repository, or even the calling client, which is convenient because then the client has the key by default.
Whenever possible, I try to maintain a single identity for aggregates. I'm not sure if MongoDB requires an additional technical ID, but if it does and the domain ID can't be used in its place, then MongoDB should manage it on its own and behind the scenes.
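A minimal sketch of the UUID option in TypeScript against MongoDB; the Order aggregate, collection name and connection string are illustrative. The factory assigns the domain id, the repository persists the aggregate, and MongoDB's own _id stays behind the scenes:

import { randomUUID } from 'crypto'
import { MongoClient } from 'mongodb'

interface Order {
  orderId: string                               // domain identity, assigned by the factory
  customer: string
  lines: { sku: string; quantity: number }[]
}

// Factory: produces a valid aggregate and assigns its domain id up front.
function createOrder(customer: string): Order {
  if (!customer) throw new Error('an order needs a customer')
  return { orderId: randomUUID(), customer, lines: [] }
}

// Repository: persists and looks up aggregates by the domain id only;
// MongoDB adds its technical _id on its own and the domain never sees it.
async function saveOrder(order: Order): Promise<void> {
  const client = await MongoClient.connect('mongodb://localhost:27017')
  try {
    await client.db('shop').collection('orders').insertOne({ ...order })
  } finally {
    await client.close()
  }
}

async function getById(orderId: string): Promise<Order | null> {
  const client = await MongoClient.connect('mongodb://localhost:27017')
  try {
    return await client.db('shop').collection<Order>('orders').findOne({ orderId })
  } finally {
    await client.close()
  }
}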