How to create multiple relationships to the same field in Prisma

Here is the relevant portion of my datamodel.prisma file:
type Driver {
  id: ID! @unique
  zones: [Zone!] @relation(name: "DriverZones")
  shifts: [Shift!] @relation(name: "DriverShifts")
  preferredZone: Zone
  preferredShift: Shift
}
type Shift {
  id: ID! @unique
  drivers: [Driver!] @relation(name: "DriverShifts")
}
type Zone {
  id: ID! @unique
  drivers: [Driver!] @relation(name: "DriverZones")
}
I want preferredZone and preferredShift to point to the Zone and Shift types defined in my datamodel. This is a one-way relationship. When I deploy, I get these errors:
The relation field preferredShift must specify a @relation directive: @relation(name: "MyRelation")
The relation field preferredZone must specify a @relation directive: @relation(name: "MyRelation")
I'm using PostgreSQL as my Prisma database. How do I build the relationship from preferredZone to Zone and from preferredShift to Shift?

You need to name the relations, since you have two relations between the same types (Driver <-> Shift and Driver <-> Zone are each connected by two relations).
In cases like this Prisma asks you to name the relations, which is what the error message you posted is about. I think this data model should work:
type Driver {
  id: ID! @unique
  zones: [Zone!] @relation(name: "DriverZones")
  shifts: [Shift!] @relation(name: "DriverShifts")
  preferredZone: Zone @relation(name: "PreferredZone")
  preferredShift: Shift @relation(name: "PreferredShift")
}
type Shift {
  id: ID! @unique
  drivers: [Driver!] @relation(name: "DriverShifts")
}
type Zone {
  id: ID! @unique
  drivers: [Driver!] @relation(name: "DriverZones")
}

Related

Prisma model typing issue - expected non-nullable type "String", found incompatible value of "null"

Problem
I am using Prisma as an ORM for a PostgreSQL database. When I try to delete a row from a table (Plant) that my current table (Pod) references, I get the error:
Error converting field "plantId" of expected non-nullable type "String", found incompatible value of "null".
My intention is that when the underlying Plant row is deleted, the foreign key on the Pod rows that reference it is set to null. However, I can't seem to set it up correctly. I thought that typing the field as plantId: String? would handle that, since the ? makes the column nullable (<type> | null).
I could really use some help; I'm not sure where I'm going wrong.
// Plant
model Plant {
  id        Int      @id @default(autoincrement())
  plantId   String   @unique @default(uuid())
  name      String   @unique
  pods      Pod[]
  createdAt DateTime @default(now())
  updatedAt DateTime @updatedAt
}
// Pod
model Pod {
  id        Int      @id @default(autoincrement())
  podId     String   @unique @default(uuid())
  name      String   @db.VarChar(200)
  hasAlert  Boolean
  plant     Plant?   @relation(fields: [plantId], references: [plantId], onDelete: SetNull)
  plantId   String?
  batch     Batch    @relation(fields: [batchId], references: [batchId], onDelete: Cascade)
  batchId   String
  createdAt DateTime @default(now())
  updatedAt DateTime @updatedAt
}
Prisma Studio walk-through:
1. Pod has a foreign key pointing to Plant (named broccoli).
2. In the Plant table, I delete the referenced Plant (broccoli) that we saw in Pod.
3. Upon returning to the Pod table, the error above appears.
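For onDelete: SetNull to take effect, the matching ON DELETE SET NULL referential action has to exist on the actual foreign key in the database; if the schema was edited after the table was created, a new migration (e.g. npx prisma migrate dev) is needed to apply it. The generated constraint should look roughly like this sketch (the constraint name is an assumption; Prisma generates its own names):

```sql
-- Illustrative FK for the Pod.plantId relation with SET NULL behavior
ALTER TABLE "Pod"
  ADD CONSTRAINT "Pod_plantId_fkey"
  FOREIGN KEY ("plantId") REFERENCES "Plant"("plantId")
  ON DELETE SET NULL;
```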

Reuse same model for two fields in Prisma

My goal is this:
I have a table of professions, and a person can have two professions: a primary and a secondary profession.
model Person {
  id                  String     @id @default(cuid())
  PrimaryProfessionId String?
  Secondary           String?
  PrimaryProfession   Profession @relation(fields: [PrimaryProfessionId], references: [id])
  SecondaryProfession Profession @relation(fields: [SecondaryProfessionId], references: [id], name: "secondaryProfession")
}
model Profession {
  id        String  @id @default(cuid())
  name      String
  Primary   Person?
  Secondary Person? @relation("secondaryProfession")
}
I'm trying to copy this: Prisma - How to point two fields to same model?, but it doesn't work.
With the current code I get the error: Error parsing attribute "@relation": A one-to-one relation must use unique fields on the defining side.
What should I fix to make this work?
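No answer is attached here, but a hedged sketch of a schema that should validate: both relations are named, the scalar field names match the fields: arguments (the original declares Secondary but references SecondaryProfessionId), and the back-relations on Profession are lists, which sidesteps the one-to-one uniqueness requirement. The field names are my own choices:

```prisma
model Person {
  id                    String      @id @default(cuid())
  primaryProfessionId   String?
  secondaryProfessionId String?
  primaryProfession     Profession? @relation("primaryProfession", fields: [primaryProfessionId], references: [id])
  secondaryProfession   Profession? @relation("secondaryProfession", fields: [secondaryProfessionId], references: [id])
}

model Profession {
  id           String   @id @default(cuid())
  name         String
  primaryFor   Person[] @relation("primaryProfession")
  secondaryFor Person[] @relation("secondaryProfession")
}
```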

DQL mutation adds duplicate records

What I want to do
Prevent the DQL mutation from adding duplicate records.
What I did
I added a GraphQL schema:
type Product {
  id: ID!
  name: String! @id @dgraph(pred: "Product.name")
  slug: String! @id @dgraph(pred: "Product.slug")
  image: String @dgraph(pred: "Product.image")
  created_at: DateTime! @dgraph(pred: "Product.created_at")
  updated_at: DateTime! @dgraph(pred: "Product.updated_at")
}
The GraphQL schema above generated the DQL schema below:
<Product.created_at>: datetime .
<Product.image>: string .
<Product.name>: string @index(hash) @upsert .
<Product.slug>: string @index(hash) @upsert .
<Product.updated_at>: datetime .
<dgraph.drop.op>: string .
<dgraph.graphql.p_query>: string @index(sha256) .
<dgraph.graphql.schema>: string .
<dgraph.graphql.xid>: string @index(exact) @upsert .
type <Product> {
  Product.name
  Product.slug
  Product.image
  Product.created_at
  Product.updated_at
}
type <dgraph.graphql> {
  dgraph.graphql.schema
  dgraph.graphql.xid
}
type <dgraph.graphql.persisted_query> {
  dgraph.graphql.p_query
}
I ran a mutation to add some data using https://github.com/dgraph-io/dgo#running-a-mutation.
But it does not respect the @id added in the schema to fields like "slug" and "name".
Using a GraphQL mutation this works, and uniqueness is respected by returning an error: "message": "couldn't rewrite mutation addProduct because failed to rewrite mutation payload because id aaaa already exists for field name inside type Product"
Dgraph version: v21.03.2
In DQL you have to handle this on your own using an Upsert Block.
Alternatively, if you don't need the power of DQL, you can use the GraphQL API, which handles this automatically.
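A hedged sketch of such an Upsert Block: it looks up an existing node by slug and only runs the set mutation when no match is found (the slug and field values are placeholders):

```dql
upsert {
  query {
    # bind the uid of any Product that already has this slug
    q(func: eq(Product.slug, "my-product")) {
      v as uid
    }
  }
  # run the mutation only if no existing node matched
  mutation @if(eq(len(v), 0)) {
    set {
      _:p <Product.name> "My Product" .
      _:p <Product.slug> "my-product" .
      _:p <dgraph.type>  "Product" .
    }
  }
}
```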

AWS Amplify and GraphQL Interfaces

How would you deal with interfaces, and use them for connections, in a data model built with the AWS Amplify model transforms?
interface User @model {
  id: ID
  email: String
  created: AWSTimestamp
}
type ActiveUser implements User {
  id: ID
  first: String
  last: String
  email: String
  created: AWSTimestamp
}
type InvitedUser implements User {
  id: ID
  email: String
  created: AWSTimestamp
  invitedBy: String
}
type Team @model {
  users: [User] @connection
}
It seems like my only choice is to put @model on the concrete types, but then I get separate DynamoDB tables and queries on the Query type once amplify update api is run.
Can the transformer support interfaces as documented here: https://docs.aws.amazon.com/appsync/latest/devguide/interfaces-and-unions.html
I also found some related issues, but was wondering if anything out there enables this feature. Here are the issues I found:
https://github.com/aws-amplify/amplify-cli/issues/1037
https://github.com/aws-amplify/amplify-cli/issues/202
You only use @connection to link two tables together (and those must be defined as type, not interface), so if you don't want that, just remove the @connection and the Team table will simply have users of type [User]. I'm not entirely sure what you want to do, but I would do something like:
type User @model {
  id: ID
  first: String!
  last: String!
  email: String!
  created: AWSTimestamp
  isActive: Boolean
  invitedBy: String
  team: Team @connection(name: "UserTeamLink")
}
type Team @model {
  users: [User!] @connection(name: "UserTeamLink")
}
Here the fields first, last, and email are required when creating a new user, and an active user is distinguished by a boolean. When you query the User table, it returns the related Team item from the Team table as well (I am guessing you want other fields like team name, etc.). When you create a Team object, you pass in the teamUserId (not shown above, but created when using Amplify), which attaches the newly created Team to an existing user or group of users.
I think you could keep the common fields in User and the extra info in separate types. I'm not sure this is best practice, but it should work for this scenario:
enum UserType {
  ACTIVE
  INVITED
}
type User @model @key(name: "byTeam", fields: ["teamID"]) {
  id: ID!
  teamID: ID!
  email: String
  created: AWSTimestamp
  type: UserType
  activeUserInfo: ActiveUserInfo @connection(fields: ["id"])
  invitedUserInfo: InvitedUserInfo @connection(fields: ["id"])
}
type ActiveUserInfo @model @key(fields: ["userID"]) {
  userID: ID!
  first: String
  last: String
}
type InvitedUserInfo @model @key(fields: ["userID"]) {
  userID: ID!
  invitedBy: String
}
type Team @model {
  id: ID!
  users: [User!] @connection(keyName: "byTeam", fields: ["id"])
}
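Under that second schema, fetching a team with its members and their type-specific info could look roughly like this sketch (the query shape follows Amplify's generated getTeam/items convention; the ID is a placeholder):

```graphql
query {
  getTeam(id: "TEAM_ID") {   # placeholder ID
    id
    users {
      items {
        email
        type
        activeUserInfo { first last }
        invitedUserInfo { invitedBy }
      }
    }
  }
}
```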

Same relation for two fields in one data type

Can I write something like this:
type User {
  primaryStory: Story! @relation(name: "userStory")
  secondaryStories: [Story] @relation(name: "userStory")
}
type Story {
  user: User! @relation(name: "userStory")
}
Basically what I want is to have a single relation name for both primary story and secondary stories.
This is not possible. With the name specified in an ambiguous way, it is not clear which relation userStory refers to.
You could either use two different relation names, or use a construct like the following and filter accordingly:
type User {
  stories: [Story!]! @relation(name: "userStories")
}
type Story {
  author: User! @relation(name: "userStories")
  isPrimary: Boolean!
}
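With that construct, the primary story can then be selected with a filter. A hedged sketch against the Prisma 1 generated query API (the user ID is a placeholder):

```graphql
query {
  # fetch the single story marked primary for one user (placeholder ID)
  stories(where: { isPrimary: true, author: { id: "USER_ID" } }) {
    id
  }
}
```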