I'm working with Prisma for the first time and I'm not sure if I'm going about this in the right way.
Here's a snippet of my schema:
model Economy {
  id    String @id @default(auto()) @map("_id") @db.ObjectId
  shops Shop[]
}

type Shop {
  id        Int        @default(0)
  name      String     @default("")
  items     ShopItem[]
  EconomyId String?    @db.ObjectId
}

type ShopItem {
  id          Int           @default(0)
  enabled     Boolean       @default(true)
  name        String        @default("")
  price       Int           @default(0)
  description String        @default("")
  commands    ShopCommand[]
}

type ShopCommand {
  command String @default("")
}
This does work, of course, and it's easy for me to destructure in JavaScript, but inserting and updating is complicated. Here's an example of updating a ShopItem:
const newItem = {
  id: 2,
  name: body.name,
  price: parseInt(body.price),
  description: body.description,
  enabled: body.enabled == 'true',
  commands: newCommands
}

const newEconomy = await prisma.economy.update({
  where: {
    guildId: guildId,
  },
  data: {
    shops: {
      updateMany: {
        where: {
          id: 0,
        },
        data: {
          items: {
            push: [newItem]
          }
        }
      }
    }
  }
})
This is hard to read and I feel like I'm doing this all wrong.
I've looked into how other people go about the same thing, but I haven't found much information. Should I even be using composite types, or should each type be in its own collection instead?
It looks like Shop and ShopItem could be their own models; they are sufficiently complex and relational in nature to warrant separate entities.
Composite types are best suited to smaller, structured pieces of data that are repeated or shared between several other models while not being particularly relational or unique in nature.
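As a rough sketch of what that could buy you: if ShopItem were promoted to its own model with a proper @id (the model and field names below are assumptions based on the schema above), an item could be updated directly instead of being pushed through the parent document:
// Minimal sketch, assuming ShopItem has been promoted to its own model
// with a unique id, so Prisma Client exposes prisma.shopItem directly.
const updatedItem = await prisma.shopItem.update({
  where: { id: itemId }, // itemId is a hypothetical identifier for the item
  data: {
    name: body.name,
    price: parseInt(body.price),
    description: body.description,
    enabled: body.enabled == 'true',
  },
})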
I have a number of documents in my sites collection.
These documents contain a pages property, which is an array of ObjectIds.
// documents in my 'sites' collection
{
  _id: ObjectId("5e607b2643e2640056466402"),
  title: "Some Title",
  ...
  pages: [
    ObjectId("5e57bcbd0166a10055c43bb7"),
    ObjectId("5e5ba2630166a10055c5c952"),
    ...
  ]
}
I am using GraphQL to fetch these results using Apollo & Prisma and I want to return the actual pages as opposed to their individual IDs.
My schema.prisma looks like this:
model pages {
  id    String  @id @default(auto()) @map("_id") @db.ObjectId
  title String?
  uri   String?
}

model sites {
  id    String   @id @default(auto()) @map("_id") @db.ObjectId
  pages String[] @db.ObjectId
  title String?
  url   String?
}
My typeDefs.js file looks like this:
type pages {
  id: ID!
  title: String
  uri: String
}

type sites {
  id: ID!
  pages: [String]
  title: String
  url: String
}
And my resolver looks like this:
allSites: () => prisma.sites.findMany(),
When I send the following GraphQL request:
query myQuery {
  allSites {
    title
    pages
  }
}
I get this response:
{
  "data": {
    "allSites": [
      {
        "title": "Some Title",
        "pages": [
          "5dc9630226b1b2005952c4b5",
          "5e5797020166a10055c41f94",
          ...
        ]
      }
    ]
  }
}
This is fine, but I want to extract the individual title and uri from each page in the pages array, as opposed to just having the reference ID.
Is this possible to achieve without reorganising my data?
If so, how?
EDIT:
I have changed my resolver to:
allSites: () => prisma.sites.findMany({ include: { pages: true } }),
But then I receive an error stating:
Invalid scalar field `pages` for include statement on model sites.
This model has no relations, so you can't use include with it.
Note that include statements only accept relation fields.
So then I change my schema.prisma to:
model pages {
  id      String  @id @default(auto()) @map("_id") @db.ObjectId
  title   String?
  uri     String?
  sites   sites?  @relation(fields: [sitesId], references: [id]) // gets added automatically on save
  sitesId String? @db.ObjectId // gets added automatically on save
}

model sites {
  id    String  @id @default(auto()) @map("_id") @db.ObjectId
  pages pages[]
  title String?
  url   String?
}
and my sites typeDefs to:
type sites {
  id: ID!
  pages: [pages]
  title: String
  url: String
}
But my response to the following GraphQL query:
query AllSites {
  allSites {
    pages {
      title
      uri
    }
    title
  }
}
is:
{
  "data": {
    "allSites": [
      {
        "pages": [],
        "title": "www.some-website.com"
      },
      ...
    ]
  }
}
The pages array is empty, which suggests that it is not able to fetch the pages by reference.
Am I doing something wrong?
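For what it's worth, one possible workaround that keeps the original scalar schema (pages String[] @db.ObjectId, so no data reorganisation) is to resolve the referenced page documents in a field resolver. This is only a minimal sketch under that assumption; the typeDefs would then need pages: [pages] instead of pages: [String]:
// Minimal sketch, assuming the original schema where sites.pages is a
// String[] of ObjectId strings rather than a Prisma relation field.
const resolvers = {
  Query: {
    allSites: () => prisma.sites.findMany(),
  },
  sites: {
    // Look up the referenced page documents from the stored ID strings.
    pages: (parent) =>
      prisma.pages.findMany({ where: { id: { in: parent.pages } } }),
  },
}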
When I try to update the Shoppinglist struct with the data, I get a "there is no unique or exclusion constraint matching the ON CONFLICT specification (SQLSTATE 42P10)" error.
These are my structs:
type Shoppinglist struct {
    Model
    ID           int            `gorm:"primaryKey" json:"id"`
    Title        string         `json:"title"`
    Items        []Item         `json:"items" gorm:"foreignKey:ParentListID;references:ID;"`
    Owner        string         `json:"owner"`
    Participants pq.StringArray `gorm:"type:text[]" json:"participants"`
}

type Item struct {
    Model
    ParentListID int    `gorm:"primaryKey" json:"parentListId"`
    Title        string `json:"title"`
    Position     int    `json:"position"`
    Bought       bool   `json:"bought"`
}
And this is the code I execute when trying to edit a list:
func EditList(id int, data map[string]interface{}) error {
    // https://github.com/go-gorm/gorm/issues/3487
    shoppinglist := Shoppinglist{
        ID:           data["id"].(int),
        Title:        data["title"].(string),
        Items:        data["items"].([]Item),
        Owner:        data["owner"].(string),
        Participants: data["participants"].([]string),
    }
    if err := db.Session(&gorm.Session{FullSaveAssociations: true}).Where("id = ?", id).Updates(&shoppinglist).Error; err != nil {
        return err
    }
    return nil
}
This is where I execute EditList and where I set all the values to pass into the map:
type Shoppinglist struct {
    ID           int
    Title        string
    Items        []models.Item
    Owner        string
    Participants []string
    PageNum      int
    PageSize     int
}

func (s *Shoppinglist) Edit() error {
    shoppinglist := map[string]interface{}{
        "id":           s.ID,
        "title":        s.Title,
        "items":        s.Items,
        "owner":        s.Owner,
        "participants": s.Participants,
    }
    return models.EditList(s.ID, shoppinglist)
}
Before, I was just using a []string instead of []Item and that was working perfectly. Now everything updates except for the []Item.
These are the SQL queries executed:
UPDATE "shoppinglists" SET "modified_on"=1628251977096,"title"='kjhdsfgnb',"owner"='janburzinski1#gmail.com',"participants"='{}' WHERE id = 517687 AND "id" = 517687
INSERT INTO "items" ("created_on","modified_on","deleted_at","title","position","bought","parent_list_id") VALUES (1628251977,1628251977116,NULL,'dfkjhgndfjkg',1,false,517687),(1628251977,1628251977116,NULL,'dfgh123',2,true,517687) ON CONFLICT ("parent_list_id") DO UPDATE SET "created_on"="excluded"."created_on","modified_on"="excluded"."modified_on","deleted_at"="excluded"."deleted_at","title"="excluded"."title","position"="excluded"."position","bought"="excluded"."bought" RETURNING "parent_list_id"
I would really like to know how to update a relation in GORM, or why this isn't working, because I've been looking through all the association issues on GitHub and Stack Overflow and didn't find an answer that worked for me.
The first problem I see here is that your Item has no ID but uses ParentListID as its primary key. That means you can only have one Item for each parent, which defeats the purpose of having an array.
Create an ID field (used as the primary key) for items, and if there are still issues with your approach, please update the question.
PS: I would have left this in a comment, but can't.
I just needed to add the * to []Item, fix the problem with the primary key, and remove the references clause.
type Shoppinglist struct {
    Model
    ID           int            `gorm:"primaryKey" json:"id"`
    Title        string         `json:"title"`
    Items        []*Item        `json:"items" gorm:"foreignKey:ParentListID;"`
    Owner        string         `json:"owner"`
    Participants pq.StringArray `gorm:"type:text[]" json:"participants"`
}

type Item struct {
    Model
    ID           int    `gorm:"primaryKey" json:"id"`
    ParentListID int    `json:"parentListId"`
    ItemID       int    `json:"itemId"`
    Title        string `json:"title"`
    Position     int    `json:"position"`
    Bought       bool   `json:"bought" gorm:"default:false"`
}
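For completeness, here is a minimal sketch (not from the original post) of how a list's items could be replaced explicitly through GORM's association mode with the corrected structs, as an alternative to relying on FullSaveAssociations; the helper name is hypothetical:
package models

import "gorm.io/gorm"

// ReplaceItems is a hypothetical helper: it swaps a shopping list's items
// for the given slice, letting GORM insert the new rows and set
// parent_list_id on each of them.
func ReplaceItems(db *gorm.DB, listID int, items []*Item) error {
    list := Shoppinglist{ID: listID}
    return db.Model(&list).Association("Items").Replace(items)
}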
I am using Vapor and Fluent. I want to define a user model like the one below, but I get an error saying:
Fatal error: Error raised at top level: previousError(server: multiple primary keys for table "users" are not allowed
Is it not possible to define more than one @ID in a single model?
import Vapor
import FluentPostgresDriver

final class User: Model, Content {
    static let schema = "users"

    @ID(custom: "id")
    var id: Int?

    @Field(key: "email")
    var email: String

    @Field(key: "password")
    var password: String

    @ID(custom: "public_id")
    var public_id: UUID?

    init() { }

    init(id: Int? = nil, email: String, password: String, public_id: UUID? = nil) {
        self.id = id
        self.email = email
        self.password = password
        self.public_id = public_id
    }
}

struct CreateUser: Migration {
    func prepare(on database: Database) -> EventLoopFuture<Void> {
        database.schema("users")
            .field("id", .int, .identifier(auto: true))
            .field("email", .string)
            .field("password", .string)
            .field("public_id", .uuid, ?????)
            .create()
    }
    .....
}
Instead of marking the public UUID as @ID, just mark it as another @Field (a string type is easiest, but you can also do binary [16 bytes] if you're feeling like wearing a propeller beanie), and make it required.
During the create, you'll have to actually invoke a UUID generation function, but that ought to be easy enough.
But why not make the primary key the UUID instead of having two identifiers? It takes a little more room in the database, but it might be worth avoiding the headache of having several different IDs.
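A minimal sketch of that suggestion, assuming the model from the question and keeping the .uuid column type rather than switching to a string; generating the UUID in the initializer and the unique constraint are assumptions, not part of the original answer:
import Vapor
import FluentPostgresDriver

final class User: Model, Content {
    static let schema = "users"

    @ID(custom: "id")
    var id: Int?

    @Field(key: "email")
    var email: String

    @Field(key: "password")
    var password: String

    // Plain required field instead of a second @ID.
    @Field(key: "public_id")
    var public_id: UUID

    init() { }

    init(id: Int? = nil, email: String, password: String) {
        self.id = id
        self.email = email
        self.password = password
        self.public_id = UUID() // generate the public identifier at creation time
    }
}

struct CreateUser: Migration {
    func prepare(on database: Database) -> EventLoopFuture<Void> {
        database.schema("users")
            .field("id", .int, .identifier(auto: true))
            .field("email", .string)
            .field("password", .string)
            .field("public_id", .uuid, .required)
            .unique(on: "public_id")
            .create()
    }

    func revert(on database: Database) -> EventLoopFuture<Void> {
        database.schema("users").delete()
    }
}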
I have a question about the MongoDB ISODate type and GraphQL. I need to declare a mutation in my gql schema that allows adding a document to my Mongo database.
This document has an ISODate property, but in my gql schema I'm using a String:
mutation addSomething(data: SomeInput)

input SomeInput {
  field1: String
  field2: Int
  created: String
}
My problem is that, in the new document, the created field is in String format (not ISODate), and I was expecting that. But I wonder how to make it insert an ISODate instead. Is there a "custom type" somewhere I could use instead of a String?
Thank you.
PS: I'm using Node.js and the Apollo libraries.
Edit 1: trying with the graphql-iso-date package
I have found the package https://www.npmjs.com/package/graphql-iso-date, which adds 3 date custom types.
Here is my gql schema:
const { gql } = require('apollo-server');
const { GraphQLDateTime } = require('graphql-iso-date')

const typeDefs = gql`
  scalar GraphQLDateTime

  type Resto {
    restaurant_id: ID!
    borough: String
    cuisine: String
    name: String
    address: Address
    grades: [Grade]
  }

  type Address {
    building: String
    street: String
    zipcode: String
    coord: [Float]
  }

  type Grade {
    date: GraphQLDateTime
    grade: String
    score: Int
  }

  input GradeInput {
    date: GraphQLDateTime
    grade: String
    score: Int
  }

  extend type Query {
    GetRestos: [Resto]
    GetRestoById(id: ID!): Resto
  }

  extend type Mutation {
    UpdateGradeById(grade: GradeInput!, id: ID!): Resto
    RemoveGradeByIdAndDate(date: String, id: ID!): Resto
  }
`

module.exports = typeDefs;
This is a test based on the sample restaurants dataset.
So, if I try to call the UpdateGradeById() function like this:
UpdateGradeById(grade: {date: "2020-08-25T08:00:00.000Z", grade: "D", score: 15}, id: "30075445") {...}
the document is updated, but the date is still stored in String format: the date of the last grade in the list is recognized as a string, not as a date.
I can see an improvement, though: before I was using graphql-iso-date, date fields were returned in timestamp format; now they are returned as ISO strings. But the insertion does not work as expected.
OK, I missed something important in my previous example: resolvers.
So, if like me you want to manipulate the MongoDB date type through GraphQL, you can use the graphql-iso-date package like this.
First, modify your schema by adding a new scalar:
const { gql } = require('apollo-server');

const typeDefs = gql`
  scalar ISODate

  type Resto {
    restaurant_id: ID!
    borough: String
    cuisine: String
    name: String
    address: Address
    grades: [Grade]
  }

  type Address {
    building: String
    street: String
    zipcode: String
    coord: [Float]
  }

  type Grade {
    date: ISODate
    grade: String
    score: Int
  }

  input GradeInput {
    date: ISODate
    grade: String
    score: Int
  }

  extend type Query {
    GetRestos: [Resto]
    GetRestoById(id: ID!): Resto
  }

  extend type Mutation {
    UpdateGradeById(grade: GradeInput!, id: ID!): Resto
    RemoveGradeByIdAndDate(date: ISODate!, id: ID!): Resto
  }
`

module.exports = typeDefs;
(Here I chose to call my custom date scalar ISODate.)
Then, you have to tell GraphQL how to "resolve" this new ISODate scalar by modifying your resolvers file:
const { GraphQLDateTime } = require('graphql-iso-date')

module.exports = {
  Query: {
    GetRestos: (_, __, { dataSources }) =>
      dataSources.RestoAPI.getRestos(),
    GetRestoById: (_, { id }, { dataSources }) =>
      dataSources.RestoAPI.getRestoById(id),
  },
  Mutation: {
    UpdateGradeById: (_, { grade, id }, { dataSources }) =>
      dataSources.RestoAPI.updateGradeById(grade, id),
    RemoveGradeByIdAndDate: (_, { date, id }, { dataSources }) =>
      dataSources.RestoAPI.removeGradeByIdAndDate(date, id),
  },
  ISODate: GraphQLDateTime
};
And that's it. Now, date properties in my MongoDB documents are correctly recognized as Date values.
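For context, a minimal sketch of how these two modules could be wired into Apollo Server. The file paths, the baseTypeDefs stand-in, and the RestoAPI data source are assumptions about the surrounding project, not part of the original answer:
// Minimal sketch, assuming ./typeDefs and ./resolvers export the modules shown
// above; baseTypeDefs and RestoAPI are hypothetical, standing in for the base
// Query/Mutation types and the data source the resolvers expect.
const { ApolloServer, gql } = require('apollo-server');
const typeDefs = require('./typeDefs');
const resolvers = require('./resolvers');
const RestoAPI = require('./datasources/RestoAPI'); // hypothetical data source

// The schema above uses `extend type Query/Mutation`, so base definitions
// have to exist somewhere; this is one possible stand-in.
const baseTypeDefs = gql`
  type Query
  type Mutation
`;

const server = new ApolloServer({
  typeDefs: [baseTypeDefs, typeDefs],
  resolvers,
  dataSources: () => ({ RestoAPI: new RestoAPI() }),
});

server.listen().then(({ url }) => {
  console.log(`Server ready at ${url}`);
});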
How can I query an (undefined) number of nested elements from my document?
I would like to get the list of items without having to write
{
  menus {
    id
    items {
      id
      name
      items {
        id
        name
        ............
      }
    }
  }
}
I have a prisma schema (MongoDB) that looks like this:
enum MenuType {
  MAIN
  HYPERLINK
}

type Menu {
  id: ID! @id
  type: MenuType!
  name: String!
  title: String
  href: String
  t_blank: Boolean
  items: [Items]
}

type Items @embedded {
  id: ID! @id
  type: MenuType!
  name: String!
  title: String
  href: String
  t_blank: Boolean
  items: [Items]
}
I would like to get all my items with something like this:
{
  menus {
    id
    items
  }
}
Is this possible? Or does my consumer need to keep looping through the response body?