KeystoneJS user-defined order for Relationship - PostgreSQL

I am using KeystoneJS with PostgreSQL as my backend and Apollo on the frontend for my app.
I have a schema that has a list that is linked to another list.
I want to be able to allow users to change the order of the second list.
This is a simplified version of my schema
keystone.createList(
  'forms',
  {
    fields: {
      name: {
        type: Text,
        isRequired: true,
      },
      buttons: {
        type: Relationship,
        ref: 'buttons.attached_forms',
        many: true,
      },
    },
  }
);
keystone.createList(
  'buttons',
  {
    fields: {
      name: {
        type: Text,
        isRequired: true,
      },
      attached_forms: {
        type: Relationship,
        ref: 'forms.buttons',
        many: true,
      },
    },
  }
);
So what I would like to do is allow users to change the order of the buttons, so that when I fetch them in the future from forms:
const QUERY = gql`
  query getForms($formId: ID!) {
    allforms(where: {
      id: $formId,
    }) {
      id
      name
      buttons {
        id
        name
      }
    }
  }
`;
The buttons should come back from the backend in a predefined order.
{
  id: 1,
  name: 'Form 1',
  buttons: [
    {
      id: 1,
      name: 'Button 1',
    },
    {
      id: 3,
      name: 'Button 3',
    },
    {
      id: 2,
      name: 'Button 2',
    }
  ]
}
Or, failing that, to at least have some data returned with the query that would allow sorting according to the user-defined sort order on the frontend.
The catch is that this relationship is many to many.
So it wouldn't be enough to add a column to the buttons schema as the ordering needs to be relationship-specific. In other words, if a user puts a particular button last on a particular form, it shouldn't change the order of that same button on other forms.
In a backend that I was creating myself, I would add something to the joining table, like a sortOrder field or similar, and then change those values to change the order, or even order them on the frontend using that information.
Something like this answer here.
The many-to-many join table would have columns like formId, buttonId, sortOrder.
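Just to illustrate what I mean, here is a hypothetical knex migration for such a join table (the table and column names are made up for this example; it is not something the Knex adapter generates for you):
// Hypothetical join table carrying a per-form sort order (illustration only).
exports.up = (knex) =>
  knex.schema.createTable('forms_buttons', (table) => {
    table.integer('formId').notNullable().references('forms.id');
    table.integer('buttonId').notNullable().references('buttons.id');
    table.integer('sortOrder').notNullable().defaultTo(0);
    table.primary(['formId', 'buttonId']);
  });
exports.down = (knex) => knex.schema.dropTable('forms_buttons');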
I have been diving into the docs for KeystoneJS and I can't figure out a way to make this work without getting into the weeds of overriding the KnexAdapter that we are using.
I am using:
{
  "@keystonejs/adapter-knex": "^11.0.7",
  "@keystonejs/app-admin-ui": "^7.3.11",
  "@keystonejs/app-graphql": "^6.2.1",
  "@keystonejs/fields": "^20.1.2",
  "@keystonejs/keystone": "^17.1.2",
  "@keystonejs/server-side-graphql-client": "^1.1.2",
}
Any thoughts on how I can achieve this?

One approach would be to have two "button" lists: one holding a template for a button (ButtonTemplate below) with the common data such as the name, and another (Button below) which references one ButtonTemplate and one Form. This allows you to assign a formIndex property to each button, which dictates its position on the corresponding form.
(Untested) example code:
keystone.createList(
  'Form',
  {
    fields: {
      name: {
        type: Text,
        isRequired: true,
      },
      buttons: {
        type: Relationship,
        ref: 'Button.form',
        many: true,
      },
    },
  }
);
keystone.createList(
  'Button',
  {
    fields: {
      buttonTemplate: {
        type: Relationship,
        ref: 'ButtonTemplate.buttons',
        many: false,
      },
      form: {
        type: Relationship,
        ref: 'Form.buttons',
        many: false,
      },
      formIndex: {
        type: Integer,
        isRequired: true,
      },
    },
  }
);
keystone.createList(
  'ButtonTemplate',
  {
    fields: {
      name: {
        type: Text,
        isRequired: true,
      },
      buttons: {
        type: Relationship,
        ref: 'Button.buttonTemplate',
        many: true,
      },
    },
  }
);
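With this structure, fetching a form's buttons in their user-defined order becomes an ordinary sorted query. Untested sketch, assuming the generated allButtons query and its sortBy enum follow Keystone's usual naming:
const BUTTONS_QUERY = gql`
  query getFormButtons($formId: ID!) {
    allButtons(
      where: { form: { id: $formId } }
      sortBy: formIndex_ASC
    ) {
      id
      formIndex
      buttonTemplate {
        id
        name
      }
    }
  }
`;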
I think this is less likely to cause you headaches (which I'm sure you can see coming) down the line than your buttonOrder solution, e.g. users deleting buttons that are referenced by this field.
If you do decide to go with the buttonOrder approach instead, you can guard against such issues with Keystone's hook functionality, e.g. before a button is deleted, go through all the forms and rewrite the buttonOrder field, removing any references to the deleted button.
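An untested sketch of such a hook, using the @keystonejs/server-side-graphql-client helpers you already have installed (this assumes buttonOrder holds a JSON-stringified array of button ids, and that getItems/updateItem accept the where filter and item payload shown):
const { getItems, updateItem } = require('@keystonejs/server-side-graphql-client');

keystone.createList('buttons', {
  fields: {
    // ...fields as in the question...
  },
  hooks: {
    beforeDelete: async ({ existingItem }) => {
      // Find every form that still references the button being deleted
      const forms = await getItems({
        keystone,
        listKey: 'forms',
        where: { buttons_some: { id: existingItem.id } },
        returnFields: 'id buttonOrder',
      });
      // Rewrite each form's buttonOrder without the deleted button's id
      for (const form of forms) {
        const order = JSON.parse(form.buttonOrder || '[]').filter(
          (id) => id !== String(existingItem.id)
        );
        await updateItem({
          keystone,
          listKey: 'forms',
          item: { id: form.id, data: { buttonOrder: JSON.stringify(order) } },
          returnFields: 'id',
        });
      }
    },
  },
});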

I had a similar challenge once; after some research I found this answer and implemented a solution in a project using a PostgreSQL TRIGGER.
So you can add a trigger that, on an update, shifts the button order.
Here is the SQL I had on hand. This was my test code, and I regex-replaced the terms to fit your question (note that "order" is double-quoted throughout, because ORDER is a reserved word in Postgres) :)
// Assign order
await knex.raw(`
  do $$
  DECLARE form_id text;
  begin
    CREATE SEQUENCE buttons_order_seq;
    CREATE VIEW buttons_view AS SELECT * FROM "buttons" ORDER BY "createdAt" ASC, "formId";
    CREATE RULE buttons_rule AS ON UPDATE TO buttons_view DO INSTEAD UPDATE buttons SET "order" = NEW."order" WHERE id = NEW.id;
    FOR form_id IN SELECT id FROM forms LOOP
      ALTER SEQUENCE buttons_order_seq RESTART;
      UPDATE buttons_view SET "order" = nextval('buttons_order_seq') WHERE "formId" = form_id;
    END LOOP;
    DROP SEQUENCE buttons_order_seq;
    DROP RULE buttons_rule ON buttons_view;
    DROP VIEW buttons_view;
  END; $$`);
// Create function that shifts orders when a button's order changes
await knex.raw(`
  CREATE FUNCTION shift_buttons_order()
  RETURNS trigger AS
  $$
  BEGIN
    IF NEW."order" < OLD."order" THEN
      UPDATE buttons SET "order" = "order" + 1, "shiftOrderFlag" = NOT "shiftOrderFlag"
      WHERE "order" >= NEW."order" AND "order" < OLD."order" AND "formId" = OLD."formId";
    ELSE
      UPDATE buttons SET "order" = "order" - 1, "shiftOrderFlag" = NOT "shiftOrderFlag"
      WHERE "order" <= NEW."order" AND "order" > OLD."order" AND "formId" = OLD."formId";
    END IF;
    RETURN NEW;
  END;
  $$
  LANGUAGE 'plpgsql'`);
// Create trigger to shift orders on update (the "shiftOrderFlag" check stops the
// shifting updates from re-firing the trigger recursively)
await knex.raw(`
  CREATE TRIGGER shift_buttons_order BEFORE UPDATE OF "order" ON buttons FOR EACH ROW
  WHEN (OLD."shiftOrderFlag" = NEW."shiftOrderFlag" AND OLD."order" <> NEW."order")
  EXECUTE PROCEDURE shift_buttons_order()`);
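With the trigger in place, moving a button within a form is then a single update from the application side; the trigger shifts its neighbours for you. A rough sketch (column names as in the SQL above, buttonId and newPosition being whatever your app passes in):
// Move one button to a new position; the BEFORE UPDATE trigger shifts the
// other buttons on the same form so the sequence stays contiguous.
await knex('buttons')
  .where({ id: buttonId })
  .update({ order: newPosition });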

One option that we came up with is to add the order to the form table.
keystone.createList(
  'forms',
  {
    fields: {
      name: {
        type: Text,
        isRequired: true,
      },
      buttonOrder: {
        type: Text,
      },
      buttons: {
        type: Relationship,
        ref: 'buttons.attached_forms',
        many: true,
      },
    },
  }
);
This new field buttonOrder could contain a string representation of the order of the button ids, such as a JSON-stringified array.
The main issue with this is that it will be difficult to keep this field in sync with the actual linked buttons.
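If you do go this route, sorting on the frontend is straightforward once a form has been fetched. A small sketch, assuming buttonOrder is a JSON-stringified array of button ids:
// Sort the fetched buttons according to the stored, user-defined order.
const order = JSON.parse(form.buttonOrder || '[]');
const sortedButtons = [...form.buttons].sort(
  (a, b) => order.indexOf(a.id) - order.indexOf(b.id)
);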

Related

Using complex object for grouping in Ag Grid

I am trying to use a complex object to group my ag-Grid rows. An object of my rowData looks like this:
const rowData = {
  id: '123',
  name: 'dummy',
  category: 'A',
  group: {
    name: 'dummyGroup',
    id: '456',
    category: 'A'
  }
};
Now, I am using the group object to group the rows, and according to this documentation https://www.ag-grid.com/javascript-data-grid/grouping-complex-objects/ I am using a keyCreator: keyCreator: params => params.value.name. My group object is uniquely identified by the combination of id and category.
The problem I am facing is that, since I am using group.name in the keyCreator, if I have two row data objects whose group.names are the same but whose id and category are different, ag-Grid groups those rows together. I understand that this is ag-Grid's behaviour, so is there any workaround for it? I need to show the name on the group row, but to identify the groups distinctly I need to use id + category in the keyCreator. How can I achieve this?
You need to utilise the groupRowInnerRenderer property so you can group by a combination of the id and category fields, while displaying the name as the group.
const gridOptions = {
  groupDisplayType: 'groupRows',
  groupRowInnerRenderer: function (params) {
    return params.node.childrenAfterFilter[0].data.name;
  },
  columnDefs: [
    { field: 'id' },
    { field: 'name' },
    { field: 'category' },
    {
      field: 'group',
      valueFormatter: groupValueFormatter,
      rowGroup: true,
      keyCreator: function (params) {
        return params.value.id + params.value.category;
      },
    },
  ],
};
Demo.

Auto increment in postgres/sequelize

I have a Postgres database using Sequelize (node/express) as the ORM. I have a table called students with the columns id and name.
In this students table I have several registered students, but one detail: the last registered ID is 34550 and the first is 30000; they come from an import from a previous database. I need the numbering to continue from 34550, that is, from the last registered student. However, when I register a student via the API, the generated ID is below 30000. I know that in MySQL an AUTO_INCREMENT ID field would solve this, but as I understand it, Postgres works differently.
How could I solve this problem?
The migration used to create the table is as follows:
module.exports = {
  up: (queryInterface, Sequelize) => {
    return queryInterface.createTable('students', {
      id: {
        type: Sequelize.INTEGER,
        allowNull: false,
        autoIncrement: true,
        primaryKey: true,
      },
      name: {
        type: Sequelize.STRING,
        allowNull: false,
      },
    });
  },
  down: (queryInterface) => {
    return queryInterface.dropTable('students');
  },
};
Based on Frank's comment, I was able to adjust it using:
SELECT setval('public.students_id_seq', 34550, true);
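For reference, here is a sketch of the same fix done programmatically, using MAX(id) so it also works after further imports (students_id_seq is the sequence name Postgres generates for a SERIAL id column):
// One-off: move the sequence past the highest existing id.
await sequelize.query(
  "SELECT setval('students_id_seq', (SELECT MAX(id) FROM students))"
);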

sequelize find parent table from child

I would like to find the parent table information of an object.
I have User hasMany Book
where a Book has a writer, assigned via the user id.
Book has a type, such as fantasy, romance, history, science fiction, etc.
So I want to find the Books with type Scientific Fiction, but not only that, I also want the writer, which is the User.
How can I find a book together with its writer when the where condition is given for books only? It seems like 'include' in Book.findAll({ include: User }) is not working; this makes me think that include only works for finding child tables, not the parent.
Here is some code for User:
const User = sequelize.define('User', {
  id: { type: DataTypes.STRING(6), field: 'ID', primaryKey: true }
});

User.associate = function(models) {
  User.hasMany(models.Book, { foreignKey: 'userId' });
};
and Book:
const Book = sequelize.define('Book', {
  id: { type: DataTypes.STRING(6), field: 'ID', primaryKey: true }, // primary key
  userId: { type: DataTypes.STRING(6), field: 'USER_ID', primaryKey: true },
  type: { type: DataTypes.STRING(20), field: 'TYPE' }
});
Book also has some more child tables, and I try to fetch that additional information via includes, so I guess I really need to query from Book.findAll(...) instead of User.findAll({ include: Book }).
Can anyone help?
I think I was making a mistake.
As soon as I changed
userId: { type: DataTypes.STRING(6), field: 'USER_ID', primaryKey: true }
to
userId: { type: DataTypes.STRING(6), field: 'USER_ID' }
include started working for belongsTo, and it finds the parent from the child.
It seems like Sequelize has a problem with the relation if there is more than one primary key declared in the model...
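A rough sketch of what ended up working for me (association plus query, names as in the models above):
// Child -> parent: declare belongsTo and include the User from the Book query.
Book.belongsTo(User, { foreignKey: 'userId' });

const books = await Book.findAll({
  where: { type: 'Scientific Fiction' },
  include: [{ model: User }],
});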
If this does solve your problem as well, please share in a reply so I can be more sure about it.
Thanks.

Using objects as options in Autoform

In my Stacks schema I have a dimensions property defined as such:
dimensions: {
  type: [String],
  autoform: {
    options: function() {
      return Dimensions.find().map(function(d) {
        return { label: d.name, value: d._id };
      });
    }
  }
}
This works really well, and using Mongol I'm able to see that an attempt to insert data through the form worked well (in this case I chose two dimensions to insert).
However, what I really want is data that stores the actual dimension object rather than its key. Something like this:
[
To try to achieve this I changed type: [String] to type: [DimensionSchema] and value: d._id to value: d, the thinking being that I'm telling the form that I am expecting an object and am now returning the object itself.
However when I run this I get the following error in my console.
Meteor does not currently support objects other than ObjectID as ids
Poking around a little bit and changing type: [DimensionSchema] to type: DimensionSchema, I see some new errors in the console (presumably they get buried when the type is an array).
So it appears that autoform is taking the value I want stored in the database and trying to use it as an id. Any thoughts on the best way to do this?
For reference here is my DimensionSchema
export const DimensionSchema = new SimpleSchema({
  name: {
    type: String,
    label: "Name"
  },
  value: {
    type: Number,
    decimal: true,
    label: "Value",
    min: 0
  },
  tol: {
    type: Number,
    decimal: true,
    label: "Tolerance"
  },
  author: {
    type: String,
    label: "Author",
    autoValue: function() {
      return this.userId
    },
    autoform: {
      type: "hidden"
    }
  },
  createdAt: {
    type: Date,
    label: "Created At",
    autoValue: function() {
      return new Date()
    },
    autoform: {
      type: "hidden"
    }
  }
})
In my experience, and according to aldeed himself in this issue, autoform is not very friendly to fields that are arrays of objects.
I would generally advise against embedding this data in such a way. It makes the data more difficult to maintain in case a dimension document is modified in the future.
alternatives
You can use a package like publish-composite to create a reactive-join in a publication, while only embedding the _ids in the stack documents.
You can use something like the PeerDB package to do the de-normalization for you, which will also update nested documents for you. Take into account that it comes with a learning curve.
Manually code the specific forms that cannot be easily created with AutoForm. This gives you maximum control and sometimes it is easier than all of the tinkering.
if you insist on using AutoForm
While it may be possible to create a custom input type (via AutoForm.addInputType()), I would not recommend it. It would require you to create a template and modify the data in its valueOut method and it would not be very easy to generate edit forms.
Since this is a specific use case, I believe that the best approach is to use a slightly modified schema and handle the data in a Meteor method.
Define a schema with an array of strings:
export const StacksSchemaSubset = new SimpleSchema({
  desc: {
    type: String
  },
  ...
  dimensions: {
    type: [String],
    autoform: {
      options: function() {
        return Dimensions.find().map(function(d) {
          return { label: d.name, value: d._id };
        });
      }
    }
  }
});
Then, render a quickForm, specifying a schema and a method:
<template name="StacksForm">
{{> quickForm
schema=reducedSchema
id="createStack"
type="method"
meteormethod="createStack"
omitFields="createdAt"
}}
</template>
And define the appropriate helper to deliver the schema:
Template.StacksForm.helpers({
  reducedSchema() {
    return StacksSchemaSubset;
  }
});
And on the server, define the method and mutate the data before inserting.
Meteor.methods({
  createStack(data) {
    // validate data
    const dims = Dimensions.find({_id: {$in: data.dimensions}}).fetch(); // specify fields if needed
    data.dimensions = dims;
    Stacks.insert(data);
  }
});
The only thing I can advise at this moment (if the values don't support an object type) is to convert the object into a string (i.e. a serialized string), set that as the value for the "dimensions" key (instead of an object), and save that into the DB.
And when getting it back from the DB, just unserialize that value (string) into an object again.
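Roughly like this (just a sketch; the field names come from the DimensionSchema above, and stack stands for a fetched Stacks document):
// Before insert: serialize the chosen dimension object to a string.
const serialized = JSON.stringify({ name: d.name, value: d.value, tol: d.tol });
// After fetching from the DB: turn the stored string back into an object.
const dimension = JSON.parse(stack.dimensions[0]);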

Updating an array field in a mongodb collection

I am trying to update my collection, which has an array field (initially blank), and for this I am trying this code:
Industry.update({_id: industryId},
  {$push: {categories: {id: categoryId,
    label: newCategory,
    value: newCategory}}});
No error is shown, but in my collection just empty documents({}) are created.
Note: I have both categoryId and newCategory, so no issues with that.
Thanks in advance.
This is the schema:
Industry = new Meteor.Collection("industry");
Industry.attachSchema(new SimpleSchema({
  label: {
    type: String
  },
  value: {
    type: String
  },
  categories: {
    type: [Object]
  }
}));
I am not sure, but maybe the error is occurring because you are not validating 'categories' in your schema. Try adding 'blackbox: true' to your 'categories' so that it accepts any type of object.
Industry.attachSchema(new SimpleSchema({
  label: {
    type: String
  },
  value: {
    type: String
  },
  categories: {
    type: [Object],
    blackbox: true // allows all objects
  }
}));
Once you've done that try adding values to it like this
var newObject = {
  id: categoryId,
  label: newCategory,
  value: newCategory
}
Industry.update({
  _id: industryId
}, {
  $push: {
    categories: newObject // newObject can be anything
  }
});
This would allow you to add any kind of object into the categories field.
But you mentioned in a comment that categories is also another collection.
If you already have a SimpleSchema for categories then you could validate the categories field to only accept objects that match with the SimpleSchema for categories like this
Industry.attachSchema(new SimpleSchema({
  label: {
    type: String
  },
  value: {
    type: String
  },
  categories: {
    type: [categoriesSchema] // replace categoriesSchema by name of SimpleSchema for categories
  }
}));
In this case only objects that match categoriesSchema will be allowed into the categories field; any other type will be filtered out. Also, you wouldn't get any error on the console for trying to insert other types (which is what I think is happening when you try to insert now, as no validation is specified).
EDIT : EXPLANATION OF ANSWER
In a SimpleSchema, when you define an array of objects you have to validate it, i.e. you have to tell it what objects it can accept and what it can't.
For example when you define it like
...
categories: {
  type: [categoriesSchema] // Correct
}
it means that only objects that are similar in structure to those in another SimpleSchema named categoriesSchema can be inserted into it. According to your example, any object you try to insert should be of this format:
{
  id: categoryId,
  label: newCategory,
  value: newCategory
}
Any object that isn't of this format will be rejected during insert. That's why all the objects you tried to insert were rejected when you tried initially, with your schema structured like this:
...
categories: {
  type: [Object] // Not correct as there is no SimpleSchema named 'Object' to match with
}
Blackbox:true
Now, let's say you don't want your objects to be filtered and want all objects to be inserted without validation.
That's where setting "blackbox: true" comes in. If you define a field like this:
...
categories: {
  type: [Object], // Correct
  blackbox: true
}
it means that categories can be any object and need not be validated with respect to some other SimpleSchema. So whatever you try to insert gets accepted.
If you run this query in the mongo shell, it will produce a log like matched: 1, updated: 0. Please check what you get; if matched is 0, it means that your input query does not match any documents.
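For example, something like this in the shell (the collection name industry comes from the Meteor.Collection definition above; the ids are placeholders):
// Run the same $push directly and inspect the WriteResult.
db.industry.update(
  { _id: "yourIndustryId" },
  { $push: { categories: { id: "categoryId", label: "newCategory", value: "newCategory" } } }
)
// A result like WriteResult({ "nMatched" : 0, "nUpserted" : 0, "nModified" : 0 })
// means the _id filter did not match any document.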