Mutation cache update not working with vue-apollo and Hasura - postgresql

I'm completely new to these technologies and am having trouble wrapping my head around them, so bear with me. My situation is that I've deployed Hasura on Heroku, added some data, and am now trying to implement functionality to add and edit certain rows of a table. Specifically, I've been following this from Hasura, and this from vue-apollo.
I've implemented the adding and editing (which works), and now want to reflect the changes in the table as well, by using the update property of the mutation to update the cache. Unfortunately, this is where I get lost. I'll paste some of my code below to make the problem clearer:
The mutation for adding a player (ADD_PLAYER_MUTATION) (same as the one in Hasura's documentation linked above):
mutation addPlayer($objects: [players_insert_input!]!) {
  insert_players(objects: $objects) {
    returning {
      id
      name
    }
  }
}
The code for the mutation in the .vue file:
addPlayer(player, currentTimestamp) {
  this.$apollo.mutate({
    mutation: ADD_PLAYER_MUTATION,
    variables: {
      objects: [
        {
          name: player.name,
          team_id: player.team.id,
          created_at: currentTimestamp,
          updated_at: currentTimestamp,
          role_id: player.role.id,
          first_name: player.first_name,
          last_name: player.last_name
        }
      ]
    },
    update: (store, { data: { addPlayer } }) => {
      const data = store.readQuery({
        query: PLAYERS
      });
      console.log(data);
      console.log(addPlayer);
      data.players.push(addPlayer);
      store.writeQuery({ query: PLAYERS, data });
    }
  });
},
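(For reference, the PLAYERS query used in readQuery isn't shown here; it's assumed to be a simple query over the players table, roughly like the sketch below with graphql-tag, and the exact fields may differ.)
const PLAYERS = gql`
  query players {
    players {
      id
      name
    }
  }
`;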
I don't really get the update part of the mutation. In most examples the { data: { x } } bit uses the mutation function's name in place of x, so I did that as well, even though I don't really understand why (it's pretty confusing to me, at least). When I log data, the array of players is logged, but when I log addPlayer, undefined is logged.
I'm probably doing something wrong that is very simple for others, but I'm obviously not sure what. Maybe the mutation isn't returning the correct thing (although I'd assume it wouldn't log undefined in that case), or maybe it isn't returning anything at all. It's especially confusing since the player is actually added to the database, so it's just the update part that isn't working. On top of that, most guides and tutorials show the same thing without much explanation.

Okay, so for anyone as stupid as me, here's basically what I was doing wrong:
Instead of addPlayer in update: (store, { data: { addPlayer } }), it should be whatever the name of the mutation is, so in this case insert_players.
By default a mutation response from Hasura has a returning field, which is a list, so the added player is its first element and you can get it like so: const addedPlayer = insert_players.returning[0];
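Putting those two points together, the update callback would look roughly like this (a sketch that reuses the PLAYERS query from the code above):
update: (store, { data: { insert_players } }) => {
  // Read the cached result of the PLAYERS query that the table is rendered from
  const data = store.readQuery({ query: PLAYERS });
  // Hasura returns the inserted rows under `returning`; the new player is the first element
  const addedPlayer = insert_players.returning[0];
  data.players.push(addedPlayer);
  // Write the updated list back so the table re-renders with the new player
  store.writeQuery({ query: PLAYERS, data });
}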
I didn't want to just delete my question after realising what was wrong shortly after posting it, in case this is useful to other people like me, and so I'll leave it up.

Related

MongoDB query: If two docs are referencing each other, eliminate one doc (Keep one combination only)

I have docs like these:
{
  _id: 61af43169dae3a9c3e133a90,
  name: "user1",
  status: "RECOMMENDED",
  recommendedId: 61b708b8041895f4c68a3b3d
}
{
  _id: 61b708b8041895f4c68a3b3d,
  name: "user2",
  status: "RECOMMENDED",
  recommendedId: 61af43169dae3a9c3e133a90
}
Both users are recommended to each other, so I don't want both documents to have recommendedId populated. I just want one document to have recommendedId populated (keep one combination only).
I would try to prevent this from happening at the time of setting the value of recommendedId in the first place.
So before trying to set the value, you could do something like this:
// idToRecommend is the id of the user we are about to store in recommendedId
const idToRecommend = Types.ObjectId()
const recommenders = await Foo.find({
  _id: idToRecommend,
  recommendedId: user._id
})
if (recommenders.length > 0) {
  // We don't want to make the change, we already have a relationship recorded.
}
Cleaning up a db already tainted with these duplicate relationships is a different question, but I would do that as a one-off task rather than a matter of process.
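If you do want that one-off cleanup, a rough sketch (assuming the same Foo model and mongoose) could walk the documents and unset recommendedId on the second half of each mutual pair:
const docs = await Foo.find({ recommendedId: { $ne: null } })
const seen = new Set()
for (const doc of docs) {
  // Order-independent key so A->B and B->A map to the same pair
  const key = [doc._id.toString(), doc.recommendedId.toString()].sort().join('|')
  if (seen.has(key)) {
    // Second document of a mutual pair: clear its recommendedId
    await Foo.updateOne({ _id: doc._id }, { $unset: { recommendedId: 1 } })
  } else {
    seen.add(key)
  }
}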

Cannot read property 'length' of undefined on one GET request

I'm working with a MEAN stack and have three GET requests for the same URL/route. One gets a generalised summary of long-term emotions, another gets a summary of emotions for the dates entered, and the last gets a summary of emotions related to a user-entered tag associated with individual emotion entries.
My first GET request throws no issues, but the second GET request throws an error: Cannot read property 'length' of undefined
The error points to the following line:
48| each emotion in dateEmotions
Below is the relevant code associated with the error:
Jade
each emotion in dateEmotions
  .side-emotions-group
    .side-emotions-label
      p.emotion-left= emotion.emotionName
      p.pull-right(class= emotion.emotionLevel) (#{emotion.emotionLevel}%)
    .side-emotions-emotion.emotion-left
GET Request
module.exports.emotionsListByDates = function (req, res) {
  Emo.aggregate([
    { $match:
      { "date": { $gte: ISODate("2018-04-09T00:00:00.000Z"), $lt: ISODate("2018-04-13T00:00:00.000Z") } }
    },
    { "$group": {
      "_id": null,
      "averageHappiness": { "$avg": "$happiness" },
      "averageSadness": { "$avg": "$sadness" },
      "averageAnger": { "$avg": "$anger" },
      "averageSurprise": { "$avg": "$surprise" },
      "averageContempt": { "$avg": "$contempt" },
      "averageDisgust": { "$avg": "$disgust" },
      "averageFear": { "$avg": "$fear" }
    }}
  ], function (e, docs) {
    if (e) {
      res.send(e);
    } else {
      res.render('dashboard', {
        title: "ReacTrack - User Dashboard",
        pageHeader: {
          title: "User Dashboard",
          strapline: "View your emotional data here."
        },
        dateEmotions: docs
      });
    }
  });
};
This question is already getting pretty long, but I have another GET request pointed at the same URL that is not throwing any errors; the only difference is that I'm not matching the db records by date in that query. I can post the working code if need be.
Edit
After some experimenting, I can get each of the three routes working individually if I comment out the other two. It's when multiple routes handle the multiple requests that things break. For example, here are the routes at present, where ctrlDashboard.emotionsListByDates is the one working:
// Dashboard Routes
//router.get('/dashboard', ctrlDashboard.emotionsListGeneralised);
router.get('/dashboard', ctrlDashboard.emotionsListByDates);
//router.get('/dashboard', ctrlDashboard.emotionsListByTag);
If I comment out two routes and leave one running, and in the Jade file comment out the corresponding each emotion in emotions, each emotion in dateEmotions and each emotion in tagEmotions blocks, leaving only the correct one, then that route works. So the problem seems to arise when I fire multiple routes. Is this bad practice, or incorrect? Should all the queries be in one GET request if they serve the same URL?
Thanks.
Apologies, I'm new to routing and RESTful APIs, but after some research into the topic I now understand the fault.
I assumed that the URL used in routing had to be the URL of the page I wanted to populate, which it still kind of is, but I thought that to populate the dashboard page I had to use that exact route. I didn't realise I could serve the data from different URL routes and pull it from those URLs to populate the one page.
Fixed by adding /date and /tag to those routes and using AJAX to perform those requests and populate the main page.
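A minimal sketch of that split (assuming Express-style routing and jQuery for the AJAX calls; the handler names are placeholders):
// routes: give each summary its own endpoint
router.get('/dashboard', ctrlDashboard.dashboardPage);            // renders the page shell
router.get('/dashboard/date', ctrlDashboard.emotionsListByDates); // responds with res.json(docs)
router.get('/dashboard/tag', ctrlDashboard.emotionsListByTag);    // responds with res.json(docs)

// client: fetch each summary separately and populate the page
$.getJSON('/dashboard/date', function (dateEmotions) {
  // render the date-based summary here
});
$.getJSON('/dashboard/tag', function (tagEmotions) {
  // render the tag-based summary here
});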
Thanks all.
I have the same problem, but I'm using React + Redux + Fetch. So is it bad practice to dispatch more than one request at the same time, from the same page, to a specific URL?
I'd like to know what causes the problem. I've found some discussions suggesting it could be a mongoose issue.
My code:
MymongooObject.find(query_specifiers, function (err, data) {
  for (let i = 0; i < data.length; ++i) {
    ...
  }
});
Error:
TypeError: Cannot read property 'length' of undefined
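Whatever the root cause turns out to be, guarding the callback avoids the TypeError (a sketch built around the snippet above):
MymongooObject.find(query_specifiers, function (err, data) {
  if (err || !data) {
    // Handle or log the failure instead of assuming `data` is an array
    return console.error(err);
  }
  for (let i = 0; i < data.length; ++i) {
    // process data[i] here
  }
});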

Meteor Subscriptions Selecting the Entire Set?

I've defined a publication:
Meteor.publish('uninvited', function (courseId: string) {
  return Users.find({
    'profile.courses': {
      $ne: courseId
    }
  });
});
So, when a subscriber subscribes to this, I expect Users.find() to return only users that are not enrolled in that particular course. So, on my client, when I write:
this.uninvitedSub = MeteorObservable.subscribe("uninvited", this.courseId).subscribe(() => {
  this.uninvited = Users.find().zone();
});
I expect uninvited to contain only a subset of users; however, I'm getting the entire set of users regardless of whether or not they are enrolled in the particular course. I've made sure my data is correct and that there are users enrolled in the course I'm concerned with. I've also verified that this.courseId behaves as expected. Is there an error in my code, or should I look further into my data to see if there's anything wrong with it?
Note:
When I write this:
this.uninvitedSub = MeteorObservable.subscribe("uninvited", this.courseId).subscribe(() => {
  this.uninvited = Users.find({
    'profile.courses': {}
  }).zone();
});
With this, it works as expected! Why? The difference is that my query now contains 'profile.courses': {}.
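For reference, the client-side cache (Minimongo) contains whatever documents have been published to the client by any active subscription, so a client-side find() generally needs to repeat the selector rather than rely on the publication alone. A sketch of that, assuming the same courseId:
this.uninvitedSub = MeteorObservable.subscribe("uninvited", this.courseId).subscribe(() => {
  // Repeat the publication's selector on the client; an unfiltered Users.find()
  // returns everything currently in Minimongo, not just this subscription's documents.
  this.uninvited = Users.find({
    'profile.courses': { $ne: this.courseId }
  }).zone();
});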

Query sailsjs blueprint endpoints by id array using request

I'm using the request library to make calls from one Sails app to another, which exposes the default blueprint endpoints. It works fine when I query by non-id fields, but I need to run some queries by passing arrays of ids. The problem is that the moment you provide an id, only the first id is considered, effectively making this kind of query impossible.
Is there a way around this? I could switch to another attribute if all else fails, but I'd like to know whether there is a proper way around it.
Here's how I'm querying:
var idArr = []; // array of ids
var queryParams = { id: idArr };
var options = {
  // headers, method and url here
  json: queryParams
};
request(options, function (err, response, body) {
  if (err) return next(err);
  return next(null, body);
});
Thanks in advance.
Sails blueprint APIs allow you to use the same Waterline query language that you would otherwise use in code.
You can pass the array of ids directly in the GET call and receive the matching objects, as follows:
GET /city?where={"id":[1, 2]}
Refer here for more.
Have fun!
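Applied to the request-based call from the question, that might look roughly like this (a sketch; the base URL and the /city endpoint are placeholders):
var idArr = [1, 2, 3]; // array of ids to fetch
var options = {
  method: 'GET',
  url: baseUrl + '/city',
  // Blueprints accept a Waterline criteria object in the `where` query parameter
  qs: { where: JSON.stringify({ id: idArr }) },
  json: true
};
request(options, function (err, response, body) {
  if (err) return next(err);
  return next(null, body); // body is the array of matching records
});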
Alright, I switched to a hacky solution to get moving.
For all models that needed querying by id arrays, I added a secondary attribute to the model; let's call it code. Then, in afterCreate(), I set code equal to the id. This incurs an additional database call, but that's fine since it happens just once, when the object is created.
Here's the code.
module.exports = {
  attributes: {
    code: {
      type: 'string' // the secondary attribute
    },
    // other attributes
  },
  afterCreate: function (newObj, next) {
    Model.update({ id: newObj.id }, { code: newObj.id }, next);
  }
}
Note that newObj isn't a Model instance, contrary to what I had been led to believe, so we cannot simply set its code and call newObj.save().
After this, substituting code for id in the queries that take id arrays makes them work as expected!
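For example, such a query then becomes something like this (a sketch; Model and idArr are placeholders):
// Query by the mirrored attribute instead of id
Model.find({ code: idArr }).exec(function (err, records) {
  if (err) return next(err);
  return next(null, records);
});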

KendoUI Autocomplete paging issue

I have a textbox bound to a KendoUI AutoComplete widget. The JS code looks like this:
var dataSourceImeSearch = {
  type: "json",
  transport: {
    read: {
      url: "#Url.Action("ImeSearch")",
      contentType: "application/json; charset=utf-8",
      type: "POST"
    },
    parameterMap: function (data, type) {
      if (type == "read") {
        if (data.filter) {
          data = $.extend({ sort: null, filter: data.filter.filters[0] }, data);
        } else {
          data = $.extend({ sort: null, filter: null }, data);
        }
        return JSON.stringify(data);
      } else {
        return JSON.stringify({ model: data });
      }
    }
  },
  batch: false,
  pageSize: 10,
  serverPaging: true,
  serverFiltering: true,
  serverSorting: true,
  schema: {
    errors: "Errors",
    data: "Data",
    total: "TotalRecordCount",
    model: myModel
  },
  error: function (e) {
    if (e.errors) {
      alert(e.errors);
    }
  }
};
$("#Ime").kendoAutoComplete({
dataTextField: "PunoIme",
filter: "contains",
minLength: 3,
dataSource: dataSourceImeSearch
});
I'm experiencing something weird here. The autocomplete works in the sense that when I type the third character, it goes to the server, gets JSON data back and shows the first ten results. The thing is that this textbox searches large datasets, so for some queries of, say, four characters the result set can contain more than 1,000 items. For some reason the widget never figures out that there are more than 10 results; when I scroll down in the dropdown that appears, it does not fire a request for a second page, and so on. As you can see, serverPaging on the data source is set to true, but this doesn't help.
Any help is appreciated. Thank you.
I found out after posting this question that the AutoComplete widget does not allow paging by design. This was explained in the KendoUI forums by a Kendo employee, who called an autocomplete that needs paging an example of poor UX. I would argue with that: in my opinion, the first use case of an autocomplete is searching for a person, which is exactly what I'm doing here. The problem is that if you search by a person's second name you can end up with hundreds of results after the first 3 or 4 characters, and you really do need paging then. If the Kendo people consider this a case of bad UX, it should be clearly stated in the AutoComplete documentation; I could not find it mentioned in a single place, and stating it would save people from wasting a whole day trying to figure out what went wrong.
In my opinion, one of the worst use-case examples among all the demos on the KendoUI demo pages is the Shared DataSource example: if you type 'ch' into the autocomplete textbox at the top, you end up with 10 results in the autocomplete but 14 in the data grid below. It really strikes me that nobody at Kendo sees this behaviour as odd.
So my answer to my own question would be the following: do NOT use the AutoComplete except in some really, really simple use case (I can't think of a single one that would make sense). I ended up building a whole search form with five textboxes and a search button, where I had hoped to get away with two textboxes (one with autocomplete) and a search button.
You have set pageSize: 10, which means that only 10 records are returned to the AutoComplete and its dataSource contains only 10 elements. I'm afraid that automatic paging is not implemented by default.
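If paging really isn't available, one pragmatic workaround (a sketch, not an official recommendation) is to return a larger single page, since the widget will never request page two, and to raise minLength so the typed prefix narrows the result set before it is fetched:
var dataSourceImeSearch = {
  // ... same transport, schema and error handler as above ...
  pageSize: 100,          // return more rows in the one request the widget makes
  serverPaging: true,
  serverFiltering: true,
  serverSorting: true
};

$("#Ime").kendoAutoComplete({
  dataTextField: "PunoIme",
  filter: "contains",
  minLength: 4,           // require one more character before querying
  dataSource: dataSourceImeSearch
});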