Relational data in react-query

I have a REST api which has this endpoint for getting an assignment -
'/classes/<str:code>/assignments/<int:assignment_id>'
I have created a custom hook for querying an Assignment:
const getAssignment = async ({ queryKey }) => {
  const [, code, assignmentId] = queryKey
  const { data } = await api.get(`/classes/${code}/assignments/${assignmentId}`)
  return data
}

export default function useAssignment(code, assignmentId) {
  return useQuery(['assignment', code, assignmentId], getAssignment)
}
This works as expected, but is this the right way to deal with relational data in react-query?

The code looks right to me. react-query doesn't have a normalized cache, just a document cache, so if you request assignments under a different query key, e.g. the assignments for a user, they will be cached separately.
The straightforward approach to tackle this is to structure your query keys in a way that lets you invalidate everything related at the same time.
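For example, a minimal sketch of that idea (the ['assignments', code] key prefix and the invalidation helper below are assumptions, not part of the original question): keys that share a common prefix can all be invalidated with a single call.

import { useQuery, useQueryClient } from 'react-query'

// hypothetical layout: every assignment-related key starts with ['assignments', code]
export function useAssignment(code, assignmentId) {
  return useQuery(['assignments', code, assignmentId], getAssignment)
}

export function useInvalidateAssignments(code) {
  const queryClient = useQueryClient()
  // invalidates every query whose key starts with ['assignments', code],
  // i.e. single-assignment queries as well as any assignment-list queries
  return () => queryClient.invalidateQueries(['assignments', code])
}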

get route only with specified parameter

I am new to MongoDB and CRUD APIs.
I have created my first database and inserted some data. I can do get, post and delete requests.
Now I want to request a 'get' by adding a parameter, so I do the following:
router.get('/:story_name', async function (req, res, next) {
  const selectedStory = await loadStoryToRead()
  res.send(await selectedStory.find({}).toArray())
})
Say that story_name is S1C1; I can do http://localhost:3000/api/whatever/s1c1 to get the data.
I would have expected to retrieve the data ONLY by using the specified parameter; however, I can use the ID, the date, or any other value found in the JSON document to get the data.
For example, I can do
http://localhost:3000/api/whatever/5d692b6b21d5fdac2... // the ID
or
http://localhost:3000/api/whatever/2019-08-30T13:58:03.035Z ... // the created_at date
and obtain the same result.
Why is that?
How can I make sure that if I use router.get('/:story_name' ... I can retrieve the data only when I use the 'story_name' parameter?
Thanks!
* UPDATE *
My loadStoryToRead() looks like this:
async function loadStoryToRead () {
  const client = await mongodb.MongoClient.connect(
    'mongodb+srv://...', {
      useNewUrlParser: true
    })
  return client.db('read').collection('selectedStory')
}
I will try to reformulate my question.
I want to ensure that the data is retrieved only by adding the 'story_name' parameter to the URL, and not by using any other value found in the document.
The reading I have done suggested adding the parameter to the GET route, but when I do, it doesn't matter what value I enter; I can still retrieve the data.
The delete request, however, is very specific. If I use router.delete('/:id'... the only way to delete the document is by using the ID parameter.
I would like to obtain the same behaviour with a GET request, not by using 'id' but by using 'story_name'.
You can use MongoDB's regular expression capabilities for pattern-matching strings in queries.
The syntax is:
db.<collection>.find({<fieldName>: /<string>/})
For example, you can use:
var re = new RegExp(req.params.story_name, "g");
db.users.find({$or: [{"name": re}, {"_id": re}, {..other fields}]});
You can use $and and $or according to your requirements.
To read more, follow the link:
https://docs.mongodb.com/manual/reference/operator/query/regex/
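Putting that into the route handler, a minimal sketch could look like this (assuming the documents store the name in a story_name field; the field name and the case-insensitive anchored regex are assumptions):

router.get('/:story_name', async function (req, res) {
  const selectedStory = await loadStoryToRead()
  // match the route parameter against the story_name field only,
  // instead of returning the whole collection with find({})
  const re = new RegExp('^' + req.params.story_name + '$', 'i')
  const stories = await selectedStory.find({ story_name: re }).toArray()
  res.send(stories)
})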

Mongoose Error - Mongoose models with same model name

I am working on a Node.js application and I am using the mongoose npm package.
Sample Code
I am using the following method to create dynamic collections, and these collections sometimes fail to persist the data in the database:
const Mongoose = require("mongoose");

const Schema = new Mongoose.Schema({
  // schema goes here
});

module.exports = function (suffix) {
  if (!suffix || typeof suffix !== "string" || !suffix.trim()) {
    throw Error("Invalid suffix provided!");
  }
  return Mongoose.model("Model", Schema, `collection_${suffix}`);
};
I am using this exported module to create dynamic collections based on unique ids passed as the suffix parameter. Something like this (skipping unnecessary code):
const saveData = require("./data-service");
const createModel = require("./db-schema");

// test 1
it("should save data1", function (done) {
  const data1 = [];
  const response1 = saveData(request1); // here response1.id is "cjmt8litu0000ktvamfipm9qn"
  const dbModel1 = createModel(response1.id);
  dbModel1.insertMany(data1)
    .then(dbResponse1 => {
      // assert for count
      done();
    });
});

// test 2
it("should save data2", function (done) {
  const data2 = [];
  const response2 = saveData(request2); // here response2.id is "cjmt8lm380006ktvafhesadmo"
  const dbModel2 = createModel(response2.id);
  dbModel2.insertMany(data2)
    .then(dbResponse2 => {
      // assert for count
      done();
    });
});
Problem
The issue is, test 2 fails! The insertMany call results in 0 records, failing the count assert.
If we swap the order of the tests, test 1 will fail.
If I run the two tests separately, both will pass.
If there are n tests, only the first test will pass and the remaining ones will fail.
Findings
I suspected the Mongoose model creation step to be faulty, as it uses the same model name (viz. "Model") while creating multiple model instances.
I changed it to the following and the tests worked perfectly fine in all scenarios:
return Mongoose.model(`Model_${suffix}`, Schema, `collection_${suffix}`);
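For reference, a minimal sketch of the factory with a per-suffix model name; the Mongoose.models lookup is an assumption added so that calling the factory twice with the same suffix reuses the already compiled model instead of throwing an OverwriteModelError:

const Mongoose = require("mongoose");

const Schema = new Mongoose.Schema({
  // schema goes here
});

module.exports = function (suffix) {
  if (!suffix || typeof suffix !== "string" || !suffix.trim()) {
    throw Error("Invalid suffix provided!");
  }
  const name = `Model_${suffix}`;
  // reuse the model if this suffix has been seen before,
  // otherwise compile it against its own collection
  return Mongoose.models[name] || Mongoose.model(name, Schema, `collection_${suffix}`);
};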
Questions
This leaves me with the following questions:
Am I following correct coding conventions while creating dynamic collections?
Is the suspected code the actual cause of this issue (should the model name also be unique)?
If yes, why is it failing? (I followed the mongoose docs, but they don't provide any information regarding uniqueness of the model name argument.)
Thanks.
I think you are calling the insertMany method on dbModel1, whereas your variable is declared as dbModel2.
Change your test 2 from:
dbModel1.insertMany(data2)
  .then(dbResponse1 => {
    // assert for count
    done()
  });
To:
dbModel2.insertMany(data2)
  .then(dbResponse1 => {
    // assert for count
    done()
  });

Disable caching in Angular Firestore queries

I am running a Firestore query to get data, but the query first returns data from queries cached earlier and then, in a second pass, returns the additional data (which was not queried earlier) from the server. Is there a way I can disable caching for Firestore queries so that the request goes to the database every time I query something?
this.parts$ = this.db.collection<OrderBom>('OrderBom', ref => {
  let query: firebase.firestore.Query = ref;
  query = query.where('orderPartLC', '==', this.searchValue.toLowerCase());
  return query;
}).valueChanges();
Change that .valueChanges() to a .snapshotChanges(), then you can apply a filter. See the example below.
I don't like changing default behaviour (default configurations). I saw that this is the desired behaviour and that the good practice is to show the data to the user as soon as possible, even if the screen is refreshed twice.
I don't think it is bad practice to filter on fromCache === false when we don't have a choice. (In my case I make more requests after I receive this first one, so due to promises and other async 'tasks' the cache/server order is completely lost.)
See this closed issue
getChats(user: User) {
  return this.afs.collection<Chat>("chats",
    ref => ref.where('participantsId', 'array-contains', user.id))
    .snapshotChanges()
    // snapshotChanges() on a collection emits DocumentChangeAction[], so check every doc's metadata
    .pipe(filter(actions => actions.every(a => a.payload.doc.metadata.fromCache === false)))
    .pipe(map(actions => actions /* probably want to parse your objects here */));
}
If you are using AngularFire2 you can try the following:
I read on the Internet that you can disable offline persistence - which caches your results - by not calling enablePersistence() on AngularFirestoreModule.
I have done that and still had no success, but try it first. What I managed to do to get rid of cached results was to use the get() method from the DocumentReference class. This method receives a GetOptions parameter, with which you can force the data to come from the server. Usage example:
// fireStore is an instance of AngularFireStore injected by AngularFire2
let collection = fireStore.collection<any>("my-collection-name");
let options: GetOptions = { source: "server" };
collection.ref.get(options).then(results => {
  // results contains an array property called docs with the collection's documents.
});
Persistence and caching should be disabled in angular/fire by default, but it is not, and there is no way to turn it off. As such, #BorisD's answer is correct, but he hasn't explained it too well. Here's a full example of converting valueChanges to snapshotChanges.
constructor(private afs: AngularFirestore) {}

private getSequences(collection: string): Observable<IPaddedSequence[]> {
  return this.afs.collection<IFirestoreVideo>('videos', ref => {
    return ref
      .where('flowPlayerProcessed', '==', true)
      .orderBy('sequence', 'asc')
  }).valueChanges().pipe(
    map((results: IFirestoreVideo[]) => results.map((result: IFirestoreVideo) => ({ videoId: result.id, sequence: result.sequence })))
  )
}
Converting the above to use snapshotChanges to filter out stuff from cache:
constructor(private afs: AngularFirestore) {}

private getSequences(collection: string): Observable<IPaddedSequence[]> {
  return this.afs.collection<IFirestoreVideo>('videos', ref => {
    return ref
      .where('flowPlayerProcessed', '==', true)
      .orderBy('sequence', 'asc')
  }).snapshotChanges().pipe(
    filter((actions: DocumentChangeAction<any>[], idx: number) => idx > 0 || actions.every(a => a.payload.doc.metadata.fromCache === false)),
    map((actions: DocumentChangeAction<any>[]) => actions.map(a => ({ id: a.payload.doc.id, ...a.payload.doc.data() }))),
    map((results: IFirestoreVideo[]) => results.map((result: IFirestoreVideo) => ({ videoId: result.id, sequence: result.sequence })))
  )
}
The only differences are that valueChanges changes to snapshotChanges, and the filter(DocumentChangeAction) and map(DocumentChangeAction) lines are added at the top of the snapshotChanges pipe; everything else remains unchanged.
This approach is discussed here

Meteor subscription to get only new documents

Just wondering if there is a way to set up my Meteor subscription so that it loads only new documents from a Mongo collection, avoiding syncing deletes and updates (since they are not relevant to the data shown to the user).
Why do I need that? It seems that any time I do a Meteor.subscribe after an offline period, the WHOLE collection is sent again from server to client, while I only need the new records.
I think this happens to keep local/remote database integrity, but since my app is planned to work online/offline (I'm also using GroundDB), it seems to me it will be very inefficient in terms of data usage.
Thanks in advance.
You can create a publication which sends only new documents, like this:
Meteor.publish('newDocumentsOnly', function () {
  // must be a regular function (not an arrow function) so that `this` is the publication context
  let initializing = true;
  const handle = Collection.find().observeChanges({
    added: (id, fields) => {
      // documents that already exist when the observer starts are skipped
      if (initializing) return;
      this.added('Collection', id, fields);
    }
  });
  initializing = false;
  this.ready();
  this.onStop(() => {
    handle.stop();
  });
});
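On the client this is consumed like any other publication; a minimal sketch (the Tracker.autorun usage is just an assumption about how the data is displayed):

// client side: only documents added after the subscription starts will arrive
Meteor.subscribe('newDocumentsOnly');

Tracker.autorun(() => {
  // the same client-side collection the publication calls this.added() on
  const freshDocs = Collection.find().fetch();
  console.log('new documents received so far:', freshDocs.length);
});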

How to return and update a table in bookshelf knex

I am using PostgreSQL, Knex, and Bookshelf to make queries to update my users table. I would like to find all users who didn't sign in during a specific time and then update their numAbsences and numTardies fields.
However, it appears that when running a raw SQL query using bookshelf.knex, the result I get for users is an array of plain objects rather than an array of Bookshelf model objects, because I can't save the objects directly to the database when I try to use .save(); I get the exception user.save is not a function.
Does anyone know how I can update the values in the database for the users? I've seen the update function, but I also need to return the users in absentUsers, so I currently select them.
// field indicates whether the student was late or absent
var absentUsers = function (field) {
  // returns all users who did not sign in during a specific time
  if (ongoingClasses) {
    return bookshelf.knex('users')
      .join('signed_in', 'signed_in.studentId', '=', 'users.id')
      .where('signed_in.signedIn', false)
      .select()
      .then(function (users) {
        markAbsent(users, field);
        return users;
      });
  }
}

var markAbsent = function (users, field) {
  users.forEach(function (user) {
    user[field]++;
    user.save();
  })
}
I've solved my problem by using another SQL query in Knex. It seemed there was no way to run a raw SQL query and then use standard Bookshelf methods, since the objects returned were not Bookshelf wrapper objects.
var absentUsers = function (field) {
  // returns all users who did not sign in during a specific time
  if (ongoingClasses) {
    return bookshelf.knex('users')
      .join('signed_in', 'signed_in.studentId', '=', 'users.id')
      .where('signed_in.signedIn', false)
      .select()
      .then(function (users) {
        markAbsent(users, field);
      });
  }
}

var markAbsent = function (users, field) {
  users.forEach(function (user) {
    var updatedUser = {};
    updatedUser[field] = user[field] + 1;
    bookshelf.knex('users')
      .where('users.id', user.id)
      .update(updatedUser).then(function () {
      });
  });
}
With your code bookshelf.knex('users') you leave the "Bookshelf world" and are in the "raw Knex world". Knex alone doesn't know about your Bookshelf wrapper objects.
You may use the Bookshelf query method to get the best of both worlds.
Assuming your model class is User, your example would look approximately like this:
User.query(function (qb) {
  qb.join('signed_in', 'signed_in.studentId', 'users.id')
    .where('signed_in.signedIn', false);
})
  .fetchAll()
  .then(function (bookshelfUserObjects) {
    /* mark absent */
    return bookshelfUserObjects.invokeThen('save'); // <1>
  });
<1> invokeThen: calls the named model method on each instance in the collection.
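Filling in the /* mark absent */ placeholder, a sketch of the increment-and-save step might look like this (the get/set increment mirrors the original markAbsent and is an assumption, not part of the answer):

User.query(function (qb) {
  qb.join('signed_in', 'signed_in.studentId', 'users.id')
    .where('signed_in.signedIn', false);
})
  .fetchAll()
  .then(function (users) {
    // bump the absence/tardy counter on each Bookshelf model...
    users.each(function (user) {
      user.set(field, (user.get(field) || 0) + 1);
    });
    // ...then persist every model in the collection
    return users.invokeThen('save');
  });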