how to best get data from react-query cache? - react-query

So the react-query cache is a key-value store, but isn't that somewhat limiting? I thought it was always best to keep network requests to a minimum, since they take orders of magnitude longer than reading from the cache. That said, suppose I have a route /todos as the page for all the todos and a sub-route /todos/FE234F32 as the page for one specific todo. The query key for all the todos would be ["todos"] and the key for the specific one would be ["todos", "FE234F32"]. If I visit the page with all the todos first, the data for every todo is in the cache, but navigating to the page for the specific todo still triggers a network request, because there is no value stored under ["todos", "FE234F32"].
I understand I could call getQueryData("todos") inside the query function for ["todos", "FE234F32"] to check whether the data is already in the cache, but that kind of conditional solution gets ugly as the virtual hierarchy inside the cache grows. In general, most state solutions seem hierarchical or object-based, while the key-value nature of react-query leads either to fetching too much data and selecting down, or to fetching data that is already in the cache.
I could be completely off base here but I would love some insight/tips!

There are quite a few ways to do this with react-query. If the data structure of your list query and your detail query really are the same, you can push items down into the detail queries once you fetch the list, which pre-populates the detail cache:
useQuery('posts', fn, {
  onSuccess: data =>
    data.forEach(post =>
      queryClient.setQueryData(['posts', post.id], post)
    )
})
Together with the right staleTime setting, you can avoid additional fetches.
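As a rough mental model of why staleTime matters here (this is a sketch, not react-query's actual implementation): a query counts as fresh while its data is younger than staleTime, and fresh queries are served straight from the cache without a background refetch on mount.

```javascript
// Sketch of the freshness check: data younger than staleTime is served
// from the cache without triggering a refetch on mount.
// dataUpdatedAt is the timestamp of the last successful fetch.
function isFresh(dataUpdatedAt, staleTime, now = Date.now()) {
  return now - dataUpdatedAt < staleTime;
}
```

With staleTime at its default of 0, data seeded via setQueryData is immediately stale, so mounting the detail query would still refetch in the background.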
You can also do the opposite and pull data from the list cache when the detail query mounts, via initialData:
useQuery(['posts', id], fn, {
  initialData: () =>
    queryClient.getQueryData('posts')?.find(post => post.id === id)
})
Finally, if your data really is always the same, you can use the select option to derive the detail data from the list query instead of doing an extra fetch. The drawback is that the whole list is refetched whenever you need data for one detail and nothing is in the cache yet:
const usePosts = (select) => useQuery(['posts'], fetchPosts, { select })
const usePost = (id) => usePosts(data => data.find(post => post.id === id))

Related

Which is the better way of relating these database tables?

I have a table Users and a table Matters. Ordinarily only admins can create a matter, so I set up a one-to-many relationship between users and matters (User.hasMany(Matter) and Matter.belongsTo(User)).
Now on the frontend, a Matter is supposed to have a multi-select field called assignees, where users from the User table can be selected.
My current approach is to make assignees a column on Matter holding an array of the user emails selected on the frontend. The frontend developer thinks I should store an array of user ids instead, but I suspect that won't be efficient: when getting or updating all matters, I would need an extra query each time to resolve the assignees from the array of ids stored in the column (and I am not entirely sure how to go about that).
Another option is a UserMatters join table, but I don't think it will be performance-friendly to populate two tables (Matter and UserMatters) on creation of a matter, while updating and getting all matters will involve writing a lot of code.
My question is: is there a better way to go about this, or should I just stick with populating the assignees field with user emails, since that looks like the better approach as far as I can see?
N.B.: I am using Sequelize (Postgres).
So what I did, instead of creating a through/join table, was have the frontend send an array of integers containing the IDs of the assignees. Since the assignees are also users of the app, when I get a particular matter I loop through the IDs in the assignees column (an array), fetch the user details from the User table, reassign the result to that column, and return the resource.
try {
  // get the resource
  const resource = await getResource(id);
  if (resource) {
    // the assignees column holds an array of IDs; convert them to numbers (in case they are strings)
    const ids = resource.assignees.map(id => Number(id));
    // fetch all users with the corresponding IDs and wait for every lookup to complete
    const users = await Promise.all(ids.map(id => getUserById(id)));
    // keep only plain objects rather than Sequelize instances
    resource.assignees = users.map(el => el.get({ raw: true }));
    return res.status(200).json({
      status: 200,
      resource
    });
  } else {
    return res.status(404).json({
      status: 404,
      message: 'Resource not found'
    });
  }
} catch (error) {
  return res.status(500).json({
    status: 500,
    err: error.message
  });
}
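For what it's worth, the join-table option mentioned in the question is exactly what Sequelize's belongsToMany association manages for you, so it need not mean hand-writing inserts into two tables. A hedged sketch (the alias and attribute names here are assumptions, not taken from the actual models):

```javascript
// Sketch of the UserMatters join-table option: belongsToMany creates
// and maintains the through table automatically.
Matter.belongsToMany(User, { through: 'UserMatters', as: 'assignees' });
User.belongsToMany(Matter, { through: 'UserMatters', as: 'matters' });

// Getting all matters with their assignees is then a single eager-loaded query:
const matters = await Matter.findAll({
  include: [{ model: User, as: 'assignees', attributes: ['id', 'email'] }],
});

// Setting assignees on creation is one generated setter call, e.g.:
// await matter.setAssignees([3, 7, 42]);
```

This trades a slightly busier creation step for simpler reads, since the per-id lookup loop disappears.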

How to improve performance on nested graphql connections when using pagination

I'm trying to implement a basic social network project. It has Posts, Comments and Likes, like any other.
A post can have many comments
A post can have many likes
A post can have one author
I have a /posts route on the client application. It lists the posts with pagination and shows their title, image, authorName, commentCount and likesCount.
The graphql query is like this;
query {
posts(first: 10, after: "123456") {
totalCount
edges {
node {
id
title
imageUrl
author {
id
username
}
comments {
totalCount
}
likes {
totalCount
}
}
}
}
}
I'm using apollo-server, TypeORM, PostgreSQL and dataloader. I use dataloader to get author of each post. I simply batch the requested authorIds with dataloader, get authors from PostgreSQL with a where user.id in authorIds query, map the query result to the each authorId. You know, the most basic type of usage of dataloader.
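The batch function described above boils down to one mapping step that is easy to get wrong: the rows coming back from a `WHERE user.id IN (...)` query arrive in arbitrary order (and may have gaps), while dataloader requires one result per key, in exactly the key order. A sketch of that step (the row shape and helper names are assumptions):

```javascript
// Map rows fetched with `WHERE id IN (keys)` back onto the key order,
// as dataloader's batch function requires (one entry per key, same order,
// null where no row matched).
function mapRowsToKeys(keys, rows) {
  const byId = new Map(rows.map((row) => [row.id, row]));
  return keys.map((key) => byId.get(key) ?? null);
}

// The author loader would then look something like:
// const authorLoader = new DataLoader(async (authorIds) => {
//   const rows = await userRepository.findByIds(authorIds); // assumed helper
//   return mapRowsToKeys(authorIds, rows);
// });
```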
But when I try to query the comments or likes connections under each post, I get stuck. I could use the same technique with postId for them if there were no pagination. But now I have to include filter parameters for the pagination, and there may be other filter parameters for a where condition as well.
I've found the cacheKeyFn option of dataloader. I simply create a string key for the filter object passed to the dataloader, and it deduplicates them, passing only the unique ones to the batchFn. But I can't build a SQL query with TypeORM that fetches the results for each first, after and orderBy argument separately and maps the results back to the function which called the dataloader.
I've searched the spectrum.chat source code, and I think they don't allow users to query nested connections. I also tried the GitHub GraphQL Explorer, and it does let you query nested connections.
Is there any recommended way to achieve this? I understood how to pass an object to dataloader and batch them using cacheKeyFn, but I can't figure out how to get the results from PostgreSQL in one query and map the results to return from the loader.
Thanks!
So, if you restrict things a bit, this is doable. The restriction is to only allow batched connections on the first page of results, i.e. all the connections you're fetching in parallel are requested with the same parameters. This is a reasonable constraint because it lets you do things like get the first 10 feed items and the first 3 comments for each of them, which represents a fairly typical use case. Trying to support independent pagination within a single query is unlikely to fulfil any real-world use case for a UI, so it's likely an over-optimisation. With this in mind, you can support the "for each parent, get the first N children" use case in PostgreSQL using window functions.
It's a bit fiddly, but there are answers floating around which will get you in the right direction: Grouped LIMIT in PostgreSQL: show the first N rows for each group?
So use dataloader as you are with cacheKeyFn, and let your loader function recognise whether it can perform the optimisation (e.g. after is null and all other arguments are the same). If it can, use a windowing query; otherwise fall back to unoptimised queries in parallel as you would normally.
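Concretely, the optimised branch can fetch the first N comments for every post in one round trip using row_number(), then group the flat rows back per post. A sketch (the table and column names here are assumptions, not taken from the actual schema):

```javascript
// One round trip for "first N comments of each post" via a PostgreSQL
// window function; $1 is the array of post ids, $2 is the page size.
const FIRST_N_COMMENTS = `
  SELECT * FROM (
    SELECT c.*,
           row_number() OVER (PARTITION BY c."postId" ORDER BY c."createdAt") AS rn
    FROM comment c
    WHERE c."postId" = ANY($1)
  ) ranked
  WHERE rn <= $2;
`;

// Group the flat rows back into one array per postId, preserving key
// order, which is the shape the dataloader batch function must return.
function groupByPost(postIds, rows) {
  const groups = new Map(postIds.map((id) => [id, []]));
  for (const row of rows) groups.get(row.postId)?.push(row);
  return postIds.map((id) => groups.get(id));
}
```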

How to save one value of Parse object without overwriting entire object?

I have two users accessing the same object. If userA saves without first fetching the object to refresh their version, data that userB has already successfully saved would be overwritten. Is there any way (perhaps Cloud Code?) to access and update one, and only one, data value of a PFObject?
I was thinking about pushing the save out to the cloud, refreshing the object once it gets there, updating the value in the cloud, and then saving it back. However, that's a pain and still not without its faults.
This seems easy enough, but to me it was more difficult than it should have been. Intuitively, you should be able to filter out the fields you don't want in beforeSave. Indeed, this was the advice given in several posts on Parse.com. In my experience, though, it would actually treat the filtering as deletions.
My goal was a bit different: I was trying to filter out a few fields rather than save only a few fields. Translating to your context, you could try querying the existing matching record and overriding the new object. You can't abort via response.failure(), and I don't know what would happen if you immediately save the existing record with the field of interest and null out the request.object property; you could experiment with that on your own:
Parse.Cloud.beforeSave("Foo", function(request, response) {
  // check for the master key if the client is not an end user etc. (an option you may not need)
  if (!request.master) {
    var query = new Parse.Query("Foo");
    query.get(request.object.id).then(function(existing) {
      existing.set("some_field", request.object.get("some_field"));
      request.object = existing; // haven't tried this; otherwise, set all fields from existing onto the new object
      response.success();
    }, function(error) {
      response.success();
    });
  } else {
    response.success();
  }
});

Need advice on hierarchical data pagination

I have a pretty simple database structure which, with the exception of some self-referencing and intermediate relations, reduces to a products-to-category design.
The final data structure I retrieve, using ResultClass::HashRefInflator after some conversions, looks like this:
my $data = $self->db->resultset('Category')->with_translation($lang)->with_categories->with_products->display_flattened;
[
[0] {
parent_name "Parent Category",
id 3,
name "First Child category",
parent_id 1,
position 1,
products [
[0] {
name "Product One",
},
...
],
}
...
]
Things were going well until, in an attempt to reduce the initial products index page size, I decided to implement an infinite scroll feature there. So in general, it's a pagination issue. The thing I'm having a hard time with is that I can apply paging only to the Category or Product resultset, not to the whole hierarchy, to retrieve the slice of data I want for the next screen.
For instance, if I want 20 items per screen (an item can be either a Category or a Product) and I apply ->page(1) to Schema::ResultSet::Category, the result will contain 20 categories with all of their products, instead of the 1st category with its 19 related products, and so on.
The only option that comes to mind at the moment is storing the whole data structure as a one-dimensional array in some kind of in-memory storage like Redis or memcached and slicing it as needed, but I know that's wrong.
I don't totally know what you want here, because it sounds like you want the
category to count as part of the pagination; that seems like an odd choice to me.
If you can avoid counting the category, a better option might be, instead of
getting the first page of categories, to get the first page of Products and their
related categories. That way you have a bounded set of data instead of the
possible explosion you described above.
You'd need to make some changes (by at least adding methods) to your resultsets,
but that shouldn't be too hard. Here's what you could do instead:
my $data = $self->db->resultset('Product')
->with_translation($lang)
->with_categories
->with_products
->search(undef, { page => 1 })
->display_flattened;
So given this, you'll get a page of products (I think that's 25) and then their
related categories.
As a side note, it's a little odd that you call with_categories on a Category
resultset. I might be misunderstanding, but at the very least that's a
confusing thing to do.
Did you try:
my $data = $self->db->resultset('Category')->with_translation($lang)->with_categories->with_products->search_rs(undef, { page => 1 })->display_flattened;

Meteor and MongoDB dropdown population and integrity

Hopefully I can describe this correctly. I come from the RDBMS world and I'm building an inventory-type application with Meteor. Meteor and MongoDB may not be the best option for this application, but hopefully it can be done, and this seems like a situation that many converts will run into.
I'm trying to forget many of the things I know about relational databases and referential integrity so I can get my head wrapped around Mongodb but I'm hung up on this issue and how I would appropriately find the data with Meteor.
The inventory application will have a number of drop downs but I'll use an example to better explain. Let's say I wanted to track an item so I'll want the Name, Qty on Hand, Manufacturer, and Location. Much more than that but I'm keeping it simple.
The Name and Qty on Hand are easy, since they are entered by the user, but the Manufacturer and Location should be chosen from a data-driven drop-down list (I'm assuming a Collection of sorts, with a new entry added to the list if it is a new Manufacturer or Location). Odds are that I will use the Autocomplete package as well, but the point is the same. I certainly wouldn't want the end user to misspell the Manufacturer name and thereby end up with documents that are supposed to share a Manufacturer but don't, due to a typo. So I need some way to enforce the integrity of the data stored for Manufacturer and Location.
The reason is because when the user is viewing all inventory items later, they will have the option of filtering the data. They might want to filter the inventory items by Manufacturer. Or by Location. Or by both.
In my relational way of thinking this would just be three tables: INVENTORY, MANUFACTURER, and LOCATION. In the INVENTORY table I would store the IDs of the related rows from the respective tables.
I'm trying to figure out how to store this data with Mongodb and, equally important, how to then find these Manufacturer and Location items to populate the drop down in the first place.
I found the following article, which helps me understand some things but doesn't quite connect the dots in my head.
Thanks!
referential data
[EDIT]
Still working on this, of course, but the best I've come up with is the normalized way, much like in the article above. Something like this:
inventory
{
  name: "Pen",
  manufacturer: {id: "25643"},
  location: {id: "95789"}
}
manufacturer
{
  name: "BIC",
  id: "25643"
}
location
{
  name: "East Warehouse",
  id: "95789"
}
Seems like this (in a simpler form) must be an extremely common need for many, if not most, applications, so I want to make sure I'm approaching it correctly. Even if this example code were correct, should I use an id field with generated numbers like that, or should I just use the built-in _id field?
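On the filtering part of the question: with references normalized this way, filtering by Manufacturer, Location, or both is just an equality match on the stored ids (in Meteor that would be a Collection.find selector such as { 'manufacturer.id': manufacturerId }). A pure-JS sketch of the same logic, with the document shape taken from the example above:

```javascript
// Filter inventory items by the referenced manufacturer/location ids.
// Passing only one of the ids filters by that dimension alone;
// passing both filters by both.
function filterInventory(items, { manufacturerId, locationId } = {}) {
  return items.filter((item) =>
    (manufacturerId == null || item.manufacturer.id === manufacturerId) &&
    (locationId == null || item.location.id === locationId)
  );
}
```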
I've come from a similar background, so I don't know if I'm doing it correctly in my application, but I have gone for a similar approach to yours. My app is an e-learning app, so an Organisation has many Courses.
So my schema looks similar to yours, except that I have an array of objects that look like {course_id: <id>}.
I then registered a helper that takes the data from the organisation and adds in the additional data I need about the courses.
// Gets Organisation Courses - in your case this could get the locations/manufacturers
UI.registerHelper('organisationCourses', function() {
  var user = Meteor.user();
  if (user) {
    var organisation = Organisations.findOne({_id: user.profile.organisation._id});
    return organisation.courses.courses;
  } else {
    return false;
  }
});
// This takes the course data and, for each course_id value, finds and adds all the course data to the object
UI.registerHelper('courseData', function() {
  var courseContent = this;
  var course = Courses.findOne({'_id': courseContent.course_id});
  return _.extend(courseContent, _.omit(course, '_id'));
});
Then from my page all I have to call is:
{{#each organisationCourses}}
  {{#with courseData}}
    {{> admListCoursesItem}}
  {{/with}}
{{/each}}
If I remember rightly I picked up this approach from an EventedMind How-to video.