I've been using Algolia with PHP, and you can update/save objects using the addObjects or saveObjects methods on the client.
My problem is that after you submit a batch of objects to be saved, and they are updated in the Algolia database, a search query for a keyword that used to be inside an attribute that has since been changed still returns that object, despite the data being updated. After clicking edit I can see the data with the changes on the Browse page, but for whatever reason the search still returns the old results.
I tried closing/restarting the browser, because on the JavaScript side the cache is supposed to be in-memory by default, but that doesn't seem to be the case. I tried using "refresh index" in Algolia's admin panel as well as calling clearCache on the JavaScript library, and nothing is cleared: same cached results based on old attribute data. I've even deleted the full index and resubmitted all the objects, and still got the same results.
So how exactly do you clear the search query cache after object attributes have been successfully updated in the Algolia database?
Turns out this isn't a coding or API issue. If you happen to have a synonym set in Algolia's admin panel, then when you search for a term like "foo" and the panel defines "bar" as a synonym for "foo", any object that has "bar" in any attribute value will also be listed for the term "foo": Algolia expands the query with every matching synonym before running it. You'll get unexpected results if you aren't careful with this.
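For anyone who lands on the same confusion: a quick way to confirm that synonym expansion (and not caching) explains the extra hits is to repeat the same query with synonyms disabled via the synonyms search parameter. A minimal sketch with the Algolia JavaScript client, assuming placeholder credentials and a hypothetical index name:

const algoliasearch = require('algoliasearch');

const client = algoliasearch('YourAppID', 'YourSearchOnlyApiKey'); // placeholders
const index = client.initIndex('my_index'); // hypothetical index name

// Run the same query with and without synonym expansion and compare hit counts.
Promise.all([
  index.search('foo'),                      // default: synonyms applied
  index.search('foo', { synonyms: false }), // synonym expansion disabled
]).then(([withSyn, withoutSyn]) => {
  console.log(withSyn.nbHits, 'hits with synonyms');
  console.log(withoutSyn.nbHits, 'hits without synonyms');
});

If the counts differ, the "stale" results were really synonym matches, not cache entries.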
I am trying to design a REST API in which there is a paginated collection. In my current design I use a page-based approach:
GET /entities?page=2&pageSize=25
Retrieving a single entity is trivial:
GET /entities/4
A user story related to this API requires that, when viewing a single entity on a screen, two buttons "Previous" and "Next" allow switching to the adjacent entities. An example:
GET /entities?page=2&pageSize=25
returns:
[{id: 2}, {id: 4}, {id: 17}]
Now, when viewing entity with id 4, the named buttons would navigate to the entities with id 2 or id 17.
My assumption is that a client (a web frontend in my case) could "remember" the pagination information and use it when fetching the previous or next entity. The same could apply to any filters I might eventually add to the endpoint.
A basic idea of how to implement this would be to fetch the current page and, for the edge cases, the previous and the next page (required when currently viewing the first/last resource of a page). But that seems like an inefficient solution.
So my question is: is my chosen pagination method even compatible with what I am trying to achieve? If it is, how would clients use the API to achieve the next/previous feature?
In this case I ended up with a frontend-based solution.
Given that my constraint was that I could not change the type of pagination used, this is a crude solution (a sketch in code follows the list):
The frontend makes the initial GET request for the paginated data. This request may contain not only pagination parameters (page + size) but also filters (e.g. color = red). This is the search context.
The user accesses one of the items and some other view is displayed. Important: the search context is kept in local storage or similar.
If the user wants to browse to the next/previous item, the search is executed again with the same search context.
In the search result, look for the ID of the currently displayed item.
If the currently displayed item is found, show the next/previous item.
If it is not found, do some fallback. (In my case I chose to simply display the initial search UI.)
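A minimal sketch of those steps in frontend JavaScript, assuming a hypothetical fetchEntities(context) helper that wraps GET /entities with the stored parameters:

// Hypothetical helper: calls GET /entities?page=...&pageSize=...&color=...
async function fetchEntities(context) {
  const response = await fetch('/entities?' + new URLSearchParams(context));
  return response.json(); // e.g. [{id: 2}, {id: 4}, {id: 17}]
}

function saveSearchContext(context) {
  localStorage.setItem('searchContext', JSON.stringify(context));
}

async function findNeighbours(currentId) {
  const context = JSON.parse(localStorage.getItem('searchContext'));
  const page = await fetchEntities(context);
  const i = page.findIndex(entity => entity.id === currentId);
  if (i === -1) return null; // fallback: show the initial search UI
  return {
    // null can also mean the neighbour sits on the previous/next page,
    // the edge case that forces a second request in this design.
    previous: page[i - 1] ?? null,
    next: page[i + 1] ?? null,
  };
}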
I am not happy with this implementation as it has some major drawbacks IMO:
Not efficient: for every browsing click, a search request is made. When not backed by an efficient search database like Elasticsearch, this will probably put a lot of stress on the database.
Not perfect: if the currently displayed item is not in the search result, there is no sensible way to get the next/previous item.
If the currently displayed item is the first/last item in the search result and you want to browse backward/forward, the search has to be executed twice: once for the current page and once for the previous/next page.
As stated, I think this is not a good solution; I am still looking for a clever, efficient way to do this.
I am trying to create some Firestore security rules. However, every rule I write that involves anything other than pulling the current user's document from the users collection results in an error. There is some difference I am missing.
Here is the query and the data. The resource object is always null. Any get() call that pulls from the designs collection using the designId variable also results in null.
You're putting a pattern into the form, which is not valid. You need to provide the specific document on which you want to simulate a read or write. This means you need to copy the ID of a real document into that field, so it reads something like "/designs/j8R...Lkh", except with the actual value.
I am looking into best practices for returning search results. I have a search page that subscribes to a publication that returns a find() based on a regex query across multiple fields. The results get put into the minimongo collection on the client.
At the moment, the facets are being set up from the subscription. My question is whether the filtering of the pre-loaded results from the backend should be done client-side, or whether the query should be sent back to the server.
Example:
Given a collection of fruits, I want to find all that have the color red. The server returns these, but I have facets based on the fruits, so I have a checkbox for strawberries, apples, cherries, etc. If I click the checkbox for cherries, should I just filter the current minimongo collection, or should I re-query?
Logically, I already have all the items I could be filtering on in my collection, so I am not sure why I would need to hit the backend. The only time I should need to hit the backend is when I type a new search query (such as "blue") and the facets get rebuilt appropriately.
If your original search returns all matching documents, then additional criteria can simply be applied in your minimongo query on the client, provided the fields those criteria use were returned with the original search.
OTOH, if the original search returns a paginated list or just the top N results, or if the required keys weren't included, then you want to continue the search on the server.
In a traditional request-response system, you might also want to query the server each time if the underlying data changes rapidly (e.g. a reservations system). With Meteor, the reactive nature of pub-sub means that the data on the client is constantly refreshed with adds/changes/deletions via DDP over WebSocket.
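As a minimal sketch of that decision in Meteor client code, assuming a hypothetical Fruits collection and a 'fruits.search' publication that returns the full (non-paginated) result set including the name field:

// Facet click (e.g. "cherries"): the documents are already in minimongo,
// so narrow them locally - no extra round trip to the server needed.
const cherriesOnly = Fruits.find({ color: 'red', name: 'cherry' }).fetch();

// New search term (e.g. "blue"): this changes what the server should
// publish, so re-subscribe with the new query instead of filtering locally.
Meteor.subscribe('fruits.search', { color: 'blue' });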
I am replicating a collection (I only have access to the mongo shell on the server). In the current collection all documents have a field called jsonURL. The value of this field is a URL such as http://www.something.com/api/abc.json. I want to copy each document from oldCollection to newCollection, but I also want to fetch the data from that URL and add it to each new document created.
The last I heard, XMLHttpRequest was on Mongo's feature list, but as a low-priority item (I can understand why). As I found nothing in the documentation, I'm guessing it's still in the queue. I was hoping I could do something inside forEach(function(eachDoc){});
Is there any other way of achieving this? Thanks.
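For what it's worth, the usual workaround is to run the loop outside the shell with a driver that does have an HTTP client available. A rough sketch with the Node.js driver, assuming Node 18+ (for the built-in fetch), a hypothetical database name, and the collection/field names from the question:

const { MongoClient } = require('mongodb');

async function main() {
  const client = await MongoClient.connect('mongodb://localhost:27017');
  const db = client.db('mydb'); // hypothetical database name

  // Stream the old collection, enrich each document, insert into the new one.
  for await (const doc of db.collection('oldCollection').find()) {
    const response = await fetch(doc.jsonURL); // e.g. http://www.something.com/api/abc.json
    const apiData = await response.json();
    await db.collection('newCollection').insertOne({ ...doc, apiData });
  }

  await client.close();
}

main().catch(console.error);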
I am writing a search form in CakePHP 2.0. Currently I have it running with the index action and view (it also posts to the index action), with validation against the model, so that if anything invalid is entered into a search field (fields include date and price) a nice validation error message appears next to the element. Basically it is a bit like a scaffolded add form.
If validation is successful I need to actually run a query and return some data. I don't want to display this data in the index view - should I:
Run the query then render a different view (which means the URL doesn't change - not sure I want that).
Store the search parameters in a session, redirect to another action, and then retrieve the search details.
Is there any other way?
Both options are OK; you must decide which you prefer: keeping the URL the same or changing it.
You may also use named parameters to pass the info, so that a user can bookmark their search, though then the validation would need to happen on the same page that shows the results. I usually do this with the CakeDC Search plugin.
Returning to your two options: if you mean which is better in performance, I would choose the first, since the second needs to load a new controller/model etc.