I'm using Spring Data Redis with a geospatial index. Every query method works. I would like to use the COUNT option, but I couldn't find a more complex example in the documentation. Does anyone know how to use the COUNT option in this context?
You can use Pageable to limit results. Staying on page 0 with e.g. PageRequest.of(0, 10) limits the results to 10. However, the number of items per page is not passed on to the COUNT option of the GEORADIUS command.
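For illustration, a minimal repository sketch in the style of Spring Data Redis repositories might look like the following. The City entity, the repository name and the coordinates are placeholders of mine, and note again that the Pageable only trims the result client-side rather than being translated into GEORADIUS ... COUNT:

import java.util.List;

import org.springframework.data.annotation.Id;
import org.springframework.data.domain.Pageable;
import org.springframework.data.geo.Distance;
import org.springframework.data.geo.Point;
import org.springframework.data.redis.core.RedisHash;
import org.springframework.data.redis.core.index.GeoIndexed;
import org.springframework.data.repository.CrudRepository;

// Hypothetical entity stored as a Redis hash with a geo-indexed location
@RedisHash("cities")
class City {
    @Id String id;
    @GeoIndexed Point location;
}

interface CityRepository extends CrudRepository<City, String> {
    // Derived geo query; the Pageable limits the returned slice
    List<City> findByLocationNear(Point point, Distance distance, Pageable pageable);
}

// Usage: first 10 cities within 200 km of the given point
// List<City> first10 = repository.findByLocationNear(
//         new Point(13.405, 52.52),
//         new Distance(200, Metrics.KILOMETERS),
//         PageRequest.of(0, 10));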
I've opened DATAREDIS-666 for this.
I've built a KNIME workflow that helps me analyse (sales) data from numerous channels. In the past I exported all orders manually and used an XLSX or CSV reader, but I want to do it via WooCommerce's REST API to reduce manual labor.
I would like to receive all orders up until now from a single query. So far, I only get as many as the number I fill in for &per_page=X, and if I fill in something like 1000, it gives an error. This, plus my common sense, gives me the feeling I'm thinking about it the wrong way!
If it is not possible, is looping through all pages the second best thing?
I've managed to connect to the API via basic auth. The following query returns orders, but only 10:
https://XXXX.nl/wp-json/wc/v3/orders?consumer_key=XXXX&consumer_secret=XXXX
I've tried increasing per_page, but I do not think this is the right way to get all orders in one table.
My current mindset would like to receive all orders up until now from a single query, but it also feels like this is not the common way to do it. Is looping through all pages the second best thing?
Thanks in advance for your responses. I am more of a data analyst than a data engineer or scientist, and I hope your answers will help me towards my goal of becoming more of a scientist :)
It's possible by passing the "per_page" parameter with the request:
per_page (integer): Maximum number of items to be returned in result set. Default is 10.
Try -1 as the value
https://woocommerce.github.io/woocommerce-rest-api-docs/?php#list-all-orders
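If your store rejects -1 (recent WooCommerce versions cap per_page at 100), then looping through the pages, as you suspected, is the usual fallback. Below is a rough sketch only, assuming Java 11+'s built-in HttpClient and the X-WP-TotalPages header that the WordPress REST API sends back; the store URL, keys and page size are placeholders:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.ArrayList;
import java.util.List;

public class FetchAllOrders {
    public static void main(String[] args) throws Exception {
        // Placeholder store URL and credentials
        String base = "https://XXXX.nl/wp-json/wc/v3/orders"
                + "?consumer_key=XXXX&consumer_secret=XXXX&per_page=100";

        HttpClient client = HttpClient.newHttpClient();
        List<String> pages = new ArrayList<>();   // one JSON array of orders per page

        int page = 1;
        int totalPages = 1;
        do {
            HttpRequest request = HttpRequest
                    .newBuilder(URI.create(base + "&page=" + page))
                    .GET()
                    .build();
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            pages.add(response.body());

            // The WordPress REST API reports the total number of pages in a header
            totalPages = Integer.parseInt(
                    response.headers().firstValue("X-WP-TotalPages").orElse("1"));
            page++;
        } while (page <= totalPages);

        System.out.println("Fetched " + pages.size() + " pages of orders");
    }
}

In KNIME you could do the same page loop with a GET Request node inside a loop, or call a snippet like this from a Java Snippet node.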
I'm using Algolia along with instantsearch.js in a project to run searches and show categories and the contents inside them (the category page and search pages are powered by Algolia). We are using instantsearch.js v1 from a CDN.
Our main issue is that search doesn't provide more than 1000 records which we need.
As far as I understand, the browse() method provides more results, but it's not usable in instantsearch.js.
Also, after reading docs, I found out that there's a new option called paginationLimitedTo, which allows displaying more than 1000 records:
https://www.algolia.com/doc/rest-api/search/#paginationlimitedto
So, setting this would allow displaying more than 1000 records.
Can you help me here: how should I go about getting more than 1000 records, or, if it would achieve our goal, how do we set this paginationLimitedTo attribute in instantsearch.js? I'm okay with building or editing instantsearch.js for the time being.
Thanks in advance,
In order to change the value for paginationLimitedTo, you will need to create a custom client object, then get your index by calling client.initIndex(indexName), and then change the setting by calling:
const client = algoliasearch('YourApplicationID', 'YourAdminAPIKey'); // placeholder credentials
const index = client.initIndex(indexName);
index.setSettings({
  paginationLimitedTo: 2000 // raise above the default of 1000 to expose more records
});
You can check the guide for that in the docs here.
Also, please remember the following:
We recommend keeping the default value to guarantee excellent performance. Increasing the pagination limit will have a direct impact on the performance of search queries. A too high value will also make it very easy for anyone to retrieve (“scrape”) your entire dataset.
I want to limit the number of results given by a NeoEloquent query. take() works fine, but I don't know how I should use skip(). I read the Laravel 5.2 docs. I'm trying to use skip(10)->take(10), but it says "Method skip does not exist."
Here is my code:
$artifact = Models\Artifact::where('aid', $request->aid)->first();
$comments = $artifact->comments->take(10);
With the answer you provided, what happens is that you fetch all of the comments, so with a large number of them it becomes a performance bottleneck, especially since you do not need all of them. What you can do instead is apply LIMIT and OFFSET on the query itself, via the take and skip methods respectively, as follows:
$comments = $artifact->comments()->take(10)->skip(5)->get();
OK, I found an answer to my own question. Since the result set of $artifact->comments is a Laravel collection, there is no skip() method. Using another method named slice(), I could solve the problem and get my desired subset of results. Now I have:
$comments = $artifact->comments->slice($startOffset, $count);
which works fine. Another method named splice() returns similar values, but please note that it modifies the original result set.
I have a use case where I need to get a list of objects from Mongo based on a query, but to improve performance I am adding pagination.
So in the first call I get a list of, say, 10 objects, and in the next I need 10 more. But I cannot use offset and pageSize directly, because the first 10 objects displayed on the page may have been modified [deleted].
The solution is to record the ObjectId of the last object returned and retrieve the next 10 objects after that ObjectId.
How can I do this efficiently using Morphia with MongoDB?
Using Morphia you can do this with the following command:
datastore.find(YourClass.class).field(id).smallerThan(lastId).limit(10).order("-ts");
Since you are querying for the items after the last retrieved id, you won't have to deal with deleted items.
One thing to keep in mind is that you will have the same problem here as with skip() unless you intend to change how your interface works.
Using ranged queries like this demands a different kind of interface, since it is much harder to tell exactly which page you are on and how many pages lie ahead, especially if you are doing this to avoid the problems of conventional paging.
The typical interface that arises from this type of paging is an infinitely scrolling page; think of YouTube video comments, the Facebook wall feed, or even Google+. There is no physical pagination or "pages"; instead you have a "get more" button.
This is the type of interface you will need if you want ranged paging to work well.
As for the query, @cubbuk gives a good example:
datastore.find(YourClass.class).field(id).smallerThan(lastId).limit(10).order("-ts");
Except it should be greaterThan(lastId), since you want to find everything above that last _id. I would also sort by _id, unless you create your ObjectIds some time before you insert a record; in that case you can instead use a specific timestamp set on insert.
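Putting those corrections together, a minimal sketch using the Morphia 1.x fluent API might look like this; the Order entity, the OrderPager class and the page size are placeholders of mine:

import java.util.List;

import org.bson.types.ObjectId;
import org.mongodb.morphia.Datastore;

public class OrderPager {

    // Ranged paging: filter on _id instead of using skip(), so documents deleted
    // from earlier pages cannot shift the current window.
    public static List<Order> nextPage(Datastore datastore, ObjectId lastSeenId) {
        return datastore.find(Order.class)
                .field("_id").greaterThan(lastSeenId) // everything after the last item seen
                .order("_id")                         // ascending _id keeps pages stable
                .limit(10)
                .asList();
    }
}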
I'm using lucene on a site of mine and I want to show the total result count from a query, for example:
Showing results x to y of z
But I can't find any method that will return the total number of potential results. I can only seem to find methods where you have to specify the number of results you want, and since I only want 10 per page it seems logical to pass in 10 as the number of results.
Or am I doing this wrong? Should I be passing in, say, 1000 and then just taking the 10 in the range that I require?
BTW, since I know you personally I should point out for others I already knew you were referring to Lucene.net and not Lucene :) although the API would be the same
In versions prior to 2.9.x you could call IndexSearcher.Search(Query query, Filter filter), which returned a Hits object, one of whose properties [methods, technically, due to the Java port] was Length().
This is now marked Obsolete since it will be removed in 3.0; the remaining Search overloads only return TopDocs or TopFieldDocs objects.
Your alternatives are
a) Use IndexSearcher.Search(Query query, int count), which will return a TopDocs object, so TopDocs.TotalHits will show you the total possible hits, but at the expense of actually creating count result objects
b) A faster way is to implement your own Collector object (inherit from Lucene.Net.Search.Collector) and call IndexSearcher.Search(Query query, Collector collector). The Search method will call Collect(int docId) on your collector for every match, so if you keep a tally internally you have a way of counting all the results (see the sketch after this answer).
It should be noted Lucene is not a total-resultset query environment and is designed to stream the most relevant results to you (the developer) as fast as possible. Any method which gives you a "total results" count is just a wrapper enumerating over all the matches (as with the Collector method).
The trick is to keep this enumeration as fast as possible. The most expensive part is deserialising Documents from the index and populating each field. The newer API design, which requires you to write your own Collector, makes this principle clear: avoid deserialising each result from the index, since only the matching document ids and a score are provided by default.
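To make option (b) concrete, here is a rough counting collector. Treat it as a sketch only: it is written against the current Java Lucene API (SimpleCollector and ScoreMode) rather than the 2.9-era Lucene.Net API discussed above, and the class name is my own.

import org.apache.lucene.search.ScoreMode;
import org.apache.lucene.search.SimpleCollector;

// Counts every match without loading stored documents or computing scores.
public class CountingCollector extends SimpleCollector {
    private int count;

    @Override
    public void collect(int docId) {       // called once for every matching doc id
        count++;
    }

    @Override
    public ScoreMode scoreMode() {         // scores are not needed just to count
        return ScoreMode.COMPLETE_NO_SCORES;
    }

    public int getCount() {
        return count;
    }
}

// Usage:
//   CountingCollector counter = new CountingCollector();
//   searcher.search(query, counter);      // searcher is an IndexSearcher
//   int total = counter.getCount();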
The TopDocs collector does this for you; for example:
TopDocs topDocs = searcher.search(qry, 10);
int totalHits = topDocs.totalHits;
The above query will count all hits, but return only 10.