How do I use skip/take (offset) in Vinelab/NeoEloquent queries in PHP?

I want to limit the number of results returned by a NeoEloquent query. take() works fine, but I don't know how I should use skip(). I read the Laravel 5.2 docs and tried skip(10)->take(10), but it says "Method skip does not exist."
Here is my code:
$artifact = Models\Artifact::where('aid', $request->aid)->first();
$comments = $artifact->comments->take(10);

With the answer you provided, you end up fetching all of the comments, which becomes a performance bottleneck when there are many of them, especially since you do not need all of them. What you can do instead is apply limit and offset on the query itself, using the take and skip methods respectively, as follows:
$comments = $artifact->comments()->take(10)->skip(5)->get();

OK, I found an answer to my own question. Since the result set of $artifact->comments is a Laravel Collection, there is no skip() method on it. Using another method named slice(), I could solve the problem and get my desired subset of results. Now I have:
$comments = $artifact->comments->slice($startOffset, $count);
which works fine. Another method, splice(), returns a similar result, but be aware that it modifies the original collection.
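For illustration, here is a minimal sketch contrasting the two approaches; it assumes the relationship query builder supports take()/skip() as in the answer above, and the offsets are made-up values:

// Query-builder approach: LIMIT/OFFSET are applied in the database query,
// so only the requested comments are fetched.
$comments = $artifact->comments()->skip(10)->take(10)->get();

// Collection approach: all comments are loaded first, then sliced in PHP.
$comments = $artifact->comments->slice(10, 10)->values();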

Related

ObjectBox dynamic queries

e.g. the end user makes selections from two of five possible filters, the last three filters being left as ‘all’.
Rather than me creating queries for every possible combination of the 5 filters (2^5 = 32 different queries in total), what is the most efficient syntax for handling this?
Should I use .and to chain the queries together, and then can I specify ‘all’ for any which are not required?
A query builder can be used to build the query according to the selected filters: add query criteria only inside if conditions that check which filters are actually set.
I solved this as follows: use a ternary operator ?:, and in the second branch query one of the values with .notNull().
This gives the result of 'all', effectively ignoring that part of the query.
This is a hack, but it works. It is obviously an expensive solution, as the ideal would be to skip over unwanted filters completely.
Note to developer: 'if' cannot be used within the query structure in Dart. Thanks for finding time to respond; hopefully my additional info helps.
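To illustrate the query-builder suggestion, here is a minimal Dart sketch that only adds criteria for the filters that are actually set; the Item entity, its generated Item_ meta-class, the box instance, and the categoryFilter/statusFilter variables are all made up for illustration:

// Build the condition incrementally, skipping filters left as 'all' (null here).
Condition<Item>? condition;
if (categoryFilter != null) {
  condition = Item_.category.equals(categoryFilter);
}
if (statusFilter != null) {
  final statusCondition = Item_.status.equals(statusFilter);
  condition = condition == null ? statusCondition : condition & statusCondition;
}
// box.query() with no condition matches everything, i.e. 'all'.
final query = (condition == null ? box.query() : box.query(condition)).build();
final results = query.find();
query.close();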

Firestore, why use "update" instead of "set merge"?

set with merge will update fields in the document, or create it if it doesn't exist.
update will update fields, but will fail if the document doesn't exist.
Wouldn't it be much easier to always use set with merge?
Is the pricing slightly different?
The difference between set merge and update lies in the use case.
You may find detailed information regarding this in this post.
Regarding the pricing, as stated here:
Each set or update operation counts as a single write and is billed according to the region.
=========================================================================
EDIT:
The choice of which operation to use depends greatly on the use case: if you use "set merge" for a batch update, your request will successfully update all existing documents, but it will also create dummy documents for non-existent IDs, which is sometimes not what you want.
After investigating a bit further, we could add another difference:
set merge will always override the data with the data you pass, while
update is specifically designed to let you perform a partial update of a document, without the risk of creating incomplete documents that your code isn't otherwise prepared to handle. Please check this answer, as well as this scenario.
The difference is that .set(data, {merge:true}) will update the document if it exists, or create the document if it doesn't.
.update() fails if the document doesn't exist.
But why does .update() still exist? Probably for backward compatibility: I believe .set() with merge:true was introduced later than .update(). As you have pointed out, set with merge is more versatile; I use it instead of .update() and instead of .add().
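For illustration, a minimal sketch of the two calls using the Node.js Admin SDK; the collection, document ID, and field names are made up, and db is assumed to be an initialized Firestore instance:

async function touchLastLogin(db) {
  const docRef = db.collection('users').doc('alice');

  // Creates the document if it is missing, otherwise merges only the given fields.
  await docRef.set({ lastLogin: new Date() }, { merge: true });

  // Updates the given fields, but fails with NOT_FOUND if the document does not exist.
  await docRef.update({ lastLogin: new Date() });
}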

How to check if a result contains rows? (FbDataReader.HasRows always returns true!)

I am using the Firebird ADO.NET Data Provider, and before I pass the reader on to a consuming service I would like to determine whether any rows were returned. Consider the following snippet:
FbCommand cmd = GetSomeCommandFromTheEther();
FbDataReader reader = cmd.ExecuteReader();
if (reader.HasRows)
    DoSomethingWith(reader);
else
    TellTheUserWeGotNothing();
What I've now learned is that FbDataReader.HasRows always returns true. In fact, looking at the source code, it appears to be just a wrapper around FbDataReader.command.IsSelectCommand; it is not only useless, it makes the property name "HasRows" a complete misnomer.
In any event, how can I find out whether a given query has rows, without advancing the record pointer? Note that I want to pass the reader off to an external service; if I call FbDataReader.Read() to check its result, I will consume a row and DoSomethingWith() will not get this first row.
I am afraid you have stumbled on a Firebird limitation. As stated in the following Firebird FAQ entry:
Why does FbDataReader.HasRows always return true?
The FbDataReader.HasRows property is implemented for compatibility
only. It always returns true because Firebird doesn't have a way to
know whether a query returns rows without fetching the data.
There is already a mention of this in the Firebird Tracker. Check the issue DNET-305.
On the other hand, in .NET it seems that OleDbDataReader and SqlDataReader, which inherit from DbDataReader, have the same problem, as stated in this MSDN link.
Since FbDataReader inherits from the same class, you might want to consider one of the workarounds that Microsoft suggests in its MSDN article, which is to first perform a SELECT COUNT(*). Granted, that is inelegant and a waste of time and resources, but at least it could help you out.
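For illustration, a minimal sketch of the COUNT(*) workaround; the table name and WHERE clause are placeholders, connection is assumed to be an open FbConnection, and the helper methods come from the question's snippet:

// Check for rows up front with a separate COUNT(*) query
// covering the same criteria as the real command.
using (var countCmd = new FbCommand("SELECT COUNT(*) FROM some_table WHERE some_flag = 1", connection))
{
    long rowCount = Convert.ToInt64(countCmd.ExecuteScalar());
    if (rowCount == 0)
    {
        TellTheUserWeGotNothing();
        return;
    }
}

// Only now execute the real query and hand the reader off untouched.
FbCommand cmd = GetSomeCommandFromTheEther();
using (FbDataReader reader = cmd.ExecuteReader())
{
    DoSomethingWith(reader);
}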

Mongo pagination

I have a use case where I need to get a list of objects from MongoDB based on a query, and to improve performance I am adding pagination.
So, for the first call I get a list of, say, 10 objects; in the next call I need 10 more. But I cannot use offset and pageSize directly, because the first 10 objects displayed on the page may have been modified or deleted in the meantime.
The solution is to take the ObjectId of the last object returned and retrieve the next 10 objects after that ObjectId.
How can I do this efficiently using Morphia with MongoDB?
Using Morphia you can do this with the following query:
datastore.find(YourClass.class).field("_id").lessThan(lastId).limit(10).order("-ts");
Since you are querying for the items after the last retrieved id, you won't have to deal with deleted items.
One thing I have thought of is that you will have the same problem here as with using skip(), unless you intend to change how your interface works.
Using ranged queries like this demands a different kind of interface, since it is much harder to tell exactly what page you are on and how many pages lie ahead, especially if you are doing this to avoid the problems of conventional paging.
The default type of interface to arise from this kind of paging is an infinitely scrolling page; think of YouTube video comments, a Facebook wall feed, or even Google+. There is no physical pagination or "pages"; instead you have a "get more" button.
This is the type of interface you will need to use to get ranged paging working well.
As for the query, @cubbuk gives a good example:
datastore.find(YourClass.class).field("_id").lessThan(lastId).limit(10).order("-ts");
Except it should be greaterThan(lastId), since you want to find everything above that last _id. I would also sort by _id, unless you create your ObjectIds some time before you insert the record; if that is the case, you can use a specific timestamp set on insert instead.
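Putting the corrections together, here is a minimal sketch of the next-page query, assuming the Morphia 1.x Query API and a made-up Comment entity; lastId is the _id of the last item already shown:

import java.util.List;
import org.bson.types.ObjectId;
import org.mongodb.morphia.Datastore;
import org.mongodb.morphia.query.Query;

// Fetch the next page: everything with _id greater than the last one already shown.
List<Comment> nextPage(Datastore datastore, ObjectId lastId) {
    Query<Comment> query = datastore.find(Comment.class);
    query.field("_id").greaterThan(lastId);  // range condition instead of skip()
    query.order("_id");                      // ascending _id keeps a stable order
    query.limit(10);
    return query.asList();
}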

Lisp Code Unexpected Results

I am trying to solve a homework problem where I have to return a selected user's grades in order by course number (not allowed to use the built-in sort function). I don't understand the results: the first entry isn't sorted, and some extra students seem to be returned. I don't know why, and I have spent over three hours trying to solve this one problem. Thanks.
A good start would be to get rid of functions like car, cdr, cadar, ...
Write access functions for the data records. Use first, second and third.
For accessing the list's first element use the function FIRST.
For accessing the rest of the elements use the function REST.
This makes the code easier to read and understand.
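Without the original code it is hard to say where the extra students come from, but here is a minimal sketch of the suggested style, assuming each grade record looks like (name course-number grade); the record layout and function names are made up for illustration:

;; Accessors for a record of the form (name course-number grade).
(defun record-name (record) (first record))
(defun record-course (record) (second record))
(defun record-grade (record) (third record))

;; Insert one record into an already-sorted list, ordered by course number.
(defun insert-record (record sorted)
  (cond ((null sorted) (list record))
        ((<= (record-course record) (record-course (first sorted)))
         (cons record sorted))
        (t (cons (first sorted) (insert-record record (rest sorted))))))

;; Insertion sort over the whole list, without the built-in SORT.
(defun sort-by-course (records)
  (if (null records)
      nil
      (insert-record (first records) (sort-by-course (rest records)))))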