I have this method:
public async Task&lt;Post&gt; GetPostByIdAsync(ObjectId id, CancellationToken cancellationToken)
{
    using var cursor = await _Collection.FindAsync(post => post.Id == id, cancellationToken: cancellationToken);
    return await cursor.SingleAsync(cancellationToken);
}
First I call FindAsync, which returns a cursor, and then I call SingleAsync, which returns the document. I have some questions about this:
At what point is the query executed: when I call FindAsync, or when I call SingleAsync?
Once the find has executed and I have a cursor, does the cursor already contain the retrieved documents before I iterate over it?
If a cursor matches 100 documents but I only iterate over the first 20, are the other 80 documents still queried and retrieved from the server?
Why are both operations, getting the cursor and iterating over it, async?
public async Task&lt;Post&gt; GetPostByIdAsync(ObjectId id, CancellationToken cancellationToken)
{
    var query = _Collection.AsQueryable().Where(post => post.Id == id);
    return await query.SingleAsync(cancellationToken);
}
If I call AsQueryable because I want to use LINQ, there is only one async method. Is any server operation running synchronously and blocking the thread?
I also see that IMongoQueryable&lt;T&gt; is a subtype of IAsyncCursorSource&lt;T&gt;. What is the difference between a cursor and a cursor source?
When a find is issued and the result set exceeds the default (or specified) batch size, the response to the find includes:
The first batch of documents
A cursor id with which to retrieve the next batch
If you know you will only be processing a certain number of documents, use limit to restrict the result set to that many documents.
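To make the batching visible, here is a hedged sketch of iterating a cursor batch by batch with the C# driver; the names collection, Post, authorId, and Process are illustrative, not from the question. SingleAsync does essentially this loop for you: the query is sent when FindAsync is awaited, each MoveNextAsync pulls one server batch over the network, and Current is the already-fetched, in-memory batch. This is also why both steps are async (each can involve a network round-trip) and why batches you never ask for are never transferred.

```csharp
// Sketch only: assumes an IMongoCollection<Post> named "collection".
using var cursor = await collection.FindAsync(
    post => post.AuthorId == authorId,
    new FindOptions<Post> { BatchSize = 100, Limit = 20 },  // server stops after 20 matches
    cancellationToken);

while (await cursor.MoveNextAsync(cancellationToken))  // async: one network round-trip per batch
{
    foreach (var post in cursor.Current)               // sync: this batch is already in memory
    {
        Process(post);
    }
}
```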
How do I get the id of the doc matching some condition?
I was able to get this to work:
Future&lt;String&gt; getChat({required String userIdsArr}) async {
  var docId = '';
  final snapshot = await chat.where('User ids', isEqualTo: userIdsArr).get();
  for (final doc in snapshot.docs) {
    docId = doc.id;
  }
  return docId;
}
This returns the correct record; however, I think this is a terrible way of querying the database, because I have to fetch all the matching records every time.
Is there a way to write this so that I only get the doc id of the matching document?
Unfortunately, there is not a better way to accomplish this. The where clause, though, won't fetch everything as you suspect; it only fetches the records that contain the value you are querying for. It is not as expensive a call as you might think.
I'm trying to execute two queries against the database: the first one inserts the data, then another query fetches some data from a different collection, and I combine the results. But the zipWhen is not being executed.
Mono&lt;String&gt; orderMono = orderDtoMono
        .map(EntityDtoUtil::toEntity)
        .flatMap(this.repository::insert)
        .zipWhen(order -> this.clientRepository.findById(order.getOrderId()))
        .map(tuple -> {
            System.out.println("here");
            return tuple.getT1().getOrderId() + " : " + tuple.getT2().getSuccessUrl();
        });
return orderMono;
The insert query works; however, the zipWhen is not executed, and the API returns 200 with an empty body.
The desired result is the string created inside the final map operator.
zipWhen only emits when both publishers emit data; otherwise the result is empty.
In your case, you have just inserted a new record via .flatMap(this.repository::insert) and then immediately expect that order id to be present via this.clientRepository.findById(order.getOrderId()). I believe that findById returns an empty Mono, which is why you get an empty response.
Do you have a DB trigger that inserts records into other tables based on the new order id? Can you explain this call: this.clientRepository.findById(order.getOrderId())?
My app receives data from a remote server and calls ReplaceOne with Upsert = true to either insert a new document or replace the existing document with a given key (the key is anonymized with * below). The code runs in a single thread.
However, occasionally, the app crashes with the following error:
Unhandled Exception: MongoDB.Driver.MongoWriteException: A write operation resulted in an error.
E11000 duplicate key error collection: ****.orders index: _id_ dup key: { : "****-********-********-************" } ---> MongoDB.Driver.MongoBulkWriteException`1[MongoDB.Bson.BsonDocument]: A bulk write operation resulted in one or more errors.
E11000 duplicate key error collection: ****.orders index: _id_ dup key: { : "****-********-********-************" }
at MongoDB.Driver.MongoCollectionImpl`1.BulkWrite(IEnumerable`1 requests, BulkWriteOptions options, CancellationToken cancellationToken)
at MongoDB.Driver.MongoCollectionBase`1.ReplaceOne(FilterDefinition`1 filter, TDocument replacement, UpdateOptions options, CancellationToken cancellationToken)
--- End of inner exception stack trace ---
at MongoDB.Driver.MongoCollectionBase`1.ReplaceOne(FilterDefinition`1 filter, TDocument replacement, UpdateOptions options, CancellationToken cancellationToken)
at Dashboard.Backend.AccountMonitor.ProcessOrder(OrderField& order)
at Dashboard.Backend.AccountMonitor.OnRtnOrder(Object sender, OrderField& order)
at XAPI.Callback.XApi._OnRtnOrder(IntPtr ptr1, Int32 size1)
at XAPI.Callback.XApi.OnRespone(Byte type, IntPtr pApi1, IntPtr pApi2, Double double1, Double double2, IntPtr ptr1, Int32 size1, IntPtr ptr2, Int32 size2, IntPtr ptr3, Int32 size3)
Aborted (core dumped)
My question is: why is it possible to get a duplicate key error when I use ReplaceOne with the Upsert = true option?
The app is working in the following environment and runtime:
.NET Command Line Tools (1.0.0-preview2-003121)
Product Information:
Version: 1.0.0-preview2-003121
Commit SHA-1 hash: 1e9d529bc5
Runtime Environment:
OS Name: ubuntu
OS Version: 16.04
OS Platform: Linux
RID: ubuntu.16.04-x64
And MongoDB.Driver 2.3.0-rc1.
Upsert works based on the filter query. If the filter doesn't match, it tries to insert the document; if the filter finds a document, it replaces that document.
In your case it could have gone either way, insert or replace. Check the data to analyze the scenario.
Insert scenario:
The _id is created automatically by the upsert if it is not present in the filter criteria, so _id itself shouldn't cause a uniqueness issue; but if some other field is part of a unique index, it can.
Replace scenario:
The field that you are trying to update may have a unique index defined on it. Check the indexes on the collection and their attributes.
The documentation describes the upsert option as follows:
Optional. When true, replaceOne() either:
Inserts the document from the replacement parameter if no document matches the filter.
Replaces the document that matches the filter with the replacement document.
To avoid multiple upserts, ensure that the query fields are uniquely indexed.
Defaults to false.
MongoDB will add the _id field to the replacement document if it is not specified in either the filter or replacement documents. If _id is present in both, the values must be equal.
I could not get IsUpsert = true to work correctly due to a unique index on the same field used for the filter, which led to this error: E11000 duplicate key error collection. A retry, as suggested in this Jira ticket, is not a great workaround.
What did seem to work was a try/catch block with InsertOne and then ReplaceOne without any options.
try
{
    // insert into MongoDB
    BsonDocument document = BsonDocument.Parse(obj.ToString());
    collection.InsertOne(document);
}
catch
{
    // duplicate key: replace the existing document instead
    BsonDocument document = BsonDocument.Parse(obj.ToString());
    var filter = Builders&lt;BsonDocument&gt;.Filter.Eq("data.order_no", obj.data.order_no);
    collection.ReplaceOne(filter, document);
}
There is not enough information here, but the scenario is probably the following:
You receive data from the server; the replaceOne filter doesn't match any record, so the driver tries to insert a new one; but the document probably contains a key that is unique and already exists in the collection. Review your data before trying to update or insert it.
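To illustrate that advice with a hedged sketch (the Order type, its OrderNo property, and the unique index on OrderNo are assumptions for illustration only): if the field carrying the unique index is also the upsert filter, the operation either matches the existing document or inserts a fresh one, and cannot collide with a different document. UpdateOptions with IsUpsert matches the driver version used in the question; newer drivers take a ReplaceOptions instead.

```csharp
// Sketch only: assumes an IMongoCollection<Order> named "collection" and a
// unique index on OrderNo. Filtering on the uniquely indexed field means the
// upsert can never try to insert a second document with the same OrderNo.
var filter = Builders<Order>.Filter.Eq(o => o.OrderNo, order.OrderNo);
await collection.ReplaceOneAsync(filter, order, new UpdateOptions { IsUpsert = true });
```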
I can co-sign on this one:
public async Task ReplaceOneAsync(T item)
{
    try
    {
        await _mongoCollection.ReplaceOneAsync(x => x.Id.Equals(item.Id), item, new UpdateOptions { IsUpsert = true });
    }
    catch (MongoWriteException)
    {
        var count = await _mongoCollection.CountAsync(x => x.Id.Equals(item.Id)); // lands here - and count == 1 !!!
    }
}
There is a bug in older MongoDB drivers; v2.8.1, for example, has this problem. Update your MongoDB driver and the problem will go away. Note that a newer driver also requires a compatible database version.
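If upgrading is not an option, the retry suggested in the Jira ticket mentioned earlier can be sketched as follows; collection and item are illustrative names, and the exception filter requires C# 6 or later. The idea is that when two upserts race, the loser's insert fails with E11000, but on retry the document now exists, so ReplaceOne matches and replaces it.

```csharp
// Sketch only: retry once when the upsert loses the insert race (E11000).
try
{
    await collection.ReplaceOneAsync(x => x.Id == item.Id, item,
        new UpdateOptions { IsUpsert = true });
}
catch (MongoWriteException ex)
    when (ex.WriteError?.Category == ServerErrorCategory.DuplicateKey)
{
    // The document was inserted concurrently; this attempt will replace it.
    await collection.ReplaceOneAsync(x => x.Id == item.Id, item,
        new UpdateOptions { IsUpsert = true });
}
```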
Environment: MongoDB 3.2, Morphia 1.1.0
Let's say I have a collection of Employees, and the Employee entity has several fields. I need to apply multiple (conditional) filters and return a batch of 10 records per request.
Pseudocode below.
@Entity("Employee")
class Employee {
    String firstName;
    String lastName;
    int salary;
    int deptCode;
    String nationality;
}
EmployeeFilterRequest carries the request parameters to the DAO:
class EmployeeFilterRequest {
    int salaryLessThan;
    int deptCode;
    String nationality;
    ...
}
Pseudoclass:
class EmployeeDao {
    public List&lt;Employee&gt; returnList;

    public getFilteredResponse(EmployeeFilterRequest request) {
        DataStore ds = getTheDatastore();
        Query&lt;Employee&gt; query = ds.createQuery(Employee.class).disableValidation();

        // conditional request #1
        if (request.filterBySalary) {
            query.filter("salary >", request.salary);
        }
        // conditional request #2
        if (request.filterBydeptCode) {
            query.filter("deptCode ==", request.deptCode);
        }
        // conditional request #3
        if (request.filterByNationality) {
            query.filter("nationality ==", request.nationality);
        }

        returnList = query.batchSize(10).asList();
        /* THIS IS RETURNING ALL THE RECORDS IN THE COLLECTION; EXPECTED ONLY 10 */
    }
}
As explained in the code above, I want to perform conditional filtering on multiple fields, but even though batchSize is set to 10, I am getting every record in the collection.
How can I resolve this?
Regards
Punith
Blakes is right. You want to use limit() rather than batchSize(). The batch size only affects how many documents each trip to the server comes back with. This can be useful when pulling over a lot of really large documents but it doesn't affect the total number of documents fetched by the query.
As a side note, you should be careful using asList() as it will create objects out of every document returned by the query and could exhaust your VM's heap. Using fetch() will let you incrementally hydrate documents as you need each one. You might actually need them all as a List and with a size of 10 this is probably fine. It's just something to keep in mind as you work with other queries.
Is there a way to use FirstOrDefault() inside a complex query without it throwing an exception when it returns a null value?
My query:
context.Table1.Where(t => t.Property == "Value").FirstOrDefault()
    .Object.Table2.Where(t => t.Property2 == "Value2").FirstOrDefault();
If the query on the first table (Table1) doesn't return an object, the code throws an exception. Is there a way to make it just return null?
Try a SelectMany on Table2, without the intermediate FirstOrDefault():
context.Table1.Where(t1 => t1.Property1 == "Value1")
.SelectMany(t1 => t1.Table2.Where(t2 => t2.Property2 == "Value2"))
.FirstOrDefault();
Also, you might want to use SQL Profiler to check the SQL that is being sent by EF. I believe the query as constructed in your question will result in two queries being sent to the database; one for each FirstOrDefault().
You could build your own helper function that takes an IEnumerable:
public static IEnumerable&lt;TSource&gt; CustomFirstOrDefault&lt;TSource&gt;(this IEnumerable&lt;TSource&gt; source)
{
    // Take(1) yields the first element if there is one, otherwise an empty sequence
    return source.Take(1);
}
This effectively returns an empty collection when there is no match: provided the code consuming your Object property can handle an empty sequence, it won't bomb out, because you'll be returning a zero-item collection instead of a null.
Only the first query with Where is a database query. As soon as you apply a "greedy" operator like FirstOrDefault, the query gets executed. The second query is performed in memory. If Object.Table2 is a collection (which it apparently is) and you don't have lazy loading enabled, your code will crash because the collection is null. If you have lazy loading enabled, a second query is silently executed to load the collection; the complete collection is loaded and the filter runs in memory.
Your query should instead look like @adrift's code, which really results in only one database query.