I want to implement pagination with Spring Data MongoDB. From the docs, Spring Data MongoDB can do:
public interface TwitterRepository extends MongoRepository<Twitter, String> {
List<Twitter> findByNameIn(List<String> names, Pageable pageable);
}
If the Twitter document object looks like this:
@Document
public class Twitter {
    String name;
    @DBRef
    List<Comment> comments;
}
Does Spring Data MongoDB support pagination within comments?
Note: the code below is not tested; it is only meant as a pointer.
The following Mongo query limits the size of the array returned:
db.Twitter.find( {}, { comments: { $slice: 6 } } )
The above mechanism can be used to enforce pagination like so:
db.Twitter.find( {}, { comments: { $slice: [skip, limit] } } )
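For page-based access, the two `$slice` arguments follow directly from a zero-based page index and a page size. A minimal sketch in plain Java (the method and parameter names are illustrative, not part of any Spring or MongoDB API):

```java
public class SlicePagination {
    // Computes the [skip, limit] arguments for $slice from a zero-based
    // page index and a page size.
    static int[] sliceArgs(int page, int pageSize) {
        if (page < 0 || pageSize <= 0) {
            throw new IllegalArgumentException("page must be >= 0 and pageSize > 0");
        }
        return new int[] { page * pageSize, pageSize };
    }

    public static void main(String[] args) {
        int[] slice = sliceArgs(2, 6); // third page, 6 comments per page
        System.out.println(slice[0] + "," + slice[1]); // 12,6
    }
}
```

So page 2 with 6 comments per page maps to `$slice: [12, 6]`.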
You can try annotating your method:
@Query(value="{ 'name' : { '$in': ?0 } }", fields="{ 'comments': { '$slice': [?1, ?2] } }")
List<Twitter> findByNameIn(List<String> names, int skip, int limit);
You can specify that in your query like so:
Query query = new Query();
query.fields().slice("comments", 1, 1);
mongoTemplate.find(query, DocumentClass.class);
or you can try and execute the command directly using:
mongoTemplate.executeCommand("db.Twitter.find( {}, { comments: { $slice: [skip, limit] } } )")
General Pagination Mechanisms:
General Pagination mechanisms only work at the document level, examples of which are given below.
For them you will have to manually splice the returned comments at the application level.
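That application-level splice could look like the following plain-Java sketch (names are illustrative); it clamps the bounds so out-of-range pages return an empty list instead of throwing:

```java
import java.util.Collections;
import java.util.List;

public class CommentSplicer {
    // Returns the sub-list [skip, skip + limit) of items, clamped to the list size.
    static <T> List<T> splice(List<T> items, int skip, int limit) {
        if (skip >= items.size() || skip < 0 || limit <= 0) {
            return Collections.emptyList();
        }
        int to = Math.min(skip + limit, items.size());
        return items.subList(skip, to);
    }

    public static void main(String[] args) {
        List<String> comments = List.of("c1", "c2", "c3", "c4", "c5");
        System.out.println(splice(comments, 2, 2)); // [c3, c4]
        System.out.println(splice(comments, 4, 3)); // [c5]
    }
}
```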
If you are using the MongoTemplate class (Spring Data docs), use the org.springframework.data.mongodb.core.query.Query class's skip() and limit() methods to perform pagination:
Query query = new Query();
query.limit(10);
query.skip(10);
mongoTemplate.find(query, DocumentClass.class);
If you are using a repository (Spring Data repositories), use PagingAndSortingRepository.
I'm using MongoDB 4.2 with Spring Boot 2.3.1 and I'm looking for a way to avoid read skew in my scenario.
I have a collection named "metadata" and one named "messages". The latter contains messages like this:
{
"aggregateId" : "myAggregateId",
"messageId" : "uuid",
"message" : "some message"
}
and "metadata" contains the version for each "aggregate":
{
"aggregateId" : "myAggregateId",
"version" : NumberLong(40)
}
The reason for not just storing messages in a subarray is, among other things, that the number of messages per aggregate can exceed 16 MB (the document size limit in MongoDB).
When issuing a query I think I'd like to create an interface like this for the users:
public interface MyRepository {
Mono<Aggregate> findByAggregateId(String aggregateId);
}
where Aggregate is defined like this:
public class Aggregate {
private final String aggregateId;
private final int version;
private Flux<Message> messages;
}
The problem now is that I'd like Aggregate to be consistent when reading! That is, if there are writes to the same aggregate before messages is subscribed to, I don't want the new messages to be included (those written after I've subscribed to Mono<Aggregate>).
Let's look at an example. This is one attempt at an implementation:
public Mono<Aggregate> findByAggregateId(String aggregateId) {
return transactionalOperator.execute(status ->
reactiveMongoTemplate.findOne(query(where("aggregateId").is(aggregateId)), Document.class, "metadata")
.map(metadata -> {
Aggregate aggregate = new Aggregate(metadata.getString("aggregateId"), metadata.getLong("version"));
Flux<Message> messages = reactiveMongoTemplate.find(query(where("aggregateId").is(aggregateId)), Message.class, "messages");
aggregate.setMessages(messages);
return aggregate;
})
);
}
I totally understand that this won't work, since the messages Flux is not subscribed to within the transaction. But I can't figure out how to combine the outer Mono<Aggregate> with the inner Flux (messages) while retaining the non-blocking capabilities AND consistency (i.e. avoiding read skew).
One approach would be to change the Aggregate class to this:
public class Aggregate {
private final String aggregateId;
private final int version;
private Stream<Message> messages;
}
and change the findByAggregateId implementation to this:
public Mono<Aggregate> findByAggregateId(String aggregateId) {
return transactionalOperator.execute(status ->
reactiveMongoTemplate.findOne(query(where("aggregateId").is(aggregateId)), Document.class, "metadata")
.map(metadata -> {
Aggregate aggregate = new Aggregate(metadata.getString("aggregateId"), metadata.getLong("version"));
Stream<Message> messages = reactiveMongoTemplate.find(query(where("aggregateId").is(aggregateId)), Message.class, "messages").toStream();
aggregate.setMessages(messages);
return aggregate;
})
);
}
but calling toStream is a blocking operation so this is not right.
So what is the correct way to deal with this?
Hello, I have JSON data like this:
{
  "_id": ObjectId('5dfe907f80580559fedcc9b1'),
  "companyMail": "mail@gmail.com",
  "workers": [
    {
      "name": "name",
      "surName": "surname",
      "mail": "mail2@gmail.com",
      "password": "password",
      "companyMail": "mail@gmail.com"
    }
  ]
}
And I want to get a worker from workers:
{
  "name": "name",
  "surName": "surname",
  "mail": "mail2@gmail.com",
  "password": "password",
  "companyMail": "mail@gmail.com"
}
I'm writing this query:
collection.findOne({
  'companyMail': "mail@gmail.com",
  'workers.mail': "mail2@gmail.com",
});
But it gives me the whole document. I only want the worker I searched for. How can I do that with mongo_dart?
https://pub.dev/packages/mongo_dart
I found the solution. We should use aggregation, adding a projection stage to get only the matching element. In mongo_dart we can use the Filter object for that, like this:
final pipeline = AggregationPipelineBuilder()
.addStage(Match(where.eq('companyMail', companyMail).map['\$query']))
.addStage(Match(where.eq('customers.mail', customerMail).map['\$query']))
.addStage(Project({
"_id": 0, //You can use as:'customer' instead of this keyword.
"customers": Filter(input: '\$customers',cond: {'\$eq':["\$\$this.mail",customerMail]}).build(),
}))
.build();
final result = await DbCollection(_db, 'Companies')
.aggregateToStream(pipeline)
.toList();
The mongo_dart driver's API is limited and its documentation sparse, whereas the Node.js MongoDB driver is mature and well documented, so it may be better to do this server-side with Node. For example, in Node your problem can be solved with one query:
collection.find(
  {
    'companyMail': "mail@gmail.com",
    'workers.mail': "mail2@gmail.com",
  }).project({
    '_id': 0, 'workers': 1
  });
Pass options to project the workers field only:
db.company.findOne(
  {
    'companyMail': "mail@gmail.com",
    'workers.mail': "mail2@gmail.com",
  },
  {
    "workers": 1,
    _id: 0
  }
);
In mongo_dart, looking at their API, you can use aggregation as follows:
final pipeline = AggregationPipelineBuilder()
    .addStage(Match(where.eq('companyMail', 'mail@gmail.com').map['\$query']))
    .addStage(Project({
      '_id': 0,
      'workers': 1,
    }))
    .build();
final result = await DbCollection(db, 'company')
    .aggregateToStream(pipeline)
    .toList();
// result[0] contains the matched company with only its workers field
I am using ElasticsearchRepository and I want to search for some keywords. What I want to query is something like:
// Get all results which contain at least one of these keywords
public List<Student> searchInBody(List<String> keywords);
I have already created a query for a single keyword and it works, but I don't know how to create a query for multiple keywords. Is there a way to do this?
@Repository
public interface StudentRepository extends
        ElasticsearchRepository<Student, String> {
    public List<Student> findByNameOrderByCreateDate(String name);
    @Query("{\"query\" : {\"match\" : {\"_all\" : \"?0\"}}}")
    List<ParsedContent> searchInBody(String keyword);
}
Yes, you can pass an array of String objects to an ElasticsearchRepository query.
Elasticsearch provides the terms query for that.
You also have to use JSONArray instead of List<String>, i.e. you have to convert your List<String> to a JSONArray (reason: check the syntax of the Elasticsearch query provided below).
Here is how you can use it in your code:
@Query("{\"bool\": {\"must\": {\"terms\": {\"your_field_name\":?0}}}}")
List<ParsedContent> searchInBody(JSONArray keyword);
The result will contain objects with at least one keyword from your keyword array.
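The JSONArray here comes from the org.json library. Just to illustrate what ends up substituted for ?0, the conversion from a List<String> to the JSON array literal can be sketched with the standard library alone (this assumes the keywords contain no characters that need JSON escaping):

```java
import java.util.List;
import java.util.stream.Collectors;

public class KeywordsToJson {
    // Renders a list of keywords as a JSON array literal,
    // e.g. ["keyword_1","keyword_2"] — the shape the terms query expects.
    // Assumes the keywords need no JSON escaping.
    static String toJsonArray(List<String> keywords) {
        return keywords.stream()
                .map(k -> "\"" + k + "\"")
                .collect(Collectors.joining(",", "[", "]"));
    }

    public static void main(String[] args) {
        System.out.println(toJsonArray(List.of("keyword_1", "keyword_2")));
        // ["keyword_1","keyword_2"]
    }
}
```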
The following is the REST request representation of the above Java code, which you can use in your Kibana console or in a terminal:
GET your_index_name/_search
{
"query" : {
"bool": {
"must": {
"terms": {
"your_field_name":["keyword_1", "keyword_2"]
}
}
}
}
}
Note: for more options, you can check the terms-set query.
I am learning FHIR and trying to implement it with the MEAN stack, which uses MongoDB as the database, and I would like your help on a question.
When I get a POST request for a new resource document, I insert it into MongoDB. MongoDB adds an _id (ObjectId) to the resource as a unique id, so when I retrieve the document it has the extra field _id. I think this makes the resource non-compliant, since _id is not defined in the resource.
How should I handle this issue? Does this extra _id matter in a FHIR resource?
Best regards,
Autorun
So, I'm also using MongoDB, along with mongoose, to implement FHIR in Node.js.
I've just added a field called id in the mongoose schema definition, like this:
import mongoose from 'mongoose';
import shortid from 'shortid';
class resource extends mongoose.Schema {
constructor(schema) {
super();
this.add({
// just added this to make MongoDB use shortid
_id: { type: String, default: shortid.generate },
id: { type: {} },
id_: { type: {} },
implicitRules: { type: String },
implicitRules_: { type: {} },
language: { type: String },
language_: { type: {} },
...schema
});
}
}
export default resource;
and then the _id field takes its value from id when creating/updating a resource.
Here is my code for upserting a Patient resource:
upsert(root, params, context, ast) {
const projection = this.getProjection(ast);
if (!params.data.id) {
params.data.id = shortid.generate();
}
params.data.resourceType = 'Patient';
const upserted = model
.findByIdAndUpdate(params.data.id, params.data, {
new: true,
upsert: true,
select: projection
})
.exec();
if (!upserted) {
throw new Error('Error upserting');
}
return upserted;
}
Yes, the _id will not be conformant. Can't you change it to 'id'?
Perhaps you can take a look at the Spark server, which also uses MongoDB to store the resources. In the Spark.Store.Mongo namespace you will see some helper methods to convert a Mongo BsonDocument to a FHIR resource.
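Whichever route you take, the essential step is remapping Mongo's _id onto the FHIR id (or dropping it) before the resource is serialized. A minimal sketch using a plain Map as a stand-in for the stored document (illustrative only, not a FHIR or driver API):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class FhirIdMapper {
    // Copies the stored document, mapping Mongo's _id onto FHIR's id
    // (unless an id is already present) and dropping _id so the
    // serialized output stays FHIR-conformant.
    static Map<String, Object> toFhir(Map<String, Object> stored) {
        Map<String, Object> out = new LinkedHashMap<>(stored);
        Object mongoId = out.remove("_id");
        if (mongoId != null) {
            out.putIfAbsent("id", mongoId.toString());
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, Object> stored = new LinkedHashMap<>();
        stored.put("_id", "5dfe907f80580559fedcc9b1");
        stored.put("resourceType", "Patient");
        System.out.println(toFhir(stored));
    }
}
```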
I have this schema articleSchema:
{
//other attributes
tags : [ String ]
}
I want to search for articles based on certain criteria and retrieve only the tags, then create a single array of the tags from all the articles, without duplicates.
Is there any built-in functionality in MongoDB or Mongoose for doing this?
As pointed out by WiredPrairie, distinct was the solution.
var query = { /** Query for the articles that I want tags from */ };
//Using mongoose-q
return Article.distinctQ('tags', query);
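distinct deduplicates server-side. If you ever need the same flattening at the application level (for example when the articles are already in memory), the logic is just a set union preserving first-seen order; a plain-Java sketch of that logic:

```java
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

public class TagCollector {
    // Flattens per-article tag lists into one list without duplicates,
    // preserving first-seen order — the application-level equivalent
    // of what distinct does server-side.
    static List<String> uniqueTags(List<List<String>> tagsPerArticle) {
        Set<String> seen = new LinkedHashSet<>();
        for (List<String> tags : tagsPerArticle) {
            seen.addAll(tags);
        }
        return List.copyOf(seen);
    }

    public static void main(String[] args) {
        List<List<String>> tags = List.of(
                List.of("mongo", "node"),
                List.of("node", "mongoose"));
        System.out.println(uniqueTags(tags)); // [mongo, node, mongoose]
    }
}
```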
Try it:
db.getCollection('collection').distinct('tags', { /* your query */ }, function (err, results) {
    console.log(results);
});