I have two domain objects,
@Document
public class PracticeQuestion {
private int userId;
private List<Question> questions;
// Getters and setters
}
@Document
public class Question {
private int questionID;
private String type;
// Getters and setters
}
My JSON doc is like this,
{
"_id" : ObjectId("506d9c0ce4b005cb478c2e97"),
"userId" : 1,
"questions" : [
{
"questionID" : 1,
"type" : "optional"
},
{
"questionID" : 3,
"type" : "mandatory"
}
]
}
I have to update the "type" based on userId and questionId, so I have written a findBy query method inside the custom Repository interface,
public interface CustomRepository extends MongoRepository<PracticeQuestion, String> {
List<PracticeQuestion> findByUserIdAndQuestionsQuestionID(int userId,int questionID);
}
My problem is when I execute this method with userId as 1 and questionID as 3, it returns the entire questions list irrespective of the questionID. Is the query method name valid, or how should I write the query for nested objects?
Thanks for any suggestion.
Just use the @Query annotation on that method.
public interface CustomRepository extends MongoRepository<PracticeQuestion, String> {
@Query(value = "{ 'userId' : ?0, 'questions.questionID' : ?1 }", fields = "{ 'questions.questionID' : 1 }")
List<PracticeQuestion> findByUserIdAndQuestionsQuestionID(int userId, int questionID);
}
By adding the fields part of the @Query annotation, you are telling Mongo to only return that part of the document. Beware though, it still returns the entire document in the same format - just missing everything you did not specify. So your code will still have to return List<PracticeQuestion> and you will have to do:
for (PracticeQuestion pq : practiceQuestions) {
Question q = pq.getQuestions().get(0); // This should be your question.
}
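If you only need the matching array element back (rather than every element's questionID), MongoDB's positional projection may help. The following is only a sketch based on MongoDB's $ projection operator, not part of the original answer, so verify it against your Spring Data MongoDB version:
import java.util.List;
import org.springframework.data.mongodb.repository.MongoRepository;
import org.springframework.data.mongodb.repository.Query;

public interface CustomRepository extends MongoRepository<PracticeQuestion, String> {

    // The positional '$' projection keeps only the first questions element
    // matched by the query condition on 'questions.questionID'.
    @Query(value = "{ 'userId' : ?0, 'questions.questionID' : ?1 }", fields = "{ 'questions.$' : 1 }")
    List<PracticeQuestion> findByUserIdAndQuestionsQuestionID(int userId, int questionID);
}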
Property expressions
Property expressions can refer only to a direct property of the managed entity, as shown in the preceding example. At query creation time you already make sure that the parsed property is a property of the managed domain class. However, you can also define constraints by traversing nested properties. Assume Persons have Addresses with ZipCodes. In that case, a method name of
List<Person> findByAddressZipCode(ZipCode zipCode);
creates the property traversal x.address.zipCode. The resolution algorithm starts by interpreting the entire part (AddressZipCode) as the property and checks the domain class for a property with that name (uncapitalized). If the algorithm succeeds, it uses that property. If not, the algorithm splits up the source at the camel-case parts from the right side into a head and a tail and tries to find the corresponding property, in our example AddressZip and Code. If the algorithm finds a property with that head, it takes the tail and continues building the tree down from there, splitting the tail up in the way just described. If the first split does not match, the algorithm moves the split point to the left (Address, ZipCode) and continues.
Although this should work for most cases, it is possible for the algorithm to select the wrong property. Suppose the Person class has an addressZip property as well. The algorithm would already match in the first split round, essentially choose the wrong property, and finally fail (as the type of addressZip probably has no code property). To resolve this ambiguity you can use _ inside your method name to manually define traversal points. So our method name would end up like so:
UserDataRepository:
List<UserData> findByAddress_ZipCode(ZipCode zipCode);
UserData findByUserId(String userId);
ProfileRepository:
Profile findByProfileId(String profileId);
UserDataRepositoryImpl:
UserData userData = userDataRepository.findByUserId(userId);
Profile profile = profileRepository.findByProfileId(userData.getProfileId());
userData.setProfile(profile);
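Put together, the manual composition above might look roughly like this; the surrounding class (shown here as a plain service to keep the sketch self-contained) and its wiring are assumptions based on the snippets above:
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

@Service
public class UserDataService {

    @Autowired
    private UserDataRepository userDataRepository;

    @Autowired
    private ProfileRepository profileRepository;

    public UserData getUserDataWithProfile(String userId) {
        // Load the user first, then resolve its Profile via the stored profileId
        UserData userData = userDataRepository.findByUserId(userId);
        Profile profile = profileRepository.findByProfileId(userData.getProfileId());
        userData.setProfile(profile);
        return userData;
    }
}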
Sample POJO:
public class UserData {
private String userId;
private String status;
private Address address;
private String profileId;
//New Property
private Profile profile;
//TODO:setter & getter
}
public class Profile {
private String email;
private String profileId;
}
For the above Document/POJO in your Repository Class:
UserData findByProfile_Email(String email);
For ref : http://docs.spring.io/spring-data/data-commons/docs/1.6.1.RELEASE/reference/html/repositories.html
You need to use the Mongo Aggregation framework:
1) Create a custom method for the Mongo repository: Add custom method to Repository
UnwindOperation unwind = Aggregation.unwind("questions");
MatchOperation match = Aggregation.match(Criteria.where("userId").is(userId).and("questions.questionId").is(questionID));
Aggregation aggregation = Aggregation.newAggregation(unwind,match);
AggregationResults<PracticeQuestionUnwind> results = mongoOperations.aggregate(aggregation, "PracticeQuestion",
PracticeQuestionUnwind.class);
return results.getMappedResults();
2) You need to create a class (because the unwind operation changes the result structure) like below:
public class PracticeQuestionUnwind {
private String userId;
private Question questions;
// Getters and setters
}
This will give you only those results which match the provided userId and questionId.
Result for userId 1 and questionId 111:
{
"userId": "1",
"questions": {
"questionId": "111",
"type": "optional"
}
}
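For completeness, a custom repository implementation wiring steps 1 and 2 together might look roughly like this; the fragment interface and class names are illustrative, not from the original answer, and the matched field name should follow your actual documents (questionID in the original question):
import java.util.List;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.mongodb.core.MongoOperations;
import org.springframework.data.mongodb.core.aggregation.Aggregation;
import org.springframework.data.mongodb.core.aggregation.AggregationResults;
import org.springframework.data.mongodb.core.aggregation.MatchOperation;
import org.springframework.data.mongodb.core.aggregation.UnwindOperation;
import org.springframework.data.mongodb.core.query.Criteria;

public interface PracticeQuestionRepositoryCustom {
    List<PracticeQuestionUnwind> findQuestion(int userId, int questionID);
}

public class PracticeQuestionRepositoryCustomImpl implements PracticeQuestionRepositoryCustom {

    @Autowired
    private MongoOperations mongoOperations;

    @Override
    public List<PracticeQuestionUnwind> findQuestion(int userId, int questionID) {
        // Unwind the questions array so each element becomes its own document,
        // then match on both the userId and the nested question id.
        UnwindOperation unwind = Aggregation.unwind("questions");
        MatchOperation match = Aggregation.match(
                Criteria.where("userId").is(userId).and("questions.questionId").is(questionID));
        Aggregation aggregation = Aggregation.newAggregation(unwind, match);

        AggregationResults<PracticeQuestionUnwind> results =
                mongoOperations.aggregate(aggregation, "PracticeQuestion", PracticeQuestionUnwind.class);
        return results.getMappedResults();
    }
}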
I too had a similar issue. For that I added $ before the nested class attributes.
Try the below query:
@Query(value = "{ 'userId' : ?0, 'questions.$questionID' : ?1 }")
List<PracticeQuestion> findPracticeQuestionByUserIdAndQuestionsQuestionID(int userId, int questionID);
Related
I have two collections in MongoDB, Parent and Child. Below is the structure of the document.
Parent: {
'_id': 'some_value',
'name': 'some_value',
'child': {
'_id': 'some_value',
'name': 'some_value',
'key':'value'
}
}
I am trying to pass a list of child ids to a MongoRepository method in order to retrieve Parent objects, but I am getting null values. Below is my code.
import org.bson.types.ObjectId;
interface MyRepository extends CrudRepository<Parent, Long> {
@Query("{'child._id': {$in : ?0 }}")
List<Parent> findByChild_IdIn(List<ObjectId> childIds);
}
I am invoking my method as shown below.
import org.bson.types.ObjectId;
List<String> childrenIds = getChildrenIdList();
List<ObjectId> childIds = childrenIds.stream().map(childrenId -> new ObjectId(childrenId)).collect(Collectors.toList());
List<Parent> parentsList = myRepository.findByChild_IdIn(childIds);
What am I doing wrong here? Why is it giving null values?
TL;DR
Annotate the id field in the inner document with @MongoId.
Longer answer
As mentioned in this answer, you would usually not require an _id field in a child document:
The _id field is a required field of the parent document, and is typically not necessary or present in embedded documents. If you require a unique identifier, you can certainly create them, and you may use the _id field to store them if that is convenient for your code or your mental model; more typically, they are named after what they represent (e.g. "username", "otherSystemKey", etc). Neither MongoDB itself, nor any of the drivers will automatically populate an _id field except on the top-level document.
In other words, unlike in RDBMS "normalized" schemas, here you would not require the children documents to have a unique id.
You may still want your children documents to contain some kind of "id" field that refers to a document in another collection (e.g. "GoodBoys").
But indeed, for whatever reason, you may require a "unique" field in those inner children documents.
The following model would support the proposed structure:
@Data
@Builder
@Document(collection = "parents")
public class Parent {
@Id
private String id;
private String name;
private Child child;
@Data
@Builder
public static class Child {
@MongoId
private String id;
private String name;
private String key;
}
}
You can retrieve parents by list of ids with either of:
public interface ParentsRepository extends MongoRepository<Parent, String> {
// Derived query method
List<Parent> findByChildIdIn(List<String> ids);
// Native Query
@Query("{'child._id': {$in: ?0}}")
List<Parent> findByChildrenIds(List<String> ids);
}
Add a few parents with something like:
parentsRepository.saveAll(Arrays.asList(
Parent.builder()
.name("parent1")
.child(Parent.Child.builder().id(new ObjectId().toString()).name("child1").key("value1").build())
.build(),
Parent.builder()
.name("parent2")
.child(Parent.Child.builder().id(new ObjectId().toString()).name("child2").key("value2").build())
.build()
));
The resulting entries in MongoDB will look like this:
{
"_id" : ObjectId("5e07384596d9077ccae89a8c"),
"name" : "parent1",
"child" : {
"_id" : "5e07384596d9077ccae89a8a",
"name" : "child1",
"key" : "value1"
},
"_class" : "com.lubumbax.mongoids.model.Parent"
}
Note two important things here:
The parent id in Mongo is an actual ObjectId that is de facto unique.
The child id in Mongo is a string that we can assume as being unique.
This approach works fine for this case, in which we have inserted the children ids from Java to MongoDB with new ObjectId().toString() or in whatever other way, as long as the resulting children id is just a string in MongoDB.
That means that the children ids are not, strictly speaking, ObjectIds in MongoDB.
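With that setup, looking parents up by their children ids could look roughly like this (the ids shown are illustrative, and parentsRepository is assumed to be injected):
// Assuming java.util.Arrays is imported and parentsRepository is injected
List<Parent> parents = parentsRepository.findByChildIdIn(
        Arrays.asList("5e07384596d9077ccae89a8a", "5e07384596d9077ccae89a8b"));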
@Id in the Child
If we annotate the children id field with @MongoId, the resulting query will be something like:
StringBasedMongoQuery: Created query Document{{child._id=Document{{$in=[5e0740e41095314a3401e49c, 5e0740e41095314a3401e49d]}}}} for Document{{}} fields.
MongoTemplate : find using query: { "child._id" : { "$in" : ["5e0740e41095314a3401e49c", "5e0740e41095314a3401e49d"]}} fields: Document{{}} for class: com.lubumbax.mongoids.model.Parent in collection: parents
If instead we annotate the children id field with @Id, the resulting query will be:
StringBasedMongoQuery: Created query Document{{child._id=Document{{$in=[5e0740e41095314a3401e49c, 5e0740e41095314a3401e49d]}}}} for Document{{}} fields.
MongoTemplate : find using query: { "child._id" : { "$in" : [{ "$oid" : "5e0740e41095314a3401e49c"}, { "$oid" : "5e0740e41095314a3401e49d"}]}} fields: Document{{}} for class: com.lubumbax.mongoids.model.Parent in collection: parents
Note the $oid there. The MongoDB Java driver expects the id attribute in MongoDB to be an actual ObjectId, so it tries to "cast" our string to an $oid.
The problem with that is that in MongoDB, the _id attribute of the children is simply a string and not a MongoDB ObjectId. Thus, our repository method won't find our parent!
All ObjectId()
What happens if we insert a new document into MongoDB where the child _id is an actual ObjectId rather than a string?
> db.parents.insert({
name: "parent3",
child: {
_id: ObjectId(),
name: "child3",
key: "value3"
}
});
The resulting entry is:
> db.parents.find({});
{
"_id" : ObjectId("5e074233669d34403ed6bcd2"),
"name" : "parent3",
"child" : {
"_id" : ObjectId("5e074233669d34403ed6bcd1"),
"name" : "child3",
"key" : "value3"
}
}
If we try to find this one now with the @MongoId annotated child _id field, we won't find it!
The resulting query would be:
StringBasedMongoQuery: Created query Document{{child._id=Document{{$in=[5e074233669d34403ed6bcd1]}}}} for Document{{}} fields.
MongoTemplate : find using query: { "child._id" : { "$in" : ["5e074233669d34403ed6bcd1"]}} fields: Document{{}} for class: com.lubumbax.mongoids.model.Parent in collection: parents
Why? Because now the _id attribute in MongoDB is an actual ObjectId and we are trying to query it as a plain string.
We might be able to work around that by tweaking the query with SpEL, but IMHO we are entering "Land of Pain".
That document would be found though, as we might expect, if we annotated the child _id field with @Id:
StringBasedMongoQuery: Created query Document{{child._id=Document{{$in=[5e074233669d34403ed6bcd1]}}}} for Document{{}} fields.
MongoTemplate : find using query: { "child._id" : { "$in" : [{ "$oid" : "5e074233669d34403ed6bcd1"}]}} fields: Document{{}} for class: com.lubumbax.mongoids.model.Parent in collection: parents
Once again, and as suggested at the top of this answer, I would discourage you from using ObjectId in children documents.
Some conclusions
As stated above, I would discourage anyone from using ObjectId in children documents.
A 'parents' entry in our MongoDB collection is what is unique. What that document contains may, or may not, represent a Parent entity in our Java application.
This is one of the pillars of NoSQL, in comparison to RDBMS where we would "tend to" normalize our schemas.
From that point of view, there is no such thing as "part of the information in a document is unique", e.g. the nested children being unique.
The children would better be called "children elements" (maybe there is a better name for this) rather than "children documents", because they are not actual documents but rather a part of a "parents" document (for those parents that happen to have a "child" element in their structure).
If we still want to somehow "link" the nested children elements to "exclusive" (or unique) documents in another collection, we can do so indeed (see below).
Refer from nested element to documents in another collection
That, in my opinion, is a better practice.
The idea is that a document in a collection contains all we need in order to represent an entity.
For instance, in order to represent parents, apart from their names, age, and any other information specific to the parents, we might just want to know the name of their child (in a world where parents have only one child).
If we needed to access more information about those children at some point, we may have another collection with detailed children information. We could thus "link" or "refer" from the nested children in the parents collection to the documents in the children collection.
Our Java model would look something like this for the Children:
@Data
@Builder
@Document(collection = "children")
public class Child {
@Id
private String id;
private String name;
private String key;
private Integer age;
}
A Parent entity would not have any "hard" relation to a Child entity:
@Data
@Builder
@Document(collection = "parents")
public class Parent {
@Id
private String id;
private String name;
private ChildData child;
@Data
@Builder
public static class ChildData {
private String id;
private String name;
}
}
Note that in this case I prefer to name the nested children objects Parent.ChildData, because it contains just the information I need about a child in order to represent a Parent entity.
Note as well that I am not annotating the nested child id field with @Id. By convention, MappingMongoConverter will in this case map a field called id to a Mongo _id field anyway.
Given that there is no relation in MongoDB (as we understand them in RDBMS) between a nested child id and the children ids, we may even rename the ChildData.id field to ChildData.link.
You can see an example of this idea in the LinkitAir PoC.
There, the Flight entity model (stored in MongoDB in a flights collection) contains a nested AirportData document that, via its code field, "refers" to Airport entities (stored in MongoDB in an airports collection).
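To make the idea concrete, resolving such a reference could look roughly like this; the ChildRepository interface and the service-style lookup are assumptions for illustration, not code from the original answer:
public interface ChildRepository extends MongoRepository<Child, String> {
}

// Somewhere in a service: resolve the nested ChildData element to the full child document
Parent parent = parentsRepository.findById(parentId)
        .orElseThrow(() -> new IllegalArgumentException("unknown parent: " + parentId));
Child child = childRepository.findById(parent.getChild().getId()).orElse(null);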
Is there a recommended way to go about dealing with documents that don't have the _class field with spring-data-couchbase (if there is one)? Trying it simply just throws an exception, as expected.
Edit: Apologies if this was a bit too vague, let me add a bit more context.
I want to fetch data from Couchbase for some student by name. The repository looks something like this:
@Repository
public interface StudentRepository extends CouchbaseRepository<StudentDocument, String> {
Optional<StudentDocument> findByName(String name);
}
Now, the documents in Couchbase don't have the _class field, or say we are entering a different "key" and "value" for the _class field because we don't want to rely on it, so this method fails. I sort of hacked a workaround for this using:
@Override
public Student getStudent(String name) {
N1qlQuery query = N1qlQuery.simple(String.format("select *, META().id AS _ID, META().cas AS _CAS" +
" from student where name = \'%s\';", name));
return Optional.ofNullable(studentRepository.getCouchbaseOperations()
.findByN1QL(query, StudentWrapper.class)
.get(0))
.map(StudentWrapper::getStudent)
.orElseGet(() -> {
throw new HttpClientErrorException(HttpStatus.NOT_FOUND);
});
}
I was wondering if there is an alternate way of achieving this.
When using derived query methods, Spring Data Couchbase will automatically include the _class (or whatever attribute you have defined as your type) for you:
public interface AreaRepository extends CouchbaseRepository<Area, String> {
//The _class/type is automatically included by Couchbase
List<Area> findByBusinessUnityIdAndRemoved(String businessId, boolean removed);
}
However, if you want to use N1QL, you have to add the #{#n1ql.filter} :
public interface BusinessUnityRepository extends CouchbaseRepository<BusinessUnity, String>{
@Query("#{#n1ql.selectEntity} where #{#n1ql.filter} and companyId = $2 and $1 within #{#n1ql.bucket}")
BusinessUnity findByAreaRefId(String areaRefId, String companyId);
}
The #{#n1ql.filter} will automatically add the filter by type for you.
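Applied to the student example from the question, that might look roughly like the sketch below (assuming a StudentDocument entity; note that both variants still rely on the documents carrying the type attribute the converter expects, which is exactly what was missing in the question):
public interface StudentRepository extends CouchbaseRepository<StudentDocument, String> {

    // Derived query: the type filter is added automatically
    Optional<StudentDocument> findByName(String name);

    // Hand-written N1QL: the type filter has to be added explicitly via #{#n1ql.filter}
    @Query("#{#n1ql.selectEntity} where #{#n1ql.filter} and name = $1")
    Optional<StudentDocument> findStudentByName(String name);
}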
I'm having a duplicate query when performing a simple query. The files:
SomeClass.java:
@Document(collection = "someCollection")
public class SomeClass {
private String _id;
private String someField;
//...
}
SomeClassRepository.java:
@Repository
public interface SomeClassRepository extends MongoRepository<SomeClass, String> {
}
Service.java:
@Autowired
private SomeClassRepository someClassRepository;
public SomeClass find(String id){
return someClassRepository.findOne(id);
}
application.properties:
logging.level.org.springframework.data.mongodb.core.MongoTemplate=DEBUG
Log file:
14:14:46.514 [qtp1658534033-19] DEBUG o.s.data.mongodb.core.MongoTemplate - findOne using query: { "_id" : "40c23743-afdb-45ca-9231-c467f8e8b320"} fields: null for class: class com.somepackage.SomeClass in collection: someCollection
14:14:46.534 [qtp1658534033-19] DEBUG o.s.data.mongodb.core.MongoTemplate - findOne using query: { "_id" : "40c23743-afdb-45ca-9231-c467f8e8b320"} in db.collection: someDatabase.someCollection
I also tried to:
1) use @Id annotation with a field named "someId"
2) use @Id annotation with a field named "id"
3) use a field named "id" (without @Id annotation)
Unfortunately, I always have two queries to the database.
Anyone knows how to perform a single query?
Thanks!
It's only a single query that is sent to the database. Your log messages are coming from two different places.
First place: the doFindOne method - link; Second place: the FindOneCallback class - link
You can also confirm this by looking at the database logs. More info here
I'm trying to implement a REST API using RepositoryRestResource and RestTemplate.
It all works rather well, except for loading @DBRef's.
Consider this data model:
public class Order
{
@Id
String id;
@DBRef
Customer customer;
... other stuff
}
public class Customer
{
@Id
String id;
String name;
...
}
And the following repository (similar one for customer)
@RepositoryRestResource(excerptProjection = OrderSummary.class)
public interface OrderRestRepository extends MongoRepository<Order, String> {}
The REST API returns the following JSON:
{
"id" : 4,
**other stuff**,
"_links" : {
"self" : {
"href" : "http://localhost:12345/api/orders/4"
},
"customer" : {
"href" : "http://localhost:12345/api/orders/4/customer"
}
}
}
Which, if loaded correctly by the RestTemplate, will create a new Order instance with customer = null.
Is it possible to eagerly resolve the customer on the repository end and embed the JSON?
Eagerly resolving dependent entities in this case will most probably raise the N+1 database access problem.
I don't think there is a way to do that using default Spring Data REST/Mongo repositories implementation.
Here are some alternatives:
Construct your own custom @RestController method that would access the database and construct the desired output
Use Projections to populate fields from related collection, e.g.
@Projection(name = "main", types = Order.class)
public interface OrderProjection {
...
// either
@Value("#{customerRepository.findById(target.customerId)}")
Customer getCustomer();
// or
@Value("#{customerService.getById(target.customerId)}")
Customer getCustomer();
// or
CustomerProjection getCustomer();
}
@Projection(name = "main", types = Customer.class)
public interface CustomerProjection {
...
}
The customerService.getById method can employ caching (e.g. using Spring's @Cacheable annotation) to mitigate the performance penalty of accessing the database additionally for each result set record; a rough sketch follows after this list of alternatives.
Add redundancy to your data model and store copies of the Customer object fields in the Order collection on creation/update.
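As a rough sketch of the caching idea mentioned above (assuming a CustomerRepository exists, caching is enabled with @EnableCaching, and Spring Data 2.x-style findById; this is an illustration, not code from the original project):
import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

@Service
public class CustomerService {

    private final CustomerRepository customerRepository;

    public CustomerService(CustomerRepository customerRepository) {
        this.customerRepository = customerRepository;
    }

    // Repeated lookups for the same id are served from the "customers" cache
    @Cacheable("customers")
    public Customer getById(String id) {
        return customerRepository.findById(id).orElse(null);
    }
}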
This kind of problem arises, in my opinion, because MongoDB doesn't support joining different document collections very well (its "$lookup" operator has significant limitations in comparison to the common SQL JOINs).
MongoDB docs also do not recommend using @DBRef fields unless joining collections hosted in distinct servers:
Unless you have a compelling reason to use DBRefs, use manual references instead.
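In that spirit, a manual reference in the Order model from the question could look roughly like this (a sketch, assuming a CustomerRepository to resolve the reference; none of this is the author's code):
public class Order {

    @Id
    String id;

    // Manual reference: store only the customer's id instead of using @DBRef
    String customerId;

    // ... other stuff
}

// Resolved explicitly wherever the full customer is needed, e.g. in a service:
Customer customer = customerRepository.findById(order.getCustomerId()).orElse(null);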
Here's also a similar question.
I'm working on a Grails project, and I want to use GORM in order to read/write documents in MongoDB. I have a specific scenario I want to implement, and I've been trying to make this work for 2-3 days now, but I can't find out what I'm doing wrong. I've read the Grails documentation and many related posts on the web, but still nothing.
For the sake of simplicity I'm working on a simple enough scenario, which is the following:
I have 4 Domain classes:
Someone.groovy
class Someone {
static mapWith = 'mongo'
static embedded = ['animal']
static constraints = {}
String name
Animal animal
}
Animal.groovy
class Animal {
static mapWith = 'mongo'
static mapping = {
discriminator column: "type"
}
static belongsTo = Someone
static constraints = {}
}
Dog.groovy
class Dog extends Animal {
static mapWith = 'mongo'
String dogName
static mapping = {
discriminator value: "dog"
}
static belongsTo = Someone
static constraints = {}
}
Cat.groovy
class Cat extends Animal {
static mapWith = 'mongo'
String catName
static mapping = {
discriminator value: "cat"
}
static belongsTo = Someone
static constraints = {}
}
Now, in a service class called CrudService, I have a method, e.g. foo(), and in that method I create a new Someone object and save it to my Mongo database.
Someone someone = new Someone(name: "A name", animal: new Dog(dogName: "Azor"))
someone.save(flush: true)
When I check my MongoDB collection to see what was saved, I see:
{
"_id" : NumberLong(21),
"animal" : {
"_class" : "Dog",
"dogName" : "Azor"
},
"name" : "A name",
"version" : 0
}
So, as you can see, the document was saved, but the discriminator column name (and value) were completely ignored. The document was saved with "_class" as the column name (which is the default discriminator column name) and the class name (in this case Dog) as its value.
Can anyone please help me and tell me what I'm doing wrong here?
Is there a chance that the discriminator feature of GORM works correctly for RDBMS but not for NoSQL databases?
(Currently I'm also looking at Morphia in order to see whether it can provide a solution to my problem, but I would prefer to use GORM since it is the default ORM for Grails).
Thank you in advance for any suggestion.