MongoDB save of embedded document succeeds; however, the next read does not have the changes. Why?

Update
Ockham's razor sliced through this problem.
It turns out the issue was twofold:
The document class did not have an _id field, i.e. it was missing:
@Id
private String id;
So the update() method would insert a new record rather than update the existing one.
The 'find' code used findOne()
For some reason findOne() appeared to behave differently in different environments (i.e., in local dev environments it would retrieve the 'most recent' document, but in our server environment it retrieved the 'oldest'). Whatever the case, something in the local environment masked the problem.
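For reference, a minimal sketch of the corrected mapping (the annotations are Spring Data MongoDB's; only the class and field names come from the question):

```java
import org.springframework.data.annotation.Id;
import org.springframework.data.mongodb.core.mapping.Document;

@Document
public class BidOffers {

    // Without this field, save() cannot match an existing document
    // by _id, so each write inserts a new document instead of
    // updating the one that was read.
    @Id
    private String id;

    SuppliersOffers offers;
}
```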
TLDR
'Save and immediately retrieve' a MongoDB document with an embedded document (i.e., a field holding a Java object/JSON) does not show the expected updates.
The problem appears intermittently, and only in some environments.
Background
I'm new to Mongo, so I don't know the tricks/techniques/gotchas etc. I'm versed in relational DBs and transactions, so this error threw me.
for better or worse, I designed a mongo collection that looks like this:
@Document
public class BidOffers {
...
// note 'Suppliers' here is plural, ie. offers from multiple suppliers
SuppliersOffers offers;
where SuppliersOffers object simply has a list of SupplierOffer objects
public class SuppliersOffers {
List<SupplierOffer> offers = new ArrayList<>();
and SupplierOffer just has a 'supplier code' and a price
public class SupplierOffer {
String supplierCode;
BigDecimal price;
}
Usecase
In this usecase:
Retrieve the BidOffers document from MongoDB
Note that the document has only one offer
Add a new offer to the list
Save the document
Print the saved document to the log and confirm that it now has two offers in the list, not just one
Some (short) time later (i.e., in the same thread) retrieve the document again
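Minus the Mongo round-trip, the steps above can be sketched with the question's classes (the constructor and the literal values are assumptions added for illustration):

```java
import java.math.BigDecimal;
import java.util.ArrayList;
import java.util.List;

class SupplierOffer {
    String supplierCode;
    BigDecimal price;

    SupplierOffer(String supplierCode, BigDecimal price) {
        this.supplierCode = supplierCode;
        this.price = price;
    }
}

class SuppliersOffers {
    List<SupplierOffer> offers = new ArrayList<>();
}

public class Usecase {
    public static void main(String[] args) {
        // Steps 1-2: the "retrieved" document initially holds one offer
        SuppliersOffers offers = new SuppliersOffers();
        offers.offers.add(new SupplierOffer("SUP-A", new BigDecimal("10.00")));

        // Step 3: add a new offer to the list
        offers.offers.add(new SupplierOffer("SUP-B", new BigDecimal("12.50")));

        // Steps 4-6: after save(), the persisted document and any
        // immediate re-read should show both offers
        System.out.println(offers.offers.size()); // 2
    }
}
```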
Expected Results
retrieved document has two offers
Actual Result
retrieved document has only one offer
More details
Java 8
Spring Boot 2.x
Problem appears on MongoDB 4.0 (AWS managed service)
Problem appears regularly, but not 100% consistently
I do not see this problem when testing locally (MongoDB 3.6.8)

Related

JPQL can not prevent cache for JPQL query

Two (JSF + JPA + EclipseLink + MySQL) applications share the same database. One application runs scheduled tasks, while the other creates tasks for the schedules. The tasks created by the first application are collected by queries in the second one without any issue. The second application updates fields in the tasks, but the changes made by the second application are not reflected when queried via JPQL.
I have added QueryHints.CACHE_USAGE as CacheUsage.DoNotCheckCache, still, the latest updates are not reflected in the query results.
The code is given below.
How can I get the latest updates done to the database from a JPQL query?
public List<T> findByJpql(String jpql, Map<String, Object> parameters, boolean withoutCache) {
    TypedQuery<T> qry = getEntityManager().createQuery(jpql, entityClass);
    for (Map.Entry<String, Object> entry : parameters.entrySet()) {
        String name = entry.getKey();
        Object value = entry.getValue();
        if (value instanceof Date) {
            qry.setParameter(name, (Date) value, TemporalType.DATE);
        } else {
            qry.setParameter(name, value);
        }
    }
    if (withoutCache) {
        qry.setHint(QueryHints.CACHE_USAGE, CacheUsage.DoNotCheckCache);
    }
    return qry.getResultList();
}
The CacheUsage settings affect what EclipseLink can query using what is in memory, but not what happens after it goes to the database for results.
It seems you don't want to outright avoid the cache but to refresh it, I assume, so that the latest changes become visible. This is a very common situation when multiple apps and levels of caching are involved, so there are many different solutions you might want to look into, such as manual invalidation or, if both apps are JPA-based, cache coordination (so one app can send an invalidation event to the other). You can also control this on specific queries with the "eclipselink.refresh" query hint, which forces the query to refresh the data within the cached objects with what is returned from the database. Please take care with it: if used in a local EntityManager, any modified entities returned by the query will also be refreshed and the changes lost.
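Using the EclipseLink constants, that query-level hint looks like this (a small sketch; the helper method name is made up, and the org.eclipse.persistence jars are assumed to be on the classpath):

```java
import javax.persistence.TypedQuery;

import org.eclipse.persistence.config.HintValues;
import org.eclipse.persistence.config.QueryHints;

final class RefreshHint {

    // QueryHints.REFRESH is the constant for "eclipselink.refresh".
    // Setting it to HintValues.TRUE forces the query to overwrite the
    // cached objects with the state returned from the database.
    static <T> TypedQuery<T> withRefresh(TypedQuery<T> qry) {
        qry.setHint(QueryHints.REFRESH, HintValues.TRUE);
        return qry;
    }
}
```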
References for caching:
https://wiki.eclipse.org/EclipseLink/UserGuide/JPA/Basic_JPA_Development/Caching
https://www.eclipse.org/eclipselink/documentation/2.6/concepts/cache010.htm
Make the entity not depend on the cache by adding the following annotation:
@Cache(
type=CacheType.NONE, // Cache nothing
expiry=0,
alwaysRefresh=true
)

Spring data and MongoDB - bidirectional connection of documents

I have two Documents in my Spring data - MongoDB application:
The first one is Contact and looks like this:
public class Contact {
...
private List<Account> accounts;
and the second one is Account and looks like this:
public class Account {
...
private Contact contact;
My question now is whether there is a better way than:
1. create contact object
2. save contact object into database
3. create account object
4. set contact object into account object
5. save account object into database
6. set created account object into contact object
7. update contact object
That is a lot of steps, and I would like to avoid such a long sequence just to connect Contact and Account bidirectionally.
Try this approach
MongoDB is a NoSQL DB, so there is no need to preserve an order such as creating and storing the contact object first and then doing the rest sequentially.
Maintain a sequence for the Contact and Account objects. Before storing the two records, get the next number in the sequence and insert the Contact and Account documents.
References for autoincrement sequence
https://docs.mongodb.com/v3.0/tutorial/create-an-auto-incrementing-field/
https://www.tutorialspoint.com/mongodb/mongodb_autoincrement_sequence.htm
Pseudo Code:
Get the next Sequence of Contact and Account Id
Add the id's to respective documents
Insert the Documents in Mongodb
While retrieving the records you can use $lookup, which is a left outer join.
Please note that a loss of data integrity can happen if one insert succeeds and the other does not for some reason.
We don't have transaction support across collections in MongoDB; more info.
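One common way to implement such a counter with Spring Data MongoDB is findAndModify with an $inc update; this is a sketch only, and the SequenceCounter class, collection layout, and method names are assumptions, not something from the question:

```java
import org.springframework.data.mongodb.core.FindAndModifyOptions;
import org.springframework.data.mongodb.core.MongoOperations;
import org.springframework.data.mongodb.core.query.Criteria;
import org.springframework.data.mongodb.core.query.Query;
import org.springframework.data.mongodb.core.query.Update;

// One counter document per sequence, e.g. "contact" and "account".
class SequenceCounter {
    String id;   // the sequence name, mapped to _id
    long seq;
}

class SequenceService {

    private final MongoOperations mongo;

    SequenceService(MongoOperations mongo) {
        this.mongo = mongo;
    }

    // Atomically increments and returns the next value for the named
    // sequence; upsert(true) creates the counter on first use.
    long next(String name) {
        SequenceCounter counter = mongo.findAndModify(
                Query.query(Criteria.where("id").is(name)),
                new Update().inc("seq", 1),
                new FindAndModifyOptions().returnNew(true).upsert(true),
                SequenceCounter.class);
        return counter.seq;
    }
}
```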

Laravel with mongodb belongsToMany sync

I'm working with Laravel 4 and the mongodb 2.0.4 module.
I have User and Role classes, and I'm trying to use the belongsToMany relation with the attach, detach, and sync methods.
User class
public function roles()
{
return $this->belongsToMany('Role', null, 'user_ids', 'role_ids');
}
Role class
public function users()
{
return $this->belongsToMany('User', null, 'role_ids', 'user_ids');
}
When I run attach method
$user = User::find($id);
$user->roles()->attach(array($role_id));
mongodb generates one of the queries wrong (or does it?)
user.update({"_id":{"$id":"54f8d7802228d5e42b000036"}},{"$addToSet":{"role_ds":{"$each":["54f8d7b02228d5e42b000037"]}}},{"multiple":true})
role.update({"_id":["54f8d7b02228d5e42b000037"]},{"$addToSet":{"user_ids":{"$each":["54f8d7802228d5e42b000036"]}}},{"multiple":true})
The user collection is updated, but the role collection stays intact.
Shouldn't it generate a query like this instead?
role.update({"_id":{"$id":"54f8d7b02228d5e42b000037"}},{"$addToSet":{"user_ids":{"$each":["54f8d7802228d5e42b000036"]}}},{"multiple":true})
This problem is present with both the attach and detach methods. Only sync runs correctly, but only if there is one element. If you run sync on multiple elements, one of the collections always stays intact because of the wrong query.
Am I missing something or there is really a problem with this relation?
Any help would be great. Thank you
Replace
$user->roles()->attach(array($role_id));
With
$user->roles()->attach($role_id);
If your parameter is not an array, you have to use the attach method; the sync method only accepts an array as its parameter. Here is a good explanation of both. Hope it is useful for you.

Create index in correct collection

I annotate a document field with @Indexed(unique = true) like so:
public class ADocumentWithUniqueIndex {
private static final long serialVersionUID = 1L;
@Indexed(unique = true)
private String iAmUnique;
public String getiAmUnique() {
return iAmUnique;
}
public void setiAmUnique(String iAmUnique) {
this.iAmUnique = iAmUnique;
}
}
When saving the object, I specify a custom collection:
MongoOperations mongoDb = ...
mongoDb.save(document, "MyCollection");
As a result I get:
A new document in "MyCollection"
An index in the collection "ADocumentWithUniqueIndex"
How can I create the index in "MyCollection" instead without having to explicitly specify it in the annotation?
BACKGROUND:
The default collection name is too ambiguous in our use case. We cannot guarantee that there wouldn't be two document classes with the same name in different packages, so we added the package name to the collection name.
Mapping a document to a collection is dealt with in an infrastructure component.
The implementation details like collection name etc. shouldn't leak into the individual documents.
I understand this is a bit of an "abstraction on top of an abstraction" smell but required since we had to support MongoDb and Windows Azure blob storage. Not anymore though...
This seemed like a fairly standard approach to hiding the persistence details in an infrastructure component. Any comments on the approach are appreciated as well.
It's kind of unusual to define the collection for an object to be stored in and then expect the index annotations to work. There are a few options you have here:
Use #Document on ADocumentWithUniqueIndex and configure the collection name manually. This will cause all objects of that class to be persisted into that collection of course.
Manually create indexes via MongoOperations.indexOps() into the collections you'd like to use. This would be more consistent to your approach of manually determining the collection name during persistence operations.
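The second option can be sketched like this (the collection and field names are taken from the question; the helper class is made up for illustration):

```java
import org.springframework.data.domain.Sort;
import org.springframework.data.mongodb.core.MongoOperations;
import org.springframework.data.mongodb.core.index.Index;

class IndexSetup {

    // Creates the unique index in the same custom collection used when
    // saving, mirroring what @Indexed would have done on the default
    // collection derived from the class name.
    static void ensureUniqueIndex(MongoOperations mongoDb) {
        mongoDb.indexOps("MyCollection")
               .ensureIndex(new Index().on("iAmUnique", Sort.Direction.ASC).unique());
    }
}
```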

How to get Hibernate Search to index

I have an app that I'm attempting to integrate hibernate search into. I'm using Hibernate Search 3.4.2. I have a domain class that looks like the following:
@Indexed
public class Group {
@Field(index = Index.TOKENIZED, store = Store.YES)
private String groupName;
}
In my test cases, I create a few Groups and save them to the database. Once stored in the database, I create the index and then search for given text strings. This seems to work.
The problem I'm having is that any new Groups created after the index has been created are not indexed. From what I've read, I thought that once the index is created, any new items persisted would be automatically indexed with the new values, but this doesn't seem to be the behavior I'm getting. Is there something I've missed in the way of configuration? Or do I have to do something manually to tell Hibernate Search that I've added a new object to be indexed?
Needless to say, I'm a bit confused...
[EDIT] I'm using JPA, so my Hibernate Search configuration is contained in my persistence.xml as follows:
<property name="hibernate.search.default.directory_provider" value="filesystem"/>
<property name="hibernate.search.default.indexBase" value="D:\var2\lucene\indexes"/>
I can see that the index files are created, and I can use Luke to view the contents, they just don't ever seem to get updated when I persist a new object.
As stated in the documentation "By default, every time an object is inserted, updated or deleted through Hibernate, Hibernate Search updates the according Lucene index".
What I would do is check my persistence.xml and see whether I have accidentally set hibernate.search.indexing_strategy = manual
If that's not the case, maybe you could try to force it and see if that works?
hibernate.search.indexing_strategy = event
Which framework are you using? Maybe check out the last post of this question.
// Jakob
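If the indexing strategy does turn out to be manual, a newly persisted entity can also be pushed into the index explicitly; a sketch against the Hibernate Search JPA API (the helper class here is made up):

```java
import javax.persistence.EntityManager;

import org.hibernate.search.jpa.FullTextEntityManager;
import org.hibernate.search.jpa.Search;

class ManualIndexer {

    // Wraps the plain EntityManager and adds the given (already
    // persisted, managed) entity to the Lucene index by hand.
    static void index(EntityManager em, Object entity) {
        FullTextEntityManager ftem = Search.getFullTextEntityManager(em);
        ftem.index(entity);
    }
}
```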