I want to use the reactive Spring Data Elasticsearch client, but I need a bulk operation. I checked the code repository on GitHub and I can't find any bulk operation in the reactive client.
Can anyone explain why it's not implemented?
Thanks in advance
The reactive operations in Spring Data Elasticsearch are quite a new addition, and as stated, for example, in the Javadoc for the ReactiveElasticsearchOperations class:
Interface that specifies a basic set of Elasticsearch operations executed in a reactive way.
The implementation is by no means exhaustive and there surely is quite some stuff missing.
So if you think something is missing, the best thing to do is to file an issue in Jira.
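In the meantime, if your entities are mapped to a repository, ReactiveCrudRepository.saveAll (inherited from Spring Data Commons) is probably the closest substitute: it is not a real bulk request, but it indexes a whole stream of entities without blocking between the single operations. A minimal sketch, with a hypothetical Article entity:

    import org.springframework.data.annotation.Id;
    import org.springframework.data.elasticsearch.annotations.Document;
    import org.springframework.data.repository.reactive.ReactiveCrudRepository;
    import reactor.core.publisher.Flux;

    // Hypothetical entity, for illustration only.
    @Document(indexName = "articles")
    class Article {
        @Id
        private String id;
        private String title;
        // getters/setters omitted
    }

    interface ArticleRepository extends ReactiveCrudRepository<Article, String> {
    }

    class ArticleIndexer {
        private final ArticleRepository repository;

        ArticleIndexer(ArticleRepository repository) {
            this.repository = repository;
        }

        // Not a true bulk request: the documents are indexed one by one,
        // but without blocking between the individual operations.
        Flux<Article> indexAll(Flux<Article> articles) {
            return repository.saveAll(articles);
        }
    }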
Related
I am new to Spring and MongoDB. When I have to process records from more than one collection, is it the better option to do a $lookup, or to write the join logic in Spring code?
There is no clear-cut answer; it depends on the following factors:
Data size
Number of collections which will be part of lookup
Index usage
Query efficiency.
It's better to evaluate both options and decide. This is a good article for understanding schema design.
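For a concrete feel of the trade-off, here is a minimal sketch of both approaches with the MongoDB Java driver, using hypothetical orders and customers collections:

    import static com.mongodb.client.model.Aggregates.lookup;
    import static com.mongodb.client.model.Filters.eq;

    import com.mongodb.client.MongoClient;
    import com.mongodb.client.MongoClients;
    import com.mongodb.client.MongoCollection;
    import com.mongodb.client.MongoDatabase;
    import java.util.Collections;
    import org.bson.Document;

    public class LookupExample {
        public static void main(String[] args) {
            try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
                MongoDatabase db = client.getDatabase("shop");
                MongoCollection<Document> orders = db.getCollection("orders");

                // Option 1: server-side join. Each order gets a "customer" array
                // holding the matching documents from the customers collection.
                for (Document doc : orders.aggregate(Collections.singletonList(
                        lookup("customers", "customerId", "_id", "customer")))) {
                    System.out.println(doc.toJson());
                }

                // Option 2: join in application code - one query per collection.
                Document order = orders.find(eq("_id", "order-1")).first();
                if (order != null) {
                    Document customer = db.getCollection("customers")
                            .find(eq("_id", order.get("customerId")))
                            .first();
                    System.out.println(customer);
                }
            }
        }
    }

As a rule of thumb, $lookup saves round trips but depends heavily on an index on the joined field, while the application-side join gives you more control at the cost of extra queries.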
Can I use mongoengine or djongo for ODM and pymongo for interaction with the db?
I've read these two posts about something related to my question:
Insert data by pymongo using mongoengine ORM in pyramid
Use MongoEngine and PyMongo together
But, I couldn't find what I'm looking for (I guess).
So here's what I'm trying to find:
Does this practice affect the performance of my application?
How well recommended is it?
And if it is recommended, and everything is fine, do I need to add an extra layer of security or something on top? I want to build an API using the model serializers that django-rest-framework-mongoengine offers, and then do what I have to do in the view of the API endpoint.
It could be djongo or something like it; what I want is just an ODM for serialization, for defining a structure for the API, and so on, while using pymongo for the queries, because according to what I've been reading, mongoengine can slow down interaction with the db.
The term "ORM" does not apply to MongoDB since MongoDB is non-relational. The proper term is "ODM" - object-document mapper.
Generally, a MongoDB ODM is built on top of a MongoDB driver. The functionalities of the ODM and the driver are complementary - the driver provides low-level database access and the ODM provides high-level features like schema, associations, callbacks.
If you want to use the high-level features, it makes sense to use an ODM. If you don't need any of those features and just want to perform basic CRUD operations, using a driver directly is more efficient. Some applications use both of these strategies depending on the operation that needs to be performed.
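Illustrated in Java (the same split applies to pymongo vs mongoengine in Python): the raw driver works with untyped documents, while an ODM layer such as Spring Data MongoDB maps a class and adds higher-level features on top of the same driver calls. The Person class and repository below are hypothetical:

    import com.mongodb.client.MongoClients;
    import com.mongodb.client.MongoCollection;
    import java.util.List;
    import org.bson.Document;
    import org.springframework.data.mongodb.repository.MongoRepository;

    // Driver level: raw documents, no mapping - direct and efficient for basic CRUD.
    class DriverCrud {
        void insertRaw() {
            MongoCollection<Document> people = MongoClients
                    .create("mongodb://localhost:27017")
                    .getDatabase("test")
                    .getCollection("people");
            people.insertOne(new Document("name", "Ada").append("age", 36));
        }
    }

    // ODM level: a mapped class plus a repository. The ODM adds schema,
    // associations and query derivation on top of the same driver calls.
    class Person {
        String id;
        String name;
        int age;
    }

    interface PersonRepository extends MongoRepository<Person, String> {
        // Derived query - the ODM turns the method name into a filter.
        List<Person> findByName(String name);
    }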
I am using Couchbase with Spring Data and wish to use Couchbase's bulkGet. Please let me know the following:
Is it possible via Spring Data?
If yes, can you share an example?
Is findAll (using _all view) comparable to bulkGet in terms of performance?
Can I fetch the _id along with the Couchbase document?
Environment: Couchbase 4.0, Spring Data 2.0.0.RELEASE, Java 8.
Thanks in Advance!
I assume you are asking about a bulk get in the context of repositories.
First, there is currently no complete support for a "bulkGet" in Spring Data Couchbase. Most of the implementation is based on the SDK's synchronous API, whereas a bulk get is usually done through the asynchronous API, using RxJava.
Note that there is no actual "bulkGet" operation at the protocol level in Couchbase; it's just the SDK issuing multiple single Gets and batching them together.
To answer your second question, the above is important. The bulk get pattern discussed in the Couchbase Java SDK documentation (here) gives a slight performance boost because, unlike in synchronous mode, we don't wait for the retrieval of one item before requesting the next.
The findAll() and findAll(Iterable) methods in Spring Data Couchbase both operate on top of a view, which lets them retrieve only documents that match the entity type of your repository, but introduces a level of indirection that can lower performance compared to a pure sequence of key/value gets.
So the closest you could get to a bulk operation like that in Spring Data Couchbase would be to know all the IDs you're interested in and then perform a findOne per ID.
In the near term, the code behind the findAll(Iterable) signature could maybe be improved by applying a bulk get pattern to all the provided IDs, but that would mean giving up the type check induced by the view, so I'm not sure...
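For completeness, here is roughly what that bulk get pattern looks like against the Couchbase Java SDK 2.x async API directly (the SDK that Spring Data Couchbase is built on); ids is assumed to be your list of known document IDs:

    import com.couchbase.client.java.Bucket;
    import com.couchbase.client.java.document.JsonDocument;
    import java.util.List;
    import rx.Observable;

    class BulkGetExample {
        // Fires all gets concurrently on the async API and collects the results.
        static List<JsonDocument> bulkGet(Bucket bucket, List<String> ids) {
            return Observable
                    .from(ids)
                    .flatMap(id -> bucket.async().get(id))
                    .toList()
                    .toBlocking()
                    .single();
        }
    }

Each returned JsonDocument carries its key via doc.id(), which is the closest equivalent to fetching the _id along with the document, covering your fourth question.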
I'm looking for the right way to use Elasticsearch with MongoDB. I want to store various pieces of information in MongoDB. Additionally, I want to store a larger text in Elasticsearch to support complex full-text search.
My problem at the moment is:
I'm not sure what the best solution for this is. Most solutions I found for synchronizing MongoDB with Elasticsearch use a "river", which is deprecated!
What is the best way to combine these two technologies?
Is it even the best way to save it in MongoDB and ElasticSearch?
I found multiple articles explaining that Elasticsearch alone is not safe enough and that you have to use another DBMS.
Also, under robustness on the Elasticsearch website, I found this:
Unfortunately, Elasticsearch (and the components it's made of) does not currently handle OutOfMemory-errors very well.
[source]
So storing the data redundantly is probably the best way.
Thanks in advance!
Hi,
We are also working with both Elasticsearch and MongoDB. We started with a river, and after having a lot of issues with it we got rid of it before it became deprecated. The way we do it is: when saving data to Mongo, we publish a message to a queue, which notifies the search store to perform the insert/delete operation for the given data.
So basically we keep them in sync manually, and there will always be a small delay between Mongo and Elasticsearch. The good part is that if Elasticsearch were to fail, we have implemented an endpoint that re-imports the data from Mongo to ES. Also, the structure inside ES is different from the one in Mongo. Doing this with the river was a lot more complicated; imagine that we even had our own custom implementation.
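A bare-bones sketch of that flow, with a hypothetical MessageQueue interface standing in for whatever broker you use (RabbitMQ, Kafka, etc.):

    import com.mongodb.client.MongoCollection;
    import org.bson.Document;

    // Hypothetical abstraction - any broker client fits here.
    interface MessageQueue {
        void publish(String topic, String payload);
    }

    class ArticleService {
        private final MongoCollection<Document> articles;
        private final MessageQueue queue;

        ArticleService(MongoCollection<Document> articles, MessageQueue queue) {
            this.articles = articles;
            this.queue = queue;
        }

        // MongoDB stays the source of truth; a separate consumer of the
        // "search-index" topic maps the message to the (different) ES structure
        // and indexes it, so search is eventually consistent with Mongo.
        void save(Document article) {
            articles.insertOne(article);
            queue.publish("search-index", article.toJson());
        }
    }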
Hope my answer helps at least a bit.
I've started a new job where they are using mongodb in a java environment.
They have implemented a pattern using DTOs and factories with the Morphia driver; this may be due to an earlier migration onto MongoDB from a key-value store. The client is a JSON client.
It seems to me that the jackson-mongo-mapper would be a better approach, because it just maps POJOs from JSON to BSON and back; it seems like it could do away with the whole DTO/factory facade?
Anyone know any pros and cons with these different approaches?
Spring Data for MongoDB is very nice, since you can even use another data store or mix them, and the repository interface is very helpful.
Kundera is an option through JPA2
http://agilemobiledeveloper.wordpress.com/2013/08/22/working-with-mongodb-using-kundera/
There are a lot of Java-to-MongoDB options:
http://www.agilemobiledeveloper.com/2013/01/31/hibernate-ogm-mongodb-vs-kundera-vs-jongo-vs-mongodb-api-vs-morphia-vs-spring-data-mongo-mongodb-drivers-for-java/
Adding your own data layer and making sure you use DI and test it fully is very helpful.
NoSQLUnit is awesome -> https://github.com/lordofthejars/nosql-unit
DTOs are good for keeping a separation between implementation and design, so when the team needs or wants to switch from Mongo to some other NoSQL or SQL database, it can be done cleanly.
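A tiny sketch of what that separation buys you (all names hypothetical): the entity mirrors the stored document, the DTO mirrors the JSON contract, and the factory is the single place that maps between them:

    // Persistence-side entity: mirrors the stored MongoDB document
    // (the Morphia-mapped class in their setup).
    class UserEntity {
        String id;
        String name;
        String passwordHash; // internal field that must never reach the client
    }

    // Wire-side DTO: exactly what the JSON client sees.
    class UserDto {
        String id;
        String name;
    }

    // The factory is the single place the two shapes meet, so swapping
    // MongoDB for another store only touches the entity side.
    class UserDtoFactory {
        static UserDto fromEntity(UserEntity entity) {
            UserDto dto = new UserDto();
            dto.id = entity.id;
            dto.name = entity.name;
            return dto;
        }
    }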