I'm trying to read a file with Spring XD (source), modify the data (processor), and save it in MongoDB (sink). I've used the default MongoDB sink, which does a save and expects a Spring Data MongoDB entity.
But I'd like to do an update with upsert. How can I do that?
Thanks.
The Mongo outbound adapter uses MongoTemplate.save(), which is already an upsert according to the javadoc.
If you provide your own ObjectId for the same object, it will update instead of inserting a duplicate. You can, for example, set that id in your processor.
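For instance, here is a minimal sketch of a processor step that derives a stable _id from a business key, so repeated saves of the same logical record update the existing document. MyDocument, getKey() and setId() are hypothetical names standing in for your own payload type:

import java.security.MessageDigest;
import java.util.Arrays;
import org.bson.types.ObjectId;

public class UpsertIdProcessor {

    public MyDocument process(MyDocument doc) throws Exception {
        // Hash the business key and keep the first 12 bytes (the size an
        // ObjectId expects) so the same key always yields the same _id.
        byte[] hash = MessageDigest.getInstance("MD5")
                .digest(doc.getKey().getBytes("UTF-8"));
        doc.setId(new ObjectId(Arrays.copyOf(hash, 12)));
        return doc;
    }
}

With that in place, MongoTemplate.save() inserts the first time and updates on every later pass.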
I wrote a big aggregation query for MongoDB, and now I'm trying to translate it into the Spring Data MongoDB API. It's very complicated, and the Spring Data API isn't helping me much.
So, like with the @Query annotation, is it possible to just specify my raw aggregation query as text and have Spring Data (or just the MongoDB Java driver) map my fields?
I won't copy/paste my aggregation query here because it's not the point of my question.
I found a solution by using the MongoDB Java driver, which is available through Spring Data:
DBCollection collection = mongoTemplate.getCollection("myCollection");
and I used BasicDBObject as in this answer: MongoDB aggregation with Java driver.
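For completeness, a minimal sketch of what that looks like; the $match/$group stages below are placeholders, not my real pipeline:

import java.util.Arrays;
import com.mongodb.AggregationOutput;
import com.mongodb.BasicDBObject;
import com.mongodb.DBCollection;
import com.mongodb.DBObject;

DBCollection collection = mongoTemplate.getCollection("myCollection");

// Each pipeline stage is just a BasicDBObject, so a raw aggregation
// query translates almost one-to-one.
DBObject match = new BasicDBObject("$match", new BasicDBObject("status", "ACTIVE"));
DBObject group = new BasicDBObject("$group",
        new BasicDBObject("_id", "$category").append("total", new BasicDBObject("$sum", 1)));

AggregationOutput output = collection.aggregate(Arrays.asList(match, group));
for (DBObject result : output.results()) {
    System.out.println(result);
}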
I need to keep the data in Elasticsearch in sync with the data I have and maintain in MongoDB.
Currently I have a batch job that finds all the changed data and updates it in Elasticsearch using Spring Batch and Spring Data Elasticsearch.
This works, but I'm looking for a solution where every change is directly mirrored in Elasticsearch.
Give this a go: mongo connector.
Also have a read through this: 5 ways to sync data.
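For what it's worth, tools like mongo-connector work by tailing the replica-set oplog. A rough sketch of that idea with the plain Java driver, where indexIntoElasticsearch() is a placeholder for your own indexing code:

import com.mongodb.Bytes;
import com.mongodb.DBCollection;
import com.mongodb.DBCursor;
import com.mongodb.DBObject;
import com.mongodb.MongoClient;

public static void tailOplog() throws Exception {
    MongoClient mongo = new MongoClient("localhost", 27017);
    DBCollection oplog = mongo.getDB("local").getCollection("oplog.rs");

    // A tailable, await-data cursor blocks and delivers each new
    // oplog entry as it is written.
    DBCursor cursor = oplog.find()
            .addOption(Bytes.QUERYOPTION_TAILABLE)
            .addOption(Bytes.QUERYOPTION_AWAITDATA);

    while (cursor.hasNext()) {
        DBObject entry = cursor.next();
        String op = (String) entry.get("op");   // i = insert, u = update, d = delete
        if ("i".equals(op) || "u".equals(op) || "d".equals(op)) {
            indexIntoElasticsearch(entry);      // hypothetical helper
        }
    }
}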
I'm using the Allanbank async driver for operations on MongoDB. Is there any API through which I can convert a returned Document to a POJO, like we can with the Spring driver available for MongoDB?
You could use Morphia (which would mean using MongoDB's Java driver), or you could use Jackson to map back to your POJOs.
You can use Spring Data; it includes integrated object mapping between documents and POJOs.
I wanted something for the async driver as well. Since MongoDB introduced the Codec feature in version 3.0 of the driver (both the sync and async versions), it's no longer necessary to use libraries such as Morphia, so I wrote a simple library for mapping POJOs to MongoDB myself.
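The core of such a library is just an implementation of the driver's Codec interface. A minimal sketch for a hypothetical Person class with a (String name, int age) constructor:

import org.bson.BsonReader;
import org.bson.BsonWriter;
import org.bson.codecs.Codec;
import org.bson.codecs.DecoderContext;
import org.bson.codecs.EncoderContext;

public class PersonCodec implements Codec<Person> {

    @Override
    public void encode(BsonWriter writer, Person value, EncoderContext context) {
        // Write the POJO's fields as a BSON document.
        writer.writeStartDocument();
        writer.writeString("name", value.getName());
        writer.writeInt32("age", value.getAge());
        writer.writeEndDocument();
    }

    @Override
    public Person decode(BsonReader reader, DecoderContext context) {
        // Read the fields back in the order they were written.
        reader.readStartDocument();
        Person person = new Person(reader.readString("name"), reader.readInt32("age"));
        reader.readEndDocument();
        return person;
    }

    @Override
    public Class<Person> getEncoderClass() {
        return Person.class;
    }
}

You then register it with CodecRegistries.fromRegistries(MongoClient.getDefaultCodecRegistry(), CodecRegistries.fromCodecs(new PersonCodec())) and pass the resulting registry in via MongoClientOptions.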
In Spring Data MongoDB you can use
@Autowired
MongoConverter mongoConverter;
Then
TARGET_OBJECT = mongoConverter.read(YOUR_TARGET.class, YOUR_DOCUMENT);
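For example, converting a driver DBObject into a hypothetical Person class:

DBObject dbObject = new BasicDBObject("firstName", "Ada").append("lastName", "Lovelace");
Person person = mongoConverter.read(Person.class, dbObject);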
I want to read around 1 million documents from my MongoDB database, and I'm using Spring Data MongoDB. For performance reasons I don't want to read all 1 million documents at once. Is there any way in Spring Data MongoDB to do this? In the raw Java driver we have DBCursor.
One way I know of is pagination through repositories. Is there any other way in the latest versions of Spring Data MongoDB?
Yes, you can use pagination with Spring Data MongoDB. MongoRepository extends PagingAndSortingRepository, which means you can call findAll(Pageable) and provide paging information.
Alternatively, you can use mongoOperations/mongoTemplate to get a DBCollection reference, then call find() on the collection, which returns the DBCursor you want.
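A minimal sketch of both options, assuming a repository like interface MyDocumentRepository extends MongoRepository<MyDocument, String>, where MyDocument and process() are placeholders:

import org.springframework.data.domain.Page;
import org.springframework.data.domain.PageRequest;
import org.springframework.data.domain.Pageable;
import com.mongodb.DBCursor;

// Option 1: page through the repository, 1000 documents at a time.
Pageable pageable = new PageRequest(0, 1000);
Page<MyDocument> page;
do {
    page = repository.findAll(pageable);
    for (MyDocument doc : page) {
        process(doc);
    }
    pageable = pageable.next();
} while (page.hasNext());

// Option 2: walk a DBCursor and map each raw document yourself.
DBCursor cursor = mongoTemplate.getCollection("myCollection").find();
while (cursor.hasNext()) {
    process(mongoTemplate.getConverter().read(MyDocument.class, cursor.next()));
}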
I am working on a project where we have millions of entries stored in a MongoDB database, and I want to index all this data using Solr.
After extensive searching I've learned there is no proper "Data Import Handler" for MongoDB.
Can anyone tell me the proper approaches for indexing data in MongoDB using Solr?
I want to use all the features of Solr and want it to be scalable in real time. I saw one or two approaches in different posts but am not sure how they would work in real time.
Many thanks.
10gen introduced the MongoDB Connector. You can integrate MongoDB with Solr using this tool.
Blog post: Introducing Mongo Connector
GitHub page: mongo-connector
I have created a plugin that allows you to load data from MongoDB using the Solr data import handler.
Check it out at:
https://github.com/james75/SolrMongoImporter
I wrote a response to a similar question, except it was about how to import data from MySQL into Solr. The example code is in PHP, but it should give you the general idea: set up an iterator to step through your MongoDB assets, extract the data into Solr data types, and save it to your Solr index; a rough Java version is sketched below.
If you want it to be near real-time, you could add some custom code to the save mechanism (assuming this can be done with MongoDB) to save directly to the Solr index, then run a commit script every 15 minutes (via cron).
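Here is that iterator idea in Java using SolrJ; the database, collection, field names, and core URL are all assumptions:

import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.common.SolrInputDocument;
import com.mongodb.DBCollection;
import com.mongodb.DBCursor;
import com.mongodb.DBObject;
import com.mongodb.MongoClient;

public static void reindexAll() throws Exception {
    MongoClient mongo = new MongoClient("localhost", 27017);
    DBCollection assets = mongo.getDB("mydb").getCollection("assets");
    HttpSolrServer solr = new HttpSolrServer("http://localhost:8983/solr/mycore");

    // Step through every MongoDB document and map it onto a Solr document.
    DBCursor cursor = assets.find();
    while (cursor.hasNext()) {
        DBObject asset = cursor.next();
        SolrInputDocument doc = new SolrInputDocument();
        doc.addField("id", asset.get("_id").toString());
        doc.addField("title", asset.get("title"));
        solr.add(doc);
    }
    // Commit once at the end, or leave commits to the cron'd script.
    solr.commit();
}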