I use Kafka and Spring Cloud Stream with the functional programming model.
I want to use the reactive API.
So I have a Function bean that takes a Flux and returns a Flux.
The returned Flux is created in a separate class.
Do I need to subscribe to activate the new/returned Flux?
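Roughly, the shape I have in mind is something like this (the class and names are made up for illustration; the point is that the returned Flux comes from a separate class, not from the incoming one):

import java.util.function.Function;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import reactor.core.publisher.Flux;

@Configuration
public class StreamConfig {

    @Bean
    public Function<Flux<String>, Flux<String>> process() {
        // the returned Flux is built in a separate class and is not derived from the input
        return incoming -> new OutputFluxFactory().createOutputFlux();
    }

    // stands in for my separate class that builds the Flux
    static class OutputFluxFactory {
        Flux<String> createOutputFlux() {
            return Flux.just("a", "b", "c");
        }
    }
}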
That will not work, if I understand you correctly. The expectation for "streaming cases" is that your function adds operations to the incoming Flux and returns it. The framework subscribes to what your function has returned, and that is when the stream begins. Because of this, if you create a new, unrelated instance of Flux, it will not work.
What I mean is: this is by design.
Once we have truly reactive binders (which we don't have at the moment), things will change.
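For illustration only, here is a minimal sketch of the expected shape (the bean name and payload type are assumptions on my part, not from your question):

import java.util.function.Function;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import reactor.core.publisher.Flux;

@Configuration
public class ProcessorConfig {

    // The framework subscribes to the Flux returned here, so the function
    // should add operators to the incoming Flux rather than replace it.
    @Bean
    public Function<Flux<String>, Flux<String>> process() {
        return incoming -> incoming
                .filter(value -> !value.isEmpty())
                .map(String::toUpperCase);
    }
}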
I want to write to MongoDB with Spring Integration and Project Reactor. The command I need is a merge operation, so I started with the following snippet:
MergeOperation mergeOperation = Aggregation.merge()
        .intoCollection("someCollection")
        .on("_id")
        .whenMatched(MergeOperation.WhenDocumentsMatch.mergeDocuments())
        .whenNotMatched(MergeOperation.WhenDocumentsDontMatch.discardDocument())
        .build();
@Bean
public IntegrationFlow dataPipeline() {
    return IntegrationFlows.from(somePublisher)
            // .handle(-----) - MergeOperation usage syntax
            .get();
}
I would like to know the recommended way of using the merge command with reactive Spring Data Mongo, and whether it is supported and possible with reactive streams. Since I've seen that there's a dedicated class for reactive aggregations, I wonder if the absence of a reactive merge operation class means there is no support for the merge operation with reactive streams. If it is possible, I'd like some help with the syntax.
I am new to multi-tenancy with MongoDB using spring-data-mongodb. We need to use spring-data-mongodb for REST APIs and for scheduled tasks (we have more than one scheduler in our application) in the same codebase, in a thread-safe way. Will autowiring the MongoTemplate make the application thread-safe, given that the same MongoTemplate will be accessed from both the schedulers and the APIs? Please point me to the good practice in such a situation.
Regards
Kris
MongoTemplate itself is thread-safe: you can call it from multiple threads at the same time and it will work correctly, i.e. it will dispatch the different requests to MongoDB correctly.
But that doesn't guarantee consistency: if the scheduler is running and executes multiple updates in the same task, an API call can see some records that are already updated and others that aren't updated yet.
By the way: multi-tenancy is having data from multiple organisational entities in the same database. I'm not sure how that links to your question, did you mean multi-threading?
If you use different databases, then you can't use an autowired MongoTemplate.
For autowiring, there must be a single instance, but since the database connection string is a dependency of a MongoTemplate, there must be a single database as well.
You could go for an approach where you do not auto-wire the MongoTemplate directly, but use some sort of factory pattern to create the correct MongoTemplate for the current tenant. See Making spring-data-mongodb multi-tenant for some examples. (It's an old question, but its answers get updated every now and then).
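As a very rough sketch of that factory idea (the class, its names, and the tenant-to-connection-string mapping are all hypothetical; this is not something spring-data-mongodb provides out of the box):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.springframework.data.mongodb.core.MongoTemplate;
import org.springframework.data.mongodb.core.SimpleMongoClientDatabaseFactory;

// Illustrative sketch only: one MongoTemplate per tenant, created lazily and cached.
public class TenantMongoTemplateFactory {

    // connection strings per tenant, e.g. "mongodb://host:27017/tenantA";
    // in a real application this would come from configuration
    private final Map<String, String> connectionStringsByTenant;
    private final Map<String, MongoTemplate> templates = new ConcurrentHashMap<>();

    public TenantMongoTemplateFactory(Map<String, String> connectionStringsByTenant) {
        this.connectionStringsByTenant = connectionStringsByTenant;
    }

    // MongoTemplate is thread-safe, so the cached instances can be shared
    // between the schedulers and the API threads
    public MongoTemplate templateFor(String tenantId) {
        return templates.computeIfAbsent(tenantId, id ->
                new MongoTemplate(new SimpleMongoClientDatabaseFactory(
                        connectionStringsByTenant.get(id))));
    }
}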
Or you could go with an infrastructural solution, and deploy separate instances of your application, one for each tenant, e.g. on Kubernetes.
We have a requirement where we have to read a batch of entities of a given type from the database, submit information about each entity to a service that will call back later with data to update on the calling entity, and then save all the calling entities with the updated data. We thought of using spring-batch; however, we use Couchbase as our database, which is eventually consistent and has no support for transactions.
I was going through the spring-batch documentation and I came across the Spring Batch Meta-Data ERD diagram here :
https://docs.spring.io/spring-batch/4.1.x/reference/html/index-single.html#metaDataSchema
With the above information in mind, my question is:
Can Couchbase be used as the underlying job repository for spring-batch? What are the things I should keep in mind if it's possible to use it? Any links to example implementations would be welcome.
The JobRepository needs to be transactional in order for Spring Batch to work properly. Here is an excerpt from the Transaction Configuration for the JobRepository section of the reference documentation:
The behavior of the framework is not well defined if the repository methods are not transactional.
Since Couchbase has no support for transactions as you mentioned, it is not possible to use it as an underlying datasource for the JobRepository.
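If you still want to use Spring Batch with Couchbase for your business data, one common approach (my suggestion, not guaranteed to fit your constraints) is to back the job repository with a small relational database instead. With Spring Batch 4.x and @EnableBatchProcessing, providing a DataSource bean is enough for the framework to create a JDBC-based, transactional JobRepository. A minimal sketch, assuming an embedded H2 database is acceptable for the meta-data:

import javax.sql.DataSource;
import org.springframework.batch.core.configuration.annotation.EnableBatchProcessing;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jdbc.datasource.embedded.EmbeddedDatabaseBuilder;
import org.springframework.jdbc.datasource.embedded.EmbeddedDatabaseType;

@Configuration
@EnableBatchProcessing
public class BatchInfrastructureConfig {

    // Embedded H2 database used only for the Spring Batch meta-data tables;
    // the schema script ships with spring-batch-core. Business data stays in Couchbase.
    @Bean
    public DataSource batchDataSource() {
        return new EmbeddedDatabaseBuilder()
                .setType(EmbeddedDatabaseType.H2)
                .addScript("org/springframework/batch/core/schema-h2.sql")
                .build();
    }
}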
Hi, I created my own Cassandra connector using the DataStax drivers, but I'm facing some memory leak issues, so I started considering other solutions like Alpakka from Lightbend, which has a Cassandra connector.
But after checking the rather sparse documentation I'm changing my mind, since it only shows using the connector with raw CQL queries, and in my case I work with DTO objects.
Does anybody know of any documentation where I can see whether the Alpakka Cassandra connector can save DTOs with a consistency level?
This code is from my current connector; I would like to achieve something similar.
private void updateCreateEntry(DTO originalDto, Mapper cassandraMapper) {
    ConsistencyLevel consistencyLevel = ((DTOCassandra) originalDto).getConsistencyLevel();
    // For writes we set the consistency level to quorum (the default) unless one is provided
    cassandraMapper.save(originalDto,
            Option.consistencyLevel(consistencyLevel != null ? consistencyLevel : DEFAULT_CONSISTENCY_LEVEL));
}
As you've noticed, the Cassandra connector within Alpakka is presently quite thin. If you need richer support for your DTOs, you could choose a richer client like Phantom.
There are excellent examples of how to use Phantom - check this one out, for instance. Once you have created your model, Phantom gives you a def store[T](t: T): Future[ResultSet] function to insert data.
You can feed calls to this function into a mapAsync(n) combinator to make use of them in your Akka Stream.
I am just wondering whether it is possible to use a OneToManyResultSetExtractor or a ResultSetExtractor with Spring Batch's JdbcCursorItemReader?
The issue I have is that the expected RowMapper only deals with one object per row, and I have a join SQL query that returns many rows per object.
Out of the box, the JdbcCursorItemReader does not support the use of a ResultSetExtractor. The reason is that the wrapping ItemReader is stateful and needs to be able to keep track of how many rows have been consumed (it wouldn't know otherwise). The way that type of functionality is typically done in Spring Batch is by using an ItemProcessor to enrich the object: your ItemReader returns the one (of the one-to-many), and the ItemProcessor then enriches the object with the many. This is a common pattern in batch processing called the driving query pattern. You can read more about it in the Spring Batch documentation here: http://docs.spring.io/spring-batch/trunk/reference/html/patterns.html
That being said, you could also wrap the JdbcCursorItemReader with your own implementation that performs the logic of aggregation for you.
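A rough sketch of the driving query pattern described above (Customer, Order, and OrderRepository are hypothetical placeholders for your own domain types and lookup, just to illustrate the shape):

import java.util.List;
import org.springframework.batch.item.ItemProcessor;

// The reader returns the "one" side (a Customer per row of the driving query);
// this processor enriches it with the "many" side (its Orders) via a second lookup.
// Customer, Order and OrderRepository stand in for your own application types.
public class CustomerEnrichmentProcessor implements ItemProcessor<Customer, Customer> {

    private final OrderRepository orderRepository;

    public CustomerEnrichmentProcessor(OrderRepository orderRepository) {
        this.orderRepository = orderRepository;
    }

    @Override
    public Customer process(Customer customer) {
        List<Order> orders = orderRepository.findByCustomerId(customer.getId());
        customer.setOrders(orders);
        return customer;
    }
}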