Combine Mono with every Flux element emitted - reactive-programming

I have a Flux and Mono as below:
Mono<MyRequest> req = request.bodyToMono(MyRequest.class);
Mono<List<String>> mono1 = req.map(r -> r.getList());
Flux<Long> flux1 = req.map(r -> r.getVals()) // getVals() returns a List<Long>
    .flatMapMany(Flux::fromIterable);
Now, for each number in flux1, I want to call a method whose parameters are the id from flux1 and the List<String> from mono1. Something like:
flux1.flatMap(id -> process(id, mono1))
But passing and processing the same mono1 results in the error "Only one connection receive subscriber allowed". How can I achieve the above? Thanks!

Since both pieces of information come from the same source, you can run the whole thing as one pipeline and wrap both elements in a Tuple2 or, better, a domain object that carries more meaning:
Mono<MyRequest> req = // ...
Flux<Tuple2<Long, List<String>>> tuples = req.flatMapMany(r ->
    Flux.fromIterable(r.getVals())
        .map(id -> Tuples.of(id, r.getList()))
);
// once there, you can map that with your process method like
tuples.map(tup -> process(tup.getT1(), tup.getT2()));
Note that this looks a bit unusual; that mostly comes from the structure of the object you're receiving.
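For reference, here is a minimal, self-contained sketch of that pipeline; MyRequest and process(...) are hypothetical stand-ins for the asker's types:

import java.util.List;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;
import reactor.util.function.Tuples;

public class CombineDemo {
    // assumed shape of the request body
    record MyRequest(List<Long> vals, List<String> list) {
        List<Long> getVals() { return vals; }
        List<String> getList() { return list; }
    }

    // assumed processing method: combines one id with the shared list
    static String process(Long id, List<String> names) {
        return id + " -> " + names;
    }

    public static void main(String[] args) {
        Mono<MyRequest> req = Mono.just(new MyRequest(List.of(1L, 2L), List.of("a", "b")));
        req.flatMapMany(r -> Flux.fromIterable(r.getVals())
                              .map(id -> Tuples.of(id, r.getList())))
           .map(tup -> process(tup.getT1(), tup.getT2()))
           .subscribe(System.out::println); // prints "1 -> [a, b]" and "2 -> [a, b]"
    }
}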

Related

Call a PostgreSQL function and get result back with no loop

I have a simple Rust program that interacts with a PostgreSQL database.
The actual code is:
for row in &db_client.query("select magic_value from priv.magic_value();", &[]).unwrap()
{
    magic_value = row.get("magic_value");
    println!("Magic value is = {}", magic_value);
}
And... it works. But I don't like it: I know this function will return one and only one value.
In the examples I have found, for example here: https://docs.rs/postgres/latest/postgres/index.html
and here: https://tms-dev-blog.com/postgresql-database-with-rust-how-to/
you always have a recordset to loop over.
What is the clean way to call a function without looping?
query returns a Result<Vec<Row>, _>. You are already unwrapping the Vec, so you can just use it directly instead of looping. By turning the Vec into an owning iterator yourself, you can even easily obtain a Row instead of a &Row.
magic_value = db_client.query("select magic_value from priv.magic_value();", &[])
    .unwrap()      // -> Vec<Row>
    .into_iter()   // -> impl Iterator<Item = Row>
    .next()        // -> Option<Row>
    .unwrap()      // -> Row
    .get("magic_value");
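If your version of the postgres crate is recent enough, Client::query_one is an even more direct fit for the "exactly one row" case: it returns a single Row and errors if the query yields zero rows or more than one. A sketch, assuming the same pre-declared magic_value as above:

magic_value = db_client
    .query_one("select magic_value from priv.magic_value();", &[])
    .unwrap() // -> Row; Err if the row count is not exactly 1
    .get("magic_value");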

How to sequence a task to execute once all Single's in a collection complete

I am using Helidon DBClient transactions and have found myself in a situation where I end up with a list of Singles, List<Single<T>>, and want to perform the next task only after all of the singles have completed.
I am looking for an equivalent of CompletableFuture.allOf(), but for Single.
I could map each of the singles with toCompletableFuture() and then do a CompletableFuture.allOf() on top, but is there a better way? Could someone point me in the right direction with this?
--
Why did I end up with a List<Single>?
I have a collection of POJOs which I turn into named inserts and .execute(), all within an open transaction. Since I .stream() the original collection and perform the inserts inside a .map() operator, I end up with a List<Single<T>> when I terminate the stream with a collector. None of the inserts might have actually been executed at this point. I want to wait until all of the Singles have completed before I proceed to the next stage.
This is something I would naturally do with CompletableFuture.allOf(), but I do not want to change the API dialect just for this and would rather stick to Single/Multi.
Single.flatMap, Single.flatMapSingle, Multi.flatMap will effectively inline the future represented by the publisher passed as argument.
You can convert a List<Single<T>> to Single<List<T>> like this:
List<Single<Integer>> listOfSingle = List.of(Single.just(1), Single.just(2));
Single<List<Integer>> singleOfList = Multi.just(listOfSingle)
    .flatMap(Function.identity())
    .collectList();
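Sequencing the next task is then just another composition on that Single; nextStage below is a hypothetical method returning a Single for the following step:

// runs nextStage only once every insert's Single has completed
Single<String> result = singleOfList
    .flatMapSingle(list -> nextStage(list));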
Things can be tricky when you are dealing with Single<Void>, as Void cannot be instantiated and null is not a valid value (i.e. Single.just(null) throws a NullPointerException).
// convert List<Single<Void>> to Single<List<Void>>
Single<List<Void>> listSingle =
    Multi.just(List.of(Single.<Void>empty(), Single.<Void>empty()))
        .flatMap(Function.identity())
        .collectList();

// convert Single<List<Void>> to Single<Void>
// Void cannot be instantiated, it needs to be cast from null
// BUT null is not a valid value...
Single<Void> single = listSingle.toOptionalSingle()
    // convert Single<List<Void>> to Single<Optional<List<Void>>>
    // then use Optional.map to convert Optional<List<Void>> to Optional<Void>
    .map(o -> o.map(i -> (Void) null))
    // convert Single<Optional<Void>> to Single<Void>
    .flatMapOptional(Function.identity());

// make sure it works
single.forSingle(o -> System.out.println("ok"))
      .await();

How to combine the elements of an arbitrary number of dependent Fluxes?

In the non-reactive world, the following code snippet is nothing special:
interface Enhancer {
    Result enhance(Result result);
}

Result result = Result.empty();
result = fooEnhancer.enhance(result);
result = barEnhancer.enhance(result);
result = bazEnhancer.enhance(result);
There are three different Enhancer implementations taking a Result instance, enhancing it and returning the enhanced result. Let's assume the order of the enhancer calls matters.
Now what if these methods are replaced by reactive variants returning a Flux<Result>? Because the methods depend on the result(s) of the preceding method, we cannot use combineLatest here.
A possible solution could be:
Flux.just(Result.empty())
    .switchMap(empty -> first(empty)
        .switchMap(firstResult -> second(firstResult)
            .switchMap(secondResult -> third(secondResult))))
    .subscribe(result -> doSomethingWith(result));
Note that the switchMap calls are nested. As we are only interested in the final result, we let switchMap switch to the next flux as soon as new events are emitted in preceding fluxes.
Now let's try to do it with a dynamic number of fluxes. Non-reactive (without fluxes), this would again be nothing special:
List<Enhancer> enhancers = <ordered list of different Enhancer impls>;

Result result = Result.empty();
for (Enhancer enhancer : enhancers) {
    result = enhancer.enhance(result);
}
But how can I generalize the above reactive example with three fluxes to deal with an arbitrary number of fluxes?
I found a solution using recursion:
@FunctionalInterface
interface FluxProvider {
    Flux<Result> get(Result result);
}
// recursive method creating the final Flux
private Flux<Result> cascadingSwitchMap(Result input, List<FluxProvider> fluxProviders, int idx) {
    if (idx < fluxProviders.size()) {
        return fluxProviders.get(idx).get(input)
            .switchMap(result -> cascadingSwitchMap(result, fluxProviders, idx + 1));
    }
    return Flux.just(input);
}
// code using the recursive method
List<FluxProvider> fluxProviders = new ArrayList<>();
fluxProviders.add(fooEnhancer::enhance);
fluxProviders.add(barEnhancer::enhance);
fluxProviders.add(bazEnhancer::enhance);
cascadingSwitchMap(Result.empty(), fluxProviders, 0)
    .subscribe(result -> doSomethingWith(result));
But maybe there is a more elegant solution using an operator/feature of project-reactor. Does anybody know of such a feature? In fact, the requirement doesn't seem to be such an unusual one, does it?
switchMap feels inappropriate here. If you have a List<Enhancer> by the time the Flux pipeline is declared, why not apply logic close to what you had in the imperative style:
List<Enhancer> enhancers = <ordered list of different Enhancer impls>;

Mono<Result> resultMono = Mono.just(Result.empty());
for (Enhancer enhancer : enhancers) {
    resultMono = resultMono.map(enhancer::enhance); // previousValue -> enhancer.enhance(previousValue)
}
return resultMono;
That can even be performed later, at subscription time, for even more dynamic resolution of the enhancers, by wrapping the whole code above in a Mono.defer(() -> {...}) block.
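The same fold can also be written with a Reactor operator instead of a loop. A sketch using Flux.reduce (same synchronous Enhancer as above), which emits a single Mono<Result> once all enhancers have been applied in order:

Mono<Result> resultMono = Flux.fromIterable(enhancers)
    .reduce(Result.empty(), (result, enhancer) -> enhancer.enhance(result));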

How to get current position of iterator in ByteString?

I have an instance of ByteString. To read data from it, I should use its iterator() method.
I read some data and then decide that I need to create a view (a separate iterator over some chunk of the data).
I can't use slice() on the original iterator, because that would make it unusable; the docs say:
After calling this method, one should discard the iterator it was called on, and use only the iterator that was returned. Using the old iterator is undefined, subject to change, and may result in changes to the new iterator as well.
So, it seems that I need to call slice() on the ByteString itself. But slice() takes from and until parameters, and I don't know from. I need something like this:
ByteString originalByteString = ...; // <-- This is my input data
ByteIterator originalIterator = originalByteString.iterator();
// ... read some data from originalIterator ...
int length = 100; // <-- Size of the view
int from = originalIterator.currentPosition(); // <-- I need this
int until = from + length;
ByteString viewOfOriginalByteString = originalByteString.slice(from, until);
ByteIterator iteratorForView = viewOfOriginalByteString.iterator(); // <-- This is my goal
Update:
I tried to do this with duplicate():
ByteIterator iteratorForView = originalIterator.duplicate()._2.take(length);
ByteIterator's from field is private, and none of its methods simply returns it. All I can suggest is to use originalIterator.duplicate to get a safe copy, or else to "cheat" by using reflection to read the from field, assuming reflection is available in your deployment environment.
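Spelled out, the duplicate approach could look like this (a sketch; duplicate returns a pair of iterators positioned at the current point, and only the iterators obtained from it should be used afterwards):

// Scala sketch, assuming akka.util.{ByteString, ByteIterator}
val (viewSide, continuation) = originalIterator.duplicate
val iteratorForView: ByteIterator = viewSide.take(length) // bounded view starting at the current position
// keep reading the main data from `continuation` from here on, not from originalIterator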

What would be the opposite to hasFields?

I'm using logical deletes by adding a deletedAt field. If I want to get only the deleted documents, it would be something like r.table('clients').hasFields('deletedAt'). My method has a withDeleted parameter which determines whether deleted documents are included or not.
Finally, people in the #rethinkdb IRC channel suggested that I use the filter method, and that did the trick:
query = adapter.table(table).filter(filters)
if withDeleted
  query = query.filter (doc) ->
    return doc.hasFields 'deletedAt'
else
  query = query.filter (doc) ->
    return doc.hasFields('deletedAt').not()
query.run connection, (err, results) ->
  ...
My question is why do I have to use filter and not something like:
query = adapter.table(table).filter(filters)
query = if withDeleted then query.hasFields 'deletedAt' else query.hasFields('deletedAt').not()
...
or something like that.
Thanks in advance.
The hasFields function can be called on both objects and sequences, but not cannot be applied to a sequence.
This query:
query.hasFields('deletedAt')
Behaves the same as this one (on sequences of objects):
query.filter((doc) -> return doc.hasFields('deletedAt'))
However, this query:
query.hasFields('deletedAt').not()
Behaves like this:
query.filter((doc) -> return doc.hasFields('deletedAt')).not()
But that doesn't make sense. You want the not to be inside the filter, not after it. Like this:
query.filter((doc) -> return doc.hasFields('deletedAt').not())
One nice thing about RethinkDB is that, because queries are built up in the host language, it's very easy to define new fluent syntax just by defining functions in your language. For example, if you wanted a lacksFields function, you could define it in Python (sorry, I don't really know CoffeeScript) like so:
def lacks_fields(stream, *args):
    res = stream
    for arg in args:
        res = res.filter(lambda x: ~x.has_fields(arg))
    return res
Then you can use a nice fluent syntax like:
lacks_fields(stream, "foo", "bar", "buzz")
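Since the question is in CoffeeScript, a rough, untested translation of the same helper (the predicate is evaluated immediately by filter while the query is being built, so the loop variable capture is safe):

lacksFields = (stream, fields...) ->
  res = stream
  for field in fields
    res = res.filter (doc) -> doc.hasFields(field).not()
  res

query = lacksFields(adapter.table(table), 'deletedAt')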