Blocking RxAndroidBle write operation - rx-java2

How can I perform a blocking write operation in Android with RxAndroidBle? The next command should only be performed if the write operation is successful.
protected void doWriteBytes(UUID characteristic, byte[] bytes) {
    final Disposable disposable = connectionObservable
            .flatMapSingle(rxBleConnection -> rxBleConnection.writeCharacteristic(characteristic, bytes))
            .observeOn(AndroidSchedulers.mainThread())
            .retry(BT_RETRY_TIMES_ON_ERROR)
            .subscribe(
                    value -> {
                        Timber.d("Write characteristic %s: %s",
                                BluetoothGattUuid.prettyPrint(characteristic),
                                byteInHex(value));
                    },
                    throwable -> onError(throwable)
            );
    compositeDisposable.add(disposable);
}

protected void test() {
    // blocking write bytes
    doWriteBytes(UUID.fromString("f433bd80-75b8-11e2-97d9-0002a5d5c51b"), new byte[] {0x35, 0x12});
    // the following command should only be performed if doWriteBytes executed successfully
    foo();
    // blocking write bytes
    doWriteBytes(UUID.fromString("f433bd80-75b8-11e2-97d9-0002a5d5c51b"), new byte[] {0x5, 0x6, 0x1});
    bar();
}
I know about subscribe and onComplete, but is it also possible to do this without these methods?
The background is that I want to override the test method in several different subclasses so that I can perform various doWriteBytes commands (e.g. ACK commands) to send bytes to a Bluetooth device, but I need to be sure that the next command is only performed if the ACK command was sent successfully.
Maybe it is more of an RxJava2 problem, but I am not very familiar with it.
Edit:
Thanks for your answer @Dariusz Seweryn. Sorry, my question was probably not clear enough. I will try to make it more concrete.
I want to write the source code like a normal function in test() to abstract away the RxJava2 implementation. The only difference is that doWriteBytes and the other Bluetooth operations (notification, read) should be done via RxAndroidBle.
What I have to write to a Bluetooth device depends on the notification bytes or some other algorithm in the test() method. Additionally, I want to override the test() method to implement a different Bluetooth communication flow for a completely different Bluetooth device. It is always important that the Bluetooth operations are processed sequentially.
Now I have three ideas in mind:
1) My first idea is to implement all RxAndroidBle operations as blocking calls, so that I can use simple constructs such as loops.
2) My second idea is to dynamically add (concat?) observables to one another at runtime in the test() method so that they are processed sequentially, but I always need the return values of the previous observables.
3) My third idea is to combine the write/notify/write operation into a method which I can call in the test() method. The operation should write bytes to a characteristic A, then wait for the notification on characteristic B, do some processing with the received bytes and write again to characteristic C.
But what is written, and how the notification is processed, would have to be added dynamically at runtime in the test() method.
Maybe there is an elegant solution for my problem in RxJava2, or is it not possible at all?
Edit2:
I tried to implement all three ideas but unfortunately I didn't succeed.
1)
connectionObservable
        .flatMapSingle(rxBleConnection -> rxBleConnection.writeCharacteristic(characteristic, bytes))
        .subscribeOn(Schedulers.io())
        .observeOn(AndroidSchedulers.mainThread())
        .retry(BT_RETRY_TIMES_ON_ERROR)
        .blockingSubscribe(
                value -> {
                    Timber.d("Write characteristic %s: %s",
                            BluetoothGattUuid.prettyPrint(characteristic),
                            byteInHex(value));
                    processBtQueue();
                },
                throwable -> onError(throwable)
        );
It is always blocking, even on success. Do I have to release it somewhere? Additionally, the method now returns void instead of a Disposable, so I can't dispose it anymore.
2) I am struggling with this idea. Onto which observable should I concat if I don't know the starting observable? The connectionObservable doesn't work because it holds the RxBleConnection. The second problem is that the values after a Bluetooth operation are then plain Java Object instances!? Do I have to cast them every time? Do you have an example of how I can concat, for example, a Bluetooth write operation to a Bluetooth notification result?
3) The problem with this idea is that I don't know how to dynamically add the processing part of the notification at runtime, outside of the RxJava subscribe block.
I have a working solution for idea no. 3:
protected Observable<byte[]> doWriteNotify(UUID characteristic, byte[] bytes, Observable<byte[]> notificationObservable) {
    Observable observable = connectionObservable
            .flatMapSingle(rxBleConnection -> rxBleConnection.writeCharacteristic(characteristic, bytes))
            .subscribeOn(Schedulers.io())
            .observeOn(AndroidSchedulers.mainThread())
            .retry(BT_RETRY_TIMES_ON_ERROR)
            .flatMap(writeBytes -> notificationObservable)
            .subscribeOn(Schedulers.io())
            .observeOn(AndroidSchedulers.mainThread())
            .retry(BT_RETRY_TIMES_ON_ERROR);
    compositeDisposable.add(observable.subscribe());
    return observable;
}
Btw, should I create separate threads on Stack Overflow for these questions?
If it helps, you can find my experimental source code here.

I know about subscribe and onComplete, but is it also possible to do this without these methods?
Given:
Completable foo() { ... }
Completable bar() { ... }
One could do:
Disposable testDisposable = connectionObservable
        .flatMapCompletable(connection ->
                connection.writeCharacteristic(UUID.fromString("f433bd80-75b8-11e2-97d9-0002a5d5c51b"), new byte[] {0x35, 0x12}).ignoreElement()
                        .andThen(foo())
                        .andThen(connection.writeCharacteristic(UUID.fromString("f433bd80-75b8-11e2-97d9-0002a5d5c51b"), new byte[] {0x5, 0x6, 0x1}).ignoreElement())
                        .andThen(bar())
        )
        .subscribe(
                () -> { /* completed */ },
                throwable -> { /* error happened */ }
        );
By closing the above with a single .subscribe(), you keep control over the connection for the whole flow. In the above example, if the connection ended prematurely during the first write, none of the following operations (foo(), the second write, bar()) would happen at all.
Edit:
All your ideas could potentially work — you can try them out.
Maybe there is an elegant solution for my problem in RxJava2, or is it not possible at all?
There are .blocking*() functions on the Observable/Single/Completable classes if you really need them. Be careful though: depending on your implementation you may start experiencing some hard-to-debug issues, because they introduce more state into your application.
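For example, assuming the same fields as in the question (connectionObservable being an Observable<RxBleConnection>), a blocking variant of the write could look roughly like this (a minimal sketch, not a recommendation):

// Sketch only: blockingFirst() blocks the calling thread until the first
// write result is emitted, so this must not be called on the Android main thread.
protected byte[] doWriteBytesBlocking(UUID characteristic, byte[] bytes) {
    return connectionObservable
            .flatMapSingle(rxBleConnection ->
                    rxBleConnection.writeCharacteristic(characteristic, bytes))
            .blockingFirst();
}

With something like that, test() reads like plain sequential code, but you tie up whichever thread calls it and lose the disposal and retry wiring of the reactive chain.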

Related

Is there a library similar to ReentrantLock in Java for Swift?

I am currently working on a function that can be used by multiple threads. The issue is that the function needs to complete first and store the result in the cache. In the meantime, other threads could be calling this function, and I would need them to wait until it is completed. We were able to accomplish this in Java using ReentrantLock; is there a similar library in Swift? I saw that NSRecursiveLock approaches what we are trying to do; however, we want to keep it Swift-only. I have also been seeing multiple articles, such as this one, that talk about using GCD; however, I believe this is for something similar but different: https://medium.com/@prasanna.aithal/multi-threading-in-ios-using-swift-82f3601f171c
Thank you in advance.
Recursion with locking is always a bit of a pain point. A clean solution would be to refactor your function that requires the lock into an external API that acquires the lock and forwards to an internal API that doesn't. Internally don't call the external API.
A simple example might be something like this (this is almost Swift code - parameters and actual work implementations need to be filled in)
extension DispatchSemaphore
{
    func withLock<R>(_ block: () throws -> R) rethrows -> R
    {
        wait()
        defer { signal() }
        return try block()
    }
}

let myLock = DispatchSemaphore(value: 1)

func recursiveLockingFunction(parameters)
{
    func nonLockingFunc(parameters) {
        if /* some terminating case goes here */ {
            // Do the terminating case
            return
        }
        // Do whatever you need to do to handle the partial problem
        // and reduce the parameters
        nonLockingFunc(reducedParameters)
    }
    myLock.withLock { nonLockingFunc(parameters) }
}
Whether this will work for you depends on your design, but should work if the only problem is that the function you want to lock is recursive. And it only uses GCD (DispatchSemaphore) to achieve it.

How to correctly use suspend functions with coroutines on webflux?

I'm new to reactive programming, and because I've already used Kotlin with spring-web in the past, I decided to move to spring-webflux on this new project I'm working on. Then I discovered the Mono and Flux APIs and decided to use spring-data-r2dbc to keep a fully reactive stack. (I'm aware this new project may be far from meeting all reactive expectations; I'm doing this to learn a new tool, not because this is the perfect scenario for it.)
Then I noticed I could replace all the Reactive Streams APIs from WebFlux with Kotlin's native coroutines. I opted for coroutines simply to learn and to have less 'external frameworky' code.
My application is quite simple (it's a URL shortener):
1. parse a URL out of the HTTP request's body into 3 parts
2. exchange each part for its Postgres id in its respective table
3. concatenate these 3 ids into a new URL, sending a 200 HTTP response with this new URL
My reactive controller is:
@Configuration
class UrlRouter {
    @Bean
    fun urlRoutes(
        urlHandler: UrlHandler,
        redirectHandler: RedirectHandler
    ) = coRouter {
        POST("/e", urlHandler::encode)
        GET("/{*url}", redirectHandler::redirect)
    }
}
As you can imagine, UrlHandler is responsible for the steps numbered above and RedirectHandler does the opposite: receiving an encoded URL, it redirects to the original URL received in step 1.
Question 1: looking at coRouter, I assumed that for each HTTP call Spring will start a new coroutine to resolve that call (as opposed to a new thread in traditional spring-web), and that each of these can create and depend on several other sub-coroutines. Is this right? Does this hierarchy exist?
Here's my UrlHandler fragment:
@Component
class UrlHandler(
    private val cache: CacheService,
    @Value("\${redirect-url-prefix}") private val prefix: String
) {
    companion object {
        val mapper = jacksonObjectMapper()
    }

    suspend fun encode(serverRequest: ServerRequest): ServerResponse =
        try {
            val bodyMap: Map<String, String> = mapper.readValue(serverRequest.awaitBody<String>())
            // parseUrl being a string extension function just splitting
            // that could throw IndexOutOfBoundsException
            val (host, path, query) = bodyMap["url"]!!.parseUrl()
            val hostId: Long = cache.findIdFromHost(host)
            val pathId: Long? = cache.findIdFromPath(path)
            val queryId: Long? = cache.findIdFromQuery(query)
            val encodedUrl = "$prefix/${someOmmitedStringConcatenation(hostId, pathId, queryId)}"
            ok().bodyValueAndAwait(mapOf("url" to encodedUrl))
        } catch (e: IndexOutOfBoundsException) {
            ServerResponse.badRequest().buildAndAwait()
        }
All three findIdFrom*** calls try to retrieve an existing id and, if it doesn't exist, save a new entity and return the new id from a Postgres sequence. This is done via CoroutineCrudRepository interfaces. Since my methods should always suspend, all 3 findIdFrom*** methods also suspend:
@Repository
interface HostUrlRepo : CoroutineCrudRepository<HostUrl, Long> {
    suspend fun findByHost(host: String): HostUrl?
}
Question 2: looking here, I've found that I can either invoke reactive query methods or have native suspending functions. Since I've read that methods should always suspend, I've decided to keep using suspend. Is this bad/wrong in any way?
These 3 findIdFrom*** calls are independent and could run in parallel; only at someOmmitedStringConcatenation should I have to wait for any unfinished calls to actually build my encoded URL.
Question 3: since every single method has the suspend modifier, will it run exactly as in the traditional imperative, sequential paradigm (wasting any benefit of parallel programming)?
Question 4: is this a valid scenario for coroutine usage? If so, how should I change my code to best achieve the parallelism I want above?
Possible solutions I've found for question 4:
Question 4.1: (source 1) inside each findIdFrom***, wrap the code with withContext(Dispatchers.IO) { /* actual code here */ } and then in the encode function:
coroutineScope {
    val hostIdDeferred = async { findIdFrom***() }
    val pathIdDeferred = async { findIdFrom***() }
    val queryIdDeferred = async { findIdFrom***() }
}
And when I want to use them, I just call hostIdDeferred.await() to get the value. If I'm using Dispatchers.IO to run the code inside new child coroutines, why is coroutineScope necessary? Is this the correct way, specifying a dispatcher for the new child coroutine and then using coroutineScope to get a deferred val?
Question 4.2: (source 2) suggests val resultOne = Async(Dispatchers.IO) { function1() }, but IntelliJ wasn't able to recognize/import any Async expression. How can I use this one, and how does it differ from the previous one?
I'm open to improving and clarifying any point of this question.
I'll try to answer some of your questions:
q2: No, nothing wrong with it. Suspend methods can propagate all the way back to a controller. If your controllers are reactive, i.e. if you use RSocket with org.springframework.messaging.handler.annotation.MessageMapping, then even the controller methods can be suspend.
q3: right, but your source code is still much simpler.
q4.2: I wouldn't consider that website a trustworthy source. There is official documentation with examples: async

RxJava Relay vs Subjects

I'm trying to understand the purpose of this library by Jake Wharton:
https://github.com/JakeWharton/RxRelay
Basically: A Subject except without the ability to call onComplete or
onError. Subjects are stateful in a damaging way: when they receive an
onComplete or onError they no longer become usable for moving data.
I get the idea, and it's a valid use case, but the above seems easy to achieve just using the existing subjects.
1. Don't forward error/completion events to the subject:
`observable.subscribe({ subject.onNext(it) }, { log error / throw exception },{ ... })`
2. Don't expose the subject, make your method signature return an observable instead.
fun(): Observable<> { return subject }
I'm obviously missing something here and I'm very curious about what it is!
class MyPublishRelay<I> : Consumer<I> {
    private val subject: Subject<I> = PublishSubject.create<I>()

    override fun accept(intent: I) = subject.onNext(intent)

    fun subscribe(): Disposable = subject.subscribe()
    fun subscribe(c: Consumer<in I>): Disposable = subject.subscribe(c)
    //.. OTHER SUBSCRIBE OVERLOADS
}
subscribe has overloads and, usually, people get used to the subscribe(Consumer) overload. Then they use subjects and suddenly onComplete is also invoked. RxRelay saves users from themselves when they don't think about the difference between subscribe(Consumer) and subscribe(Observer).
Don't forward error/completion events to the subject:
Indeed, but based on our experience with beginners, they often don't think about this or even know about the available methods to consider.
Don't expose the subject, make your method signature return an observable instead.
If you need a way to send items into the subject, this doesn't work. The purpose is to use the subject to perform item multicasting, sometimes from another Observable. If you are in full control of the emissions through the Subject, you should have the decency of not calling onComplete and not letting anything else do it either.
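To make the failure mode concrete, here is a plain RxJava 2 sketch (standalone, not tied to RxRelay) of a Subject dying after a terminal event:

import io.reactivex.Observable;
import io.reactivex.subjects.PublishSubject;

public class SubjectTermination {
    public static void main(String[] args) {
        PublishSubject<Integer> subject = PublishSubject.create();
        subject.subscribe(v -> System.out.println("got " + v));

        // Subscribing the subject as a full Observer forwards onComplete to it...
        Observable.just(1, 2, 3).subscribe(subject);

        // ...so this later emission is silently dropped: the subject has terminated.
        subject.onNext(42);
    }
}

A Relay only exposes value delivery (accept() via its Consumer interface), so there is no terminal state it can be pushed into.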
Subjects have far more overhead because they have to track and handle
terminal event states. Relays are stateless aside from subscription
management.
- Jake Wharton
(This is from the issue the OP opened on GitHub; I felt it was the more correct answer and wanted to "relay" it here for others to see: https://github.com/JakeWharton/RxRelay/issues/30)
In addition to @akarnokd's answer:
In some cases you can't control the flow of data inside the Observable; an example of this is observing data changes from a database table using the Room database.
If you use Subjects, the value returned by subject.getValue() is nullable, so the compiler will always complain about null safety and you have to put ? or !! everywhere in your code, even though you know that it will not be null.
public T getValue() {
    Object o = value.get();
    if (NotificationLite.isComplete(o) || NotificationLite.isError(o)) {
        return null;
    }
    return NotificationLite.getValue(o);
}
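A quick standalone illustration in plain RxJava 2 (hypothetical snippet, not from the Room example):

import io.reactivex.subjects.BehaviorSubject;

public class GetValueAfterTerminal {
    public static void main(String[] args) {
        BehaviorSubject<Integer> subject = BehaviorSubject.createDefault(1);
        System.out.println(subject.getValue()); // 1

        subject.onComplete();
        // After a terminal event getValue() returns null (see the implementation above),
        // which is why Kotlin forces ?. or !! on every access.
        System.out.println(subject.getValue()); // null
    }
}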

Concatenating two observable sequences that both have subscribeOn. How do I ensure my observable runs on a specific thread?

When it comes to enforcing that a certain piece of Observable.create code runs on a specific thread (i.e. a background thread), I worry that using the subscribeOn operator might not work, because there are times when I might chain this observable sequence to another observable sequence that runs on the main thread (using observeOn).
Example
The situation is that I have an Observable sequence running on the main thread (i.e. an alert box asking the user whether to perform the network call or not).
Would it be better to ensure that this Observable.create code runs on the background thread by having something like:
Observable<String>.empty()
    .observeOn(ConcurrentMainScheduler(queue: background.queue))
    .concat(myObservableNetworkCall)
Why not just use subscribeOn?
The problem is that if I had used subscribeOn (second) and the previous observable (the alert controller) was set to run on the background thread using subscribeOn (first), then the second subscribeOn operator would not work, since the first call is closer to the source observable:
If you specify multiple subscribeOn() operators, the one closes to the source (the left-most), will be the one used.
Thomas Nield on RxJava's subscribeOn and observeOn operators (February 2016)
That may be the behavior for RxJava, but I am not sure for Swift. Reactivex.io simply says that we should not call subscribeOn multiple times.
I tend to wrap operations into Observable<Void>s, and they need to be run on different threads... That is why I am asking how to ensure an Observable's code runs on the thread I specified. subscribeOn wouldn't work, because I may concatenate the observable.
I want the thread it should run in to be encapsulated in my Observable definition, not higher up in the chain.
Is the best practice to do the following:
Start with an Observable.empty using the data type I wish to use.
Use observeOn to force the thread that I want it to run in.
Concatenate it with the actual Observable that I want to use.
Edit
I have read the subscribeOn and observeOn documentation on reactivex.io.
I'm familiar with how to switch between threads using subscribeOn and observeOn.
What I'm specifically concerned about is the complication of using subscribeOn when concatenating or combining observable sequences.
The problem is, the observables need to run specifically on one thread, AND they don't know where and with what they'll be concatenated. Since I know exactly which thread they should run on, I'd prefer to encapsulate the scheduler definition within the observable's definition instead of specifying it when I'm chaining a sequence.
In the function declaration, it is better not to specify on which thread the function is to be called.
For instance:
func myObservableNetworkCall() -> Observable<String> {
    return Observable<String>.create { observer in
        // your network code here
        return Disposables.create {
            // Your dispose
        }
    }
}

func otherObservableNetworkCall(s: String) -> Observable<String> {
    return Observable<String>.create { observer in
        // your network code here
        return Disposables.create {
            // Your dispose
        }
    }
}
And then switch between Schedulers:
myObservableNetworkCall()
    .observeOn(ConcurrentMainScheduler(queue: background.queue)) // .background thread, network request, mapping, etc...
    .flatMap { string in
        otherObservableNetworkCall(s: string)
    }
    .observeOn(MainScheduler.instance) // switch to MainScheduler, UI updates
    .subscribe(onNext: { string in
        // do something
    })

How is the skipping implemented in Spring Batch?

I was wondering how I could determine, in my ItemWriter, whether Spring Batch is currently in chunk-processing mode or in the fallback single-item-processing mode. In the first place, I couldn't find any information on how this fallback mechanism is implemented anyway.
Even though I haven't found the solution to my actual problem yet, I'd like to share my knowledge about the fallback mechanism with you.
Feel free to add answers with additional information if I missed anything ;-)
The implementation of the skip mechanism can be found in the FaultTolerantChunkProcessor and in the RetryTemplate.
Let's assume you configured skippable exceptions but no retryable exceptions, and there is a failing item in your current chunk causing an exception.
Now, first of all, the whole chunk shall be written. In the processor's write() method you can see that a RetryTemplate is called. It also gets references to a RetryCallback and a RecoveryCallback.
Switch over to the RetryTemplate. Find the following method:
protected <T> T doExecute(RetryCallback<T> retryCallback, RecoveryCallback<T> recoveryCallback, RetryState state)
There you can see that the callback is retried as long as the retry policy is not exhausted (i.e. it is attempted exactly once in our configuration). Such a retry would be caused by a retryable exception. Non-retryable exceptions immediately abort the retry mechanism here.
After the retries are exhausted or aborted, the RecoveryCallback will be called:
e = handleRetryExhausted(recoveryCallback, context, state);
That's where the single-item-processing mode will kick-in now!
The RecoveryCallback (which was defined in the processor's write() method!) will put a lock on the input chunk (inputs.setBusy(true)) and run its scan() method. There you can see that a single item is taken from the chunk:
List<O> items = Collections.singletonList(outputIterator.next());
If this single item can be processed by the ItemWriter correctly, then the chunk will be finished and the ChunkOrientedTasklet will run another chunk (for the next single items). This causes a regular call to the RetryCallback, but since the chunk has been locked by the RecoveryCallback, the scan() method will be called immediately:
if (!inputs.isBusy()) {
    // ...
}
else {
    scan(contribution, inputs, outputs, chunkMonitor);
}
So another single item will be processed, and this is repeated until the original chunk has been processed item by item:
if (outputs.isEmpty()) {
inputs.setBusy(false);
That's it. I hope you found this helpful. And I hope even more that you found this easily via a search engine and didn't waste too much time figuring it out by yourself. ;-)
A possible approach to my original problem (the ItemWriter would like to know whether it's in chunk or single-item mode) could be one of the following alternatives:
Only when the passed chunk is of size one do any further checks have to be done at all (see the sketch after these alternatives).
When the passed chunk is a java.util.Collections.SingletonList, we would be quite sure, since the FaultTolerantChunkProcessor does the following:
List items = Collections.singletonList(outputIterator.next());
Unfortunately, this class is private, so we can't check it with instanceof.
Conversely, if the chunk is an ArrayList we could also be quite sure, since Spring Batch's Chunk class uses one internally:
private List items = new ArrayList();
One remaining blur would be buffered items read from the execution context, but I'd expect those to be ArrayLists as well.
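Put together, the heuristic might look roughly like this (purely illustrative; it leans on Spring Batch implementation details that are not part of any public contract):

// Heuristic only: the scan() fallback passes Collections.singletonList(...),
// which is not an ArrayList, while regular chunks are backed by an ArrayList.
private boolean looksLikeSingleItemMode(final List<?> items) {
    return items.size() == 1 && !(items instanceof java.util.ArrayList);
}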
Anyway, I still find this method too vague. I'd rather like to have this information provided by the framework.
An alternative would be to hook my ItemWriter into the framework's execution. Maybe ItemWriteListener.onWriteError() is appropriate.
Update: The onWriteError() method will not be called if you're in single-item mode and throw an exception in the ItemWriter. I think that's a bug; I filed it: https://jira.springsource.org/browse/BATCH-2027
So this alternative drops out.
Here's a snippet that does the same without any framework means, directly in the writer:
private int writeErrorCount = 0;

@Override
public void write(final List<? extends Long> items) throws Exception {
    try {
        writeWhatever(items);
    } catch (final Exception e) {
        if (this.writeErrorCount == 0) {
            // first failure of a regular chunk: remember how many single items will follow
            this.writeErrorCount = items.size();
        } else {
            // failure while already in single-item mode
            this.writeErrorCount--;
        }
        throw e;
    }
    // successful write: count down the remaining single items, if any
    if (this.writeErrorCount > 0) {
        this.writeErrorCount--;
    }
}

public boolean isWriterInSingleItemMode() {
    return writeErrorCount != 0;
}
Attention: One should rather check for the skippable exceptions here and not for Exception in general.
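For instance, with MySkippableException standing in for whichever exception type is configured as skippable in the step (a hypothetical name), the catch could be narrowed like this:

@Override
public void write(final List<? extends Long> items) throws Exception {
    try {
        writeWhatever(items);
    } catch (final MySkippableException e) {
        // Non-skippable exceptions fail the step instead of triggering the
        // single-item scan, so only skippable ones should drive the counter.
        if (this.writeErrorCount == 0) {
            this.writeErrorCount = items.size();
        } else {
            this.writeErrorCount--;
        }
        throw e;
    }
    if (this.writeErrorCount > 0) {
        this.writeErrorCount--;
    }
}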