Reconnection using retryWhen in RxJava2 on Android - rx-java2

I have the following RxJava disposable where I listen to real-time updates from the server
someNetworkBaseFlowable
    .observeOn(schedulerProvider.io())
    .subscribeOn(AndroidSchedulers.mainThread())
    .subscribe({
        // handle success
    }, {
        // handle failure
    })
When the network fails, this subscription fails and I lose connectivity to the server even when the network is back.
I've been trying to use retryWhen to resubscribe to the server as follows:
someNetworkBaseFlowable
    .observeOn(schedulerProvider.io())
    .subscribeOn(AndroidSchedulers.mainThread())
    .retryWhen { error ->
        error.flatMap {
            Flowable.timer(5, TimeUnit.SECONDS)
        }
    }
    .subscribe({
        // handle success
    }, {
        // handle failure
    })
I thought this would try to reconnect to the server and resubscribe every 5 seconds; however, this is not the case!
I've been struggling for a while, and any help with this issue will be appreciated.

You may slightly modify your code as follows:
someNetworkBaseFlowable
    .observeOn(schedulerProvider.io())
    .subscribeOn(AndroidSchedulers.mainThread())
    .retryWhen { errorFlowable: Flowable<Throwable> ->
        errorFlowable
            .ofType(YourExceptionType::class.java) // for Kotlin
            // .ofType(YourExceptionType.class) // for Java
            .switchMap { // to avoid duplicate retries
                Flowable.timer(5L, TimeUnit.SECONDS)
            }
    }
    .subscribe(
        {
            // handle success
        },
        {
            // handle failure
        })
Does it retry now when YourExceptionType is caught?
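If you also want an upper bound on the retries instead of retrying forever, here is a minimal sketch along the same lines (the maxRetries value and the fixed 5-second delay are illustrative assumptions, not from the question; TimeUnit and AtomicInteger come from java.util.concurrent / java.util.concurrent.atomic):
// Sketch: retry at most maxRetries times with a fixed delay, then surface the error.
val maxRetries = 5
val retryCount = AtomicInteger(0)
someNetworkBaseFlowable
    .retryWhen { errors: Flowable<Throwable> ->
        errors.flatMap { e ->
            if (retryCount.incrementAndGet() <= maxRetries) {
                Flowable.timer(5, TimeUnit.SECONDS) // wait, then resubscribe upstream
            } else {
                Flowable.error<Long>(e) // retries exhausted: propagate the error downstream
            }
        }
    }
    .subscribe({
        // handle success
    }, {
        // handle failure once retries are exhausted
    })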

Related

solclient send request resulting in success and failure at the same time

I'm starting a new project with Solace as the load balancer. As I follow the guide in the official docs to build a service that can send requests to Solace, I encounter a weird issue where my request succeeds and fails simultaneously. Here's my code:
function initSolace(pass: string) {
    var factoryProps = new solace.SolclientFactoryProperties();
    factoryProps.profile = solace.SolclientFactoryProfiles.version10;
    solace.SolclientFactory.init(factoryProps);
    session = solace.SolclientFactory.createSession({
        "url": "ws://localhost:8008",
        "userName": "tech_core",
        "vpnName": "testing",
        "password": pass
    }, new solace.MessageRxCBInfo(messageRxCb));
    session.on(solace.SessionEventCode.UP_NOTICE, function (sessionEvent: any) {
        requestData(10000).subscribe();
    });
    session.connect();
}

async function messageRxCb(session: any, message: any) {
    message = Parser.decodeBinaryAttachmentToPb(message, pnlPb.RespPnl);
    console.log('result from RxCb', message); // I got the correct response here
}

function requestData(timeout = 10000) {
    return new Observable(subscriber => {
        const parsedPayload = Parser.createPb({displayCurrency: 'USD'}, pnlPb.ReqPnl);
        const msg = Parser.encodePbToBinaryAttachment(parsedPayload, pnlPb.ReqPnl);
        const request = solace.SolclientFactory.createMessage();
        request.setDestination(solace.SolclientFactory.createTopicDestination('my/testing/topic'));
        request.setDeliveryMode(solace.MessageDeliveryModeType.DIRECT);
        request.setDeliverToOne(true);
        request.setBinaryAttachment(msg);
        session.sendRequest(request, timeout,
            (ses: any, message: any) => {
                console.log('SUCCESS', message);
                subscriber.next(message);
            },
            (ses: any, event: any) => {
                console.error('FAIL', event); // I got a timeout error here
                subscriber.error(event);
            },
            'correlation test'
        )
    });
}
When I run the code, I get the timeout error from the requestData function AND the correct data in the messageRxCb function as well.
How is this happening? Did I miss any config here?
The Request-Reply pattern is a closely coupled communication pattern.
Every request that is posted by the requestor requires a response from the replier within the timeout specified.
I see from your code sample that you have configured a timeout of 10000ms. What it means is that every request that is posted should receive an incoming reply within 10000ms. If this response is not received, then it would result in the timeout error that you see in the console.
The reason why you see both the success and the error is that while the request has been successfully posted, a reply was not received within the specified timeout.
Do you already have a replier set up for this interaction? If not, I would suggest setting up a simple boilerplate replier listening on this topic and testing the flow again.
Additionally, it would be good coding practice to handle the timeout error in a functionally appropriate manner.
Regards
Hari

Make promise calls with a fixed timeout

I am currently trying to make it check for a database connection, but it seems like the result, i.e. the connection, is pending.
I am looking to implement a system where I can pass a timeout input, and the promise would be rejected after that fixed timeout.
Something like:
try {
    start(timeout: 6000) // 6 secs timeout on promise. default, ie no params: 3sec
} catch(e) {
    // failed due to, in this case, timeout, since there is no connection running, and the database is pending their promise.
}
How can I accomplish such a timeout?
The current running example gives the following:
connected to mongo Promise { <pending> }
starting server code:
const start = async () => {
    console.log("connecting to database");
    try {
        console.log("Attempting to establish connection....");
        var result = await mongoose.connect('mongodb://localhost/db', {useNewUrlParser: true});
        console.log("connected to mongo", result);
        return result;
    } catch (error) {
        console.log("failed to connect to mongo", error);
    }
}

try {
    start();
} catch(e) {
    console.log("failed to start server", e);
    throw new Error("failed to start server", e);
}

How can I listen for notifications and send write operations with RxBleAndroid using the same connection?

Problem:
I'm listening for a notification using the following code:
bleDevice.establishConnection(false)
    .flatMap { rxBleConnection -> rxBleConnection.setupNotification(charUUID) }
    .doOnNext { }
    .flatMap { notificationObservable -> notificationObservable } // <-- Notification has been set up, now observe value changes.
    .subscribe(
        { bytes ->
            run {
                // Log.i("Notification!", bytes!!.contentToString())
                // println(bytes.toHex())
                sp?.play(pool?.get(mRandom.nextInt(pool!!.size))!!, 1F, 1F, 0, 0, 1F)
            }
        },
        { throwable -> Log.i(TAG, throwable.toString()) }
    )
This notification works. I am able to see the value of the notification change when my device's sensor is activated.
Now, I want to click a button and send a write operation using the following code:
bleDevice.establishConnection(false)
    .flatMapSingle({ rxBleConnection -> rxBleConnection.writeCharacteristic(charUUID, bytesToWrite) })
    .subscribe(
        { characteristicValue ->
            run {
                Log.d(TAG, "Write Command Succeeded")
            }
        },
        { throwable ->
            run {
                Log.d(TAG, "Write Command Failed")
                Log.d(TAG, throwable.toString())
            }
        }
    )
When I click the button I get the error message in the log output below. It says I am already connected. How can I send a write operation without attempting to connect again?
Expected behavior
I am expecting to be able to listen to notifications and also send write operations in the same Activity.
Log Output
D/ColorsFragment: Write Command Failed
com.polidea.rxandroidble2.exceptions.BleAlreadyConnectedException: Already connected to device with MAC address 34:81:F4:C6:09:0F
You should share the same connection between all of your reads/writes. The simplest way is to use RxReplayingShare:
val connection = rxBleDevice.establishConnection(false).compose(ReplayingShare.instance())
connection.subscribe({}, {}) // initiate connection
connection.flatMap({ /* do your first read/write */ }).take(1).subscribe()
connection.flatMap({ /* do your second read/write */ }).take(1).subscribe()
This approach is not suitable for all cases, so I would recommend taking a look at the documentation page focused on this issue.
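Applied to the snippets in the question, a rough sketch could look like the following (ReplayingShare comes from the RxReplayingShare library mentioned above; charUUID, bytesToWrite, TAG, and the connectionObservable/notificationDisposable/onWriteButtonClicked names are placeholders assumed for illustration):
// One shared connection, replayed to every later subscriber.
val connectionObservable = bleDevice.establishConnection(false)
    .compose(ReplayingShare.instance())

// Notification stream: its subscription also keeps the connection alive.
val notificationDisposable = connectionObservable
    .flatMap { rxBleConnection -> rxBleConnection.setupNotification(charUUID) }
    .flatMap { notificationObservable -> notificationObservable }
    .subscribe(
        { bytes -> /* react to the notification bytes */ },
        { throwable -> Log.i(TAG, throwable.toString()) }
    )

// Button click: write over the same connection instead of calling establishConnection again.
fun onWriteButtonClicked() {
    connectionObservable
        .firstOrError() // take the currently established connection once
        .flatMap { rxBleConnection -> rxBleConnection.writeCharacteristic(charUUID, bytesToWrite) }
        .subscribe(
            { Log.d(TAG, "Write Command Succeeded") },
            { throwable -> Log.d(TAG, "Write Command Failed: $throwable") }
        )
}
Remember to dispose of notificationDisposable (and the write subscription) when the Activity or Fragment goes away; otherwise the connection is kept open.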

Redux-saga and socket subscription causes Uncaught TypeError: Converting circular structure to JSON

I am having trouble subscribing to a socketcluster (http://socketcluster.io/) channel when using a redux-saga generator in my chat app. The socketcluster backend is setup in a way where any messages are saved in the database then published into the receiving user's personal channel, which is named after the user's id. For example, User A has an id '123abc' and would subscribe to the channel named '123abc' for their realtime messages.
The code below does receive new messages that are published to a channel, but it throws a "TypeError: Converting circular structure to JSON" on load and breaks all of my other redux-saga generators in the app. I've done some digging in Chrome DevTools, and my theory is that it has something to do with the queue created in the createChannel function. Also, I've tried returning a deferred promise in the subscribeToChannel function, but that also caused a circular conversion error; I can post that code on request.
I referred to this answer at first: https://stackoverflow.com/a/35288877/5068616 and it helped me get the code below in place, but I cannot find any similar issues on the internet. Also, something to note: I am utilizing redux-socket-cluster (https://github.com/mattkrick/redux-socket-cluster) to sync up the socket and state, but I don't think it is the root of the problem.
sagas.js
export default function* root() {
    yield [
        fork(startSubscription),
    ]
}

function* startSubscription(getState) {
    while (true) {
        const {
            userId
        } = yield take(actions.SUBSCRIBE_TO_MY_CHANNEL);
        yield call(monitorChangeEvents, subscribeToChannel(userId))
    }
}

function* monitorChangeEvents(channel) {
    while (true) {
        const info = yield call(channel.take) // Blocks until the promise resolves
        console.log(info)
    }
}

function subscribeToChannel(channelName) {
    const channel = createChannel();
    const socket = socketCluster.connect(socketConfig);
    const c = socket.subscribe(channelName);
    c.watch(event => {
        channel.put(event)
    })
    return channel;
}

function createChannel() {
    const messageQueue = []
    const resolveQueue = []

    function put(msg) {
        // anyone waiting for a message ?
        if (resolveQueue.length) {
            // deliver the message to the oldest one waiting (First In First Out)
            const nextResolve = resolveQueue.shift()
            nextResolve(msg)
        } else {
            // no one is waiting ? queue the event
            messageQueue.push(msg)
        }
    }

    // returns a Promise resolved with the next message
    function take() {
        // do we have queued messages ?
        if (messageQueue.length) {
            // deliver the oldest queued message
            return Promise.resolve(messageQueue.shift())
        } else {
            // no queued messages ? queue the taker until a message arrives
            return new Promise((resolve) => resolveQueue.push(resolve))
        }
    }

    return {
        take,
        put
    }
}
Thanks for the help!

RxJava user-retry observable with .cache operator?

I've an observable that I create with the following code:
Observable.create(new Observable.OnSubscribe<ReturnType>() {
    @Override
    public void call(Subscriber<? super ReturnType> subscriber) {
        try {
            if (!subscriber.isUnsubscribed()) {
                subscriber.onNext(performRequest());
            }
            subscriber.onCompleted();
        } catch (Exception e) {
            subscriber.onError(e);
        }
    }
});
performRequest() will perform a long running task as you might expect.
Now, since I might be launching the same Observable twice or more in a very short amount of time, I decided to write this transformer:
protected Observable.Transformer<ReturnType, ReturnType> attachToRunningTaskIfAvailable() {
    return origObservable -> {
        synchronized (mapOfRunningTasks) {
            // If not in map
            if ( ! mapOfRunningTasks.containsKey(getCacheKey()) ) {
                Timber.d("Cache miss for %s", getCacheKey());
                mapOfRunningTasks.put(
                        getCacheKey(),
                        origObservable
                                .doOnTerminate(() -> {
                                    Timber.d("Removed from tasks %s", getCacheKey());
                                    synchronized (mapOfRunningTasks) {
                                        mapOfRunningTasks.remove(getCacheKey());
                                    }
                                })
                                .cache()
                );
            } else {
                Timber.d("Cache Hit for %s", getCacheKey());
            }
            return mapOfRunningTasks.get(getCacheKey());
        }
    };
}
This basically puts the original .cache()'d observable into a HashMap<String, Observable>.
It disallows multiple requests with the same getCacheKey() (example: login) from calling performRequest() in parallel. Instead, if a second login request arrives while another is in progress, the second request observable gets "discarded" and the already-running one is used instead. => All the calls to onNext are cached and sent to both subscribers, actually hitting my backend only once.
Now, suppose this code:
// Observable loginTask
public void doLogin(Observable<UserInfo> loginTask) {
    loginTask.subscribe(
        (userInfo) -> {},
        (throwable) -> {
            if (userWantsToRetry()) {
                doLogin(loginTask);
            }
        }
    );
}
Here, loginTask was composed with the previous transformer. Well, when an error occurs (might be connectivity) and userWantsToRetry() returns true, I'll basically re-call the method with the same observable. Unfortunately, that observable has been cached, and I'll receive the same error without hitting performRequest() again, since the cached sequence just gets replayed.
Is there a way I could have both the "same requests grouping" behavior that the transformer provides me AND the retry button?
Your question has a lot going on and it's hard to put it into direct terms. I can make a couple of recommendations, though. Firstly, your Observable.create can be simplified by using Observable.defer(Func0<Observable<T>>). This will run the function every time a new subscriber subscribes, and it will catch and channel any exceptions to the subscriber's onError.
Observable.defer(() -> {
    return Observable.just(performRequest());
});
Next, you can use observable.repeatWhen(Func1<Observable<Void>, Observable<?>>) to decide when you want to retry. Repeat operators will re-subscribe to the observable after an onComplete event. This particular overload will send an event to a subject when an onComplete event is received. The function you provide will receive this subject. Your function should call something like takeWhile(predicate) and onComplete when you do not want to retry again.
Observable.just(1, 2, 3).flatMap((Integer num) -> {
    final AtomicInteger tryCount = new AtomicInteger(0);
    return Observable.just(num)
            .repeatWhen((Observable<? extends Void> notifications) ->
                    notifications.takeWhile((x) -> num == 2 && tryCount.incrementAndGet() != 3));
})
.subscribe(System.out::println);
Output:
1
2
2
2
3
The above example shows that repeats are allowed only when the event is 2, and up to a max of 2 repeats (which is why 2 is printed three times). If you switch to a repeatWhen in your own code, the flatMap would contain your decision as to whether to use the cached observable or the real-work observable. Hope this helps!
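To get both behaviours together, one option is to defer the composition itself, so a retry re-runs the transformer and, since doOnTerminate has already removed the failed entry from mapOfRunningTasks, gets a fresh un-cached request while concurrent callers still share the in-flight one. A rough sketch building on the defer suggestion above (not from the original answer; createLoginRequestObservable is a hypothetical factory wrapping the Observable.create block from the question, shown here in Kotlin against the RxJava 1 API):
// Sketch: defer so every (re)subscription re-runs the transformer.
val loginTask: Observable<UserInfo> = Observable.defer {
    createLoginRequestObservable() // hypothetical factory returning the Observable.create(...) above
        .compose(attachToRunningTaskIfAvailable())
}

// A user-driven retry now triggers a new performRequest(), because the errored
// cached observable was evicted from mapOfRunningTasks by doOnTerminate.
doLogin(loginTask)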