I am writing an application that can receive two types of events coming into the system from separate sources. I want a context to handle each of them. See the code below:
event MyEvent1 {
    // stuff for context1
}

event MyEvent2 {
    // stuff for context2
}

event Cascade {
    // PRIORITY stuff for context1 & 2
}

monitor Application {
    context parallel1 := context("E1processor");
    context parallel2 := context("E2processor");

    action onload() {
        spawn handleE1() to parallel1;
        spawn handleE2() to parallel2;

        on all MyEvent1() as e {
            send e to parallel1;
        }
        on all MyEvent2() as e {
            send e to parallel2;
        }
    } // onload

    action handleE1() {
        on all MyEvent1() as e1 {
            // do work, create and route Cascade event
            route Cascade();

            // I want to do this!
            route Cascade() to parallel2; // <----- ERROR
        }
        on all Cascade() {
            // URGENT stuff
        }
    }

    action handleE2() {
        on all MyEvent2() as e1 {
        }
        on all Cascade() {
            // URGENT stuff
        }
    }
} // Application
My problem is that I want the Cascade() event pushed to the front of the processing queue, because it is high priority. But when I try the following:

// do work, create and route Cascade event
route Cascade(); // <--- works

// I want to do this!
route Cascade() to parallel2; // <----- ERROR

it gives me an error. How can I route the event as a priority from one context to the other?
Unfortunately there is no way to do a priority send to another context. The answer might be more architectural in nature: can the Cascade handling simply be done in the main context, for example?
Currently, I'm using Vue inside an Electron application. Inside a Vue master component there are possibly multiple children. Each child listens for a signal that might be broadcast by Electron's main process, like so:
export default {
  ...
  created() {
    ipcRenderer.on('set-service-status', (e, data) => {
      // something with the data
    })
  }
  ...
}
However, when there are more than 11 child components, Node throws the warning MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 set-service-status listeners added. Use emitter.setMaxListeners() to increase limit. This makes sense, since multiple event listeners are being set up, one for every component.
How could this be solved? Should I just listen for the set-service-status signal inside the master component and then use Vue's eventing system to broadcast the message further down to the children? Or is there a better way to deal with this?
As I understand it, the problem with your current setup is that you start listening each time a component is created, and this leads to a lot of listeners for one IPC call.
Instead of listening in created(), put this logic inside your Vuex store and call it only once. Or you can still use created() in your entry file (the main root component) and pass the data down to your child components as props; that also works.
For example:
function setupIpc(dispatch) {
  ipcRenderer.on('set-service-status', (e, data) => {
    // something with the data
  })

  ipcRenderer.on('fullscreenChanged', (e, args) => {
    dispatch('fullscreenHandler', args)
  })

  ipcRenderer.send('ipcReady')
}
And call it only once, when you start the application:
updateState({ commit, dispatch }) {
  setupIpc(dispatch)
  setInterval(() => { dispatch('stateSaveImmediate') }, 5000)
  dispatch('init')

  ipcRenderer.once('configGet', (e, data) => {
    if (data !== null && data !== undefined) {
      commit(ActionTypes.UPDATE_STATE, data)
    } else {
      commit(ActionTypes.UPDATE_STATE_ERROR_NO_CONFIG_FILE)
    }
    dispatch('doSomething')
  })

  ipcRenderer.send('configGet')
},
I'm trying to implement user-driven refreshing in my Rx based networking code, and my current design is as follows:
Create a sink that has Void values passed into it every time the user initiates a refresh action
flatMap the latest .Next event on that sink's Observable into a new network call
Transform the network response into a new view model and pass that back into the view controller
The part I'm getting hung up on is how to create a sink for those events to go down. My current code is as follows:
func contactListModel() -> Observable<ContactListViewModel<Contact>> {
    // Create a sink for refresh events
    var refreshSink: AnyObserver<Void> = AnyObserver { event in }
    let refreshObservable = Observable<Void>.create { observer in
        refreshSink = observer
        return NopDisposable.instance
    }

    // Define action handlers
    let searchClosure = { (query: String?) in
        self.contactsSearchTerm.value = query
    }
    let refreshClosure = refreshSink.onNext

    // TODO: [RP] Make contact list view controller handle a nil view model to remove the need for this code
    let initialViewModel = ContactListViewModel<Contact>(contacts: [], searchClosure: searchClosure, refreshClosure: refreshClosure)

    // Perform an initial refresh
    defer {
        refreshSink.onNext()
    }

    // Set up subscription to push a new view model each refresh
    return refreshObservable
        .flatMapLatest {
            return self.networking.request(.ListContacts)
        }
        .mapToObject(ListContactsResponse)
        .map { response in
            return ContactListViewModel(contacts: response.contacts, searchClosure: searchClosure, refreshClosure: refreshClosure)
        }
        .startWith(initialViewModel)
}
Now it's obvious why my code to create an event sink doesn't work here. The block passed into refreshObservable's create method is only called once the observable is subscribed to, so refreshSink won't be reassigned until then. Furthermore, if this observable is subscribed to more than once, the refreshSink variable will be reassigned.
So my question is this: how do I create an Observable that I can manually push events down? Or alternatively, is there a better design I could be using here?
I know ReactiveCocoa has the pipe static method on Signal that will do something like what I'm looking for, but I've found no equivalent in the Rx API.
I have an observable that I create with the following code.
Observable.create(new Observable.OnSubscribe<ReturnType>() {
    @Override
    public void call(Subscriber<? super ReturnType> subscriber) {
        try {
            if (!subscriber.isUnsubscribed()) {
                subscriber.onNext(performRequest());
            }
            subscriber.onCompleted();
        } catch (Exception e) {
            subscriber.onError(e);
        }
    }
});
performRequest() performs a long-running task, as you might expect.
Now, since I might be launching the same Observable twice or more in a very short amount of time, I decided to write the following transformer:
protected Observable.Transformer<ReturnType, ReturnType> attachToRunningTaskIfAvailable() {
    return origObservable -> {
        synchronized (mapOfRunningTasks) {
            // If not in maps
            if (!mapOfRunningTasks.containsKey(getCacheKey())) {
                Timber.d("Cache miss for %s", getCacheKey());
                mapOfRunningTasks.put(
                        getCacheKey(),
                        origObservable
                                .doOnTerminate(() -> {
                                    Timber.d("Removed from tasks %s", getCacheKey());
                                    synchronized (mapOfRunningTasks) {
                                        mapOfRunningTasks.remove(getCacheKey());
                                    }
                                })
                                .cache()
                );
            } else {
                Timber.d("Cache Hit for %s", getCacheKey());
            }
            return mapOfRunningTasks.get(getCacheKey());
        }
    };
}
This basically puts the original observable, wrapped with .cache(), into a HashMap<String, Observable>.
It prevents multiple requests with the same getCacheKey() (for example, login) from calling performRequest() in parallel. Instead, if a second login request arrives while another is in progress, its observable gets "discarded" and the already-running one is used instead: all the onNext calls are cached and replayed to both subscribers, so my backend is actually hit only once.
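To make that concrete, here is a small usage sketch (buildRequestObservable() is just a placeholder for however the Observable.create above gets wrapped): because the transformer keys everything on getCacheKey(), two compositions made before the first request terminates end up sharing the same cached inner observable.

// Placeholder usage sketch: both subscriptions share one in-flight request.
Observable<ReturnType> first = buildRequestObservable()
        .compose(attachToRunningTaskIfAvailable());   // cache miss: stored in the map
Observable<ReturnType> second = buildRequestObservable()
        .compose(attachToRunningTaskIfAvailable());   // cache hit: same cached observable

first.subscribe(result -> Timber.d("subscriber 1: %s", result));   // triggers performRequest()
second.subscribe(result -> Timber.d("subscriber 2: %s", result));  // replayed by cache(), no second call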
Now, suppose this code:
// Observable loginTask
public void doLogin(Observable<UserInfo> loginTask) {
    loginTask.subscribe(
            (userInfo) -> {},
            (throwable) -> {
                if (userWantsToRetry()) {
                    doLogin(loginTask);
                }
            }
    );
}
Here loginTask was composed with the previous transformer. When an error occurs (it might be connectivity) and userWantsToRetry() returns true, I basically re-call the method with the same observable. Unfortunately that observable has been cached, so I receive the same error without performRequest() being hit again, since the sequence just gets replayed.
Is there a way I could have both the "same requests grouping" behavior that the transformer provides me AND the retry button?
Your question has a lot going on and it's hard to put it into direct terms. I can make a couple of recommendations though. Firstly, your Observable.create can be simplified by using Observable.defer(Func0<Observable<T>>). This will run the function every time a new subscriber subscribes, and it will catch and channel any exceptions to the subscriber's onError.
Observable.defer(() -> {
    return Observable.just(performRequest());
});
Next, you can use observable.repeatWhen(Func1<Observable<Void>, Observable<?>>) to decide when you want to re-subscribe. Repeat operators re-subscribe to the observable after an onCompleted event. This particular overload sends an event to a subject whenever an onCompleted event is received, and the function you provide receives that subject. Your function should call something like takeWhile(predicate) and complete when you do not want to repeat again.
Observable.just(1, 2, 3)
        .flatMap((Integer num) -> {
            final AtomicInteger tryCount = new AtomicInteger(0);
            return Observable.just(num)
                    .repeatWhen((Observable<? extends Void> notifications) ->
                            notifications.takeWhile((x) -> num == 2 && tryCount.incrementAndGet() != 3));
        })
        .subscribe(System.out::println);
Output:
1
2
2
2
3
The above example shows that repeats are allowed only while the value is 2, and only up to a maximum of two extra repeats (which is why 2 is printed three times). If you switch to retryWhen, which re-subscribes after an onError instead of an onCompleted, the flatMap would contain your decision as to whether to use the cached observable or the real-work observable. Hope this helps!
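As a rough illustration of that last point (loginRequest() is a placeholder for whatever builds the raw login observable, and the transformer is assumed to be typed for UserInfo here): defer re-evaluates the cache lookup on every subscription, while retryWhen re-subscribes after an error for as long as the user keeps asking to retry.

// Sketch only: defer + retryWhen layered over the question's transformer.
Observable<UserInfo> loginTask = Observable
        .defer(() -> loginRequest().compose(attachToRunningTaskIfAvailable()))
        .retryWhen(errors -> errors.flatMap(error ->
                userWantsToRetry()
                        ? Observable.<Object>just("retry")   // emit: re-subscribe to the deferred source
                        : Observable.<Object>error(error))); // propagate: give up and surface the error

loginTask.subscribe(
        userInfo -> { /* logged in */ },
        throwable -> { /* user gave up; handle the error */ });

Since the error also terminates the cached observable, the doOnTerminate in the transformer should already have removed it from the map by the time the re-subscription happens, so the retry builds a fresh request instead of replaying the failed one.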
Here is the sample code flow:
class FSMActor {
  when(Idle) {
    case Event(Start, Uninitialized) =>
    case Event(InitMap(inMap), t @ EvaluteRuleMap(v, c)) =>
      logger.info(s"State = $stateName, Event = Event(_, InitMap(inMap))")
      goto(EVALRule) using t.copy(ruleMap = inMap)
  }

  when(EVALRule) {
    case Event(RowMap(m), t @ EvaluteRuleMap(v, c)) =>
      logger.debug("input row map m " + m)
      if ( <somecondition> ) { // If I comment out this if-else block, I can see the RowMaps being received.
        logger.debug(s"Moving to State Trigger x=$x")
        goto(TriggerRule) using t.copy(ruleMap = x.get)
      } else {
        logger.debug(s"staying in EVALRule, x = $x")
        stay
      }
  }

  when(TriggerRule) {
    case Event(_, _) => ....
  }
}
When the control is in the "EVALRule" state, it keeps receiving streamed maps (RowMap), and based on some computation it moves to the trigger rule state.
Unfortunately, for some weird reason, some of the incoming RowMaps are not being received at "case Event(RowMap(m), t @ EvaluteRuleMap(v, c)) =>", and if I comment out the if-else block in the EVALRule state (noted in the code above) then I can see all incoming RowMaps being received.
Could anyone let me know why this is so? I've been trying to find the cause but couldn't get to it.
Appreciate your help, thanks.
When if ( <somecondition> ) is true, you move to the TriggerRule state. In that state you are matching on messages of type EVENT (all caps) instead of Event, so the message is not handled by the FSM.
In general, when messages seem to go missing in an FSM, the best way to debug is to write a whenUnhandled block with a log/print statement to see which messages are not handled by the states you have defined.
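For illustration, here is a minimal sketch of that catch-all using Akka's Java FSM API (the Scala whenUnhandled { ... } block in your code is the direct equivalent; the state and data types below are placeholders, not your actual ones):

import akka.actor.AbstractFSM;
import akka.event.Logging;
import akka.event.LoggingAdapter;

// Placeholder FSM, only meant to show the whenUnhandled debugging hook.
public class DebuggableFsm extends AbstractFSM<DebuggableFsm.State, Object> {
    public enum State { Idle, EVALRule, TriggerRule }

    private final LoggingAdapter log = Logging.getLogger(context().system(), this);

    {
        startWith(State.Idle, new Object());

        // Only String messages are handled in Idle; everything else falls through.
        when(State.Idle, matchEvent(String.class, Object.class, (msg, data) -> stay()));

        // Catch-all: any message no when(...) block matched ends up here,
        // so "missing" messages become visible in the log.
        whenUnhandled(matchAnyEvent((event, data) -> {
            log.warning("Unhandled message {} in state {}", event, stateName());
            return stay();
        }));

        initialize();
    }
}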
There was an issue with the message handling in the code itself; we debugged it and fixed it, and now it's working seamlessly.
Is it possible to continue editing the same object after a GWT RequestFactory server request?
Consider this best-practices code from another question:
void start() {
    // Either get p
    context1.get(..).to( new Receiver<P> { onSuccess(P resp) { p = resp; } ... } ).fire();
    // OR create p
    p = context2.create( P.class );

    // Then save p
    req = context2.persist(p).to( new Receiver<P> { /* note: do not use context1 */
        onViolation(...) { /* JSR 303 handler */ };
        onFailure( error ) { /* handle */ error.getMessage() };
        onSuccess(X x) { /* whatever persist() returns handler */ }; } );

    // drive editor with p
    driver.edit( p, req );
}

....

void onSave() {
    // editor
    ctxt = driver.flush() /* note ctxt == context2 */
    if ( driver.hasErrors() ) { /* JSR 303 handler */ };
    // RF
    ctxt.fire();
}
The question is: how do I handle an unsuccessful server response in the last line (by adding a receiver to ctxt.fire())?
void onSave() {
    // editor
    ctxt = driver.flush() /* note ctxt == context2 */
    if ( driver.hasErrors() ) { /* JSR 303 handler */ };
    // RF
    ctxt.fire( new Receiver<S> {
        onSuccess() { ... how to continue editing the "p" object? ... }
        onFailure() { ... how to continue editing the "p" object? ... }
    });
}
For example, on save, the server does some additional validation (e.g. that a value is unique) and does not accept the save.
So the server request finishes in the onSuccess(response) method, but the object was not saved (the response value may contain a list of errors).
Is it possible to allow the user to continue editing the unsaved, but client-side-updated, object and make another request to the server?
The "deadlock" that I see:
It is not possible to reuse request context (ctxt), because "A request is already in progress" exception will be thrown.
It is not possible to create a new context, because all modifications to the object are in the old context (so they will be lost).
Mutable proxies are always bound to a request context. Proxies you receive from the server, however, are frozen and not mutable. The .edit method will make a mutable clone of a frozen proxy for a given request context.
If a request couldn't be fired (connection issues, server error), the context will be re-usable and you can continue editing the proxy. Same applies if the constraints were violated.
If a request was successfully fired (no matter if the server method threw an exception or not), the request context cannot be used no more and the same applies to the proxy.
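As a rough sketch of how those outcomes look on the client (this reuses p, driver and the contexts from the question; requestFactory.personRequest() is a hypothetical context factory, and onConstraintViolation is the newer replacement for onViolation):

void onSave() {
    RequestContext ctxt = driver.flush();
    if (driver.hasErrors()) { /* JSR 303 handler */ return; }

    ctxt.fire(new Receiver<Void>() {
        @Override
        public void onSuccess(Void response) {
            // The request went through: ctxt and its editable proxies are spent.
            // To keep editing, obtain a fresh context and call driver.edit(...)
            // again with the proxy you want to keep working on.
            driver.edit(p, requestFactory.personRequest()); // hypothetical factory method
        }

        @Override
        public void onConstraintViolation(Set<ConstraintViolation<?>> violations) {
            // Violations: per the notes above, the context was not consumed, so
            // the proxy is still editable; show the errors and let the user save again.
        }

        @Override
        public void onFailure(ServerFailure error) {
            // General failure (e.g. the request could not be fired): per the notes
            // above, the context stays reusable and editing can continue.
        }
    });
}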
Have a look at onTransportSuccess in AbstractRequestContext - this will tell you: the only cases in which you can continue using the request context are violation and general failure. So either you enforce a violation or you return (the erroneous) object to the client and continue working on it with a fresh request context (this will lead to issues with entity proxies I'm afraid since it will loose the reference state)