Listening for Electron's ipcRenderer message inside a Vue component

Currently, I'm using Vue inside an Electron application. Inside a Vue master component there can be multiple children. Each child listens for a signal that may be broadcast by Electron's main process, like so:
import { ipcRenderer } from 'electron'

export default {
  // ...
  created() {
    ipcRenderer.on('set-service-status', (e, data) => {
      // something with the data
    })
  },
  // ...
}
However, when there are more than ten child components, Node emits the warning MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 set-service-status listeners added. Use emitter.setMaxListeners() to increase limit. This makes sense, since multiple event listeners are being set up, one for every component.
How could this be solved? Should I just listen for the set-service-status signal inside the master component and then use Vue's eventing system to broadcast the message further down to the children? Or is there a better way to deal with this?

As I understand it, the problem with your current setup is that you start listening each time a component is created, which leaves you with many listeners for a single IPC channel.
Instead of listening in created(), put this logic inside your Vuex store and call it only once. Or you can still use created() in your entry file, the main root component, and pass the data down to your child components as props. That also works.
For example:
function setupIpc(dispatch) {
  ipcRenderer.on('set-service-status', (e, data) => {
    // something with the data
  })
  ipcRenderer.on('fullscreenChanged', (e, args) => {
    dispatch('fullscreenHandler', args)
  })
  ipcRenderer.send('ipcReady')
}
Then call it only once, when you start the application:
updateState({ commit, dispatch }) {
  setupIpc(dispatch)
  setInterval(() => { dispatch('stateSaveImmediate') }, 5000)
  dispatch('init')
  ipcRenderer.once('configGet', (e, data) => {
    if (data !== null && data !== undefined) {
      commit(ActionTypes.UPDATE_STATE, data)
    } else {
      commit(ActionTypes.UPDATE_STATE_ERROR_NO_CONFIG_FILE)
    }
    dispatch('doSomething')
  })
  ipcRenderer.send('configGet')
},
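For completeness, here is a minimal sketch of the root-component alternative mentioned above, with the listener registered exactly once and the data handed to children as props; the serviceStatus field and the child component name are illustrative assumptions, not part of the original answer:

// Root component: created() runs only once, so only one listener is registered
import { ipcRenderer } from 'electron'

export default {
  data() {
    return { serviceStatus: null }
  },
  created() {
    ipcRenderer.on('set-service-status', (e, data) => {
      this.serviceStatus = data
    })
  }
}

Each child then declares the prop and the root template passes it down, e.g. <service-item :service-status="serviceStatus" />, so no child ever touches ipcRenderer.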

Related

How to get an 'on' event listener in the @ibm-cloud/cloudant package?

The deprecated @cloudant/cloudant package has been replaced by the @ibm-cloud/cloudant package. In the former I was using the following code snippet:
const feed = dummyDB.follow({ include_docs: true, since: 'now' })
feed.on('change', function (change) {
  console.log(change)
})
feed.on('error', function (err) {
  console.log(err)
})
feed.filter = function (doc, req) {
  if (doc._deleted || doc.clusterId === clusterID) {
    return true
  }
  return false
}
Could you share code that gets a feed.on event listener, similar to the above, in the new @ibm-cloud/cloudant npm package?
There isn't an event emitter for changes in the @ibm-cloud/cloudant package right now. You can emulate the behaviour by either:
polling postChanges (updating the since value after new results) and processing the response result property, which is a ChangesResult. That in turn has a results property that is an array of ChangesResultItem elements, each of which is equivalent to the change argument of the event handler function,
or
calling postChangesAsStream with a feed type of continuous and processing the stream returned in the response result property, each line of which is a JSON object that follows the structure of ChangesResultItem. In this case you'd also probably want to configure a heartbeat and timeouts; see the sketch below.
In both cases you'd need to handle errors and reconnect in the event of network glitches etc.
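To illustrate the second option, here is a rough sketch using the SDK's CloudantV1 client; the database name 'mydb' and the heartbeat value are assumptions, so check the parameter names against the SDK documentation:

const { CloudantV1 } = require('@ibm-cloud/cloudant')
const readline = require('readline')

// Credentials are picked up from the environment (CLOUDANT_* variables)
const client = CloudantV1.newInstance({})

async function followChanges() {
  const response = await client.postChangesAsStream({
    db: 'mydb', // assumed database name
    feed: 'continuous',
    since: 'now',
    includeDocs: true,
    heartbeat: 5000 // keeps the connection alive
  })
  // Each non-empty line of the stream is one JSON object shaped like ChangesResultItem
  const lines = readline.createInterface({ input: response.result })
  for await (const line of lines) {
    if (line.trim() === '') continue // heartbeats arrive as blank lines
    const change = JSON.parse(line)
    console.log(change)
  }
}

// Reconnecting after network glitches is left to the caller, as noted above
followChanges().catch(console.error)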

Is it necessary to close a MongoDB Change Stream?

I wrote the following Node/Express/Mongo script:
const { MongoClient } = require("mongodb");
const stream = require("stream");

async function main() {
  // CONNECTING TO LOCALHOST (REPLICA SET)
  const client = new MongoClient("mongodb://localhost:27018");
  try {
    // CONNECTION
    await client.connect();
    // EXECUTING MY WATCHER
    console.log("Watching ...");
    await myWatcher(client, 15000);
  } catch (e) {
    // ERROR MANAGEMENT
    console.log(`Error > ${e}`);
  } finally {
    // CLOSING CLIENT CONNECTION ???
    await client.close(); // << ????
  }
}
main().catch(console.error);

// MY WATCHER. LISTENING FOR CHANGES ON MY DATABASE
async function myWatcher(client, timeInMs, pipeline = []) {
  // TARGET TO WATCH
  const watching = client.db("myDatabase").collection("myCollection").watch(pipeline);
  // WATCHING CHANGES ON TARGET
  watching.on("change", (next) => {
    console.log(JSON.stringify(next));
    console.log(`Doing my things...`);
  });
  // CLOSING THE WATCHER ???
  closeChangeStream(timeInMs, watching); // << ????
}

// CHANGE STREAM CLOSER
function closeChangeStream(timeInMs = 60000, watching) {
  return new Promise((resolve) => {
    setTimeout(() => {
      console.log("Closing the change stream");
      watching.close();
      resolve();
    }, timeInMs);
  });
}
So, the goal is to keep the myWatcher function always active, to watch for any database changes and, for example, send a user notification when an update is detected. The closeChangeStream function closes myWatcher X seconds after any database change. So, to keep myWatcher always active, do you recommend not using the closeChangeStream function?
Another thing: with this goal in mind, if I keep the await client.close(), my code throws the error Topology is closed, and when I omit await client.close(), my code works perfectly. Do you recommend not using await client.close() so as to keep myWatcher always active?
I'm a newbie in these topics. Thanks for the advice and for the help!
MongoDB change streams are implemented in a pub/sub paradigm.
Send your application to a friend in Sudan. Have both you and your friend run the application (that has the change stream implemented). If you open up mongosh and run db.getCollection('myCollection').updateOne({ _id: ObjectId("6220ee09197c13d24a7997b7") }, { $set: { FirstName: 'Bob' } }), both you and your friend will get the console.log from the change stream.
This assumes you're not running on localhost, but you can simulate it with two copies of the application locally.
The issue comes when you go into production and suddenly have 200 load balancers, 5 developers, etc. running, and your watch fires for a ton of writes around the globe.
I believe the practice is to functionize it: wrap your watch in a function, fire the function when you're about to do a write, and close it after you've done your associated writes, as sketched below.
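A minimal sketch of that suggestion, assuming the Node driver's watch() API; the watchDuringWrite name and the doWrite callback are illustrative, not part of the original answer:

// Open a change stream only for the duration of a write, then close it
async function watchDuringWrite(client, doWrite) {
  const collection = client.db("myDatabase").collection("myCollection");
  const changeStream = collection.watch();
  changeStream.on("change", (next) => {
    console.log(JSON.stringify(next));
  });
  try {
    // Perform the associated write(s) while the stream is open
    await doWrite(collection);
  } finally {
    // Always release the stream when done
    await changeStream.close();
  }
}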

Want to indefinitely observe an array that changes over time

I am trying to use RxJS Observables to watch for changes in my array, with no luck.
Imports:
import {Observable} from 'rxjs/Observable';
import 'rxjs/add/observable/of';
In my main service in Angular 2, I am getting an array from a socket.io server that changes when users connect or disconnect.
I set the data after every change.
I know userList is updating when the socket emits, but for some reason I can't figure out how to continuously observe this change in my component.
Main Service:
socket.on('get users', (data) => {
  this.userList = data;
});
Function in Main Service - getUsers():
getUsers() {
  return Observable.of(this.userList);
}
I am trying both subscribing to the userList variable and using an async pipe, but neither updates; they only work the first time, then stop.
How do I make it actually observe the changes indefinitely?
MainService
import { Subject } from 'rxjs/Subject';

userList: Subject<any> = new Subject<any>();
userList$ = this.userList.asObservable();

socket.on('get users', (data: any) => {
  this.userList.next(data);
});
In Component
this.mainService.userList$
  .subscribe(
    (data: any) => console.log(data)
  );
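Since the question also tried the async pipe: once userList$ is backed by a Subject like this, the pipe keeps receiving updates as well, e.g. *ngFor="let user of mainService.userList$ | async" in the component template (the exact markup is an assumption; any async-piped binding will re-render on each next(data)).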

Angular2 e2e test case with Protractor throwing error

I have created my app with angular2-webpack-starter and I have used socket.io with it. I created one common service to create the socket connection and listen to its events. This service is used and initialized after the user logs in. When the app is running and I execute the test case for login, I check the url with the code below:
browser.getCurrentUrl().then((url) => {
  expect(url).toEqual('/dashboard');
});
The issue is that when the socket is connected it throws the error 'Timed out waiting for Protractor to synchronize with the page after 15 seconds', and if the socket is not connected the same test case runs without any error.
I'm not sure whether connecting to the socket actually makes things take longer or not, but if the 15 seconds isn't enough time, you can change allScriptsTimeout: timeout_in_millis in your Protractor configuration file. See the Protractor timeouts documentation.
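For example, a minimal sketch of the relevant part of the configuration file (the protractor.conf.js name and the 30-second value are illustrative):

// protractor.conf.js
exports.config = {
  // Give the page more time to synchronize before Protractor times out
  allScriptsTimeout: 30000,
  // ...the rest of your existing configuration
};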
So the solution I have found is:
(This is copied from here for your convenience; all credit goes to https://github.com/cpa-level-it,
https://github.com/angular/angular/issues/11853#issuecomment-277185526)
What I did to fix the problem was to use ngZone everywhere I have an observable that relies on socket.io.
So let's say you have this method in your service that gives you an observable over a socket.io connection.
private socket: SocketIOClient.Socket;

public getSocketIOEvents(): Observable<SocketIOEvent> {
  if (this.socket == null) {
    this.socket = io.connect(this._socketPath);
  }
  return Observable.create((observer: any) => {
    this.socket.on('eventA', (item: any) => observer.next(new SocketIOEvent(item)));
    this.socket.on('eventB', (item: any) => observer.next(new SocketIOEvent(item)));
    return () => this.socket.close();
  });
}
Then you need to use the ngZone service to tell Angular to create the socket outside the Angular 2 zone and then execute the callback of the Observable inside the Angular 2 zone.
import { NgZone } from '@angular/core';

constructor(
  private socketService: SocketIOService,
  private ngZone: NgZone) { }

ngOnInit() {
  // Subscribe to the Observable outside the Angular zone...
  this.ngZone.runOutsideAngular(() => {
    this.socketService
      .getSocketIOEvents()
      .subscribe(event => {
        // Come back into the Angular zone when there is a callback from the Observable
        this.ngZone.run(() => {
          this.handleEvent(event);
        });
      });
  });
}
This way protractor doesn't hang waiting on the socket.

RxJava user-retry observable with .cache operator?

I've an observable that I create with the following code:
Observable.create(new Observable.OnSubscribe<ReturnType>() {
    @Override
    public void call(Subscriber<? super ReturnType> subscriber) {
        try {
            if (!subscriber.isUnsubscribed()) {
                subscriber.onNext(performRequest());
            }
            subscriber.onCompleted();
        } catch (Exception e) {
            subscriber.onError(e);
        }
    }
});
performRequest() will perform a long-running task, as you might expect.
Now, since I might be launching the same Observable twice or more in a very short amount of time, I decided to write the following transformer:
protected Observable.Transformer<ReturnType, ReturnType> attachToRunningTaskIfAvailable() {
    return origObservable -> {
        synchronized (mapOfRunningTasks) {
            // If not in the map
            if (!mapOfRunningTasks.containsKey(getCacheKey())) {
                Timber.d("Cache miss for %s", getCacheKey());
                mapOfRunningTasks.put(
                        getCacheKey(),
                        origObservable
                                .doOnTerminate(() -> {
                                    Timber.d("Removed from tasks %s", getCacheKey());
                                    synchronized (mapOfRunningTasks) {
                                        mapOfRunningTasks.remove(getCacheKey());
                                    }
                                })
                                .cache()
                );
            } else {
                Timber.d("Cache Hit for %s", getCacheKey());
            }
            return mapOfRunningTasks.get(getCacheKey());
        }
    };
}
This basically puts the original, .cache()d observable in a HashMap<String, Observable>.
It disallows multiple requests with the same getCacheKey() (for example, login) from calling performRequest() in parallel. Instead, if a second login request arrives while another is in progress, the second request's observable gets "discarded" and the already-running one is used instead, so all the calls to onNext are cached and sent to both subscribers while actually hitting my backend only once.
Now, suppose this code:
// Observable loginTask
public void doLogin(Observable<UserInfo> loginTask) {
    loginTask.subscribe(
        (userInfo) -> {},
        (throwable) -> {
            if (userWantsToRetry()) {
                doLogin(loginTask);
            }
        }
    );
}
Here loginTask was composed with the previous transformer. Well, when an error occurs (it might be connectivity) and userWantsToRetry() is true, I basically re-call the method with the same observable. Unfortunately, it has been cached, so I'll receive the same error without hitting performRequest() again, since the sequence just gets replayed.
Is there a way I could have both the "same requests grouping" behaviour that the transformer provides me AND the retry button?
Your question has a lot going on and it's hard to put it into direct terms. I can make a couple of recommendations, though. Firstly, your Observable.create can be simplified by using Observable.defer(Func0<Observable<T>>). This will run the function every time a new subscriber subscribes, and will catch and channel any exceptions to the subscriber's onError.
Observable.defer(() -> {
    return Observable.just(performRequest());
});
Next, you can use observable.repeatWhen(Func1<Observable<Void>, Observable<?>>) to decide when you want to retry. Repeat operators will re-subscribe to the observable after an onComplete event. This particular overload will send an event to a subject when an onComplete event is received. The function you provide will receive this subject. Your function should call something like takeWhile(predicate) and onComplete when you do not want to retry again.
Observable.just(1, 2, 3).flatMap((Integer num) -> {
    final AtomicInteger tryCount = new AtomicInteger(0);
    return Observable.just(num)
            .repeatWhen((Observable<? extends Void> notifications) ->
                    notifications.takeWhile((x) -> num == 2 && tryCount.incrementAndGet() != 3));
})
.subscribe(System.out::println);
Output:
1
2
2
2
3
The above example shows that repeats are allowed only while the event is 2, up to a maximum of two extra emissions (three in total). If you switch to a retryWhen, then the flatMap would contain your decision as to whether to use the cached observable or the realWork observable. Hope this helps!