PeerJS data connection doesn't work as expected

I am building an audio calling app using PeerJS.
When I use peer.on('connection', function() { ... }) inside peer.on('call', function(call) { ... }), it doesn't work.
I want to be able to do the above so that I can close the audio stream when I receive data from the call initiator.
Why doesn't peer.on('connection', function() { ... }) work inside peer.on('call', function(call) { ... })?
And how can I go about receiving the data from the caller on the receiving side?
NOTE: I need to receive the data inside peer.on('call', function(call) { ... }) so that I can close the audio stream.
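For reference, a minimal sketch of one way to structure this (an assumption on my part, not the asker's code: it presumes the caller opens a data connection alongside the media call, and localAudioStream and the 'hang-up' message are made-up names). Registering both handlers at the top level of the peer and sharing state between them avoids the nesting: a 'connection' listener attached only inside the 'call' handler is registered late and can miss a data connection that was opened before the call arrived.

const peer = new Peer();

let currentCall = null;

// Answer incoming calls with the local audio stream
peer.on('call', (call) => {
  call.answer(localAudioStream); // e.g. a stream obtained via getUserMedia()
  currentCall = call;            // remember the call so the data handler can close it
});

// Handle the caller's data connection at the peer level, not inside 'call'
peer.on('connection', (conn) => {
  conn.on('data', (data) => {
    // Hypothetical protocol: the caller sends 'hang-up' over the data connection
    if (data === 'hang-up' && currentCall) {
      currentCall.close(); // closes the media connection and stops the audio
    }
  });
});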

Related

Flutter Future timeouts not always working correctly

Hey, I need some help with how to use timeouts in Flutter correctly. First of all, to explain what the main goal is:
I want to receive data from my Firebase Realtime Database, but I need to guard this API request with a timeout of 15 seconds. After 15 seconds, my timeout should throw an exception that results in the user's frontend showing an alert about the timeout.
So I used the simple way to call timeouts on Future functions.
This function should only check whether an ID exists under some Firebase node or not:
Inside the class where I have declared this function I also have an instance called timeoutControl; this is a class which contains a duration and some reasons for the exceptions.
Future<bool> isUserCheckedIn(String oid, String maybeCheckedInUserIdentifier, String onGateId) async {
  try {
    databaseReference = _firebaseDatabase.ref("Boarding").child(oid).child(onGateId);
    final snapshot = await databaseReference
        .get()
        .timeout(Duration(seconds: timeoutControl.durationForTimeOutInSec),
            onTimeout: () => timeoutControl.onEppTimeoutForTask());
    if (snapshot.hasChild(maybeCheckedInUserIdentifier)) {
      return true;
    } else {
      return false;
    }
  } catch (exception) {
    return false;
  }
}
The CustomTimeouts class the timeoutControl instance comes from:
class CustomTimeouts {
  int durationForTimeOutInSec = 15; // How many seconds to wait before throwing a timeout exception
  CustomTimeouts();
  // TODO: Implement the exception reasons here later ...
  onEppTimeoutForUpload() {
    throw Exception("Some reason ...");
  }
  onEppTimeoutForTask() {
    throw Exception("Some reason ...");
  }
  onEppTimeoutForDownload() {
    throw Exception("Some reason ...");
  }
}
So, as you can see, I tried to use the implementation above. It works fine ... but sometimes I have to fight with unexplainable behavior -_-. Let me try to describe what the problem is in some cases:
Inside the frontend class I make this call:
bool isUserCheckedIn = await service.isUserCheckedIn(placeIdentifier, userId, gateId);
Map<String, dynamic> data = {"gateIdActive": isUserCheckedIn};
/*
  The response here is a custom transaction handler which contains an error or a
  returned param etc., so this isn't relevant for you ...
*/
_gateService.updateGate(placeIdentifier, gateId, data).then((response) {
  if (response.hasError()) {
    setState(() {
      EppDialog.showErrorToast(response.getErrorMessage()); // Shows an error message
      isSendButtonDiabled = false; // Reset the button's state
    });
  } else {
    // Create a gate process here ...
    createGateEntrys(); // <-- If the update above was successful we also handle some
                        // other data inside the RTDB for other reasons here ...
  }
});
IMPORTANT for you to know: I use the returned boolean value from this function call to update some other data, which is then pushed to another node location in the RTDB for other reasons. And if that was also successful, the application goes on to update some entries inside the RTDB --> createGateEntrys() <-- This function is called last, is also marked as an async function, and is called within its closure's context without an await statement.
The data inside my Firebase RTDB:
"GateCheckIns" / "4mrithabdaofgnL39238nH" (the place identifier) / "NFdxcfadaies45a" (the gate identifier) / "nHz2mhagadzadzgadHjoeua334": 1 (the key is the ID of a user who is checked in)
On real devices this always works without any problems... and whether it is a real device or a simulator should not be the reason why I am facing this problem. Sometimes, inside the simulator, this function always returns false, no matter whether the current user's identifier is inside the child nodes or not. I then realized the timeout was always triggered immediately, after just 1-2 seconds, because the exception was always one of those I throw from my CustomTimeouts class via the function passed to the .timeout(duration, onTimeout: () => ...) call. I couldn't figure it out because, as I said, I was not facing this problem on real devices.
I hope I was able to explain the problem; it's a little bit complicated, I know, but it is important to me that someone can explain what I should pay attention to when using timeouts in this style.
(This is my first question here on StackOverflow :) )

How to get an 'on' event listener in the @ibm-cloud/cloudant package?

The deprecated @cloudant/cloudant package has been replaced by the @ibm-cloud/cloudant package. In the former I was using the following code snippet:
const feed = dummyDB.follow({ include_docs: true, since: 'now' })
feed.on('change', function (change) {
  console.log(change)
})
feed.on('error', function (err) {
  console.log(err)
})
feed.filter = function (doc, req) {
  if (doc._deleted || doc.clusterId === clusterID) {
    return true
  }
  return false
}
Could you share code with which I can get a feed.on event listener, similar to the above, in the new npm package @ibm-cloud/cloudant?
There isn't an event emitter for changes in the @ibm-cloud/cloudant package right now. You can emulate the behaviour by either:
polling postChanges (updating the since value after new results) and processing the response result property, which is a ChangesResult. That in turn has a results property that is an array of ChangesResultItem elements, each of which is equivalent to the change argument of the event handler function.
or
call postChangesAsStream with a feed type of continuous and process the stream returned in the response result property, each line of which is a JSON object that follows the structure of ChangesResultItem. In this case you'd also probably want to configure a heartbeat and timeouts.
In both cases you'd need to handle errors and reconnect in the event of network glitches etc. A rough sketch of both options follows.
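Something like the following, with the caveat that this is a sketch against my reading of the current @ibm-cloud/cloudant API, not tested code; the database name 'mydb' and the processChange function are placeholders:

const { CloudantV1 } = require('@ibm-cloud/cloudant');
const readline = require('readline');

const service = CloudantV1.newInstance({}); // credentials picked up from the environment

// Option 1: poll postChanges, advancing `since` after each batch
async function pollChanges() {
  let since = 'now';
  for (;;) {
    const response = await service.postChanges({
      db: 'mydb',
      since,
      includeDocs: true,
      feed: 'longpoll', // wait for changes instead of returning immediately
      timeout: 60000,
    });
    const changesResult = response.result; // a ChangesResult
    for (const item of changesResult.results) {
      processChange(item); // each item is a ChangesResultItem, i.e. the old `change` argument
    }
    since = changesResult.last_seq; // don't re-read changes we've already seen
  }
}

// Option 2: a continuous feed as a stream, one JSON object per line
async function streamChanges() {
  const response = await service.postChangesAsStream({
    db: 'mydb',
    since: 'now',
    includeDocs: true,
    feed: 'continuous',
    heartbeat: 5000, // keep the connection alive during quiet periods
  });
  const lines = readline.createInterface({ input: response.result });
  lines.on('line', (line) => {
    if (line.trim()) { // skip heartbeat blank lines
      processChange(JSON.parse(line)); // same shape as a ChangesResultItem
    }
  });
}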

Is it necessary to close a MongoDB Change Stream?

I coded the following Node/Express/Mongo script:
const { MongoClient } = require("mongodb");
const stream = require("stream");

async function main() {
  // CONNECTING TO LOCALHOST (REPLICA SET)
  const client = new MongoClient("mongodb://localhost:27018");
  try {
    // CONNECTION
    await client.connect();
    // EXECUTING MY WATCHER
    console.log("Watching ...");
    await myWatcher(client, 15000);
  } catch (e) {
    // ERROR MANAGEMENT
    console.log(`Error > ${e}`);
  } finally {
    // CLOSING CLIENT CONNECTION ???
    await client.close(); // << ????
  }
}
main().catch(console.error);
// MY WATCHER. LISTENING FOR CHANGES IN MY DATABASE
async function myWatcher(client, timeInMs, pipeline = []) {
  // TARGET TO WATCH
  const watching = client.db("myDatabase").collection("myCollection").watch(pipeline);
  // WATCHING FOR CHANGES ON THE TARGET
  watching.on("change", (next) => {
    console.log(JSON.stringify(next));
    console.log(`Doing my things...`);
  });
  // CLOSING THE WATCHER ???
  closeChangeStream(timeInMs, watching); // << ????
}
// CHANGE STREAM CLOSER
function closeChangeStream(timeInMs = 60000, watching) {
  return new Promise((resolve) => {
    setTimeout(() => {
      console.log("Closing the change stream");
      watching.close();
      resolve();
    }, timeInMs);
  });
}
So, the goal is to keep the myWatcher function always active, to watch for any database changes and, for example, send a user notification when some update is detected. The closeChangeStream function closes myWatcher X seconds after a database change. So, to keep myWatcher always active, do you recommend not using the closeChangeStream function?
Another thing: with this goal in mind, to keep myWatcher always active, if I keep the await client.close(), my code emits an error: Topology is closed. When I omit await client.close(), my code works perfectly. Do you recommend not using await client.close() to keep myWatcher always active?
I'm a newbie on these topics, so thanks for the advice and the help!
MongoDB change streams are implemented in a pub/sub paradigm.
Send your application to a friend in Sudan. Have both you and your friend run the application (which has the change stream implemented). If you open up mongosh and run db.getCollection('myCollection').updateOne({_id: ObjectId("6220ee09197c13d24a7997b7")}, {$set: {FirstName: "Bob"}}); both you and your friend will get the console.log from the change stream.
This is assuming you're not running localhost, but you can simulate this with two copies of the application locally.
The issue comes when you go into production and suddenly have 200 load balancers, 5 developers, etc. running, and your watch fires for a ton of writes around the globe.
I believe the practice is to functionize it: wrap your watch in a function and fire the function when you're about to do a write (and close it after you do your associated writes), as in the sketch below.
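A minimal sketch of that idea, under my own assumptions (collection and field names are made up, and this is one possible shape, not a definitive pattern): the stream is opened just before the associated writes and closed right after them.

const { MongoClient } = require("mongodb");

async function writeWithWatch(client, someId) {
  const collection = client.db("myDatabase").collection("myCollection");

  // Open the change stream only for the duration of the associated writes
  const changeStream = collection.watch();
  changeStream.on("change", (change) => {
    console.log(`Change seen: ${JSON.stringify(change)}`);
  });

  try {
    await collection.updateOne(
      { _id: someId },             // someId: whatever document you're targeting
      { $set: { checkedIn: true } }
    );
  } finally {
    // Close the stream once the writes are done, so idle
    // streams don't accumulate across a fleet of servers
    await changeStream.close();
  }
}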

Listening for Electron's ipcRenderer message inside a Vue component

Currently, I'm using Vue inside an Electron application. Inside a Vue master component there are possibly multiple children. Each child listens for a signal that might be broadcast by Electron's main process, like so:
export default {
  ...
  created() {
    ipcRenderer.on('set-service-status', (e, data) => {
      // something with the data
    })
  }
  ...
}
However, when there are more than 11 child components, Node throws the error MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 set-service-status listeners added. Use emitter.setMaxListeners() to increase limit. This makes sense, since multiple event listeners are being set up, one for every component.
How could this be solved? Should I just listen for the set-service-status signal inside the master component and then use Vue's eventing system to broadcast the message further down to the children? Or is there a better way to deal with this?
As I understand it, the problem with your current setup is that you start listening each time a component is created, and this causes the problem of having a lot of listeners for one IPC call.
Instead of listening in created(), put this logic inside your Vuex store and call it only once. Or you can still use created() in your entry file, the main root component, and give the data to your child components as props; that also works (see the sketch after the code below).
For example:
function setupIpc(dispatch) {
  ipcRenderer.on('set-service-status', (e, data) => {
    // something with the data
  })
  ipcRenderer.on('fullscreenChanged', (e, args) => {
    dispatch('fullscreenHandler', args)
  })
  ipcRenderer.send('ipcReady')
}
and only call it once, when you start the application:
updateState({ commit, dispatch }) {
  setupIpc(dispatch)
  setInterval(() => { dispatch('stateSaveImmediate') }, 5000)
  dispatch('init')
  ipcRenderer.once('configGet', (e, data) => {
    if (data !== null && data !== undefined) { // was: data === !null || !undefined, which is always true
      commit(ActionTypes.UPDATE_STATE, data)
    } else {
      commit(ActionTypes.UPDATE_STATE_ERROR_NO_CONFIG_FILE)
    }
    dispatch('doSomething')
  })
  ipcRenderer.send('configGet')
},
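And a rough sketch of the props alternative mentioned above (my own illustration, with made-up component names): only the root component subscribes to the IPC event, and children receive the value reactively through props, so no child ever attaches its own listener.

const { ipcRenderer } = require('electron')

// Child component: no IPC code at all, just a prop
const ServiceStatusChild = {
  props: ['status'],
  template: '<span>Service status: {{ status }}</span>'
}

// Root component: the single IPC listener for the whole tree
export default {
  components: { ServiceStatusChild },
  data() {
    return { serviceStatus: null }
  },
  created() {
    ipcRenderer.on('set-service-status', (e, data) => {
      this.serviceStatus = data
    })
  },
  template: '<ServiceStatusChild :status="serviceStatus" />'
}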

Redux-saga and socket subscription causes Uncaught TypeError: Converting circular structure to JSON

I am having trouble subscribing to a socketcluster (http://socketcluster.io/) channel when using a redux-saga generator in my chat app. The socketcluster backend is set up in a way where any messages are saved in the database and then published into the receiving user's personal channel, which is named after the user's id. For example, User A has the id '123abc' and would subscribe to the channel named '123abc' for their realtime messages.
The code below does receive new messages that are published to a channel, but it throws a "TypeError: Converting circular structure to JSON" on load and breaks all of my other redux-saga generators in the app. I've done some digging in Chrome DevTools and my theory is that it has something to do with the queue created in the createChannel function. Also, I've tried returning a deferred promise in the subscribeToChannel function, but that also caused a circular conversion error; I can post that code on request.
I referred to this answer at first: https://stackoverflow.com/a/35288877/5068616 and it helped me get the below code in place, but I cannot find any similar issues on the internet. Also, something to note: I am utilizing redux-socket-cluster (https://github.com/mattkrick/redux-socket-cluster) to sync up the socket and state, but I don't think it is the root of the problem.
sagas.js
export default function* root() {
  yield [
    fork(startSubscription),
  ]
}

function* startSubscription(getState) {
  while (true) {
    const {
      userId
    } = yield take(actions.SUBSCRIBE_TO_MY_CHANNEL);
    yield call(monitorChangeEvents, subscribeToChannel(userId))
  }
}

function* monitorChangeEvents(channel) {
  while (true) {
    const info = yield call(channel.take) // Blocks until the promise resolves
    console.log(info)
  }
}

function subscribeToChannel(channelName) {
  const channel = createChannel();
  const socket = socketCluster.connect(socketConfig);
  const c = socket.subscribe(channelName);
  c.watch(event => {
    channel.put(event)
  })
  return channel;
}

function createChannel() {
  const messageQueue = []
  const resolveQueue = []

  function put(msg) {
    // anyone waiting for a message?
    if (resolveQueue.length) {
      // deliver the message to the oldest one waiting (First In First Out)
      const nextResolve = resolveQueue.shift()
      nextResolve(msg)
    } else {
      // no one is waiting? queue the event
      messageQueue.push(msg)
    }
  }

  // returns a Promise resolved with the next message
  function take() {
    // do we have queued messages?
    if (messageQueue.length) {
      // deliver the oldest queued message
      return Promise.resolve(messageQueue.shift())
    } else {
      // no queued messages? queue the taker until a message arrives
      return new Promise((resolve) => resolveQueue.push(resolve))
    }
  }

  return {
    take,
    put
  }
}
Thanks for the help!
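For comparison, here is how the same socket-to-saga bridge might look using redux-saga's built-in eventChannel instead of the hand-rolled queue (my own sketch, assuming the same socketConfig and action names as in the question):

import { eventChannel } from 'redux-saga'
import { call, fork, take } from 'redux-saga/effects'
import socketCluster from 'socketcluster-client'

function createSocketChannel(channelName) {
  return eventChannel((emit) => {
    const socket = socketCluster.connect(socketConfig)
    const c = socket.subscribe(channelName)
    c.watch((event) => emit(event))
    // the unsubscribe function, invoked when the channel is closed
    return () => socket.unsubscribe(channelName)
  })
}

function* monitorChangeEvents(channel) {
  while (true) {
    const info = yield take(channel) // take() works directly on an eventChannel
    console.log(info)
  }
}

function* startSubscription() {
  while (true) {
    const { userId } = yield take(actions.SUBSCRIBE_TO_MY_CHANNEL)
    const channel = yield call(createSocketChannel, userId)
    yield fork(monitorChangeEvents, channel)
  }
}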