TypeError: this.subClient.psubscribe is not a function - sockets

Here's the init of the Redis pubClient and subClient, in the onConnection event with the socket. I'm trying to initialize Redis on every socket connection:
this.subClient.psubscribe(this.channel + "*", onError);
TypeError: this.subClient.psubscribe is not a function

As TJ mentioned, inline code would be much, much better. You should revise your question to include that. That said, I looked at your code and I see two problems.
You haven't opened your Redis connection. Instructions on how to do this can be found at the top of the README for Node Redis. I've pasted it below for your convenience, but you'll want to go over the README for more details:
import { createClient } from 'redis';
const client = createClient();
client.on('error', (err) => console.log('Redis Client Error', err));
await client.connect();
The method is pSubscribe, not psubscribe. Examples for Redis Pub/Sub are also in the README, albeit a bit further down. Here are the salient bits:
const subscriber = client.duplicate();
await subscriber.connect();
await subscriber.pSubscribe('channe*', (message, channel) => {
  console.log(message, channel); // 'message', 'channel'
});
await subscriber.pUnsubscribe('channe*');
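Putting both of those together against the code in the question, a minimal sketch might look like this (this.subClient and this.channel are assumed from your snippet; note that in Node Redis v4 the second argument to pSubscribe is a message listener, not the old onError callback):
// Sketch only: adapt the README pattern to the question's fields.
// this.subClient and this.channel come from the original snippet.
await this.subClient.connect(); // the client must be connected before subscribing
await this.subClient.pSubscribe(this.channel + "*", (message, channel) => {
  console.log(message, channel); // handle the published message here
});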
Hope this helps and, please, help future readers by inlining your code. Thanks!

Related

How to execute function after stream is closed in Dart/Flutter?

So basically I am using the flutter_uploader package to upload files to a server and I'd like to execute a function after the upload is complete:
final StreamSubscription<UploadTaskProgress> subscription = _uploader.progress.listen(
  (e) {
    print(e.progress);
  },
  onError: (ex, stacktrace) {
    throw Exception("Something went wrong updating the file...");
  },
  onDone: () {
    myFunction(); // won't run
  },
  cancelOnError: true,
);
The problem is that the onDone callback doesn't execute, which means myFunction never runs. I've done some digging and found that onDone gets called when we close the stream, but there is no such method on the subscription variable. I haven't used streams much and therefore am pretty bad with them.
My question is: how can I run myFunction once the stream is complete? I thought onDone would get called in that case, but I guess not.
Thank you!
I haven't used that package before, but I read a little about it, and I think you can execute your function inside the main listener block; the other callbacks are for handling internal processes, like stopping a background job, or external tasks, like reporting the error to an error-monitoring tool. This is what I propose:
final StreamSubscription<UploadTaskProgress> subscription =
    _uploader.progress.listen(
  (e) {
    // UploadTaskStatus.complete is the public constant for a finished upload
    if (e.status == UploadTaskStatus.complete) {
      myFunction();
    }
    print(e.progress);
  },
  onError: (ex, stacktrace) {
    throw Exception("Something went wrong updating the file...");
  },
  cancelOnError: true,
);
Just to be clear, I'm not sure of the specific implementation; it's just an idea I got from the docs. It seems the event also contains a status property, which has a constant for when the upload is complete:
https://pub.dev/documentation/flutter_uploader/latest/flutter_uploader/UploadTaskProgress/UploadTaskProgress.html
https://pub.dev/documentation/flutter_uploader/latest/flutter_uploader/UploadTaskStatus-class.html
Hope this helps you :D

Is it necessary to close a Mongodb Change Stream?

I wrote the following Node/Express/Mongo script:
const { MongoClient } = require("mongodb");
const stream = require("stream");

async function main() {
  // CONNECTING TO LOCALHOST (REPLICA SET)
  const client = new MongoClient("mongodb://localhost:27018");
  try {
    // CONNECTION
    await client.connect();
    // EXECUTING MY WATCHER
    console.log("Watching ...");
    await myWatcher(client, 15000);
  } catch (e) {
    // ERROR MANAGEMENT
    console.log(`Error > ${e}`);
  } finally {
    // CLOSING CLIENT CONNECTION ???
    await client.close(); // << ????
  }
}
main().catch(console.error);

// MY WATCHER. LISTENING FOR CHANGES ON MY DATABASE
async function myWatcher(client, timeInMs, pipeline = []) {
  // TARGET TO WATCH
  const watching = client.db("myDatabase").collection("myCollection").watch(pipeline);
  // WATCHING CHANGES ON TARGET
  watching.on("change", (next) => {
    console.log(JSON.stringify(next));
    console.log(`Doing my things...`);
  });
  // CLOSING THE WATCHER ???
  closeChangeStream(timeInMs, watching); // << ????
}

// CHANGE STREAM CLOSER
function closeChangeStream(timeInMs = 60000, watching) {
  return new Promise((resolve) => {
    setTimeout(() => {
      console.log("Closing the change stream");
      watching.close();
      resolve();
    }, timeInMs);
  });
}
So, the goal is to keep the myWatcher function always active, watching for any database changes and, for example, sending a user notification when an update is detected. The closeChangeStream function closes myWatcher X seconds after any database change. So, to keep myWatcher always active, do you recommend not using the closeChangeStream function?
Another thing. With this same goal in mind, if I keep the await client.close(), my code emits an error: Topology is closed, but when I remove it, my code works perfectly. Do you recommend not using await client.close() so that myWatcher stays active?
I'm a newbie on these topics! Thanks for the advice and the help!
MongoDB change streams are implemented in a pub/sub paradigm.
Send your application to a friend in Sudan. Have both you and your friend run the application (that has the change stream implemented). If you open up mongosh and run db.getCollection('myCollection').updateOne({_id: ObjectId("6220ee09197c13d24a7997b7")}, {$set: {FirstName: "Bob"}}); both you and your friend will get the console.log from the change stream.
This is assuming you're not running on localhost, but you can simulate it with two copies of the application locally.
The issue comes when you go into production and suddenly you have 200 load balancers, 5 developers, etc. running, and your watch fires for a ton of writes from around the globe.
I believe the practice is to functionize it: wrap your watch in a function, open it when you're about to do a write, and close it after you've done your associated writes.
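As a rough illustration of that pattern (a sketch only; withWatcher and doWrites are hypothetical names, and the database/collection names are taken from the question):
// Hypothetical sketch: open a change stream just for the duration of the
// associated writes, then close it so streams don't pile up across clients.
async function withWatcher(client, doWrites) {
  const changeStream = client.db("myDatabase").collection("myCollection").watch();
  changeStream.on("change", (next) => {
    console.log(JSON.stringify(next));
  });
  try {
    await doWrites(); // perform the associated writes while the watcher is live
  } finally {
    await changeStream.close(); // always release the stream when done
  }
}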

What is the difference between emit() and send() in flask socketio? [duplicate]

What's the difference between these two?
I noticed that if I changed from socket.emit to socket.send in a working program, the server failed to receive the message, although I don't understand why.
I also noticed that in my program if I changed from socket.emit to socket.send, the server receives a message, but it seems to receive it multiple times. When I use console.log() to see what the server received, it shows something different from when I use socket.emit.
Why this behavior? How do you know when to use socket.emit or socket.send?
With socket.emit you can register a custom event, like this:
server:
var io = require('socket.io').listen(80);
io.sockets.on('connection', function (socket) {
  socket.emit('news', { hello: 'world' });
  socket.on('my other event', function (data) {
    console.log(data);
  });
});
client:
var socket = io.connect('http://localhost');
socket.on('news', function (data) {
  console.log(data);
  socket.emit('my other event', { my: 'data' });
});
socket.send does the same, but you don't register for 'news' but for 'message':
server:
var io = require('socket.io').listen(80);
io.sockets.on('connection', function (socket) {
  socket.send('hi');
});
client:
var socket = io.connect('http://localhost');
socket.on('message', function (message) {
  console.log(message);
});
Simple and precise (Source: Socket.IO google group):
socket.emit allows you to emit custom events on the server and client
socket.send sends messages which are received with the 'message' event
TL;DR:
socket.send(data, callback) is essentially equivalent to calling socket.emit('message', JSON.stringify(data), callback)
Without looking at the source code, I would assume that the send function is more efficient (edit: for sending string messages, at least?). So yeah, basically emit allows you to send objects, which is very handy.
Take this example with socket.emit:
sendMessage: function(type, message) {
  socket.emit('message', {
    type: type,
    message: message
  });
}
and for those keeping score at home, here is what it looks like using socket.send:
sendMessage: function(type, message) {
  socket.send(JSON.stringify({
    type: type,
    message: message
  }));
}
socket.send is implemented for compatibility with the vanilla WebSocket interface, while socket.emit is a feature of Socket.IO only. They both do the same thing, but socket.emit is a bit more convenient for handling messages.
In basic two-way communication systems, socket.emit has proved more convenient and easier to use (personal experience), and it is part of Socket.IO, which is primarily built for such purposes.
https://socket.io/docs/client-api/#socket-send-args-ack
socket.send // Sends a 'message' event
socket.emit(eventName[, ...args][, ack]) // you can use a custom eventName
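In other words (a small illustrative snippet, not from the original answers; the payload is made up), these two calls reach the same client-side 'message' listener:
// Both of these are received by socket.on('message', ...) on the client
socket.send({ user: 'alice' });            // implicit 'message' event
socket.emit('message', { user: 'alice' }); // explicit event name, same result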

socket.io duplicate emit events on browser refresh

I'm running into an issue with my socket.io implementation and don't know how to solve it. I'm using pg_notify with LISTEN so when a certain value is modified in the db, it emits 'is_logged_in' to a certain client.
That in itself is working fine - my issue is when I refresh the page, socket.io disconnects the current socket_id, creates a new socket_id as usual, but when this happens, it's creating a second pgsql client instance and duplicating requests - fires the "logged_in" event 2x.
If I refresh the page again, and then manually fire the pg "logged_in" trigger, it will now emit 3 times etc. I have a leak.
const io = require('socket.io')();
const pg = require('pg');

io.on('connection', (socket) => {
  const pgsql = new pg.Client({
    // host, port, user, pass, db
  });
  pgsql.connect();
  pgsql.query("LISTEN logged_in");
  pgsql.on('notification', function (data) {
    socket.to(json.socket_id).emit('is_logged_in', { status: 'Y' });
  });
  socket.on('disconnect', () => {
    //pgsql.end();
  });
});
I've tried killing the pgsql instance (in the socket.on disconnect) but for some reason the LISTEN stops working when I do that.
I've also tried moving the new pg.Client outside the io.on connection but when I refresh the page, the old socket_id disconnects, the new one connects, and it never executes the code to recreate the pg client.
Any ideas?
These are probably what's creating the problems:
The pgsql instance is created on each socket connection request and is not being destroyed on disconnection
The notification handler is not being removed on disconnection
I'm not very familiar with Postgres, but I have worked extensively with sockets, so something like this should fix your issue:
const io = require('socket.io')();
const pg = require('pg');

// Create a single shared client instead of one per socket connection
const pgsql = new pg.Client({
  // host, port, user, pass, db
});
pgsql.connect();

io.on('connection', (socket) => {
  pgsql.query("LISTEN logged_in");
  const handler = function (data) {
    socket.to(json.socket_id).emit('is_logged_in', { status: 'Y' });
    // You could also do pgsql.off('notification', handler) here, probably,
    // or check if the pgsql.once method is available, as we need to call this handler only once?
  };
  pgsql.on('notification', handler);
  socket.on('disconnect', () => {
    pgsql.off('notification', handler);
    //pgsql.end(); // Call this in your server-termination logic instead
  });
});
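A design note, not from the original answer: since each socket automatically joins a room named after its own id, you could also register a single notification listener up front and route by the socket_id carried in the NOTIFY payload. A minimal sketch, assuming the payload is JSON with a socket_id field:
// Hypothetical alternative: one listener for the whole server, no per-socket handlers
pgsql.query("LISTEN logged_in");
pgsql.on('notification', (msg) => {
  const payload = JSON.parse(msg.payload); // assumed shape: { "socket_id": "..." }
  io.to(payload.socket_id).emit('is_logged_in', { status: 'Y' });
});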

ngrx/effects - How to test with a promise

I'm using ngrx/effects with marble testing. I have a service that uses promises. I want my effect to call the service and handle both successful and error states. I have code like this:
Effect:
.mergeMap(() =>
  this.service.getThings()
    .map((things) => new SetThingsAction(things))
    .catch((error) =>
      of(new HandleAPIErrorAction(error))
    )
)
.catch((error) =>
  of(new HandleAPIErrorAction(error))
);
Service:
public getThings() {
  return Observable.fromPromise(this.promiseBasedThing.getTheThings());
}
Then a test:
actions = hot("a", { a: new GetThingsAction() });
const response = cold("-#", {});
service.getThings.and.returnValue( response );
const expected = cold("-b", { b: new HandleAPIErrorAction("error") });
expect(effects.getThings$).toBeObservable(expected);
This actually all works. However, the double catch in the effect seems clearly bad and probably suggests I don't understand how Observables work. In the real world only the latter catch is effective; in a test, the former is.
Based on this, it seems like Promises don't work with marble tests. This SO question gives an idea on error handling, but it seems impossible to test because it has a Promise.
How can I use ngrx/effects with error handling, promises, and testing?
I can answer my own question after further research.
https://jsfiddle.net/btroncone/upy6nr6n/
Basically I needed to do the catch in the getThings instead of in the effect.
getThings() {
  return Observable.fromPromise(this.promiseBasedThing.getTheThings())
    .catch((error) => Observable.of(error));
}
I also learned that it's much easier to solve these problems with a simple RxJS example instead of trying to solve them while using ngrx/effects too. This still has two catch statements, but the test mocking now matches how it works in reality.