Redux-saga and socket subscription cause Uncaught TypeError: Converting circular structure to JSON

I am having trouble subscribing to a socketcluster (http://socketcluster.io/) channel when using a redux-saga generator in my chat app. The socketcluster backend is set up so that any messages are saved in the database, then published into the receiving user's personal channel, which is named after the user's id. For example, User A has an id '123abc' and would subscribe to the channel named '123abc' for their realtime messages.
The code below does receive new messages that are published to a channel, but it throws a "TypeError: Converting circular structure to JSON" on load and breaks all of my other redux-saga generators in the app. I've done some digging in Chrome DevTools and my theory is that it has something to do with the queue created in the createChannel function. Also, I've tried returning a deferred promise in the subscribeToChannel function, but that also caused a circular conversion error; I can post that code on request.
I referred to this answer at first: https://stackoverflow.com/a/35288877/5068616 and it helped me get the code below in place, but I cannot find any similar issues on the internet. Also, something to note: I am utilizing redux-socket-cluster (https://github.com/mattkrick/redux-socket-cluster) to sync up the socket and state, but I don't think it is the root of the problem.
sagas.js
export default function* root() {
  yield [
    fork(startSubscription),
  ]
}

function* startSubscription(getState) {
  while (true) {
    const {
      userId
    } = yield take(actions.SUBSCRIBE_TO_MY_CHANNEL);
    yield call(monitorChangeEvents, subscribeToChannel(userId))
  }
}

function* monitorChangeEvents(channel) {
  while (true) {
    const info = yield call(channel.take) // Blocks until the promise resolves
    console.log(info)
  }
}

function subscribeToChannel(channelName) {
  const channel = createChannel();
  const socket = socketCluster.connect(socketConfig);
  const c = socket.subscribe(channelName);
  c.watch(event => {
    channel.put(event)
  })
  return channel;
}

function createChannel() {
  const messageQueue = []
  const resolveQueue = []

  function put(msg) {
    // anyone waiting for a message?
    if (resolveQueue.length) {
      // deliver the message to the oldest one waiting (First In First Out)
      const nextResolve = resolveQueue.shift()
      nextResolve(msg)
    } else {
      // no one is waiting? queue the event
      messageQueue.push(msg)
    }
  }

  // returns a Promise resolved with the next message
  function take() {
    // do we have queued messages?
    if (messageQueue.length) {
      // deliver the oldest queued message
      return Promise.resolve(messageQueue.shift())
    } else {
      // no queued messages? queue the taker until a message arrives
      return new Promise((resolve) => resolveQueue.push(resolve))
    }
  }

  return {
    take,
    put
  }
}
Thanks for the help!

Related

Is it necessary to close a Mongodb Change Stream?

I coded the following Node/Express/Mongo script:
const { MongoClient } = require("mongodb");
const stream = require("stream");

async function main() {
  // CONNECTING TO LOCALHOST (REPLICA SET)
  const client = new MongoClient("mongodb://localhost:27018");
  try {
    // CONNECTION
    await client.connect();
    // EXECUTING MY WATCHER
    console.log("Watching ...");
    await myWatcher(client, 15000);
  } catch (e) {
    // ERROR MANAGEMENT
    console.log(`Error > ${e}`);
  } finally {
    // CLOSING CLIENT CONNECTION ???
    await client.close(); // << ????
  }
}

main().catch(console.error);

// MY WATCHER. LISTENING FOR CHANGES ON MY DATABASE
async function myWatcher(client, timeInMs, pipeline = []) {
  // TARGET TO WATCH
  const watching = client.db("myDatabase").collection("myCollection").watch(pipeline);
  // WATCHING CHANGES ON TARGET
  watching.on("change", (next) => {
    console.log(JSON.stringify(next));
    console.log(`Doing my things...`);
  });
  // CLOSING THE WATCHER ???
  closeChangeStream(timeInMs, watching); // << ????
}

// CHANGE STREAM CLOSER
function closeChangeStream(timeInMs = 60000, watching) {
  return new Promise((resolve) => {
    setTimeout(() => {
      console.log("Closing the change stream");
      watching.close();
      resolve();
    }, timeInMs);
  });
}
So, the goal is to always keep the myWatcher function active, to watch any database changes and, for example, send a user notification when an update is detected. The closeChangeStream function closes myWatcher X seconds after any database change. So, to keep myWatcher always active, do you recommend not using the closeChangeStream function?
Another thing: with this same goal in mind, if I keep the await client.close();, my code emits the error Topology is closed, but when I remove the await client.close(), my code works perfectly. Do you recommend not using await client.close() so that myWatcher always stays active?
I'm a newbie in these topics, so thanks for any advice and help!
MongoDB change streams are implemented in a pub/sub paradigm.
Send your application to a friend in Sudan. Have both you and your friend run the application (which has the change stream implemented). If you open up mongosh and run db.getCollection('myCollection').updateOne({_id: ObjectId("6220ee09197c13d24a7997b7")}, {$set: {FirstName: "Bob"}}); both you and your friend will get the console.log from the change stream.
This is assuming you're not running on localhost, but you can simulate this with two copies of the application locally.
The issue comes from going into production: suddenly you have 200 load-bearing instances, 5 developers, etc. running, and your watch fires for a ton of writes around the globe.
I believe the practice is to functionize it: wrap your watch in a function, call that function when you're about to do a write, and close the stream after you do your associated writes (see the sketch below).
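A minimal sketch of that "functionize it" idea, assuming the same Node.js driver and the database/collection names from the question; watchWhileWriting and doWrites are hypothetical names used only for illustration:
async function watchWhileWriting(client, doWrites) {
  // open the change stream only for the duration of the writes we care about
  const collection = client.db("myDatabase").collection("myCollection");
  const changeStream = collection.watch();
  changeStream.on("change", (next) => {
    console.log(`Change detected: ${JSON.stringify(next)}`);
  });
  try {
    // perform the associated writes (hypothetical callback)
    await doWrites(collection);
  } finally {
    // close the stream once the associated writes are done
    await changeStream.close();
  }
}
Opening and closing the stream around the writes avoids keeping a global watcher running on every instance in production, which is the concern raised above.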

Google pubsub listener not receiving all the messages

I'm using Google Cloud Storage for storing objects, with the bucket associated with a topic and subscription ID. The flow is such that a Java application requests the upload link(s) and uploads object(s) using those upload link(s). I also have a pubsub listener implemented in Java, which receives the upload notification message and does something on every successful upload. This is the snippet that handles the event listening.
public void eventListener() {
    MessageReceiver messageReceiver = (message, consumer) -> {
        final Map<String, Object> uploadMetaDataMap = getUploadDataMap(message);
        LOGGER.info("Upload event detected => {} ", uploadMetaDataMap);
        // do something
        consumer.ack();
    };
    Subscriber subscriber = null;
    Subscriber finalSubscriber = subscriber;
    /* To ensure that any messages already being handled by receiveMessage run to completion */
    Runtime.getRuntime().addShutdownHook(new Thread() {
        @Override
        public void run() {
            finalSubscriber.stopAsync().awaitTerminated();
        }
    });
    try {
        subscriber = Subscriber.newBuilder(subscription, messageReceiver)
                .setCredentialsProvider(FixedCredentialsProvider.create(creds)).build();
        subscriber.addListener(new Subscriber.Listener() {
            @Override
            public void failed(Subscriber.State from, Throwable failure) {
                // Handle failure. This is called when the Subscriber encountered a fatal error and is shutting down.
                LOGGER.error(String.valueOf(failure));
            }
        }, MoreExecutors.directExecutor());
        subscriber.startAsync().awaitRunning();
        subscriber.awaitTerminated();
    } finally {
        if (subscriber != null) {
            subscriber.stopAsync().awaitTerminated();
        }
    }
}
I'm storing the objects in this format => bucket/uuid/objectName.extension, and on every successful upload, LOGGER.info("Upload event detected => {} ", uploadMetaDataMap); logs messages like this:
2020-08-03 16:12:14,686 [Gax-1] INFO listener.AsynchronousPull - Upload event detected => {size=85, uuid=6dff9a20-3995-4f28-93e9-79e6c3cf613d, bucket=bucketName}
The issue I'm facing now is that not all successful upload events send out a notification message. I can see the folder structure created in GCS with the respective object inside it, but the notification related to that upload is nowhere to be found in the logs printed by the pubsub listener. It's been bothering me for a while now, and I could really use some help with this.

Vert.x: How to wait for a future to complete

Is there a way to wait for a future to complete without blocking the event loop?
An example of a use case with querying Mongo:
Future<Result> dbFut = Future.future();
mongo.findOne("myusers", myQuery, new JsonObject(), res -> {
    if (res.succeeded()) {
        ...
        dbFut.complete(res.result());
    } else {
        ...
        dbFut.fail(res.cause());
    }
});

// Here I need the result of the DB query
if (dbFut.succeeded()) {
    doSomethingWith(dbFut.result());
} else {
    error();
}
I know that doSomethingWith(dbFut.result()); can be moved into the handler, yet if it's long, the code will get unreadable (callback hell?). Is that the right solution? Is that the only solution without additional libraries?
I'm aware that rxJava simplifies the code, but as I don't know it, learning Vert.x and rxJava is just too much.
I also wanted to give a try to vertx-sync. I put the dependency in the pom.xml; everything got downloaded fine but when I started my app, I got the following error
maurice#mickey> java \
-javaagent:~/.m2/repository/co/paralleluniverse/quasar-core/0.7.5/quasar-core-0.7.5-jdk8.jar \
-jar target/app-dev-0.1-fat.jar \
-conf conf/config.json
Error opening zip file or JAR manifest missing : ~/.m2/repository/co/paralleluniverse/quasar-core/0.7.5/quasar-core-0.7.5-jdk8.jar
Error occurred during initialization of VM
agent library failed to init: instrument
I know what the error means in general, but I don't know what it means in this context... I tried to google it but didn't find any clear explanation about which manifest to put where. And as before, unless it's mandatory, I prefer to learn one thing at a time.
So, back to the question: is there a way with "basic" Vert.x to wait for a future without disturbing the event loop?
You can set a handler for the future to be executed upon completion or failure:
Future<Result> dbFut = Future.future();
mongo.findOne("myusers", myQuery, new JsonObject(), res -> {
    if (res.succeeded()) {
        ...
        dbFut.complete(res.result());
    } else {
        ...
        dbFut.fail(res.cause());
    }
});

dbFut.setHandler(asyncResult -> {
    if (asyncResult.succeeded()) {
        // your logic here
    }
});
This is a pure Vert.x way that doesn't block the event loop.
I agree that you should not block in the Vert.x processing pipeline, but I make one exception to that rule: start-up. By design, I want to block while my HTTP server is initialising.
This code might help you:
/**
 * @return null when waiting on {@code Future<Void>}
 */
@Nullable
public static <T>
T awaitComplete(Future<T> f)
throws Throwable
{
    final Object lock = new Object();
    final AtomicReference<AsyncResult<T>> resultRef = new AtomicReference<>(null);
    synchronized (lock)
    {
        // We *must* be locked before registering a callback.
        // If result is ready, the callback is called immediately!
        f.onComplete(
            (AsyncResult<T> result) ->
            {
                resultRef.set(result);
                synchronized (lock) {
                    lock.notify();
                }
            });
        do {
            // Nested sync on lock is fine. If we get a spurious wake-up before resultRef is set, we need to
            // reacquire the lock, then wait again.
            // Ref: https://stackoverflow.com/a/249907/257299
            synchronized (lock)
            {
                // @Blocking
                lock.wait();
            }
        }
        while (null == resultRef.get());
    }
    final AsyncResult<T> result = resultRef.get();
    @Nullable
    final Throwable t = result.cause();
    if (null != t) {
        throw t;
    }
    @Nullable
    final T x = result.result();
    return x;
}

MassTransit 3 How to send a message explicitly to the error queue

I'm using MassTransit with Reactive Extensions to stream messages from the queue in batches. Since the behaviour isn't the same as a normal consumer's, I need to be able to send a message to the error queue if it fails an x number of times.
I've looked through the MassTransit source code, posted on the Google group, and can't find an answer.
Is this available on the ConsumeContext interface? Or is this even possible?
Here is my code. I've removed some of it to make it simpler.
_busControl = Bus.Factory.CreateUsingRabbitMq(cfg =>
{
    var host = cfg.Host(new Uri("rabbitmq://localhost/"), h =>
    {
        h.Username("guest");
        h.Password("guest");
    });

    cfg.UseInMemoryScheduler();

    cfg.ReceiveEndpoint(host, "customer_update_queue", e =>
    {
        var _observer = new ObservableObserver<ConsumeContext<Customer>>();
        _observer.Buffer(TimeSpan.FromMilliseconds(1000)).Subscribe(OnNext);
        e.Observer(_observer);
    });
});

private void OnNext(IList<ConsumeContext<Customer>> messages)
{
    foreach (var consumeContext in messages)
    {
        Console.WriteLine("Content: " + consumeContext.Message.Content);
        if (consumeContext.Message.RetryCount > 3)
        {
            // I want to be able to send to the error queue
            consumeContext.SendToErrorQueue()
        }
    }
}
I've found a workaround by using the RabbitMQ client mixed with MassTransit. Since I can't throw an exception when using an Observable, no error queue is created, so I create it manually with the RabbitMQ client like below.
ConnectionFactory factory = new ConnectionFactory();
factory.HostName = "localhost";
factory.UserName = "guest";
factory.Password = "guest";

using (IConnection connection = factory.CreateConnection())
{
    using (IModel model = connection.CreateModel())
    {
        string exchangeName = "customer_update_queue_error";
        string queueName = "customer_update_queue_error";
        string routingKey = "";

        model.ExchangeDeclare(exchangeName, ExchangeType.Fanout);
        model.QueueDeclare(queueName, false, false, false, null);
        model.QueueBind(queueName, exchangeName, routingKey);
    }
}
For the send part, I send the message directly to that error queue if it fails an x number of times, like so:
consumeContext.Send(new Uri("rabbitmq://localhost/customer_update_queue_error"), consumeContext.Message);
Hopefully the batch feature will be implemented soon and I can use that instead.
https://github.com/MassTransit/MassTransit/issues/800

RxJava user-retry observable with .cache operator?

I have an observable that I create with the following code.
Observable.create(new Observable.OnSubscribe<ReturnType>() {
    @Override
    public void call(Subscriber<? super ReturnType> subscriber) {
        try {
            if (!subscriber.isUnsubscribed()) {
                subscriber.onNext(performRequest());
            }
            subscriber.onCompleted();
        } catch (Exception e) {
            subscriber.onError(e);
        }
    }
});
performRequest() will perform a long running task as you might expect.
Now, since I might be launching the same Observable twice or more in a very short amount of time, I decided to write the following transformer:
protected Observable.Transformer<ReturnType, ReturnType> attachToRunningTaskIfAvailable() {
    return origObservable -> {
        synchronized (mapOfRunningTasks) {
            // If not in map
            if (!mapOfRunningTasks.containsKey(getCacheKey())) {
                Timber.d("Cache miss for %s", getCacheKey());
                mapOfRunningTasks.put(
                        getCacheKey(),
                        origObservable
                                .doOnTerminate(() -> {
                                    Timber.d("Removed from tasks %s", getCacheKey());
                                    synchronized (mapOfRunningTasks) {
                                        mapOfRunningTasks.remove(getCacheKey());
                                    }
                                })
                                .cache()
                );
            } else {
                Timber.d("Cache Hit for %s", getCacheKey());
            }
            return mapOfRunningTasks.get(getCacheKey());
        }
    };
}
This basically puts the original observable, with .cache() applied, in a HashMap<String, Observable>.
It disallows multiple requests with the same getCacheKey() (for example, login) from calling performRequest() in parallel. Instead, if a second login request arrives while another is in progress, the second request's observable gets "discarded" and the already-running one is used instead. All the calls to onNext are cached and sent to both subscribers, actually hitting my backend only once.
Now, suppose this code:
// Observable loginTask
public void doLogin(Observable<UserInfo> loginTask) {
    loginTask.subscribe(
            (userInfo) -> {},
            (throwable) -> {
                if (userWantsToRetry()) {
                    doLogin(loginTask);
                }
            }
    );
}
Here loginTask was composed with the previous transformer. Well, when an error occurs (it might be connectivity) and userWantsToRetry() returns true, I basically re-call the method with the same observable. Unfortunately that observable has been cached, so I'll receive the same error without hitting performRequest() again, since the sequence gets replayed.
Is there a way I could have both the "same requests grouping" behavior that the transformer provides me AND the retry button?
Your question has a lot going on and it's hard to put it into direct terms. I can make a couple of recommendations, though. Firstly, your Observable.create can be simplified by using Observable.defer(Func0<Observable<T>>). This will run the func every time a new subscriber subscribes, and catch and channel any exceptions to the subscriber's onError.
Observable.defer(() -> {
    return Observable.just(performRequest());
});
Next, you can use observable.repeatWhen(Func1<Observable<Void>, Observable<?>>) to decide when you want to retry. Repeat operators re-subscribe to the observable after an onComplete event. This particular overload sends an event to a subject when an onComplete event is received, and the function you provide receives this subject. Your function should call something like takeWhile(predicate) and complete when you do not want to retry again.
Observable.just(1, 2, 3).flatMap((Integer num) -> {
    final AtomicInteger tryCount = new AtomicInteger(0);
    return Observable.just(num)
            .repeatWhen((Observable<? extends Void> notifications) ->
                    notifications.takeWhile((x) -> num == 2 && tryCount.incrementAndGet() != 3));
})
.subscribe(System.out::println);
Output:
1
2
2
2
3
The above example shows that repeats are allowed only when the event is 2, up to a max of 2 repeats. If you switch to repeatWhen, the flatMap would contain your decision as to whether to use a cached observable or the real-work observable. Hope this helps!