I am using the Monitoring Service plugin to save one-to-one chats in the database (MySQL). But I observe a delay of about one minute between the time a message is received and the time it gets stored in the database.
If the user refreshes the page, the latest messages are therefore not displayed.
How can I avoid this delay?
I would like to know if this scenario can be implemented with Firebase Cloud Messaging:
My Flutter app needs to receive messages with certain updates generated by a server.
Each time the data changes, a new message is sent, and all connected users subscribed to the topic receive it. The messages are not very frequent, let's say one every 30 minutes.
However, when a new client connects, that client must receive only the current status of the topic (the last message sent).
Imagine, as an example, a weather app, that receives weather updates every 30 minutes, but when you first open the app, it must be able to request the current weather status without waiting for the next update.
How can this feature be implemented with FCM?
Thanks!
We are building a chat application similar to Messenger. This is the required behavior:
The user logs in
The user should see the last N messages and should be able to load older messages
New messages should be appended as well
My solution:
I would like to use WebSockets for this purpose, in combination with REST. My idea is that the client application decides, by message ID, which messages it needs. So REST will be used for the initial fetch of messages and for fetching older messages.
New messages will be received via WebSockets.
A possible issue I have to handle:
The application starts subscribing to the WebSocket channel for new messages and sends a request for old messages without an initial message ID.
There is a chance that after the GET request is issued, a new message arrives and is stored in the DB.
The client application has already subscribed to the WebSocket channel, so this message will be received via WebSockets.
The GET request didn't know about this message, so the last N messages it fetches will also contain it, and the client application will end up with a duplicate record it has to filter out.
Can you give me advice on an elegant way to handle this case? Thank you.
I would resolve your task keeping the following in mind:
The client application should know only the topic to listen to, not the ID of the message from which to start listening.
It is up to the server to decide what to return (time, in particular, should always be tracked server-side).
The WebSocket is used as a transport for STOMP (simply to not reinvent the wheel). The WebSocket connection could be opened once the client application is loaded and not when it is entering the "listen for messages" state. But topic subscription should be performed when necessary.
You can always send the GET request and initiate the STOMP subscription simultaneously (almost simultaneously, with a delay of a nanosecond or two), and those should always be processed in different promises. But I would order them as follows: first, the STOMP subscription is initiated, and a specific on-subscription message carrying the timestamp of the start of the subscription is delivered; second, a REST request is performed to get the previous 10-100 messages for the topic prior to that timestamp (received via STOMP).
Getting the last 10 messages (those prior to the subscription moment) could be delivered either by REST or by STOMP: you can always react to a subscription event on your server side and deliver client-specific messages.
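The subscribe-first-then-backfill ordering above could be outlined like this (a hypothetical sketch, not a real STOMP client: `stomp_subscribe` and `fetch_history` are stand-ins for your WebSocket/STOMP layer and your REST client):

```python
def backfill_after_subscribe(stomp_subscribe, fetch_history, topic, page_size=50):
    """Open the live channel first, read the server's on-subscription message
    (which carries the subscription-start timestamp), then backfill older
    messages over REST, strictly prior to that timestamp."""
    subscription = stomp_subscribe(topic)        # 1. live channel opens first
    first = subscription.first_message()         # 2. server sends start timestamp
    history = fetch_history(topic, before=first["timestamp"], limit=page_size)
    return history, subscription
```

Because the subscription is already open when the history request is made, no message can fall into a gap between the two; at worst a message appears on both channels, which the de-duplication step handles.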
Regarding the problem of identical messages arriving from different "data channels": it is easily resolved. Your client (hopefully not jQuery, but rather Angular, React, Vue, or anything else) stores all the data in a single collection in a controller, and filtering by message ID so that only unique entries are kept is easy.
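A minimal sketch of that client-side de-duplication, assuming each message carries a unique, monotonically increasing `id` (shown in Python for brevity; the same logic transfers directly to an Angular/React/Vue store):

```python
def merge_unique(history, live, key="id"):
    """Merge REST history with live WebSocket messages into one collection,
    keeping only the first copy of each message id, ordered by id."""
    seen = set()
    merged = []
    for msg in list(history) + list(live):
        if msg[key] not in seen:
            seen.add(msg[key])
            merged.append(msg)
    merged.sort(key=lambda m: m[key])
    return merged
```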
BUT if your system will produce hundreds of thousands of messages per second, HTTP-based protocols are probably not your choice.
We're building an app that uses the API v2 to interact with Watson Assistant. We're aware that the "state" of the conversation (among others: the position in the dialog tree) is now kept on the service side using the session_id key.
The problem: the session expires (5 to 60 minutes depending on the pricing plan).
Is there a way to either resurrect an expired session or save the conversation state so that it can be restored?
We've tried to save and restore the global & skills contexts but they don't hold the conversation state.
Thanks for your help.
The current inactivity timeout period is plan-specific:
- Lite and Standard: 5 minutes
- Plus and Premium: 1 hour
In the coming days, you will be able to raise that value for Plus and Premium up to 24 hours. On Lite and Standard you will only be able to decrease it to a lower value if you want to close sessions faster.
You can always save context at the application level, but currently there is no way under the V2 API to save where the user is in the dialog so that you can pass it back after exceeding the allowed session inactivity timeout period.
Complementing what @oscar.ny mentioned: the timeout is also plan-specific, and you could potentially change it under Settings -> Timeout limit field. Change the value and close; it saves automatically.
Something I've done in the past was to send a message when the 5-minutes-inactive event fired. The event would call a function that hits the API message method to send "Are you still here? I was talking about xyz", where xyz is the latest message sent to the user, in order to keep the session alive.
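A sketch of that keep-alive trick in Python. The 5-minute timeout, the 30-second safety margin, and the payload wording are assumptions you would tune to your plan and app; the actual call goes through the Watson Assistant V2 `message` method (the SDK call is shown as a comment):

```python
import time

SESSION_TIMEOUT = 5 * 60   # seconds; Lite/Standard plans (1 h on Plus/Premium)
SAFETY_MARGIN = 30         # ping slightly before the session would expire

def needs_keepalive(last_activity, now=None,
                    timeout=SESSION_TIMEOUT, margin=SAFETY_MARGIN):
    """True when the session has been idle long enough that a ping is due."""
    now = time.time() if now is None else now
    return (now - last_activity) >= (timeout - margin)

def keepalive_payload(last_topic):
    """Input payload for the V2 message call; re-engages the user and
    resets the inactivity timer as a side effect."""
    return {"message_type": "text",
            "text": f"Are you still here? I was talking about {last_topic}"}

# With the ibm-watson SDK (assumed installed), the ping itself would be:
# assistant.message(assistant_id=ASSISTANT_ID, session_id=SESSION_ID,
#                   input=keepalive_payload("your order status"))
```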
Ref:
change Timeout limit
I need data from server event logs. But whenever I google "data of server's event logs", instead of fetching me the data, Google shows me approaches to handling those server event logs.
But I need the data of the server event logs themselves.
Let me know sources, other than Windows, from which I can get them.
I mean, for example:
a. data from the event logs of an Amazon server,
b. data from the event logs of a Google server,
c. data from the event logs of an Apache server,
and so on.
Currently my application runs and inserts events into a protected PostgreSQL DB. That's cool, and it allows for auditing of user logins and such.
What I would like to do is take failed-login events, once they reach a certain threshold, and report them via an SNMP message to another service (like an SNMP server). I just can't seem to wrap my head around how.
I thought of maybe POSTing to a failed-login page and, inside that PHP script, inserting into PostgreSQL and querying events by user and by time, but it seems brutal. Maybe Python? I have options, but I can't think of a good implementation. Help?
I would suggest the following approach:
- an INSERT trigger on the table where login attempts are logged checks the number of attempts, and
- when the login-attempt threshold is reached, sends a NOTIFY notification;
- finally, an external service LISTENs for the notification, and
- upon receiving one, sends the SNMP message broadcast.
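A minimal sketch of this pipeline, assuming psycopg2 and illustrative names (a `login_attempts` table, a `failed_login_alert` channel, a 5-failures-in-10-minutes threshold); the actual SNMP trap send is left to the `handle_alert` callback, e.g. built with your SNMP library of choice:

```python
import select

# Trigger installed once on the database (names and threshold are illustrative).
TRIGGER_SQL = """
CREATE OR REPLACE FUNCTION notify_failed_logins() RETURNS trigger AS $$
BEGIN
  IF NOT NEW.success AND (
       SELECT count(*) FROM login_attempts
       WHERE username = NEW.username
         AND success = false
         AND at > now() - interval '10 minutes') >= 5 THEN
    PERFORM pg_notify('failed_login_alert', NEW.username);
  END IF;
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER failed_login_check
AFTER INSERT ON login_attempts
FOR EACH ROW EXECUTE FUNCTION notify_failed_logins();
"""

def listen_forever(dsn, handle_alert):
    """External service: LISTEN on the channel and call handle_alert
    (e.g. a function that sends the SNMP trap) for each notification."""
    import psycopg2                      # assumed installed
    import psycopg2.extensions
    conn = psycopg2.connect(dsn)
    conn.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)
    cur = conn.cursor()
    cur.execute("LISTEN failed_login_alert;")
    while True:
        if select.select([conn], [], [], 5) == ([], [], []):
            continue                     # timed out; poll again
        conn.poll()
        while conn.notifies:
            note = conn.notifies.pop(0)
            handle_alert(note.payload)   # payload = username over the threshold
```

Doing the thresholding inside the trigger keeps the counting transactional with the insert, so the external listener only ever has to react, not query.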
For more info see:
NOTIFY and LISTEN