CouchDB _changes message queue - service

I want to have a service that listens to CouchDB database changes via the _changes feed, and I was wondering what the best way would be to pick up a change that was missed, perhaps because the change-listener service was down.
I know this can be done by specifying since=seq_no, but I want something that can pull changes even if they happened while the listener service was down.

The follow Node.js library does exactly this :)
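The core of what such a library does is a resume-from-checkpoint loop. Here is a minimal sketch of that pattern; the in-memory feed and the variable names are illustrative, not CouchDB's actual API — a real listener would pass the checkpoint as `since` to the `_changes` endpoint and persist it to disk or a local document so it survives restarts:

```javascript
// Pretend this array is the full _changes history of a database.
const changesFeed = [
  { seq: 1, id: 'doc-a' },
  { seq: 2, id: 'doc-b' },
  { seq: 3, id: 'doc-c' },
];

// Persisted checkpoint: the last sequence number we fully processed.
let checkpoint = 0;

function processChanges(feed) {
  const processed = [];
  // Equivalent of GET /db/_changes?since=<checkpoint>
  for (const change of feed.filter((c) => c.seq > checkpoint)) {
    processed.push(change.id); // handle the change...
    checkpoint = change.seq;   // ...then advance the checkpoint
  }
  return processed;
}

// First run: everything is new.
processChanges(changesFeed); // ['doc-a', 'doc-b', 'doc-c']

// The service "goes down"; two more changes happen meanwhile.
changesFeed.push({ seq: 4, id: 'doc-d' }, { seq: 5, id: 'doc-e' });

// On restart, only the missed changes are replayed.
const replayed = processChanges(changesFeed); // ['doc-d', 'doc-e']
```

The key point is that the checkpoint is only advanced after a change is fully handled, so a crash mid-batch just replays the unfinished changes on the next run.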

Related

Eclipse milo - OPCUA - What is the best practice to inform server (value/node) changes to the client to trigger a refresh?

I am getting started with OPC UA and Eclipse Milo, and I am trying to understand how best to inform the client that a value or node has changed on the server.
So far my guess is that I need to trigger an event in the node that has changed, and that the client should then monitor/subscribe to events on that node. Am I right about this?
If my understanding is correct, which event is most appropriate to trigger for this purpose?
I am using a free OPC UA UI client to test my server changes, and I have to refresh manually to observe them. I was expecting that by triggering the correct (OPC UA standard) event I would tell the client to refresh automatically. Is this possible?
Thanks!
You don't need Events to notify a client of an attribute change - that's the entire point of Subscriptions and MonitoredItems.
The client creates a MonitoredItem for the Value attribute (or any other attribute) and the server will report changes when that attribute changes.
As far as what you need to do as a user of the Milo Server SDK, see the ExampleNamespace. Your namespace implements onDataItemCreated and the other related methods to be notified that a client has created a MonitoredItem, at which point you should start sampling values for it.

How to rollback on micro service when some http requests are successful and some http requests are fail by using jdbcTemplate

I have Spring Boot projects built as microservices and use Kong as the API gateway. All services run in Docker containers.
In my situation, serviceA loops 20 times, requesting serviceB to delete records using jdbcTemplate. The first 10 requests succeed, so 10 records are deleted from the PostgreSQL database in serviceB. But the 11th request fails, so I would like to roll back the 10 records that were already deleted from the database.
My question is: can I roll back in this situation? If so, how, and which technology should I use? Could I use Spring Cloud Stream and Kafka here to roll back?
One option is to use distributed transactions, which is a quite heavyweight approach...
Other than that you can change the architecture, which is also not perfect advice.
On to some practical advice.
The general question here is whether this is the only problematic case. If so, it is quite easy: extend your API so it allows a multi-delete in one operation. Have a look at the Oracle SCIM API: changing a single group is atomic, but the problem starts when someone wants to move a user from one group to another. So maybe you can handle the problematic cases by adding a special method, like the PATCH presented there?
Beyond all of that, you can use the command design pattern and have a revert for each operation. That is still tricky, since not all reverts are possible, but that depends heavily on your case.
UPDATE
There is something called the Saga pattern: for each operation a compensating (revert) operation is prepared, and a manager knows what went wrong and which reverts are required. Here is an article about it. Sometimes it works, but... reversals are really problematic operations, like sending an email. :)
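The saga idea above can be sketched in a few lines. This is a simplified, in-process illustration, not a production saga framework: each step carries a compensating action, and the coordinator runs the compensations in reverse order for every step that succeeded before the failure. The deletions here mutate an in-memory set; a real serviceA would be issuing HTTP calls to serviceB:

```javascript
// Simulated serviceB database of records.
const db = new Set(['r1', 'r2', 'r3']);

const steps = ['r1', 'r2', 'r3'].map((id) => ({
  run: () => {
    // Simulate the failing request partway through the batch.
    if (id === 'r3') throw new Error(`delete failed for ${id}`);
    db.delete(id);
  },
  // Compensation: re-insert the record that was deleted.
  revert: () => db.add(id),
}));

function runSaga(steps) {
  const done = [];
  try {
    for (const step of steps) {
      step.run();
      done.push(step);
    }
    return { ok: true };
  } catch (err) {
    // Roll back the completed steps, most recent first.
    for (const step of done.reverse()) step.revert();
    return { ok: false, error: err.message };
  }
}

const result = runSaga(steps);
// result.ok is false, and db again contains r1, r2 and r3
```

The catch in the answer applies here too: this only works when every step has a meaningful compensation, and deletes happen to be easy to compensate (re-insert); an action like sending an email has no clean revert.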

RethinkDB - How to stream data to the browser

Context
Greetings,
One day I randomly found RethinkDB and was really fascinated by the whole real-time changes thing. To learn how to use this tool I quickly spun up a container running RethinkDB and started a small project. I wanted to make something very simple, so I thought of a service in which speakers can create rooms and the audience can ask questions. Other users can upvote questions to let the speaker know which ones are best. Obviously this project has a lot of real-time needs that I believe are best satisfied by RethinkDB.
Design
I wanted to use a very specific set of tools for this: the backend would be built with Laravel Lumen, the frontend with Vue.js, and the database would of course be RethinkDB.
The problem
RethinkDB, it seems, is not designed to be exposed directly to the end user, even when no security concern exists.
Assuming the user only needs to see the questions and the upvotes in real time, no write permissions are needed, and if a user changes the room ID nothing bad happens, since all rooms are publicly accessible.
Therefore something is needed to watch for data updates and push them through a socket to the client (socket.io or Pusher, for example).
Given that the backend is written in PHP, I cannot tell Lumen to stay awake and wait for data updates. From what I have seen in online tutorials, a secondary system should listen for changes and then push them (a Node.js service, for example).
This is understandable; however, I strongly believe that this way of transferring the data to the user is inefficient and defeats the purpose of RethinkDB.
If I have to send the action from the client's machine (the user asks a question), save it to the database, have a script that listens for changes, push the changes to socket.io, and finally have the client (Vue.js) act when a new event arrives, what is the point of having a real-time database in the first place?
I could avoid all this headache simply by having the Lumen app push the event directly to socket.io and use any other database system instead.
I really can't understand the point of all this. I am not experienced with NoSQL databases by any means, but I really want to experiment with them.
Thank you.
This is understandable; however, I strongly believe that this way of transferring the data to the user is inefficient and defeats the purpose of RethinkDB.
RethinkDB has no built-in mechanism to transfer data to end users, and no access control in the conventional sense either. The common way, as you said, is to spin up one or more Node instances running socket.io. On each instance you listen on your RethinkDB change streams and use socket.io's broadcast functionality. That is the common setup, but since RethinkDB's change streams are quite optimized, you could also open a change stream per incoming socket.io connection.

Track Database table changes with Sails.js

My Goal:
When a database table is changed externally, I want to send a WebSocket notification to clients.
Question:
Is there a "native" Sails.js way to track changes in a database table populated via a Model?
I only dabble in Sails, but I'm not aware of a way. You might make a "model-listener" service that utilizes the socket/channel capabilities of your adapter of choice. You'll have to start the listeners at some point, via a hook or in the bootstrap file.
The problem you're going to run into is determining whether the event (create, update, drop/delete) was external or came from Sails. I'm more familiar with PostgreSQL and know you can provide an application name on your connection and include it in your NOTIFY payload, so your LISTEN handler can ignore Sails-originated events.
PostgreSQL: triggers, event triggers, NOTIFY, LISTEN
MongoDB: capped collections, tailable cursors
Of course Waterline supports more adapters than the two I've listed here, but I tried to pick the two I assume are the most popular. I know this might not be the answer you had hoped for, but it might give you some ideas to try.
Sorry, I'm a new poster, so I'll try to provide some links in the comments if Stack Overflow will allow me.
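The application_name filtering idea above amounts to a small check in the LISTEN handler. The payload shape below is hypothetical: it assumes a PostgreSQL trigger that NOTIFYs a JSON payload including `current_setting('application_name')`, and `sails-app` is an example name you would set on the Sails connection:

```javascript
const SAILS_APP_NAME = 'sails-app'; // assumed name set on the Sails connection

// Handle one NOTIFY payload; forward it only if it came from outside Sails.
function handleNotification(rawPayload, onExternalChange) {
  const payload = JSON.parse(rawPayload);
  // Events produced by Sails itself are ignored; only external
  // changes should trigger a WebSocket notification.
  if (payload.application_name === SAILS_APP_NAME) return false;
  onExternalChange(payload);
  return true;
}

const notified = [];

// Event written by Sails itself: ignored.
handleNotification(
  JSON.stringify({ application_name: 'sails-app', table: 'user', op: 'UPDATE' }),
  (p) => notified.push(p)
); // → false

// Event written by an external tool (e.g. psql): forwarded to clients.
handleNotification(
  JSON.stringify({ application_name: 'psql', table: 'user', op: 'UPDATE' }),
  (p) => notified.push(p)
); // → true

// notified now holds only the external change.
```

In a real service the raw payload would arrive via the `notification` event of a long-lived `pg` client that has run `LISTEN`, and `onExternalChange` would broadcast to the Sails sockets.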

Handling the 'Faulted' state of a Workflow

I'm wondering how best to handle the Faulted state in a WF4 workflow service host. I'm using a console self-hosted service. I understand one approach is to implement the IErrorHandler interface, but does anybody know how I then configure this on my service, i.e. how do I add it to the Behaviors collection?
Additionally, I wonder if anybody has any thoughts or advice on how best to handle a 'restart' scenario (or indeed whether it's possible) once the workflow service host has entered the Faulted state. My understanding is that once the service host enters the Faulted state it is game over and the application is effectively terminated. Can anybody suggest a strategy for this? I'm thinking maybe a management service on top that handles failed instances of the workflow service host console application, though I'd be interested to hear from people who've faced this dilemma before I attempt anything.
EDIT:
Also, I'm working in a clustered environment. When the cluster enters a fail-over state, the workflow appears to lose connectivity with the database for a period of (no more than) one minute. Has anybody dealt with this scenario specifically?
Thanks in advance
Ian
We have a solution in Microsoft.Activities v1.8.4; see WorkflowService Configuration Based Extensions, which allows you to add extensions using a service behavior and some config.