I have a distributed REST application, written in C++, with an integrated SQLite DB. The application is self-contained: no Apache or IIS server, and no external MySQL. The application is the logic behind a hardware sensor: it monitors the sensor(s), identifies and stores data of interest, and generates "events" when data of interest repeats. The creation of data of interest is synchronized across the Internet to multiple instances of the application, using REST calls to carry the synchronization data.
Using basic authentication over HTTPS, each instance maintains a local key/value store of the remote instances' username/password authentication data. This is necessary because every communication with a remote instance of the application requires authentication.
My question is how to handle the situation where the human operator changes either the username or the password on an instance while the application is actively synchronizing with remote instances.
I'm thinking this is really no different from any other material application data changing: when a local username or password changes, a REST communication is posted to each synchronization partner containing the changed data for that remote's local key/value store. Any communications that fail get queued until that remote is back, since this is material information the remote needs in order to maintain synchronization.
Because the communications occur over HTTPS, passing authentication data around in this way should be acceptable.
I thought I might need special logic to handle the race condition where one instance tries to communicate with another, but the other has just changed its authentication fields. With my current logic the sender will queue the failed communication, and when the remote sends its updated authentication data, the locally queued communications will start succeeding. So that does not appear to be an issue.
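Concretely, the push-and-queue logic I'm describing would look something like the sketch below (in Python rather than the application's C++, with a made-up /sync/credentials endpoint and peer list; it is only meant to illustrate the shape of the mechanism):

    import requests

    # Known peers: base URL -> the credentials we must present to that peer.
    # (Illustrative only; the real application keeps these in its key/value store.)
    peers = {
        "https://peer-a.example.com": ("peer_a_user", "peer_a_pass"),
        "https://peer-b.example.com": ("peer_b_user", "peer_b_pass"),
    }
    retry_queue = []  # failed updates, retried when the peer is reachable again

    def push_credential_change(new_username, new_password):
        """Tell every peer which credentials to use for this instance from now on."""
        payload = {"username": new_username, "password": new_password}
        for base_url, auth in peers.items():
            try:
                r = requests.post(f"{base_url}/sync/credentials",
                                  json=payload, auth=auth, timeout=10)
                r.raise_for_status()
            except requests.RequestException:
                # Peer unreachable, or it rejected us because it just rotated its
                # own credentials: queue and retry later, exactly like any other
                # failed synchronization message.
                retry_queue.append((base_url, payload))

The race condition above falls out of the same path: a rejection from a peer that just changed its own credentials lands in the retry queue and starts succeeding once that peer's update arrives.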
I guess this is a request for anyone who's been here before: what did you do? Maybe my search terms are weak here, because I'm not finding any discussion of this issue.
Related
Over the past couple of weeks I have been prototyping some examples in SymmetricDS. I'm looking for some guidance and examples because I am really running into some walls here. I have used the server and Android examples successfully, so I don't need any assistance with setup or getting the basics working. It is a complex tool and I'm still learning it as well.
So I am trying to set up an environment where all the clients running on Android devices sync up to a server. I know it's fairly straightforward to do a setup with one master syncing in both directions with multiple clients, as the examples they provide do.
What I am trying to do is multiple masters to multiple clients. Essentially I want a database on the server for each client. I'll attach a diagram to try to help explain, but in short I want a database for each store, so store #1 has a master DB on the server that syncs both ways with that store's client device.
[server diagram]
SymmetricDS requires a central node to store the configuration. I would recommend having a central node with a bunch of per-client databases that connect to the central database, and connecting each Android application to its own database. This topology allows you to configure what data syncs from the central node out to each of those databases and what goes back.
On the router from client to server you can set the target catalog to a variable: $(sourceExternalId). This will use the client's external ID as the database name on your server.
If you also need to replicate data back down, you can set the external select on the triggers at the server. This needs to be an expression, run against your server database, that evaluates to the current database. It fires when a change occurs on the server database and, during capture, populates the external_data column on sym_data with the database in which the change occurred. You would then change the router from server to client to a column-match router type, with the router expression EXTERNAL_DATA=:EXTERNAL_ID. This ensures the data is only sent to the appropriate client.
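As a rough sketch only, here is that configuration expressed as inserts into the standard SymmetricDS configuration tables, driven from Python against a stand-in database. The node group IDs ('server', 'store'), the trigger/router IDs, the table and channel names are placeholders, and the external select shown is MySQL-flavoured; check the exact columns against your SymmetricDS version.

    # Sketch of the routing described above: per-store target catalog on the
    # way up, external_data column matching on the way down.
    import sqlite3  # stand-in; use the driver for your central config database

    conn = sqlite3.connect("symmetric_config.db")
    cur = conn.cursor()

    # Client -> server: route each store's rows into its own database by using
    # the client's external id as the target catalog.
    cur.execute("""
        INSERT INTO sym_router
            (router_id, source_node_group_id, target_node_group_id,
             router_type, target_catalog_name, create_time, last_update_time)
        VALUES ('store_to_server', 'store', 'server',
                'default', '$(sourceExternalId)',
                CURRENT_TIMESTAMP, CURRENT_TIMESTAMP)
    """)

    # Server -> client: capture which database the change happened in via
    # external_select, then column-match it against the client's external id.
    cur.execute("""
        INSERT INTO sym_trigger
            (trigger_id, source_table_name, channel_id, external_select,
             create_time, last_update_time)
        VALUES ('server_sale', 'sale', 'default',
                'select database()',
                CURRENT_TIMESTAMP, CURRENT_TIMESTAMP)
    """)
    cur.execute("""
        INSERT INTO sym_router
            (router_id, source_node_group_id, target_node_group_id,
             router_type, router_expression, create_time, last_update_time)
        VALUES ('server_to_store', 'server', 'store',
                'column', 'EXTERNAL_DATA=:EXTERNAL_ID',
                CURRENT_TIMESTAMP, CURRENT_TIMESTAMP)
    """)
    cur.execute("""
        INSERT INTO sym_trigger_router
            (trigger_id, router_id, initial_load_order,
             create_time, last_update_time)
        VALUES ('server_sale', 'server_to_store', 100,
                CURRENT_TIMESTAMP, CURRENT_TIMESTAMP)
    """)
    conn.commit()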
I'm maintaining SReview, a Mojolicious-based webapp that needs to run a lot of background tasks which change database state. These jobs require a large amount of CPU time and there are many of them, so depending on the size of the installation it may be prudent to have multiple machines run these background jobs. Even so, the number of machines that have access to the database is rather limited, so currently I'm having them access the database directly, using a direct PostgreSQL connection.
This works, but sometimes the background jobs may need to run somewhere on the other side of a hostile network, and therefore it may be less desirable to require an extra open network port just for database access. As such, I was thinking of implementing some sort of web based RPC protocol (probably something with JSON), and to protect the access with OAuth2. However, I've never worked with that protocol in detail before, and could use some guidance as to which grant flow to use.
There are two ways in which the required credentials can be provided to the machine that runs these background jobs:
The job dispatcher has the ability to specify environment variables or command line options to the background jobs. These will then be passed on to the machines that actually run the jobs in a way that can be assumed to be secure. However, that would mean that in some cases the job dispatcher itself would need to be authenticated with OAuth2, too, preferably in a way that it can be restarted at will without having to authenticate again and again.
As the number of machines running jobs is likely to be fairly limited, it should be possible to create machine credentials for each machine. In that case, however, it would be important to be able to run multiple sessions in parallel on the same machine.
Which grant flow would support either of those models best?
From the overview of your scenario it is clear that the interactions are system-to-system. There is no end-user (human) interaction.
First, given that your applications execute in a secure (closed) environment, they can be considered confidential clients. The OAuth 2.0 client types section explains more about this. With this background, you can issue each distributed application component a client ID and a client secret.
Regarding the grant type, I first encourage you to familiarize yourself with all the available options by going through the Obtaining Authorization section of the specification. In simple words, it explains the different ways an application can obtain tokens (especially an access token) that can be used to invoke an OAuth 2.0 protected endpoint (in your case, the RPC endpoint).
For you, the best grant type will be the client credentials grant. It is designed for clients that have a pre-established trust relationship with the OAuth 2.0 protected endpoint. Unlike the other grant types, it also does not require a browser (user agent) or an end user.
Finally, you will need to use an OAuth 2.0 authorization server. It registers the different distributed clients and issues client IDs and secrets to them. When a client needs to obtain tokens, it consumes the token endpoint. Each client invocation of your RPC endpoint will then carry a valid access token, which you can validate using token introspection (or any other method you prefer).
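As a minimal sketch of what that flow looks like from a worker machine (in Python, with placeholder URLs, credentials, and RPC payload; your authorization server and SReview endpoint will differ):

    import requests

    TOKEN_URL = "https://auth.example.org/oauth2/token"          # placeholder
    RPC_URL = "https://sreview.example.org/api/jobs/finalize"     # placeholder

    def get_access_token(client_id, client_secret):
        # Client credentials grant: the client authenticates with its own
        # credentials; no browser or end user is involved.
        resp = requests.post(
            TOKEN_URL,
            data={"grant_type": "client_credentials"},
            auth=(client_id, client_secret),
            timeout=10,
        )
        resp.raise_for_status()
        return resp.json()["access_token"]

    def call_rpc(token, payload):
        # Every RPC invocation carries the access token as a Bearer credential;
        # the service side can validate it via token introspection.
        resp = requests.post(
            RPC_URL,
            json=payload,
            headers={"Authorization": f"Bearer {token}"},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()

    token = get_access_token("background-worker-01", "s3cret")
    print(call_rpc(token, {"talk_id": 42, "state": "done"}))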
I'm using PostgreSQL's NOTIFY command to send async events to inform external programs of the changes happening inside a database. It works perfectly, but now I've got a new scenario: I need to have several databases within one instance of PostgreSQL.
From reading the documentation and testing it myself, NOTIFY does not cross the boundaries of a database (to other databases within the same PostgreSQL instance):
Whenever the command NOTIFY channel is invoked, either by this session or another one connected to the same database, all the sessions currently listening on that notification channel are notified, and each will in turn notify its connected client application.
Which means I have to listen for notifications on each database separately. And since I'm planning to give my users the ability to instantiate their own database on demand, it also means I have to create a new listener connection for each new database. That poses a challenge, and I would really prefer to have a constant number of listener connections regardless of the number of databases.
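To make the constraint concrete, here is roughly what that per-database listening looks like (a psycopg2 sketch; the database names, the channel, and the connection details are placeholders):

    import select
    import psycopg2
    import psycopg2.extensions

    databases = ["tenant_a", "tenant_b", "tenant_c"]  # grows with every new database
    connections = []

    for dbname in databases:
        conn = psycopg2.connect(dbname=dbname, host="localhost",
                                user="listener", password="secret")
        conn.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)
        cur = conn.cursor()
        cur.execute("LISTEN events;")  # only hears NOTIFY issued in this database
        connections.append(conn)

    while True:
        # Wait until any of the per-database connections has a notification.
        ready, _, _ = select.select(connections, [], [], 60)
        for conn in ready:
            conn.poll()
            while conn.notifies:
                note = conn.notifies.pop(0)
                print(f"db={conn.get_dsn_parameters()['dbname']} "
                      f"channel={note.channel} payload={note.payload}")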
Does anyone know how to send notifications across databases in PostgreSQL, or some other feature I could use instead?
I have a RESTful web-service application that I developed using the NetBeans IDE. The application uses MySQL Server as its back end. What I am wondering now is how often a client application that uses my RESTful application should refresh to reflect data changes on the server.
Are there any default pull intervals that clients get from the RESTful application? Does the framework (JAX-RS) do something about it, or is that my business to take care of?
Thanks in advance
@Abraham
There are no such rules. The only thing you can use to implement this properly is HTTP's caching capabilities. The service must include control information saying how long the representation of a particular resource can be cached, whether it must be revalidated, whether it should never be cached, and so on.
On the client application side of things, each client may decide its own way of keeping itself in sync with the service. It can be done by storing data locally and serving the end user from that local cache, etc. The service cannot (and shouldn't) know how the clients are implemented; the only thing the service can do is include caching information in its response messages, as I already mentioned above.
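As an illustration only (in Python rather than JAX-RS, with a placeholder URL and default interval), a client could honor the service's Cache-Control header when deciding how often to re-poll:

    import re
    import time
    import requests

    URL = "https://example.org/api/readings"  # placeholder resource
    DEFAULT_INTERVAL = 60  # seconds to wait if the service sends no caching info

    while True:
        resp = requests.get(URL, timeout=10)
        resp.raise_for_status()
        local_copy = resp.json()  # serve end users from this local copy meanwhile

        cache_control = resp.headers.get("Cache-Control", "")
        match = re.search(r"max-age=(\d+)", cache_control)
        wait = int(match.group(1)) if match else DEFAULT_INTERVAL
        time.sleep(wait)  # there is no built-in pull interval; the client decides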
It is your responsibility to schedule the client to call the service again and again. You can set a timeout interval, but there is no built-in pull interval.
I have a Perl web application (CGI::Application with ModPerl::Registry) which connects to an authenticated custom server over a socket and exchanges data (commands/responses) with it. Currently the web application connects to the server, authenticates, and disconnects on every page request, even for the same user.
Is there some way I can use the same socket over multiple page requests which share a common session id? Creating a separate daemon that proxies connections and makes them persistent is an option I am exploring, but would like to know if there are any simpler solutions.
I have no control over the design of the custom server unfortunately.
Looks like the same question was asked on PerlMonks. The responses there point in the right direction, but the issue seems to be that you want one cached connection per session, not one cached connection per session per httpd thread/process. You might have to resort to a separate proxy process to get the behaviour you want.
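A very rough sketch of such a proxy process (in Python rather than Perl, with a made-up line-based protocol, placeholder host/port, and a placeholder authentication handshake; a real version would also need per-connection locking and cleanup of dead sessions):

    import socket
    import socketserver

    CUSTOM_SERVER = ("custom-server.example.com", 9000)  # placeholder
    connections = {}  # session id -> persistent, already-authenticated socket

    def authenticate(sock, session_id):
        # Placeholder: perform whatever login exchange the custom server expects.
        sock.sendall(b"LOGIN placeholder\n")
        sock.recv(1024)

    def get_connection(session_id):
        # Reuse the session's socket across page requests; connect and
        # authenticate only the first time the session is seen.
        sock = connections.get(session_id)
        if sock is None:
            sock = socket.create_connection(CUSTOM_SERVER, timeout=10)
            authenticate(sock, session_id)
            connections[session_id] = sock
        return sock

    class ProxyHandler(socketserver.StreamRequestHandler):
        # Each web request sends "session_id command\n" and gets one reply back.
        def handle(self):
            line = self.rfile.readline().decode().strip()
            session_id, _, command = line.partition(" ")
            upstream = get_connection(session_id)
            upstream.sendall(command.encode() + b"\n")
            self.wfile.write(upstream.recv(4096))

    if __name__ == "__main__":
        socketserver.ThreadingTCPServer(("127.0.0.1", 9900), ProxyHandler).serve_forever()

The web application then talks to this local daemon on every page request, while the daemon keeps one authenticated upstream connection per session regardless of which httpd thread or process served the page.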