I am working on a desktop application that requires synchronization between several clients. Basically, a group of people (let's say between 2 and 10) all run the same application. One of them hosts a server and the other clients connect to that server. The client that hosts the server also connects to his own server.
The applications should stay synchronized between all clients, meaning all clients see the same data in the application. Specifically, the data in question can be defined in two separate forms:
A simple property with a certain value (this value must stay synchronized)
A list of properties (the items in the list and their values must stay synchronized)
Simple examples of (1) could be: which item in a list the client currently has selected, and the current location of the client's mouse pointer within the application window. These properties change continuously, but their number is fixed and does not grow (i.e. they are defined at design time).
An example of (2) could be a list of chat messages. These lists will grow during runtime with no way to predict how many items there will be.
Here is some example C# code for the state, clients, and chat messages:
public class State
{
    // A single value shared between all clients
    public int SimpleInteger { get; set; }

    // List of connected clients and their individual states
    public List<Client> Clients { get; set; }

    // List of chat messages
    public List<ChatMessage> Messages { get; set; }
}

public class Client
{
    public string ClientId { get; set; }
    public string Username { get; set; }
    public ClientState ClientState { get; set; }
}

public class ClientState
{
    public string ClientId { get; set; }
    public int SelectedIndex { get; set; }
    public int MouseX { get; set; }
    public int MouseY { get; set; }
}

public class ChatMessage
{
    public string ClientId { get; set; }
    public string Message { get; set; }
}
I've been working on this on and off for a long time, but whatever kind of state synchronization I came up with never worked well.
When I search for solutions, I only ever find solutions for games, but those are not very helpful because my requirements are different:
I cannot deal with "dropped updates": I cannot predict (interpolate or extrapolate) what the other clients are doing, so every client needs to receive every update to stay in sync.
On the other hand, I don't care about lag (within reason). It is fine if I see the updates of other clients with about a second of delay.
When a new client connects (or reconnects), a large portion of the state must be transferred (for example: the list of chat messages from example 2). Each client is required to know the entire history of the chat, so this must be downloaded when a client connects.
My current solution can be summarized as follows:
The server keeps track of the state, e.g. the source of truth.
The state contains the properties that require synchronizing.
The state also contains a list of connected users (and their usernames etc).
Clients also each keep a local copy of the state, which they can act upon immediately. For example, they update their mouse position in their local state continuously.
Whenever a client updates his local state, this update is sent to the server.
Potential exceptions are things that change too fast, such as the mouse position; those I will only send at regular intervals.
The server also updates the common "source of truth" state.
Finally, the server updates all other clients with the new updated state.
The last two steps are where I'm struggling. I can think of two methods to synchronize the state, one is easy but probably not efficient and the other is efficient but prone to errors.
The server simply sends the entire state to all clients.
As soon as the server receives an update from the client, the update is applied to the state and the new state is broadcasted. Every other client replaces their local state.
I feel this will probably work, but the state can grow in size quickly due to the "list" items (for example chat messages). In my previous attempts, this quickly became a problem and sending the state back became much too slow.
The server re-sends the same update (that it received) to all other clients.
Each client then only applies the new update to their state locally to sync back with the server.
This is probably much more efficient and sending the entire state is only necessary when a client connects.
However, in the past I frequently ran into desync issues where clients were no longer in sync. I don't really know what caused it; probably conflicts between messages (for example, the server telling a client to update a value that the client itself had just updated locally: which takes precedence?). Once this happens, everything goes completely wrong, as subsequent updates are applied to two different states and produce different outcomes.
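To make the second method more concrete, here is a rough sketch (in the same C# style as above; all names are placeholders I made up) of what a delta-update message with a server-assigned sequence number could look like, so a client can at least detect that it missed or mis-ordered an update and request a fresh snapshot:

public enum UpdateKind { SetSimpleInteger, UpsertClientState, AddChatMessage }

public class StateUpdate
{
    // Assigned by the server when it applies the update to the source of truth.
    // Clients apply updates strictly in sequence order and request a full
    // state snapshot if they detect a gap.
    public long SequenceNumber { get; set; }

    public UpdateKind Kind { get; set; }
    public string OriginClientId { get; set; }

    // Payload fields; only the ones relevant to Kind are populated.
    public int? SimpleInteger { get; set; }
    public ClientState ClientState { get; set; }
    public ChatMessage ChatMessage { get; set; }
}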
I'm looking for some guidance on general concepts on how to achieve this. I'm using several messaging libraries to handle the actual communication between client and server, and that part is not an issue, I think. In these libraries I can, for example, make sure that every message is received (though I'm not sure if the order is guaranteed). Like I said before, lag is not an issue, but I must guarantee that every state update is received both by the server and by every other client.
Any help would be great! Thanks.
This is a hard problem and there are enough tricky areas that I wouldn't want to build this myself. Authentication, conflicting updates, API management, network outages, single point of failure, and local persistence come to mind.
If you're up for using a cloud-based solution, Google Cloud Firestore takes care of those tricky areas and does what you need:
Clients save data to the database, by creating, updating, or deleting records. Example code.
Whenever a record is created, updated, or deleted, all clients get realtime notifications. Example code.
(After you follow the links above, make sure you click C# above the code boxes to see the C# code).
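For a rough idea of what this looks like in C# (only a sketch based on the Google.Cloud.Firestore package; the linked docs are authoritative, and the project, collection, and field names here are made up):

using System;
using Google.Cloud.Firestore;

// Write a chat message; other clients are notified automatically.
FirestoreDb db = FirestoreDb.Create("your-project-id");
CollectionReference messages = db.Collection("messages");
await messages.AddAsync(new { ClientId = "client-1", Message = "Hello" });

// Listen for changes to the collection in real time.
FirestoreChangeListener listener = messages.Listen(snapshot =>
{
    foreach (DocumentChange change in snapshot.Changes)
    {
        Console.WriteLine($"{change.ChangeType}: {change.Document.Id}");
    }
});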
This is a complicated issue, with many moving parts, as you seem to understand. As I've been researching this, I've read a couple of comments on questions like this one, on a variety of Q&A sites, stating that this kind of thing is a project all on its own.
Disclaimer: I haven't done this myself, so I don't know how well this would work, but maybe you can take my suggestions and work with them, if you haven't already done so. I've worked on projects where this was implemented, but I wasn't part of that implementation directly.
Connection
Since you haven't said which library you are using for the connection, I'm going to assume you are using WebSockets or something similar. If not, I suggest you move to something like WebSockets. It allows for a (near) constant connection between client and server so that data can be pushed in both directions, avoiding the client having to poll and pull the data. The link below seems to have a decent walk-through on how to do it, so I won't try to repeat it here. Because links die, here's the first example code they give, which seems pretty simple.
using System.Net.Sockets;
using System.Net;
using System;

class Server
{
    public static void Main()
    {
        TcpListener server = new TcpListener(IPAddress.Parse("127.0.0.1"), 80);
        server.Start();
        Console.WriteLine("Server has started on 127.0.0.1:80.{0}Waiting for a connection...", Environment.NewLine);

        TcpClient client = server.AcceptTcpClient();
        Console.WriteLine("A client connected.");
    }
}
https://developer.mozilla.org/en-US/docs/Web/API/WebSockets_API/Writing_WebSocket_server
Client start up
Once you have a stable connection between server and client, you need to make sure the data is in sync. When the user starts the app, you can get the timestamp of the latest change in each table and compare that to the server. If they are exactly the same, you have a somewhat reasonable expectation that the table hasn't changed. I'm assuming each table has a column containing the timestamp for the last edit made to the row.
For the tables that have changed, you can have the server send the new and updated rows to the client based on the client's "last changed timestamp".
Since the client isn't guaranteed to always be connected, you will also need to keep track of the times the client has been connected vs. when they've been using the app (unless the app just won't work without being connected to the server). This information also needs to be sent to the server to compare against data changed during intervals when the client wasn't connected.
Once timestamp matching has been done, you need to compare the row counts. If they match, you can more reasonably assume the tables are the same. If they don't, you can see about matching IDs/primary keys. There's a variety of ways to do this, including 1:1 matching (which is slowest but most reliable), or you can do some math with the IDs (assuming numerical IDs) and try to see what's different in batches of 100 rows (for example). Idea: if the sum of the sorted, auto-increment integer IDs for the first 100 rows is the same on the client and the server, all those rows exist on both sides; if it doesn't match, you fall back to the 1:1 match to see what's missing. Because this can be lengthy for large databases, you may want to track this type of sync in another table, so it doesn't need to be done all the time.
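As a sketch of that batch check (C#, with made-up names; the "checksum" is just the ID sum described above):

using System;
using System.Collections.Generic;
using System.Linq;

public static class BatchSync
{
    // Returns the indexes of 100-row batches whose ID sums differ, so only
    // those batches need the slower 1:1 comparison. IDs must be sorted the
    // same way on both sides.
    public static List<int> FindMismatchedBatches(
        IReadOnlyList<long> clientIds, IReadOnlyList<long> serverIds, int batchSize = 100)
    {
        var mismatched = new List<int>();
        int batches = (Math.Max(clientIds.Count, serverIds.Count) + batchSize - 1) / batchSize;
        for (int b = 0; b < batches; b++)
        {
            long clientSum = clientIds.Skip(b * batchSize).Take(batchSize).Sum();
            long serverSum = serverIds.Skip(b * batchSize).Take(batchSize).Sum();
            if (clientSum != serverSum)
                mismatched.Add(b);
        }
        return mismatched;
    }
}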
Alternatively, you may want a table that tracks all the data not yet sent to a client. This would require a confirmation that the data sent was correctly inserted into the client DB. The same approach could work on the client side to track what hasn't been sent to the server. Of course, this kind of thing can get cumbersome quickly, even if you're just tracking keys, table names, and timestamps; you can rack up millions of rows quickly if you don't remove old data periodically. This is why I suggest tracking only unsent data, so that anything that becomes "sent" is removed from the table and no longer tracked.
If you don't want to code and manage all that, you can try a library that does it. There are a variety out there. Even Microsoft has one, but it's on extended support only until 1/1/2021. What happens after that, I doubt even Microsoft knows, but it gets you 1.25 years to come up with a different solution.
Creating Synchronization Providers With The Sync Framework
The Sync Framework can be used to build apps that synchronize data from any data store using any protocol over a network. We'll show you how it works and get you started building a custom sync provider.
https://learn.microsoft.com/en-us/previous-versions/sql/synchronization/mt490616(v=msdn.10)
https://support.microsoft.com/en-us/lifecycle/search?alpha=Microsoft%20Sync%20Framework%202.1
Normal runtime
Once you have your data synced on startup (or in the background after startup), you can simply send the data to the server normally, i.e. when the user makes changes. Since you'll have a WebSocket-type connection, any changes the server receives from one client can be pushed to all the other clients.
As far as changing the data in real time in your app, you may have to be constantly polling your local/client DB for timestamp changes so the UI can be appropriately updated. There may be something within C# that does this for you or another library you can find.
Conclusion
At this point, I'm out of ideas. It seems reasonable to me that this would work, even though it's a lot of work. Hopefully you can take what I have and use it as a foundation for your own ideas on how to accomplish your task. It seems there's a lot of work ahead of you, so good luck!
Footnote
As I'm currently the only answer after several days of it being unanswered, I'm going to assume no one else has anything better to suggest. If they do, I'd encourage them to make their own answer instead of complaining about mine. People tweaking this answer is expected, but please remember community standards when making comments.
I'm only answering this because I haven't seen anyone else do it on this or other sites. It's only been bits and disconnected pieces here & there, with people still not being able to make sense of it as a whole.
This and similar questions have been asked before on this site and closed as "too broad". If you feel this same way as a reader, please vote so on the Question not this answer.
There are several solutions to your problem.
You could use a BizTalk server out of the box. This may not be what you have in mind.
If you want something more home-brewed, you could use WCF (Windows Communication Foundation) with MSMQ (Microsoft Message Queue). This would give you guaranteed message delivery, and durable messages (if you want). You would not have to worry about lost connections and other errors occurring during message transmission.
You can go down another level and use direct TCP and UDP protocols to transmit messages. But now, you have to take care of more error cases.
Any SQL DBMS implements one important part of your problem statement: it maintains shared state. Consider what ACID promises:
Consistency. At any one instant, all clients reading from the database are guaranteed to see the same information.
Atomicity. The client updating the database can use as many steps as needed. When the transaction is committed, the data are changed entirely or not at all.
Isolation. The server gives each client the illusion of interacting with it alone. It handles concurrent updates, and updates the database as though the updates arrived serially.
You may not care about durability for this application.
The mediation among the clients is, for my money, the most useful feature of the DBMS for your application. That will save you work, and headaches. Another, non-obvious, benefit is that it can enforce consistency rules for the state information; that can be remarkably useful to prevent an obsolete/corrupt client from munging the shared state.
The second part of your problem statement is notifying 2-10 clients of changed state. There are any number of ways to do that.
Some DBMSs can access OS services from triggers. You could have an update trigger issue a notification. Alternatively, the updating client could do that.
The actual notification mechanism could be quite simple. Clients could connect to a server (that you write) and block on read(2). The server itself listens on a port for update notifications. On receipt of one, it repeats it to all connected clients. When the client's read request returns, it's time to query the database for the updated state, and post a new read.
To prevent a kind of "thundering herd" problem when several updates arrive back-to-back, when a client reads the update message, it could keep reading updates until EWOULDBLOCK, and only then query the DBMS. OTOH, if it's important to see the intermediate states (to see every update, not just the current state), the DBMS is perfectly capable of storing and providing all versions and distinguishing them with a timestamp or serial number.
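A bare-bones C# sketch of such a notification relay (just an illustration; a real version would need message framing and per-connection write locking). It accepts TCP connections and repeats every notification it receives to all other connected clients, which then re-query the DBMS:

using System;
using System.Collections.Concurrent;
using System.Net;
using System.Net.Sockets;
using System.Threading.Tasks;

class NotifyRelay
{
    static readonly ConcurrentDictionary<TcpClient, NetworkStream> Clients = new();

    static async Task Main()
    {
        var listener = new TcpListener(IPAddress.Any, 9000);
        listener.Start();
        while (true)
        {
            TcpClient client = await listener.AcceptTcpClientAsync();
            Clients[client] = client.GetStream();
            _ = Task.Run(() => PumpAsync(client));
        }
    }

    // Read notifications from one connection and repeat them to all the others.
    static async Task PumpAsync(TcpClient sender)
    {
        var buffer = new byte[1024];
        try
        {
            NetworkStream stream = Clients[sender];
            int read;
            while ((read = await stream.ReadAsync(buffer)) > 0)
            {
                foreach (var (client, target) in Clients)
                {
                    if (client == sender) continue;
                    await target.WriteAsync(buffer.AsMemory(0, read));
                }
            }
        }
        finally
        {
            Clients.TryRemove(sender, out _);
            sender.Dispose();
        }
    }
}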
If you don't want to use TCP sockets directly, you might prefer ZeroMQ.
In this design, each client has three connections: the DBMS, the read-notify socket, and (maybe) the server-notify socket. The server has N+1 connections, for N clients and one listening socket. You have no locks to implement, very little tracking of participation, no problems re-synchronizing, and short windows of inconsistency among clients as each one acts on its notification.
I have been recently looking into event sourcing and have some questions about the interactions with clients.
So event sourcing sounds great: decoupling all your microservices, keeping your information in immutable events, and formulating stored states off of that to fit your needs is really handy. Having events propagate through your system/services, with each service reacting to them in its own way, is all fine.
The issue I am having lies with understanding the client interaction.
So you want clients to interact with the system, but now they need to do this via events. They can no longer submit a state to mutate your existing one.
So the question is how clients fire off specific events and interact with not just an event-based system, but a system based on event sourcing.
My understanding is that you no longer use the REST API as resources (which you can get, update, delete, etc., handling them as resources), but instead post to an endpoint as an event.
So how do these endpoints work?
My second question is: how does the user get responses back?
For instance, let's say we have an event to place an order.
You're going to fire off an event and it's going to do its thing. Again, my understanding is that you don't validate the request up front, e.g. checking if the user placing the order has enough money, but instead fire it off and it will be handled within the system.
E.g. it will now be:
- order placed
- this will be picked up by the pricing service, which will fire either a "money reserved" or a "money exceeded" event based on whether the user can afford it.
- The order service will then listen for those events and mark the order as either accepted or denied due to insufficient credit.
So because this is an async process and the user has fired and forgotten, how do you then show the user whether it has failed or succeeded? Do you show them an order confirmation page with the order status as it is (even if it's pending),
or do you poll until it changes (WebSockets or something)?
I'm sorry if a lot of this is all nonsense, I am still learning about this architecture and am very much in the mindset of a monolith with REST responses.
Any help would be appreciated.
The issue I am having lies with understanding the client interaction.
Some of the issue may be understanding, but I promise you a fair share of the issue is that the literature sucks.
In particular, the word "Event" gets re-used a lot of different ways. If you aren't paying very careful attention to which meaning is being used, you are going to get knotted.
Event Sourcing is really about persistence - how does a microservice store its private copy of state for later re-use? Instead of destructively overwriting our previous state, we write new information that links back to the previous state. If you imagine each microservice storing each change of state as a commit in its own git repository, you are in the right ballpark.
That's a different animal from using Event Messages to communicate information between one microservice and another.
There's some obvious overlap, of course, because the one message that you are likely to share with other microservices is "I just changed state".
So how do these endpoints work?
The same way that web forms do. I send you a representation of a form, the client displays the form to you. You fill in your data and submit the form, the client processes the contents of the form, and sends back to me an HTTP request with a "FormSubmitted" event in the message body.
You can achieve similar results by sending new representations of the state, but it's a bit error prone to strip away the semantic intent and then try to guess it again on the server. So you are more likely to instead see task-based user interfaces, or protocols that clearly identify the semantics of the change.
When the outside world is the authority for some piece of data (a shopper's shipping address, for example), you are more likely to see the more traditional "just edit the existing representation" approach.
So because this is an async process and the user has fired and forgotten, how do you then show the user whether it has failed or succeeded?
Fire and forget really doesn't work for a distributed protocol on an unreliable network. In most cases, at-least-once delivery is important, so Fire until verified is the more common option. The initial acknowledgement of the message might be something like 202 Accepted -- "We received your message, we wrote it down, here's our current progress, here are some links you can fetch for progress reports".
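As a rough ASP.NET Core sketch of that kind of acknowledgement (the endpoint, route, and types are invented for illustration; a real system would write the command to a durable store rather than an in-memory dictionary):

using System.Collections.Concurrent;

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// In-memory stand-in for whatever durable store actually records the command.
var pending = new ConcurrentDictionary<Guid, PlaceOrder>();

// Accept the command, write it down, and acknowledge with 202 plus a progress link.
app.MapPost("/orders", (PlaceOrder command) =>
{
    var orderId = Guid.NewGuid();
    pending[orderId] = command;
    return Results.Accepted($"/orders/{orderId}/status",
                            new { orderId, state = "pending" });
});

// The link handed back above: a progress report the client can poll.
app.MapGet("/orders/{orderId}/status", (Guid orderId) =>
    pending.ContainsKey(orderId)
        ? Results.Ok(new { orderId, state = "pending" })
        : Results.NotFound());

app.Run();

// Hypothetical command shape, invented for this sketch.
public record PlaceOrder(string Sku, int Quantity);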
It doesn't seem to me that event sourcing fits with the traditional REST model where you CRUD a resource.
Jim Webber's 2011 talk may help to prune away the noise. A REST API is a disguise that your domain model wears; you exchange messages about manipulating resources, and as a side effect your domain model does useful work.
One way you could do this that would look more "traditional" is to work with representations of the event stream. I do a GET /08ff2ec9-a9ad-4be2-9793-18e232dbe615 and it returns me a representation of a list of events. I append a new event onto the end of that list, and PUT /08ff2ec9-a9ad-4be2-9793-18e232dbe615, and interesting side effects happen. Or perhaps I instead create a patch document that describes my change, and PATCH /08ff2ec9-a9ad-4be2-9793-18e232dbe615.
But more likely, I would do something else -- instead of GET /08ff2ec9-a9ad-4be2-9793-18e232dbe615 to fetch a representation of the list of events, I'd probably GET /08ff2ec9-a9ad-4be2-9793-18e232dbe615 to fetch a representation of available protocols - which is to say, a document filled with hyper links. From there, I might GET /08ff2ec9-a9ad-4be2-9793-18e232dbe615/603766ac-92af-47f3-8265-16f003ce5a09 to obtain a representation of the data collection form. I fill in the details of my event, submit the form, and POST /08ff2ec9-a9ad-4be2-9793-18e232dbe615 the form data to the server.
You can, of course, use any spelling you like for the URI.
In the first case, we need something like an HTTP capable document editor; the second case uses something more like a web browser.
If there were lots of different kinds of events, then the second case might well have lots of different form resources, all submitting POST /08ff2ec9-a9ad-4be2-9793-18e232dbe615 requests.
(You don't have to have all of the forms submitting to the same URI, but there are advantages to consider).
In a non-event-sourcing pattern, I guess that would first be put into the database, and then the event gets raised.
Even when you aren't event sourcing, there may still be some advantages to committing events to your durable store before emitting them. See Pat Helland: Data on the Outside versus Data on the Inside.
So you want clients to interact with the system, but now they need to do this via events.
Clients don't have to. A client may not even be aware of the underlying event store.
There are a number of trade-offs to consider and decisions to make when implementing an event-sourced system. To start with, you can try to name a few pre-computer-era examples of event-sourced systems and look at their non-functional characteristics.
So the question is how do clients fire off specific events
Clients don't send events. Rather, they should express an intent (a command). It is then the responsibility of the event-sourced system to validate the intent and either reject it or accept it and store the corresponding event. The stored event means that an intent to change the system's state was accepted, and it confirms the change.
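In code, the distinction might look like this (type names are only examples):

using System;

// A command: the client's intent, which the system may still reject.
public record PlaceOrder(Guid OrderId, Guid CustomerId, decimal Amount);

// An event: a fact the system recorded after validating and accepting the intent.
public record OrderPlaced(Guid OrderId, Guid CustomerId, decimal Amount, DateTimeOffset PlacedAt);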
My understanding is that you no longer use the rest api as resources
REST is one of the options. You just consider different things as resources. A command can be a REST resource. An event-sourced entity can be a resource, to which you POST a command. If you like it async, you can later GET the command to check its status. You can GET an entity to know its current state. You can GET events from a class of entities as a means of subscription.
If we are talking about an end user, then most likely they don't deal with the event store directly. There is some third tier in between, which does CQRS. From a user-client perspective it can be provided with REST, GraphQL, SOAP, gRPC or even e-mail - whatever transport solution you find suitable. The command-processing part of CQRS is what is specifically domain-driven: it decides which intents to accept and which to reject.
The event store itself is responsible for data consistency. I.e. it should not allow two concurrent events leading to an invalid state to be published. This is what pre-computer event-sourced systems are good at: you usually have some physical object as an entity, so you lock it for update by simply getting hold of it.
Then an end-user client usually reads from some prepared read model. The responsibility of a read (the R in CQRS) component is to prepare read-optimised data for clients. This data may come from multiple event-sourced entities of the same or different classes. Again, the client may interact with a read model over whatever transport is suitable.
While an event store is immediately consistent, a read model is eventually consistent. But it's up to you to tune that eventuality.
Just try to throw REST out of the architecture for a while and consider it one of the available transport options - that may help you look at the root of the problem.
Context
Greetings,
One day I randomly found RethinkDB and was really fascinated by the whole real-time changes thing. In order to learn how to use this tool, I quickly spun up a container running RethinkDB and started a small project. I wanted to make something very simple, so I thought about creating a service in which speakers can create rooms and the audience can ask questions. Other users can upvote questions to let the speaker know which ones are the best. Obviously this project has a lot of real-time needs that I believe are best satisfied by RethinkDB.
Design
I wanted to use a very specific set of tools for this. The backend would be made in Laravel Lumen, the frontend in Vue.js, and the database of course would be RethinkDB.
The problem
RethinkDB, it seems, is not designed to be exposed to the end user directly, even when no security concern exists.
Assuming that the user only needs to see the questions and the upvotes in real time, no write permissions are needed, and if a user changed the room ID nothing bad would happen since the rooms are all publicly accessible.
Therefore something is needed to wait for data updates and push them through a socket to the client (socket.io, for example, or Pusher).
Given that the backend is written in PHP, I cannot tell Lumen to stay awake and wait for data updates. From what I have seen in the online tutorials, a secondary system should be used that listens for changes and then pushes them (let's say a Node.js service, for example).
This is understandable; however, I strongly believe that this way of transferring the data to the user is inefficient and defeats the purpose of RethinkDB.
If I have to send the action from the client's computer (user asks a question), save it to the database, have a script that listens for changes, then push the changes to socket.io and finally have the client (vue.js) act when a new event arrives, what is the point of having a real-time database in the first place?
I could avoid all this headache simply by having the Lumen app push the event directly to socket.io and use any other database system instead.
I really can't understand the point of all this. I am not experienced with NoSQL databases by any means, but I really want to experiment with them.
Thank you.
This is understandable; however, I strongly believe that this way of transferring the data to the user is inefficient and defeats the purpose of RethinkDB.
RethinkDB has no built-in mechanism to transfer data to end users, and no access control (in the conventional sense) either. The common way, like you said, is to spin up one or more Node instances running socket.io. On each instance you can listen on your RethinkDB change streams and use socket.io's broadcast functionality. That would be the common approach, but since RethinkDB's streams are pretty optimized, you could also open a change stream for every incoming socket.io connection.
I am writing an app for iOS that uses data provided by a web service. I am using core data for local storage and persistence of the data, so that some core set of the data is available to the user if the web is not reachable.
In building this app, I've been reading lots of posts about core data. While there seems to be lots out there on the mechanics of doing this, I've seen less on the general principles/patterns for this.
I am wondering if there are some good references out there for a recommended interaction model.
For example, the user will be able to create new objects in the app. Let's say the user creates a new employee object: the user will typically create it, update it, and then save it. I've seen recommendations that update the server at each of these steps - when the user creates it and when the user changes its fields - and, if the user cancels at the end, a delete is sent to the server. A different recommendation for the same operation is to keep everything local and only send the complete update to the server when the user saves.
This example aside, I am curious if there are some general recommendations/patterns on how to handle CRUD operations and ensure they are sync'd between the webserver and coredata.
Thanks much.
I think the best approach in the case you mention is to store data only locally until the point the user commits the adding of the new record. Sending every field edit to the server is somewhat excessive.
A general idiom of iPhone apps is that there isn't such a thing as "Save". The user generally will expect things to be committed at some sensible point, but it isn't presented to the user as saving per se.
So, for example, imagine you have a UI that lets the user edit some sort of record that will be saved to local Core Data and also be sent to the server. At the point the user exits the UI for creating a new record, they will perhaps hit a button called "Done" (N.B. not usually called "Save"). At the point they hit "Done", you'll want to kick off a Core Data write and also start a push to the remote server. The server push won't necessarily hog the UI or make them wait until it completes -- it's nicer to allow them to continue using the app -- but it is happening. If the update push to the server fails, you might want to signal it to the user or do something appropriate.
A good question to ask yourself when planning the granularity of writes to Core Data and/or a remote server is: what would happen if the app crashed out, or the phone ran out of power, at any particular spot in the app? How much data could possibly be lost? Good apps lower the risk of data loss and can re-launch in a very similar state to the one they were in before being exited for whatever reason.
Be prepared to tear your hair out quite a bit. I've been working on this, and the problem is that the Core Data samples are quite simple. The minute you move to a complex model and you try to use the NSFetchedResultsController and its delegate, you bump into all sorts of problems with using multiple contexts.
I use one context to populate data from the web service in a background "block", and a second for the table view to use - you'll most likely end up using a table view for a master list and a detail view.
Brush up on using blocks in Cocoa if you want to keep your app responsive whilst receiving or sending data to/from a server.
You might want to read about 'transactions' - which is basically the grouping of multiple actions/changes as a single atomic action/change. This helps avoid partial saves that might result in inconsistent data on server.
Ultimately, this is a very big topic - especially if server data is shared across multiple clients. At the simplest, you would want to decide on basic policies. Does last save win? Is there some notion of remotely held locks on objects in server data store? How is conflict resolved, when two clients are, say, editing the same property of the same object?
With respect to how things are done on the iPhone, I would agree with occulus that "Done" provides a natural point for persisting changes to server (in a separate thread).
I have been working on a method to sync core data stored in an iPhone application between multiple devices, such as an iPad or a Mac. There are not many (if any at all) sync frameworks for use with Core Data on iOS. However, I have been thinking about the following concept:
A change is made to the local core data store, and the change is saved. (a) If the device is online, it tries to send the changeset to the server, including the device ID of the device which sent the changeset. (b) If the changeset does not reach the server, or if the device is not online, the app will add the change set to a queue to send when it does come online.
The server, sitting in the cloud, merges the specific change sets it receives with its master database.
After a change set (or a queue of change sets) is merged on the cloud server, the server pushes all of those change sets to the other devices registered with the server using some sort of polling system. (I thought of using Apple's Push service, but according to the comments this is not a workable system.)
Is there anything fancy that I need to be thinking about? I have looked at REST frameworks such as ObjectiveResource, Core Resource, and RestfulCoreData. Of course, these are all working with Ruby on Rails, which I am not tied to, but it's a place to start. The main requirements I have for my solution are:
Any changes should be sent in the background without pausing the main thread.
It should use as little bandwidth as possible.
I have thought about a number of the challenges:
Making sure that the object IDs for the different data stores on different devices are tied together on the server. That is to say, I will have a table of object IDs and device IDs, tied via a reference to the object stored in the database. I will have a record (DatabaseId [unique to this table], ObjectId [unique to the item in the whole database], Datafield1, Datafield2); the ObjectId field will reference another table, AllObjects: (ObjectId, DeviceId, DeviceObjectId). Then, when a device pushes up a change set, it will pass along the device ID and the object ID from the Core Data object in its local data store. My cloud server will then check the object ID and device ID against the AllObjects table and find the record to change in the initial table.
All changes should be timestamped, so that they can be merged.
The device will have to poll the server, without using up too much battery.
The local devices will also need to update anything held in memory if/when changes are received from the server.
Is there anything else I am missing here? What kinds of frameworks should I look at to make this possible?
I've done something similar to what you're trying to do. Let me tell you what I've learned and how I did it.
I assume you have a one-to-one relationship between your Core Data object and the model (or db schema) on the server. You simply want to keep the server contents in sync with the clients, but clients can also modify and add data. If I got that right, then keep reading.
I added four fields to assist with synchronization:
sync_status - Add this field to your core data model only. It's used by the app to determine if you have a pending change on the item. I use the following codes: 0 means no changes, 1 means it's queued to be synchronized to the server, and 2 means it's a temporary object and can be purged.
is_deleted - Add this to the server and core data model. A delete event shouldn't actually delete a row from the database or from your client model, because that leaves you with nothing to synchronize back. By having this simple boolean flag, you can set is_deleted to 1, synchronize it, and everyone will be happy. You must also modify the code on the server and client to query non-deleted items with "is_deleted=0".
last_modified - Add this to the server and core data model. This field should automatically be updated with the current date and time by the server whenever anything changes on that record. It should never be modified by the client.
guid - Add a globally unique id (see http://en.wikipedia.org/wiki/Globally_unique_identifier) field to the server and core data model. This field becomes the primary key and becomes important when creating new records on the client. Normally your primary key is an incrementing integer on the server, but we have to keep in mind that content could be created offline and synchronized later. The GUID allows us to create a key while being offline.
On the client, add code to set sync_status to 1 on your model object whenever something changes and needs to be synchronized to the server. New model objects must generate a GUID.
Synchronization is a single request. The request contains:
The MAX last_modified time stamp of your model objects. This tells the server you only want changes after this time stamp.
A JSON array containing all items with sync_status=1.
The server gets the request and does this:
It takes the contents from the JSON array and modifies or adds the records it contains. The last_modified field is automatically updated.
The server returns a JSON array containing all objects with a last_modified time stamp greater than the time stamp sent in the request. This will include the objects it just received, which serves as an acknowledgment that the record was successfully synchronized to the server.
The app receives the response and does this:
It takes the contents from the JSON array and modifies or adds the records it contains. Each record gets its sync_status set to 0.
I used the word record and model interchangeably, but I think you get the idea.
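As a rough illustration of the request/response shape described above (plain C# DTOs, simply because that's the language used earlier in this thread; the names mirror the fields above, and sync_status stays client-local, so it never goes over the wire):

using System;
using System.Collections.Generic;

// One synchronized record, carrying the bookkeeping fields described above.
public class SyncRecord
{
    public Guid Guid { get; set; }             // client-generated primary key
    public bool IsDeleted { get; set; }        // soft-delete flag
    public DateTime LastModified { get; set; } // set by the server only
    public string Payload { get; set; }        // the actual record data (e.g. JSON)
}

// Client -> server: "here are my pending changes, and the newest timestamp I have."
public class SyncRequest
{
    public DateTime MaxLastModified { get; set; }
    public List<SyncRecord> PendingChanges { get; set; } // items with sync_status = 1
}

// Server -> client: everything changed since that timestamp, including the
// records just received, which doubles as the acknowledgement.
public class SyncResponse
{
    public List<SyncRecord> ChangedSince { get; set; }
}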
I suggest carefully reading and implementing the sync strategy discussed by Dan Grover at iPhone 2009 conference, available here as a pdf document.
This is a viable solution and is not that difficult to implement (Dan implemented it in several of his applications), and it overlaps with the solution described by Chris. For an in-depth, theoretical discussion of syncing, see the paper from Russ Cox (MIT) and William Josephson (Princeton):
File Synchronization with Vector Time Pairs
which applies equally well to core data with some obvious modifications. This provides an overall much more robust and reliable sync strategy, but requires more effort to be implemented correctly.
EDIT:
It seems that Grover's PDF file is no longer available (broken link, March 2015). UPDATE: the link is available through the Wayback Machine here
The Objective-C framework called ZSync and developed by Marcus Zarra has been deprecated, given that iCloud finally seems to support correct core data synchronization.
If you are still looking for a way to go, look into the Couchbase mobile. This basically does all you want. (http://www.couchbase.com/nosql-databases/couchbase-mobile)
Similar to @Chris, I've implemented a class for synchronization between client and server and solved all known problems so far (send/receive data to/from the server, merge conflicts based on timestamps, remove duplicate entries in unreliable network conditions, synchronize nested data and files, etc.).
You just tell the class which entity and which columns should it sync and where is your server.
M3Synchronization *syncEntity = [[M3Synchronization alloc] initForClass: @"Car"
                                  andContext: context
                                  andServerUrl: kWebsiteUrl
                                  andServerReceiverScriptName: kServerReceiverScript
                                  andServerFetcherScriptName: kServerFetcherScript
                                  ansSyncedTableFields: @[@"licenceNumber", @"manufacturer", @"model"]
                                  andUniqueTableFields: @[@"licenceNumber"]];

syncEntity.delegate = self; // delegate should implement onComplete and onError methods
syncEntity.additionalPostParamsDictionary = ... // add some POST params to authenticate current user

[syncEntity sync];
You can find source, working example and more instructions here: github.com/knagode/M3Synchronization.
Notify the user to update data via push notifications.
Use a background thread in the app to compare the local data with the data on the cloud server; when a change happens on the server, update the local data, and vice versa.
So I think the most difficult part is determining which side's data is out of date.
Hope this can help you.
I have just posted the first version of my new Core Data Cloud Syncing API, known as SynCloud.
SynCloud differs from iCloud in that it allows for a multi-user sync interface. It is also different from other syncing APIs because it allows for multi-table, relational data.
Please find out more at http://www.syncloudapi.com
Built with the iOS 6 SDK, it is very up to date as of 9/27/2012.
I think a good solution to the GUID issue is "distributed ID system". I'm not sure what the correct term is, but I think that's what MS SQL server docs used to call it (SQL uses/used this method for distributed/sync'ed databases). It's pretty simple:
The server assigns all IDs. Each time a sync is done, the first thing that is checked are "How many IDs do I have left on this client?" If the client is running low, it asks the server for a new block of IDs. The client then uses IDs in that range for new records. This works great for most needs, if you can assign a block large enough that it should "never" run out before the next sync, but not so large that the server runs out over time. If the client ever does run out, the handling can be pretty simple, just tell the user "sorry you cannot add more items until you sync"... if they are adding that many items, shouldn't they sync to avoid stale data issues anyway?
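A sketch of the client-side allocator (C#; the block size and "running low" threshold are arbitrary choices for illustration):

using System;

// Hands out server-assigned IDs from a pre-allocated block and signals when a
// new block should be requested at the next sync.
public class IdAllocator
{
    private long _next;
    private long _endExclusive;

    // Called with the block the server handed back during a sync.
    public void LoadBlock(long start, long count)
    {
        _next = start;
        _endExclusive = start + count;
    }

    // True when the remaining IDs drop below the threshold, so the next sync
    // should ask the server for another block.
    public bool RunningLow(long threshold = 50) => _endExclusive - _next < threshold;

    public long NextId()
    {
        if (_next >= _endExclusive)
            throw new InvalidOperationException(
                "Out of IDs - please sync before adding more items.");
        return _next++;
    }
}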
I think this is superior to using random GUIDs because random GUIDs are not 100% safe, and usually need to be much longer than a standard ID (128-bits vs 32-bits). You usually have indexes by ID and often keep ID numbers in memory, so it is important to keep them small.
Didn't really want to post this as an answer, but I don't know that anyone would see it as a comment, and I think it's important to this topic and not covered in the other answers.
First you should think about how much data, and how many tables and relations, you will have. In my solution I've implemented syncing through Dropbox files. I observe changes in the main MOC and save the data to files (each row is saved as gzipped JSON). If there is a working internet connection, I check whether there are any changes on Dropbox (Dropbox gives me delta changes), download and merge them (latest wins), and finally upload the changed files. Before syncing I put a lock file on Dropbox to prevent other clients from syncing incomplete data. When downloading changes, it's safe if only partial data is downloaded (e.g. the internet connection is lost). When downloading finishes (fully or partially), it starts loading files into Core Data. When there are unresolved relations (not all files are downloaded), it stops loading files and tries to finish downloading later. Relations are stored only as GUIDs, so I can easily check which files to load to have full data integrity.
Syncing starts after changes to Core Data are made. If there are no changes, it checks for changes on Dropbox every few minutes and on app startup. Additionally, when changes are sent to the server I send a broadcast to other devices to inform them about the changes, so they can sync faster.
Each synced entity has a GUID property (the GUID is also used as the filename for exchange files). I also have a Sync database where I store the Dropbox revision of each file (so I can compare it when the Dropbox delta resets its state). Files also contain the entity name, state (deleted/not deleted), GUID (same as the filename), database revision (to detect data migrations or to avoid syncing with newer app versions), and of course the data itself (if the row is not deleted).
This solution works for thousands of files and about 30 entities. Instead of Dropbox I could use a key/value store as a REST web service, which I want to do later but have no time for right now :) For now, in my opinion, my solution is more reliable than iCloud and, very importantly, I have full control over how it works (mainly because it's my own code).
Another solution is to save MOC changes as transactions - there will be far fewer files exchanged with the server, but it's harder to do the initial load into empty Core Data in the proper order. iCloud works this way, and other syncing solutions take a similar approach, e.g. TICoreDataSync.
--
UPDATE
After a while, I migrated to Ensembles - I recommend this solution over reinventing the wheel.
I'm wondering how you'd implement the following use-case in REST. Is it even possible to do without compromising the conceptual model?
Read or update multiple resources within the scope of a single transaction. For example, transfer $100 from Bob's bank account into John's account.
As far as I can tell, the only way to implement this is by cheating. You could POST to the resource associated with either John or Bob and carry out the entire operation using a single transaction. As far as I'm concerned this breaks the REST architecture because you're essentially tunneling an RPC call through POST instead of really operating on individual resources.
Consider a RESTful shopping basket scenario. The shopping basket is conceptually your transaction wrapper. In the same way that you can add multiple items to a shopping basket and then submit that basket to process the order, you can add Bob's account entry to the transaction wrapper and then Bill's account entry to the wrapper. When all the pieces are in place then you can POST/PUT the transaction wrapper with all the component pieces.
There are a few important cases that aren't answered by this question, which I think is too bad, because it has a high ranking on Google for the search terms :-)
Specifically, a nice property would be: if you POST twice (because some cache hiccuped somewhere in between), you should not transfer the amount twice.
To get to this, you create a transaction as an object. This could contain all the data you know already, and put the transaction in a pending state.
POST /transfer/txn
{"source":"john's account", "destination":"bob's account", "amount":10}
{"id":"/transfer/txn/12345", "state":"pending", "source":...}
Once you have this transaction, you can commit it, something like:
PUT /transfer/txn/12345
{"id":"/transfer/txn/12345", "state":"committed", ...}
{"id":"/transfer/txn/12345", "state":"committed", ...}
Note that multiple PUTs don't matter at this point; even a GET on the txn would return the current state. Specifically, the second PUT would detect that the transaction was already in the appropriate state and just return it -- or, if you try to PUT it into the "rolledback" state after it's already in the "committed" state, you would get an error, and the actual committed transaction back.
As long as you talk to a single database, or a database with an integrated transaction monitor, this mechanism will actually work just fine. You might additionally introduce time-outs for transactions, which you could even express using Expires headers if you wanted to.
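The state check that makes the commit idempotent can be as simple as this sketch (C#; types and names invented for illustration, with the actual debit/credit left as a placeholder):

using System;

public class Transaction
{
    public string Id { get; set; }
    public string State { get; set; }   // "pending", "committed", or "rolledback"
    public string Source { get; set; }
    public string Destination { get; set; }
    public decimal Amount { get; set; }
}

public class TransferService
{
    // Idempotent commit: committing an already-committed transaction just returns
    // it unchanged; committing one that was already rolled back is an error.
    public Transaction Commit(Transaction txn)
    {
        switch (txn.State)
        {
            case "pending":
                ApplyTransfer(txn);      // move the money inside a DB transaction
                txn.State = "committed";
                return txn;
            case "committed":
                return txn;              // repeated PUT: same outcome, no double transfer
            default:                     // "rolledback"
                throw new InvalidOperationException(
                    "Transaction was already rolled back and cannot be committed.");
        }
    }

    private void ApplyTransfer(Transaction txn)
    {
        // Placeholder for the actual debit/credit against the database.
    }
}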
In REST terms, resources are nouns that can be acted on with CRUD (create/read/update/delete) verbs. Since there is no "transfer money" verb, we need to define a "transaction" resource that can be acted upon with CRUD. Here's an example in HTTP+POX. First step is to CREATE (HTTP POST method) a new empty transaction:
POST /transaction
This returns a transaction ID, e.g. "1234", and a corresponding URL, "/transaction/1234". Note that firing this POST multiple times will not create the same transaction with multiple IDs, and it also avoids introducing a "pending" state. Also, POST can't always be idempotent (a REST requirement), so it's generally good practice to minimize data in POSTs.
You could leave the generation of a transaction ID up to the client. In this case, you would POST /transaction/1234 to create transaction "1234" and the server would return an error if it already existed. In the error response, the server could return a currently unused ID with an appropriate URL. It's not a good idea to query the server for a new ID with a GET method, since GET should never alter server state, and creating/reserving a new ID would alter server state.
Next up, we UPDATE (PUT HTTP method) the transaction with all data, implicitly committing it:
PUT /transaction/1234
<transaction>
<from>/account/john</from>
<to>/account/bob</to>
<amount>100</amount>
</transaction>
If a transaction with ID "1234" has been PUT before, the server gives an error response, otherwise an OK response and a URL to view the completed transaction.
NB: in /account/john , "john" should really be John's unique account number.
Great question. REST is mostly explained with database-like examples, where something is stored, updated, retrieved, or deleted. There are few examples like this one, where the server is supposed to process the data in some way. I don't think Roy Fielding included any in his thesis, which was based on HTTP after all.
But he does talk about "representational state transfer" as a state machine, with links moving to the next state. In this way, the documents (the representations) keep track of the client state, instead of the server having to do it. In this way, there is no client state, only state in terms of which link you are on.
I've been thinking about this, and it seems reasonable to me that, to get the server to process something for you, when you upload, the server would automatically create related resources and give you the links to them (in fact, it wouldn't need to create them automatically: it could just tell you the links and only create them when and if you follow them - lazy creation). It would also give you links to create new related resources - a related resource has the same URI but is longer (adds a suffix). For example:
You upload (POST) the representation of the concept of a transaction with all the information. This looks just like an RPC call, but it's really creating the "proposed transaction resource". E.g. URI: /transaction
Glitches will cause multiple such resources to be created, each with a different URI.
The server's response states the created resource's URI and its representation - this includes the link (URI) for creating the related resource of a new "committed transaction resource". Another related resource is the link to delete the proposed transaction. These are states in the state machine which the client can follow. Logically, these are part of the resource that has been created on the server, beyond the information the client supplied. E.g. URIs: /transaction/1234/proposed, /transaction/1234/committed
You POST to the link to create the "committed transaction resource", which creates that resource, changing the state of the server (the balances of the two accounts). By its nature, this resource can only be created once, and can't be updated. Therefore, glitches committing many transactions can't occur.
You can GET those two resources, to see what their state is. Assuming that a POST can change other resources, the proposal would now be flagged as "committed" (or perhaps, not available at all).
This is similar to how webpages operate, with the final webpage saying "are you sure you want to do this?" That final webpage is itself a representation of the state of the transaction, which includes a link to go to the next state. Not just financial transactions; also (eg) preview then commit on wikipedia. I guess the distinction in REST is that each stage in the sequence of states has an explicit name (its URI).
In real-life transactions/sales, there are often different physical documents for different stages of a transaction (proposal, purchase order, receipt etc). Even more for buying a house, with settlement etc.
OTOH, this feels like playing with semantics to me; I'm uncomfortable with the nominalization of converting verbs into nouns to make it RESTful, "because it uses nouns (URIs) instead of verbs (RPC calls)" - i.e. the noun "committed transaction resource" instead of the verb "commit this transaction". I guess one advantage of nominalization is that you can refer to the resource by name, instead of needing to specify it in some other way (such as maintaining session state, so you know what "this" transaction is...).
But the important question is: what are the benefits of this approach? I.e. in what way is this REST style better than RPC style? Is a technique that's great for web pages also helpful for processing information, beyond store/retrieve/update/delete? I think the key benefit of REST is scalability; one aspect of that is not needing to maintain client state explicitly (but making it implicit in the URI of the resource, and the next states as links in its representation). In that sense it helps. Perhaps this helps in layering/pipelining too? OTOH, only the one user will look at their specific transaction, so there's no advantage in caching it so others can read it, which is the big win for HTTP.
I've drifted away from this topic for 10 years. Coming back, I can't believe the religion masquerading as science that you wade into when you google rest+reliable. The confusion is mythic.
I would divide this broad question into three:
Downstream services. Any web service you develop will have downstream services that you use, and whose transaction syntax you have no choice but to follow. You should try and hide all this from users of your service, and make sure all parts of your operation succeed or fail as a group, then return this result to your users.
Your services. Clients want unambiguous outcomes to web-service calls, and the usual REST pattern of making POST, PUT or DELETE requests directly on substantive resources strikes me as a poor, and easily improved, way of providing this certainty. If you care about reliability, you need to identify action requests. This id can be a GUID created on the client, or a seed value from a relational DB on the server; it doesn't matter. For server-generated IDs, use a 'preflight' request-response to exchange the id of the action. If this request fails or half succeeds, no problem: the client just repeats the request. Unused ids do no harm. This is important because it lets all subsequent requests be fully idempotent, in the sense that if they are repeated n times they return the same result and cause nothing further to happen. The server stores all responses against the action id, and if it sees the same request, it replays the same response (see the sketch after this list). A fuller treatment of the pattern is in this google doc. The doc suggests an implementation that, I believe(!), broadly follows REST principles. Experts will surely tell me how it violates others. This pattern can be usefully employed for any unsafe call to your web-service, whether or not there are downstream transactions involved.
Integration of your service into "transactions" controlled by upstream services. In the context of web-services, full ACID transactions are considered as usually not worth the effort, but you can greatly help consumers of your service by providing cancel and/or confirm links in your confirmation response, and thus achieve transactions by compensation.
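A minimal sketch of that replay cache (ASP.NET Core minimal API in C#; the header name, route, and request type are invented, and a production version would persist the responses durably rather than in memory):

using System.Collections.Concurrent;

var app = WebApplication.Create(args);

// Responses already produced, keyed by the client-supplied action id.
var replayCache = new ConcurrentDictionary<string, object>();

app.MapPost("/transfers", (TransferRequest request, HttpRequest http) =>
{
    var actionId = http.Headers["X-Action-Id"].ToString();
    if (string.IsNullOrEmpty(actionId))
        return Results.BadRequest("X-Action-Id header is required.");

    // Replay: a repeated request returns the stored response and does nothing else.
    var response = replayCache.GetOrAdd(actionId, _ =>
    {
        // The real work happens here the first time this action id is seen.
        return new { actionId, state = "completed", request.Amount };
    });
    return Results.Ok(response);
});

app.Run();

// Hypothetical request body, invented for this sketch.
public record TransferRequest(string From, string To, decimal Amount);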
Your requirement is a fundamental one. Don't let people tell you your solution is not kosher. Judge their architectures in the light of how well, and how simply, they address your problem.
If you stand back to summarize the discussion here, it's pretty clear that REST is not appropriate for many APIs, particularly when the client-server interaction is inherently stateful, as it is with non-trivial transactions. Why jump through all the hoops suggested, for client and server both, in order to pedantically follow some principle that doesn't fit the problem? A better principle is to give the client the easiest, most natural, productive way to compose with the application.
In summary, if you're really doing a lot of transactions (types, not instances) in your application, you really shouldn't be creating a RESTful API.
You'd have to roll your own "transaction id" type of tx management. So it would be 4 calls:
http://service/transaction (some sort of tx request)
http://service/bankaccount/bob (give tx id)
http://service/bankaccount/john (give tx id)
http://service/transaction (request to commit)
You'd have to handle storing the actions in a DB (if load balanced) or in memory or such, then handle commit, rollback, and timeout.
Not really a RESTful day in the park.
First of all, transferring money is not something you are unable to do in a single resource call. The action you want to perform is sending money, so you add a money-transfer resource to the account of the sender.
POST: accounts/alice, new Transfer {target:"BOB", amount:100, currency:"CHF"}.
Done. You do not need to know that this is a transaction that must be atomic, etc. You just transfer money, a.k.a. send money from A to B.
But for the rare cases here a general solution:
If you want to do something very complex involving many resources in a defined context with a lot of restrictions that actually cross the what-vs.-why barrier (business vs. implementation knowledge), you need to transfer state. Since REST should be stateless, you as the client need to carry the state around.
If you transfer state, you need to hide the information inside it from the client. The client should not see internal information that is only needed by the implementation and carries no business meaning. If that information has no business value, the state should be encrypted and a metaphor like a token, a pass, or something similar should be used.
This way one can pass internal state around, and by using encryption and signing the system can still be secure and sound. Finding the right abstraction for why the client passes state information around is up to the design and architecture.
The real solution:
Remember that REST talks HTTP, and HTTP comes with the concept of cookies. Those cookies are often forgotten when people talk about REST APIs and workflows and interactions spanning multiple resources or requests.
Remember what Wikipedia says about HTTP cookies:
Cookies were designed to be a reliable mechanism for websites to remember stateful information (such as items in a shopping cart) or to record the user's browsing activity (including clicking particular buttons, logging in, or recording which pages were visited by the user as far back as months or years ago).
So basically, if you need to pass on state, use a cookie. It is designed for exactly this reason, it is HTTP, and therefore it is compatible with REST by design :).
The better solution:
If you talk about a client performing a workflow involving multiple requests, you are usually talking about a protocol. Every protocol comes with a set of preconditions for each potential step, like performing step A before you can do B.
This is natural, but exposing the protocol to clients makes everything more complex. To avoid that, just think about what we do when we have complex interactions to handle in the real world... we use an agent.
Using the Agent metaphor you can provide a resource that can perform all necessary steps for you and store the actual assignment / instructions it is acting upon in its list (so we can use POST on the agent or an 'agency').
A complex example:
Buying a house:
You need to prove your credibility (like providing your police record entries), you need to sort out the financial details, you need to buy the actual house using a lawyer and a trusted third party holding the funds, verify that the house now belongs to you, add the purchase to your tax records, etc. (just as an example; some steps may be wrong or whatever).
These steps might take several days to be completed, some can be done in parallel etc.
In order to do this, you just give the agent the task buy house like:
POST: agency.com/ { task: "buy house", target:"link:toHouse", credibilities:"IamMe"}.
Done. The agency sends you back a reference that you can use to see and track the status of the job, and the rest is done automatically by the agents of the agency.
Think about a bug tracker, for instance. Basically, you report the bug and can use the bug id to check what's going on. You can even use a service to listen for changes to this resource. Mission done.
You must not use server-side transactions in REST.
One of the REST constraints:
Stateless
The client–server communication is further constrained by no client context being stored on the server between requests. Each request from any client contains all of the information necessary to service the request, and any session state is held in the client.
The only RESTful way is to create a transaction redo log and put it into the client state. With each request the client sends the redo log, and the server redoes the transaction and either
rolls the transaction back but provides a new transaction redo log (one step further),
or finally completes the transaction.
But maybe it's simpler to use a server-session-based technology which supports server-side transactions.
I think it is totally acceptable to break the pure theory of REST in this situation. In any case, I don't think there is actually anything in REST that says you can't touch dependent objects in business cases that require it.
I really think it's not worth the extra hoops you would jump through to create a custom transaction manager, when you could just leverage the database to do it.
In the simple case (without distributed resources), you could consider the transaction as a resource, where the act of creating it attains the end objective.
So, to transfer between <url-base>/account/a and <url-base>/account/b, you could post the following to <url-base>/transfer.
<transfer>
<from><url-base>/account/a</from>
<to><url-base>/account/b</to>
<amount>50</amount>
</transfer>
This would create a new transfer resource and return the new url of the transfer - for example <url-base>/transfer/256.
At the moment of a successful POST, the 'real' transaction is carried out on the server, and the amount is removed from one account and added to the other.
This, however, doesn't cover a distributed transaction (if, say 'a' is held at one bank behind one service, and 'b' is held at another bank behind another service) - other than to say "try to phrase all operations in ways that don't require distributed transactions".
I believe that would be a case for using a unique identifier generated on the client, to ensure that a connection hiccup does not result in a duplicate being saved by the API.
I think using a client-generated GUID field along with the transfer object, and ensuring that the same GUID is not inserted again, would be a simpler solution to the bank-transfer matter.
I don't know about more complex scenarios, such as booking multiple airline tickets or micro-architectures.
I found a paper about the subject describing experiences of dealing with transaction atomicity in RESTful services.
I guess you could include the TAN in the URL/resource:
PUT /transaction to get the ID (e.g. "1")
[PUT, GET, POST, whatever] /1/account/bob
[PUT, GET, POST, whatever] /1/account/bill
DELETE /transaction with ID 1
Just an idea.