Suppose service C calls two services, A and B, at the same time to save data. A saves successfully but B fails, so the data already saved by A needs to be rolled back. What is a simple way to handle this?
I hope you can give me the simplest approach, thank you.
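One common lightweight approach (just a sketch, not necessarily your stack): have C issue a compensating call, i.e. if B's save fails, C explicitly undoes the save that already succeeded on A. Everything below - the URLs, the endpoints, the Swift/URLSession choice - is assumed purely for illustration.

```swift
import Foundation

// Sketch of a compensating rollback: C saves to A, then to B; if B fails,
// C deletes what A just saved and rethrows the error. All endpoints are hypothetical.
struct ServiceError: Error { let status: Int }

func send(_ method: String, _ url: String, body: Data? = nil) async throws -> Data {
    var request = URLRequest(url: URL(string: url)!)
    request.httpMethod = method
    request.httpBody = body
    let (data, response) = try await URLSession.shared.data(for: request)
    let status = (response as? HTTPURLResponse)?.statusCode ?? 0
    guard (200..<300).contains(status) else { throw ServiceError(status: status) }
    return data
}

func saveViaC(payload: Data) async throws {
    // 1. Save to A and remember the id it returns.
    let idData = try await send("POST", "https://service-a.example.com/records", body: payload)
    let savedId = String(decoding: idData, as: UTF8.self)
    do {
        // 2. Save to B.
        _ = try await send("POST", "https://service-b.example.com/records", body: payload)
    } catch {
        // 3. B failed: compensate by deleting the record A just saved, then rethrow.
        _ = try await send("DELETE", "https://service-a.example.com/records/\(savedId)")
        throw error
    }
}
```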
Related
We are working on a POC where we have our CouchDB instance and a PouchDB for each user.
We need to read the data from CouchDB and use it in our CRM systems. We wanted to achieve this through an API where CouchDB can post data to a REST API and we take it forward from there.
Scenario:
a separate DB for each user
User1 - submits form and the data goes to couchDB
User2 - submits form and the data goes to CouchDB
Now we need to get the data out of CouchDB whenever there are any inserts/updates in any database.
We have looked at change notifications, but those work on a single database.
In our case, each user's submitted form goes into a separate database. So can anyone shed some light on getting data out of CouchDB on any insert/update?
Without knowing the details of your data and the general concepts of your app, it's not easy to give good advice.
If the data for each user is independent and you just want to collect the data later in one central database, you could consider using filtered replication.
You can find more information here https://wiki.apache.org/couchdb/Replication#Filtered_Replication
If the data must be merged or needs other advanced processing, you have to write a script that listens to the changes feeds of all user databases and, whenever something changes, runs your logic to merge and write to the central database.
But beware: you are then essentially building your own sync protocol, which requires careful planning and experience.
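To make the "listen to all changes feeds" option more concrete, here is a rough sketch. The assumptions are mine, not from the question: CouchDB on localhost:5984, one database per user named userdb-<something>, and a placeholder forwardToCRM function standing in for your merge/forward logic.

```swift
import Foundation

// Polls the _changes feed of every per-user database and forwards
// new/updated docs to your own handler. Blocking I/O; a sketch only.
let couch = "http://localhost:5984"
var lastSeq: [String: String] = [:]          // persist between runs in real use

func fetchJSON(_ urlString: String) throws -> Any {
    let data = try Data(contentsOf: URL(string: urlString)!)
    return try JSONSerialization.jsonObject(with: data)
}

func forwardToCRM(_ doc: [String: Any]) {
    print("would POST to CRM API:", doc["_id"] ?? "?")
}

func pollOnce() throws {
    // 1. List all databases and keep the per-user ones.
    let allDBs = try fetchJSON("\(couch)/_all_dbs") as? [String] ?? []
    for db in allDBs where db.hasPrefix("userdb-") {
        // 2. Ask each database for changes since the last sequence we saw.
        let since = lastSeq[db] ?? "0"
        let url = "\(couch)/\(db)/_changes?include_docs=true&since=\(since)"
        guard let changes = try fetchJSON(url) as? [String: Any] else { continue }
        for case let row as [String: Any] in changes["results"] as? [Any] ?? [] {
            if let doc = row["doc"] as? [String: Any] { forwardToCRM(doc) }
        }
        // 3. Remember how far we got so the next poll only sees new changes.
        if let seq = changes["last_seq"] { lastSeq[db] = "\(seq)" }
    }
}
```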
In our company we are transitioning from a huge monolithic application to a micro-service architecture. The main technical drivers for this decision were the need to be able to scale services independently and the scalability of development - we've got ten scrum teams working in different projects (or 'micro-services').
The transition process has been smooth and we've already started to benefit from the advantages of this new technical and organizational structure. On the other hand, there is one main pain point that we are struggling with: how to manage the 'state' of the dependencies between these micro-services.
Let's put an example: one of the micro-services deals with users and registrations. This service (let's call it X) is responsible for maintaining identity information and thus is the main provider for user 'ids'. The rest of the micro-services have a strong dependency on this one. For example, there are some services responsible for user profile information (A), user permissions (B), user groups (C), etc. that rely on those user ids and thus there is a need for maintaining some data sync between these services (i.e. service A should not have info for a userId not registered in service X). We currently maintain this sync by notifying changes of state (new registrations, for example) using RabbitMQ.
As you can imagine, there are many Xs: many 'main' services and many more complicated dependencies between them.
The main issue comes when managing the different dev/testing environments. Every team (and thus, every service) needs to go through several environments in order to put some code live: continuous integration, team integration, acceptance test and live environments.
Obviously we need all services working in all these environments to check that the system is working as a whole. Now, this means that in order to test dependent services (A, B, C, ...) we must not only rely on service X, but also on its state. Thus, we need somehow to maintain system integrity and store a global & coherent state.
Our current approach for this is getting snapshots of all DBs from the live environment, making some transformations to shrink and protect data privacy and propagating it to all environments before testing in a particular environment. This is obviously a tremendous overhead, both organizationally and in computational resources: we have ten continuous integration environments, ten integration environments and one acceptance test environment that all need to be 'refreshed' with this shared data from live and the latest version of the code frequently.
We are struggling to find a better way to ease this pain. Currently we are evaluating two options:
using docker-like containers for all these services
having two versions of each service (one intended for development of that service and another as a sandbox to be used by the rest of the teams in their development & integration testing)
None of these solutions ease the pain of shared data between services. We'd like to know how some other companies/developers are addressing this problem, as we think this must be common in a micro services architecture.
How are you guys doing it? Do you also have this problem? Any recommendation?
Sorry for the long explanation and thanks a lot!
This time I've read your question from a different perspective, so here is a 'different opinion'. I know it may be too late, but I hope it helps with further development.
It looks like the shared state is a result of wrong decoupling. In a 'right' microservice architecture, all microservices have to be isolated functionally rather than logically. I mean that user profile information (A), user permissions (B) and user groups (C) all look functionally similar and more or less functionally coherent. They seem to be a single user service with coherent storage, even if that doesn't look like a micro-service. I don't see any reason here for decoupling them (or at least you haven't mentioned one).
Starting from this point, splitting it into smaller independently deployable units may cause more cost and trouble than benefit. There should be a significant reason for that (sometimes political, sometimes just a lack of product knowledge).
So the real problem is related to microservice isolation. Ideally each microservice can live as a complete standalone product and deliver a well-defined business value. When elaborating the system architecture, we break it up into tiny logical units (A, B, C, etc. in your case, or even smaller) and then define functionally coherent subgroups. I can't give you exact rules for how to do that, only some indicators: complex communication/dependencies between the units and many shared terms in their ubiquitous languages suggest that such units belong to the same functional group and thus to a single service.
So in your example, since there is effectively a single storage, the only way of managing its consistency is pretty much what you are already doing.
BTW, I wonder how you actually ended up solving your problem?
Let me try to reformulate the problem:
Actors:
X: UserIds (state of account)
provides a service to get an ID (based on credentials) and the status of an account
A: UserProfile
uses X to check the status of a user account; stores the name along with a link to the account
provides a service to get/edit the name based on an ID
B: UserBlogs
uses X in the same way; stores a blog post, along with a link to the account, when a user writes one
uses A to search blog posts based on user name
provides a service to get/edit the list of blog entries based on an ID
provides a service to search for blog posts based on a name (relies on A)
C: MobileApp
wraps the features of X, A and B into a mobile app
provides all of the services above, relying on a well-defined communication contract with all the others (following #neleus' statement)
Requirements:
The work of teams X, A, B and C needs to be decoupled
Integration environments for X, A, B, C need to be updated with the latest features (in order to perform integration tests)
Integration environments for X, A, B, C need to have a 'sufficient' set of data (in order to perform load tests and to find edge cases)
Following #eugene's idea: having mocks for each service, provided by every team, would cover 1) and 2) (see the sketch after this list)
the cost is more development work from the teams
plus maintenance of the mocks as well as of the main feature
the impediment is the fact that you have a monolithic system (you do not yet have a set of clean, well-defined/isolated services)
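As an illustration of the mock idea (the service contract and the names below are invented, not from the question): each team publishes an interface for its service plus a canned mock implementation that the other teams run in their own integration environments.

```swift
// Hypothetical contract for service X, owned by team X.
protocol UserIdService {
    func accountStatus(forUserId id: String) -> String?   // nil = unknown user
}

// The real implementation calls service X over the network; this mock is what
// teams A, B and C wire into their own integration environments.
struct MockUserIdService: UserIdService {
    // Canned accounts that tests can rely on without touching live data.
    let accounts = ["user-1": "active", "user-2": "suspended"]
    func accountStatus(forUserId id: String) -> String? { accounts[id] }
}

// Dependent teams code against the protocol, so swapping mock vs. real
// implementation is just a wiring decision per environment.
let x: UserIdService = MockUserIdService()
assert(x.accountStatus(forUserId: "user-1") == "active")
```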
Suggested solution:
What about having a shared environment with the set of master data to address 3)? Every 'delivered' service (i.e. the ones running in production) would be available there. Each team could choose which services they use from this shared environment and which ones they use from their own.
One immediate drawback I can see is shared state and data consistency.
Let's consider automated tests run against the master data, e.g.:
B changes names (owned by A) in order to work on its blog service
might break A or C
A changes the status of an account in order to work on some permission scenarios
might break X or B
C changes all of the above on the same accounts
breaks all the others
The master set of data would quickly become inconsistent and lose its value for requirement 3) above.
We could therefore add a 'convention' layer on top of the shared master data: anyone can read from the full set, but can only modify the objects that they themselves have created.
From my perspective, only the objects that use the services should hold the state. Let's consider your example: service X is responsible for the user ID, service A is responsible for the profile information, etc. Let's assume a user Y with some security token (which may be created, for example, from their user name and password, and should be unique) enters the system. The client, which holds the user information, sends the security token to service X. Service X keeps the info about which user ID is linked to that token. If it is a new user, service X creates a new ID and stores the token. Service X then returns the ID to the user object. The user object asks service A for the user profile by providing the user ID. Service A takes the ID and asks service X whether that ID exists. If service X answers positively, service A can look up the profile information by user ID, or ask the user to provide that information in order to create it. The same logic should work for the B and C services. They have to talk to each other, but they don't need to know about the user's state.
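To illustrate that flow, here is a toy sketch with in-memory stand-ins for services X and A (all type and method names are made up; real services would of course sit behind network calls):

```swift
import Foundation

final class ServiceX {                                // owns identity only
    private var idsByToken: [String: String] = [:]
    func userId(forToken token: String) -> String {
        if let id = idsByToken[token] { return id }   // existing user
        let id = UUID().uuidString                    // new user: create an ID
        idsByToken[token] = id
        return id
    }
    func exists(_ id: String) -> Bool { idsByToken.values.contains(id) }
}

final class ServiceA {                                // owns profiles only
    private var profiles: [String: String] = [:]      // userId -> profile name
    let x: ServiceX
    init(x: ServiceX) { self.x = x }
    func profile(forUserId id: String) -> String? {
        guard x.exists(id) else { return nil }        // A validates the ID with X
        return profiles[id] ?? "<ask the user to fill in the profile>"
    }
}

let x = ServiceX()
let a = ServiceA(x: x)
let id = x.userId(forToken: "token-for-user-Y")       // client talks to X first
print(a.profile(forUserId: id) ?? "unknown user")     // then to A using that ID
```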
A few words about the environments: I would suggest using Puppet. It is a way to automate the service deployment process. We use Puppet to deploy services to the different environments. The Puppet language is rich and allows flexible configuration.
I'm developing an application that needs to be synchronized with a remote database. The database is connected to a web-based application in which the user is able to modify some records on the web page (add/remove/modify). The user is also able to modify the same records in the mobile application. So each side (server and client) must keep the SAME latest records when the user presses the sync button in the mobile app. Communication between server and client is provided by web services (SOAP), and I am not able to change that because it is a strict requirement (I know this is the worst way it can be done). Another requirement is that the clients are not able to delete server records.
I am already familiar with calling the web service (NSURLConnection), receiving the data (NSData) and parsing it. But I could not figure out what the synchronization procedure should look like. I have already read this answer, which is about how I can extend the server and client sides with some extra attributes (last_updated_date and is_sync).
Then I imagine solving the issue like this:
As a first step, the client tries to update the server records by sending the unsynchronized ones. New records are added directly into the DB, but modified records should be compared based on last_updated_date. At the end of this step, the server has the latest data.
But the problem is how to manage updating the records in the mobile app. I thought of two ways:
The simplest (dumbest) way: create a new MOC, download all the records into it, and replace the existing one with it.
Get only the modified records that are not yet on the client side, import them into a new MOC and combine the two. But at this point I have some concerns:
There could be two copies of the same item (the old version and the updated version).
Deleted items could still be present in the main MOC.
I have to reconnect multiple relationships across the MOCs (a new record could have more than 4 relationships to old records).
So I am hoping you can help me figure out which approach is best, or suggest other ideas.
Syncing data is a non-trivial task.
There are several levels of synchronization. Based on your question I am guessing you just need to push changes back to a server. In that case I would suggest catching it during the -save: of the NSManagedObjectContext. Just before the -save: you can query the NSManagedObjectContext and ask it for what objects have been created, updated and deleted. From there you can build a query to post back to your web service.
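A minimal sketch of that idea in Swift, with assumed names (the upload itself is just a placeholder print; wire in your own web-service call):

```swift
import CoreData

// Just before saving, ask the context which objects were inserted, updated and
// deleted, and build the web-service request from that.
func saveAndPushChanges(in context: NSManagedObjectContext) throws {
    guard context.hasChanges else { return }

    // These three sets describe exactly what this save is about to commit.
    let inserted = context.insertedObjects   // will need a POST/create remotely
    let updated  = context.updatedObjects    // will need a PUT/update remotely
    let deleted  = context.deletedObjects    // will need a DELETE remotely

    // Capture whatever you need for the outgoing request *before* saving,
    // because the sets are emptied once the save succeeds.
    let summary = [
        "created": inserted.map { $0.entity.name ?? "?" },
        "updated": updated.map { $0.entity.name ?? "?" },
        "deleted": deleted.map { $0.entity.name ?? "?" }
    ]

    try context.save()
    print("would push to web service:", summary)   // placeholder for the real upload
}
```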
Dealing with merges, however, is far more complicated and I suggest you deal with them on the server.
As for your relationship question; I suggest you open a second question for that so that there is no confusion.
Update
Once the server has finished the merge it pushes the new "truth" to the client. The client should take these updated records and merge them into its own changes. This merge is fairly simple:
Look for an existing record using a uniqueID.
If the record exists then update it.
If the record does not exist then create it.
Ignoring performance for the moment, this is fairly straightforward:
Set up a loop over the new data coming in.
Set up a NSPredicate to identify the record to be updated/created.
Run your fetch request.
If the record exists update it.
If it doesn't then create it.
Once you get this working with a full round trip then you can start looking at performance, etc. Step one is to get it to work :)
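For reference, a sketch of that loop in Swift, assuming a hypothetical "Employee" entity with "uniqueID" and "name" attributes and server records arriving as dictionaries:

```swift
import CoreData

// Find-or-create merge: for each incoming record, fetch by uniqueID,
// update the match if it exists, otherwise insert a new object.
func merge(serverRecords: [[String: Any]], into context: NSManagedObjectContext) throws {
    for record in serverRecords {
        guard let uniqueID = record["uniqueID"] as? String else { continue }

        // Predicate that identifies the record to update/create.
        let request = NSFetchRequest<NSManagedObject>(entityName: "Employee")
        request.predicate = NSPredicate(format: "uniqueID == %@", uniqueID)
        request.fetchLimit = 1

        // Run the fetch request.
        let existing = try context.fetch(request).first

        // Update the existing object, or create it if it isn't there yet.
        let employee = existing ??
            NSEntityDescription.insertNewObject(forEntityName: "Employee", into: context)
        employee.setValue(uniqueID, forKey: "uniqueID")
        if let name = record["name"] as? String {
            employee.setValue(name, forKey: "name")
        }
    }
    try context.save()
}
```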
I am writing an app for iOS that uses data provided by a web service. I am using core data for local storage and persistence of the data, so that some core set of the data is available to the user if the web is not reachable.
In building this app, I've been reading lots of posts about core data. While there seems to be lots out there on the mechanics of doing this, I've seen less on the general principles/patterns for this.
I am wondering if there are some good references out there for a recommended interaction model.
For example, the user will be able to create new objects in the app. Let's say the user creates a new employee object: they will typically create it, update it, and then save it. I've seen recommendations that push each of these steps to the server -- when the user creates it and whenever the user changes a field -- and, if the user cancels at the end, a delete is sent to the server. A different recommendation for the same operation is to keep everything local, and only send the complete update to the server when the user saves.
This example aside, I am curious whether there are some general recommendations/patterns for handling CRUD operations and keeping them in sync between the web server and Core Data.
Thanks much.
I think the best approach in the case you mention is to store data only locally until the point the user commits the adding of the new record. Sending every field edit to the server is somewhat excessive.
A general idiom of iPhone apps is that there isn't such a thing as "Save". The user generally will expect things to be committed at some sensible point, but it isn't presented to the user as saving per se.
So, for example, imagine you have a UI that lets the user edit some sort of record that will be saved to local Core Data and also be sent to the server. At the point the user exits the UI for creating a new record, they will perhaps hit a button called "Done" (N.B. not usually called "Save"). At the point they hit "Done", you'll want to kick off a Core Data write and also start a push to the remote server. The server push won't necessarily hog the UI or make them wait until it completes -- it's nicer to allow them to continue using the app -- but it is happening. If the update push to the server fails, you might want to signal that to the user or do something appropriate.
A good question to ask yourself when planning the granularity of writes to core data and/or a remote server is: what would happen if the app crashed out, or the phone ran out of power, at any particular spots in the app? How much loss of data could possibly occur? Good apps lower the risk of data loss and can re-launch in a very similar state to what they were previously in after being exited for whatever reason.
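A small sketch of that "Done" flow; the endpoint, entity attribute and error handling below are placeholders, not a recommendation of specific APIs:

```swift
import CoreData

// Hypothetical "Done" handler: commit locally first, then push in the background.
func doneTapped(record: NSManagedObject, context: NSManagedObjectContext) {
    do {
        try context.save()                             // local commit happens right away
    } catch {
        print("local save failed:", error)             // surface this to the user
        return
    }

    // Build the push request; the endpoint and payload shape are assumptions.
    var request = URLRequest(url: URL(string: "https://example.com/api/records")!)
    request.httpMethod = "POST"
    let name = (record.value(forKey: "name") as? String) ?? ""
    request.httpBody = try? JSONSerialization.data(withJSONObject: ["name": name])

    // Fire the push in the background so the user can keep using the app meanwhile.
    URLSession.shared.dataTask(with: request) { _, response, error in
        let ok = error == nil && (response as? HTTPURLResponse)?.statusCode == 200
        if !ok {
            // Signal the failure, or mark the record for a later retry.
            print("push to server failed; flag the record and retry later")
        }
    }.resume()
}
```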
Be prepared to tear your hair out quite a bit. I've been working on this, and the problem is that the Core Data samples are quite simple. The minute you move to a complex model and you try to use the NSFetchedResultsController and its delegate, you bump into all sorts of problems with using multiple contexts.
I use one context to populate data from the web service in a background "block", and a second one for the table view to use - you'll most likely end up using a table view for a master list and a detail view.
Brush up on using blocks in Cocoa if you want to keep your app responsive whilst receiving or sending data to/from a server.
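For what it's worth, here is a sketch of that two-context setup using the newer NSPersistentContainer API (the model and entity names are assumptions):

```swift
import CoreData

// One background context imports data from the web service; the viewContext
// feeds the NSFetchedResultsController behind the table view.
let container = NSPersistentContainer(name: "Model")
container.loadPersistentStores { _, error in
    if let error = error { fatalError("store failed: \(error)") }

    // Let saves made on background contexts flow into the UI context.
    container.viewContext.automaticallyMergesChangesFromParent = true

    // Import on a background context so the UI stays responsive.
    container.performBackgroundTask { backgroundContext in
        let object = NSEntityDescription.insertNewObject(forEntityName: "Employee",
                                                         into: backgroundContext)
        object.setValue("Imported from web service", forKey: "name")
        try? backgroundContext.save()              // changes merge into viewContext
    }
}
```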
You might want to read about 'transactions' - which is basically the grouping of multiple actions/changes as a single atomic action/change. This helps avoid partial saves that might result in inconsistent data on server.
Ultimately, this is a very big topic - especially if server data is shared across multiple clients. At the simplest, you would want to decide on basic policies. Does last save win? Is there some notion of remotely held locks on objects in server data store? How is conflict resolved, when two clients are, say, editing the same property of the same object?
With respect to how things are done on the iPhone, I would agree with occulus that "Done" provides a natural point for persisting changes to server (in a separate thread).
My question is regarding event-consuming services that subscribe to the events published by commands in CQRS.
Say I have a document generation service that generates some documents based on certain events. Does the document generation service load the data from the domain via an aggregate root? If so, wouldn't the document generation service load data that may have been updated after the event was received by the generation service? How would you stop that from happening?
I guess I am assuming that the event will only carry the information received from the command DTO, and passing the whole domain model's data in the event feels very wrong.
You really should build your read model from your events, unless you consider your documents a part of the domain (in which case you would have a CreateDocumentX command).
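To make "build your read model from your events" concrete, here is a tiny sketch. The event and read-model shapes are invented; the point is that the document service keeps its own projection fed only by events, so it never has to reload the aggregate later:

```swift
// An event published by the write side (illustrative fields only).
struct InvoiceRaised {
    let invoiceId: String
    let customerName: String
    let amount: Double
}

// The document generation service maintains its own read model, updated only
// from events, so documents reflect the state captured at event time.
final class DocumentReadModel {
    private(set) var rows: [String: (customer: String, amount: Double)] = [:]

    func apply(_ event: InvoiceRaised) {
        rows[event.invoiceId] = (event.customerName, event.amount)
    }

    // Generated from the projection, not from data loaded via the aggregate root.
    func generateDocument(for invoiceId: String) -> String? {
        guard let row = rows[invoiceId] else { return nil }
        return "Invoice \(invoiceId) for \(row.customer): \(row.amount)"
    }
}

let model = DocumentReadModel()
model.apply(InvoiceRaised(invoiceId: "42", customerName: "ACME", amount: 99.0))
print(model.generateDocument(for: "42") ?? "not found")
```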
All I can say is that when you are talking about CQRS, you should describe the issue in more depth so it can be solved, or so that proper help can be given.
However, from what I've read, you can have persistent storage on your write side, but make sure you are not reaching outside of your aggregate context.
Related issue: reading-data-from-database-on-write-side-in-cqrs