I am able to implement my own sync framework for client-server synchronization using only LastUpdateTime as the anchor; I don't have to write anything like SyncKnowledge.
It seems like Knowledge is helpful in a peer-to-peer scenario.
What exactly am I missing?
If you're talking about database providers in Sync Framework, there are two types of providers: the offline provider and the peer-to-peer/collaboration provider.
The former uses anchors to store what has been sent and what has been received, and is normally used in hub-and-spoke types of sync. In this scenario, only the client keeps track of what has been synched.
The other type of provider uses "knowledge" to store what has been synched and from which replica. Thus it can sync peer-to-peer, since it also tracks where each change came from. In this scenario, all replicas store the knowledge.
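To make the contrast concrete, here is a rough sketch in Python (purely illustrative; the class names are made up and this is not the Sync Framework API): an anchor is a single high-water mark that only the client stores, while knowledge behaves more like a per-replica version map that every replica stores, which is what lets a change travel peer-to-peer without being echoed back to the replica it came from.

```python
# Purely illustrative sketch -- not the actual Sync Framework API.

# Anchor-based (hub-and-spoke): the client remembers a single value,
# the highest LastUpdateTime it has already pulled from the server.
class AnchorClient:
    def __init__(self):
        self.anchor = 0

    def pull(self, server_rows):
        changes = [row for row in server_rows if row["updated"] > self.anchor]
        if changes:
            self.anchor = max(row["updated"] for row in changes)
        return changes

# Knowledge-based (peer-to-peer): every replica keeps, per replica, the
# highest version it has already seen, so a change carries the id of the
# replica that made it and is not sent back to peers that already know it.
class KnowledgeReplica:
    def __init__(self, replica_id):
        self.replica_id = replica_id
        self.knowledge = {replica_id: 0}   # replica_id -> max version seen
        self.changes = []                  # (origin_replica, version, row)

    def local_update(self, row):
        version = self.knowledge[self.replica_id] + 1
        self.knowledge[self.replica_id] = version
        self.changes.append((self.replica_id, version, row))

    def sync_to(self, peer):
        for origin, version, row in self.changes:
            if version > peer.knowledge.get(origin, 0):
                peer.changes.append((origin, version, row))
                peer.knowledge[origin] = version
```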
Sync Framework 4 CTP (which has been postponed) is targeted towards Silverlight, WP7 and other non-MS platforms, but it actually runs on top of Sync Framework 2.1.
We have a Python web app that clients interact with, and that web app interacts directly with a database. We now need to develop an API that merchants will use to get and post data from/to our database in JSON. Should we build the API as part of the web app, meaning that each request would pass through our Python web app and then interact with the database, or should it be separate?
Further considerations include scalability and the fact that in the future we'll probably want to develop a mobile app or other services that will also need to communicate with the database. As such, we considered the possibility of building an API as the only point of interaction with the database.
However, we are deep into the development of the Flask web app; changing it would mean a huge delay in our schedule, so we just wanted to weigh the advantages and disadvantages of both solutions.
These two schemes summarize the options we are considering:
Option 1:
Option 2:
As you said, both options have advantages and disadvantages.
Option 1 gives you Separation of Concerns. The logic for interacting with your database is abstracted behind a single service. Changes to the type of database you use or the schema you use only requires code changes to a single service. For example, say your platform has expanded and you now wish to cache calls to your database. If you have an API, Web App, and Mobile App all communicating directly with the database they must all be updated to take advantage of the cache. These changes would likely also have to be orchestrated to be deployed at the same time. In reality this is going to involve downtime: most often you see this referred to as 'scheduled maintenance'.
However, Option 1 breaks the Single Responsibility Principle. A service should do a single thing and do it well. In Option 1 the service is responsible for both being an interface to the database and rendering HTML for the web app. Changes to the Web App require you to redeploy the service for the API even though the two are not connected.
The advantages and disadvantages for Option 2 are mostly just the opposites of the advantages and disadvantages for Option 1. Multiple services sharing a database can lead to inconsistency in the data, tight coupling (especially in deployment), and debugging being more difficult.
A common design pattern (which I'd recommend) is a slight modification of Option 1. An API sits in front of the database. This is the only service that interacts with the database. This setup should improve your scalability. It's easy to deploy duplicate APIs and then load-balance requests between them. Furthermore, caching database lookups or changing database technology entirely is a (relatively) simple task. Your Web App, or any other services you develop in the future, interact with the API instead of the database. Here you can reap the benefits of Single Responsibility. It is worth noting that with this design every request for your Web App will have to go through two services. However, the benefits of the design arguably outweigh a few extra milliseconds of latency.
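As a very rough sketch of that layout in Python (all names, routes, and the in-memory "database" are made up for illustration), the API is the only process that knows about the database, and the web app just consumes the API over HTTP:

```python
# api.py -- hypothetical sketch: the only service that touches the database.
from flask import Flask, jsonify, request

api = Flask(__name__)
_orders = {}  # stand-in for the real database layer

@api.route("/orders/<order_id>", methods=["GET"])
def get_order(order_id):
    order = _orders.get(order_id)
    if order is None:
        return jsonify(error="not found"), 404
    return jsonify(order)

@api.route("/orders", methods=["POST"])
def create_order():
    payload = request.get_json()
    _orders[payload["id"]] = payload
    return jsonify(payload), 201
```

```python
# webapp.py -- the web app (and later the mobile app) talks only to the API.
import requests

API_BASE = "http://api.internal:5000"  # assumed internal address of the API service

def show_order(order_id):
    resp = requests.get(f"{API_BASE}/orders/{order_id}", timeout=2)
    resp.raise_for_status()
    return resp.json()
```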
One last thing: kudos for thinking about scalability this early on. You may take a hit now if your schedule is delayed but I think you'll be better off in the long term.
I'm developing a system with several client computers and one server that hosts the central database. Each client performs its CRUD operations directly against the database using Entity Framework, over the local network. The only real challenge I have with this is versioning (EF migrations).
On the other hand, I have heard about a similar architecture where only an application server talks to the database, and clients use a WCF service for all CRUD operations and never access the database directly.
What would be the advantages of taking the WCF approach? It seems like there would be twice as much development work for not too much payoff, not to mention poorer performance. And as far as versioning goes, you can't escape it; you now have to migrate EF and version your WCF service. But people must choose this architecture for a reason and I'm curious as to why.
For me, the most important difference between centralized and distributed database access is the possibility of optimizing the use of connection pooling (https://msdn.microsoft.com/pl-pl/library/8xx3tyca(v=vs.110).aspx).
SQL Server has a limited number of simultaneously open connections (https://msdn.microsoft.com/en-us/library/ms187030.aspx). If you use a connection pool in each of your applications (which is the default in EF), an opened connection is returned to the pool instead of being closed. You will then end up with, say, 10 open connections in each of your running applications.
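To make the arithmetic concrete, here is a sketch in Python with SQLAlchemy rather than EF (the pooling behaviour is analogous; the connection string and numbers are made up): each process that talks to the database directly holds its own pool, so pools multiply with the number of client machines, whereas a single application server holds one shared pool.

```python
# Illustrative sketch (SQLAlchemy, not EF): every client process keeps its own pool.
from sqlalchemy import create_engine

engine = create_engine(
    "mssql+pyodbc://user:pass@dbserver/mydb?driver=ODBC+Driver+17+for+SQL+Server",
    pool_size=10,      # connections kept open by this one process
    max_overflow=0,
)

# 50 client machines x 10 pooled connections each = 500 connections held open
# on the SQL Server, even while most clients are idle. With a central
# application server (the WCF approach), only that server holds a pool,
# and all clients share it.
```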
If not, is there a way to build it on both devices, independent of the application, and then have it be accessible from inside the app?
The possible approach I see here is a two-part infrastructure based on an API. For this you need to be willing to learn at least a bit of networking; however, it has many benefits for the future.
A Unity app as a networking client sending API calls to your back-end server (currently Unity supports WebGL and the UDP low level transport API, https://docs.unity3d.com/Manual/UNetUsingTransport.html). These API calls can be in a data-interchange format like JSON or XML. JSON example: ["signup", {"name": "SomeUser", "pass": "SomePass123"}]
A back-end network server, which receives the calls. The logic for writing to and reading from the database lives purely in this back-end. It sends the result back to the client, again in some data-interchange format. JSON example: ["success"]
This way your Unity app will be completely independent of any particular database. If you want to change the database in the future, you just rework the database handling on the back-end while reusing the same API calls! Or you can have multiple front-end client apps based on different technologies accessing the same API.
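A minimal sketch of how the back-end might decode such calls (the message shape and the "signup" handler are just the example above, not a fixed protocol):

```python
import json

# Hypothetical handlers, keyed by the first element of the JSON array.
def handle_signup(args):
    # ... write the new user to the database here ...
    return ["success"]

HANDLERS = {"signup": handle_signup}

def dispatch(raw_message):
    name, args = json.loads(raw_message)   # e.g. ["signup", {"name": "SomeUser", "pass": "SomePass123"}]
    handler = HANDLERS.get(name)
    if handler is None:
        return json.dumps(["error", "unknown call: " + name])
    return json.dumps(handler(args))

# dispatch('["signup", {"name": "SomeUser", "pass": "SomePass123"}]')  ->  '["success"]'
```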
Now the approaches to Unity and networking for your custom back-end:
High level approach
For pure Unity-based networking you can use Photon (https://www.photonengine.com/en/PUN) or the Unity High Level networking API (https://docs.unity3d.com/Manual/UNetUsingHLAPI.html).
Lower level approach
You can have your back-end based on an entirely different technology too! In that case, use Unity's low level networking Transport Layer (https://docs.unity3d.com/Manual/UNetUsingTransport.html) for the Unity client app, and your back-end's networking layer can be based e.g. on Python (e.g. Twisted), JavaScript (e.g. NodeJS) or any other technology.
Even lower level approach
You can go even deeper and use .NET Sockets for the Unity client (but you need to have a Unity Pro subscription, otherwise they don't let you build your app) and any low level socket library for the back-end, e.g. C# .NET Sockets again, Python socket or, a bit higher level, socketserver.
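For the socketserver route specifically, a deliberately naive sketch (newline-delimited JSON over TCP, no authentication, hypothetical port) could look like this:

```python
# Naive back-end sketch: one JSON call per line over TCP, answered with a JSON reply.
import json
import socketserver

class ApiHandler(socketserver.StreamRequestHandler):
    def handle(self):
        for line in self.rfile:                       # one request per line
            try:
                name, args = json.loads(line)
                reply = ["success"] if name == "signup" else ["error", "unknown call"]
            except (ValueError, TypeError):
                reply = ["error", "bad request"]
            self.wfile.write((json.dumps(reply) + "\n").encode())

if __name__ == "__main__":
    with socketserver.ThreadingTCPServer(("0.0.0.0", 9000), ApiHandler) as server:
        server.serve_forever()
```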
WARNING!
Note that if you want to port your app to the browser, you must use WebSockets, not raw TCP/UDP.
Neo4j on different back-ends:
C# (.NET) - https://neo4j.com/developer/dotnet/
Java - https://neo4j.com/developer/java/
JavaScript - https://neo4j.com/developer/javascript/
Python - https://neo4j.com/developer/python/
Not really; there is an older version of Neo4j ported to Android (Neo4j 1.5).
Otherwise there is a Swift/iOS driver (and of course Java drivers) for Neo4j.
I'm new to CQRS but I understand the intent behind it. From the few documents that I went through, I did not understand how I can deploy a CQRS application/service in a shared ASP.NET hosting environment (hosting provided by GoDaddy or DiscountASP.NET). Does the hosting server need to have MSMQ or a similar message-processing application for CQRS to work? Or can it work via the asynchronous communication that is available in an MVC 4, WCF service, or ASP.NET application?
Feedback is appreciated, as well as any links that talk about the deployment aspect of CQRS in a shared environment.
Does the hosting server need to have MSMQ or a similar message-processing application for CQRS to work?
Generally yes, but it of course depends on your definition of CQRS. Some would, for instance, consider a single-data-store architecture that merely uses distinct read and write models to be CQRS. In that case there is no need for explicit synchronization of read and write stores. In the usual sense, however, synchronization between the read and write stores is required, and it is usually implemented with a messaging system, of which MSMQ is an instance.
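As a toy illustration of that usual sense (hypothetical code; an in-process queue stands in for MSMQ or any other broker): the write side stores the change and publishes an event, and a separate projection step updates the read model from that event.

```python
# Toy CQRS sketch: queue.Queue stands in for MSMQ / a message broker.
import queue

bus = queue.Queue()
write_store = {}   # model the commands write to
read_store = {}    # denormalized model the queries read from

def handle_rename_command(item_id, new_name):
    write_store[item_id] = {"name": new_name}
    bus.put(("ItemRenamed", item_id, new_name))    # publish the event

def project_pending_events():
    while not bus.empty():                         # in production: a subscriber/worker process
        event, item_id, new_name = bus.get()
        if event == "ItemRenamed":
            read_store[item_id] = "Item {}: {}".format(item_id, new_name)

handle_rename_command(1, "Widget")
project_pending_events()
print(read_store[1])   # Item 1: Widget
```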
Or can it work via the asynchronous communication that is available in an MVC 4, WCF service, or ASP.NET application?
Async in ASP.NET MVC is used for processing a single request asynchronously, not for performing background tasks or passing messages between processes and/or nodes such as, say, write and read stores.
Some suggestions:
Consider hosting the infrastructure required for CQRS on a provider such as Azure, AWS, AppHarbor, iron.io, etc.
Consider using something like RavenHQ for the data store, since RavenDB can effectively support CQRS out of the box. In both this and the previous case you can still use GoDaddy or the like to host the front-facing ASP.NET app, if absolutely necessary.
Web applications have experienced a great paradigm shift over the last few years.
A decade ago (and unfortunately even nowadays), web applications lived only on heavyweight servers, which processed everything from data to presentation formats and sent the output to dumb clients (browsers) that only rendered it.
Then AJAX joined the game and web applications started to turn into something that lived between the server and the browser.
During the climax of AJAX, web-application logic started to live entirely in the browser. I think this was when HTTP RESTful APIs started to emerge. Suddenly every new service had its kind-of-RESTful API, and suddenly JavaScript MV* frameworks started popping up like popcorn. The use of mobile devices also greatly increased, and REST fits these kinds of scenarios just great. I say "kind-of RESTful" here because almost every API that claims to be REST isn't. But that's an entirely different story.
In fact, I became a sort of a "REST evangelist".
Just when I thought that web applications couldn't evolve much more, a new era seems to be dawning: stateful, persistent-connection web applications.
Meteor is an example of a brilliant framework for that kind of application. Then I saw this video, in which Matt Debergalis talks about Meteor, and both do a fantastic job!
However, he somewhat talks down REST APIs for these kinds of purposes in favor of persistent real-time connections.
I would very much like to have real-time model updates, for example, while still having all the REST awesomeness.
Streaming REST APIs seem like what I need (firehose.io and Twitter's API, for example), but there is very little information about this new kind of API.
So my question is:
Is web-based real-time communication incompatible with the REST paradigm?
(Sorry for the long introductory text, but I thought that this question would only make sense with some context)
Stateful, persistent TCP/IP connections for web applications are great, as long as you are not moving around.
I have developed a real-time web-based framework, and in my experience I found that when using a web browser on a mobile device, the IP address kept changing as I moved from tower to tower, or from Wi-Fi to Wi-Fi.
When IP addresses keep changing, the notion that it is a persistent connection evaporates rather quickly.
A framework for real-time web apps has to be architected with the assumption that connections will be transient, and it must implement its own notion of a session while the underlying IP connection to the back-end keeps changing.
Once a session has been defined and is used in all requests and responses between clients and servers, one essentially has a 'web connection'. And now one can develop real-time web-based apps using the REST paradigm.
The back-end server of the framework has to be intelligent enough to queue up updates while IP addresses are in transition and then sync up once the TCP/IP connection has been re-established.
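A crude sketch of that queuing idea (entirely hypothetical, not the framework described above): updates are keyed by an application-level session id rather than by the TCP connection, so they survive the connection being dropped and re-established from a new IP address.

```python
# Hypothetical sketch: queue updates per application-level session, not per TCP connection.
from collections import defaultdict, deque

pending = defaultdict(deque)   # session_id -> updates not yet delivered

def publish(session_id, update):
    pending[session_id].append(update)

def on_reconnect(session_id):
    """The client re-established a connection, possibly from a new IP address."""
    updates = list(pending[session_id])
    pending[session_id].clear()
    return updates   # send these to the client to bring it back in sync

publish("session-42", {"chat": "hello"})
publish("session-42", {"chat": "are you there?"})
print(on_reconnect("session-42"))   # both updates are delivered despite the IP change
```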
The short answer is: yes, you can build real-time web-based apps using the REST paradigm.
If you want to play with one, let me know.
I'm very interested in this subject too. This post has a few links to papers that discuss some of the troubles with poorly-designed RPC:
http://thomasdavis.github.com/2012/04/11/the-obligatory-refutation-of-rpc.html
I am not saying Meteor is poorly designed, because I do not know much about Meteor.
In any case, I think I want the best of both "worlds". I want to benefit from REST and all that it affords with its constrained generic interface, addressability, statelessness, etc.
And, I don't want to get left behind in this "real-time" web revolution either! It's definitely very awesome.
I am wondering if there is not a hybrid approach that can work:
RESTful endpoints can allow a client to enter a space and follow links to related documents, as HATEOAS calls for. But for the "stream of updates" to a resource, perhaps the "subscription name" could itself be a URI which, when browsed to in a point-in-time single request (for example through the web browser's address bar or curl), would return either a representation of the "current state", or a list of links with the hrefs for prior states of the resource and/or a way to query the discrete "events" that have occurred against the object.
In this way, if you start with "version 1" of the entity and then replay each of the events against it, you can mutate it up to its "current state", and these events could be streamed to a client that does not want to receive complete representations just because one small part of the entity has changed. This is basically the concept of an "event store", which is covered in lots of the CQRS material out there.
As far as being REST-compatible, I believe this approach has been done (though I'm not sure about the streaming side of it); I cannot remember if it was in the book REST in Practice (http://shop.oreilly.com/product/9780596805838.do) or in a presentation by Vaughn Vernon in this recorded talk from QCon 2010: http://www.infoq.com/presentations/RESTful-SOA-DDD.
He talked about a URI design something like this (I don't remember it exactly):
host/entity <-- current version of a resource
host/entity/events <-- list of events that have happened to mutate the object into its current state
Example:
host/entity/events/1 <-- this would correspond to the creation of the entity
host/entity/events/2 <-- this would correspond to the second event ever against the entity
He may have also had something there for history, the complete moment-in-time state, like:
host/entity/version/2 <-- this would be the entire state of the entity after the event 2 above.
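A rough sketch of that URI design in Python/Flask (entirely hypothetical routes and events; the point is only that the current state is the replay of the event list):

```python
# Hypothetical sketch of the URI design above: current state = replay of all events.
from flask import Flask, jsonify, abort

app = Flask(__name__)

# Event 1 is the creation of the entity; later events mutate it.
EVENTS = [
    {"type": "created", "data": {"name": "example"}},
    {"type": "renamed", "data": {"name": "example v2"}},
]

def replay(upto):
    state = {}
    for event in EVENTS[:upto]:
        state.update(event["data"])
    return state

@app.route("/entity")                    # host/entity -> current version
def current_version():
    return jsonify(replay(len(EVENTS)))

@app.route("/entity/events")             # host/entity/events -> list of event links
def event_links():
    return jsonify(["/entity/events/{}".format(i) for i in range(1, len(EVENTS) + 1)])

@app.route("/entity/events/<int:n>")     # host/entity/events/1, /2, ...
def single_event(n):
    if not 1 <= n <= len(EVENTS):
        abort(404)
    return jsonify(EVENTS[n - 1])

@app.route("/entity/version/<int:n>")    # host/entity/version/2 -> state after event 2
def version_at(n):
    if not 1 <= n <= len(EVENTS):
        abort(404)
    return jsonify(replay(n))
```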
Vaughn recently published a book, Implementing Domain-Driven Design, which from the table of contents looks like it covers REST and event-driven architecture: http://www.amazon.com/gp/product/0321834577
I'm the author of http://firehose.io/, a realtime framework based on the premise that streaming RESTful APIs can and should exist. From the project website:
Firehose is a minimally invasive way of building realtime web apps without complex protocols or rewriting your app from scratch. It's a dirt-simple pub/sub server that keeps client-side JavaScript models in sync with the server code via WebSockets or HTTP long polling. It fully embraces RESTful design patterns, which means you'll end up with a nice API after you build your app.
It's my hope that this framework prevents the Internet from going back into the RPC dark ages, but we'll see what happens. We do use Firehose.io in production at Poll Everywhere to push millions of messages per day to people on all sorts of different devices. It works.