Centralized/Distributed/Service oriented Architecture/Application - service

I am doing a system architecture, and my knowledge from college doesn't help me when it comes to understanding the subtle differences between centralized, distributed, and service-oriented architectures/applications.
If I take a typical client/server architecture, the client sends requests to a server, the server then sends responses to the client. That is a centralized architecture.
An application that handles both the server and client sides will be a distributed application (because it runs on different platforms), but that is still a centralized architecture.
Therefore, a distributed architecture must involve a distributed application.
Questions: am I right? What does all that become when it comes to service oriented architectures / applications?

Distributed: the whole process involved in a computational task is divided into pieces and assigned to multiple computational nodes. While doing its part of the processing, no node has access to all of the information in the system that would be necessary to achieve a globally optimized result. The aggregate of the results from multiple nodes converges towards a globally optimal result, usually through multiple iterations of computations distributed across multiple nodes.
A good example is a router system, in which each router has only the information it exchanges with its neighbours. At the start, each router knows only part of the whole network. As a router gets more information from its neighbours, it incorporates the new information into its view of the whole system and then spreads that view to its neighbours. Through multiple iterations of these steps, each computed separately by individual routers, all routers settle on a consistent global view of the whole network.
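As a toy illustration of this convergence (node names and the merge rule are invented for the sketch, in Python):

    # Each node starts knowing only itself and its direct neighbours,
    # then repeatedly merges what its neighbours know until no view
    # changes any more -- i.e. a consistent global view has emerged.
    neighbours = {
        "A": ["B"],
        "B": ["A", "C"],
        "C": ["B", "D"],
        "D": ["C"],
    }

    views = {r: {r, *neighbours[r]} for r in neighbours}

    changed = True
    while changed:
        changed = False
        for router in views:
            merged = set(views[router])
            for n in neighbours[router]:
                merged |= views[n]  # incorporate the neighbour's view
            if merged != views[router]:
                views[router] = merged
                changed = True

    print(views)  # every router ends up seeing {A, B, C, D}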
Another example could be a web ordering system where the browser initially gets a list of commonly ordered goods. The browser may have logic to track user viewing behaviour and decide to fetch a different category of goods from the server, without sending all the user behaviour parameters to the server. In this imagined example, the browser knows something the server does not know, and vice versa; thus the whole application is a distributed system. Beyond that, user authentication could be done on one server, inventory on another, and reservations on yet another. None of the servers involved would have the complete information about a specific user's browsing and ordering session, but the aggregate work of all these nodes fulfils the business need to sell more goods and satisfy more customers.
The opposite of distributed is centralized, in which the computation logic is always able to see the whole picture.
Given this view, a client-server application can be viewed as a distributed system if you think the client side involves non-trivial decision making, or as a centralized system if you think the client is dumb.
The service-oriented term is more about how functional processing power is integrated into the system. In a service-oriented system, new capability may be introduced at runtime by discovering new API functionality, or by discovering new logic behind an unchanged API. Think about it: you could build an application that initially has little built-in capability, then expands by discovering and incorporating new capabilities from service providers. In contrast, a traditional system's capabilities are fixed at build time, typically as the outcome of a human discussion-design-documentation process. A service-oriented design is a good fit for a distributed system.
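As a rough sketch of that idea (the registry URL, response shape, and "tax-calculator" service below are all hypothetical, not a real API):

    import requests  # third-party HTTP client

    REGISTRY_URL = "http://registry.example.com/services"  # hypothetical

    def discover_capabilities():
        # Ask the registry which services are currently available.
        services = requests.get(REGISTRY_URL, timeout=5).json()
        return {s["name"]: s["endpoint"] for s in services}

    # The application starts with little built-in capability and grows
    # by calling whatever it discovered at runtime.
    capabilities = discover_capabilities()
    if "tax-calculator" in capabilities:
        result = requests.post(capabilities["tax-calculator"],
                               json={"amount": 100.0}, timeout=5).json()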

Related

Microservices communication model

Consider a microservices architecture where you need to expose functionality to manage simple configuration shared across different microservices. The configuration does not change often, but I would still like to see changes whenever I ask for any value.
Using a REST microservice seems easy, but it adds latency.
An alternative could be RPC over messaging (e.g. RabbitMQ), but the interface becomes more complicated.
What communication do you use for internal, simple services, and what are the pros and cons?
Any examples?
I tried a REST API, but it means a lot of "slow" requests, which add latency to the overall request.
I've found that RESTful APIs with some judicious use of cache-control headers actually work fairly well for this use case. The biggest challenge is ensuring that the HTTP client underneath your REST client actually respects those headers.
It's fairly easy to implement, fits nicely into HTTP, and generally scales really well. It gives the client control over whether to respect the caching suggestions, and lets the server optimize when it "knows" the configs haven't changed (304 Not Modified) while still letting clients ask for new versions.
You don't have to get into anything too complicated in terms of cache invalidation, and you can leverage things like edge caching to further accelerate things in interesting ways.
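A minimal sketch of that approach, assuming a Flask service and made-up config values: the server derives an ETag from the config contents, honours If-None-Match with a 304, and advertises a cache lifetime via Cache-Control:

    import hashlib, json
    from flask import Flask, jsonify, request

    app = Flask(__name__)
    CONFIG = {"feature_x": True, "page_size": 50}  # illustrative values

    def config_etag():
        # The ETag changes exactly when the config contents change.
        return hashlib.md5(
            json.dumps(CONFIG, sort_keys=True).encode()).hexdigest()

    @app.route("/config")
    def get_config():
        etag = config_etag()
        if request.headers.get("If-None-Match") == etag:
            return "", 304  # client's cached copy is still valid
        resp = jsonify(CONFIG)
        resp.headers["ETag"] = etag
        resp.headers["Cache-Control"] = "max-age=60"  # cache for 60s
        return resp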
The question to ask is ultimately the extent to which it is a requirement that a change to the configuration immediately affects everything.
If that's actually a requirement, then we're talking about strong consistency which implies some combination of:
all other processing must be effectively executed one-at-a-time against the component against which the change is made (there can ultimately be only one such component: if there are multiple, they will be affected at different times)
all other processing must stop for the duration of time that it takes to propagate the change to all components
(these can be combined: you can have multiple instances depend on the configuration, stop them for as long as it takes to update them all, and then execute things in parallel... an example of this is making it static configuration in the dependent services and taking them all down to update the configuration: if these updates are sufficiently rare, you can fit them into your error/downtime budget)
Needless to say, there's a (likely surprisingly small) consistency budget you're dealing with.
If you don't actually need strong absolute consistency like I've described (and the set of problems which actually need it is perhaps surprisingly small: anything to do with money for instance doesn't actually need strong consistency because it's only money), then it's a question of how much inconsistency is acceptable (typically you'll quantify this with some sort of bounded staleness and a liveness guarantee that you don't go back in time (unless there's a really good reason to go back in time...)). At this point, we've established that you want eventual consistency, we're just haggling over "how eventual?".
For this, propagating the configuration changes via durable publish-subscribe log (Kafka being the exemplar of this approach) is probably the place to start. Components subscribe to this log and update local state as it changes (and probably store the log position and the last value in some local store to prevent inadvertently going backward in time when they initially read the log). Then you can distribute the configuration so that it's in local memory of the subscribers, though during an update, there will be a window where different subscribers will have different views of that configuration.
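A minimal subscriber sketch along those lines, using the kafka-python client (topic name and broker address are assumptions, and offset persistence is reduced to a variable here):

    import json
    from kafka import KafkaConsumer

    consumer = KafkaConsumer(
        "config-changes",               # assumed topic name
        bootstrap_servers="localhost:9092",
        auto_offset_reset="earliest",   # replay the log on first start
        enable_auto_commit=False,       # we track our position ourselves
    )

    local_config = {}  # this subscriber's in-memory view
    last_offset = -1   # persist this durably in a real system

    for msg in consumer:
        if msg.offset <= last_offset:
            continue  # never apply changes out of order / go backward
        change = json.loads(msg.value)
        local_config[change["key"]] = change["value"]
        last_offset = msg.offset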
A lot of solutions exist to externalize microservice configuration to a central location, depending on what frameworks/programming languages you used to build your services. If you happen to be using Spring, take a look at Spring Cloud Config. Of course, Eureka is not the only solution tailored for this purpose.

How to use load balancer for monolithic server

If a monolithic back-end application gets billions of requests, can we add a load balancer?
If so, how does it work to reduce the load?
In order for a load balancer to be useful, it must be possible for your application to be spread across more than one "backend" server. The "purest" version of this setup is one where the backend servers are totally stateless, have no concept of a "connection" or "session", and each request requires approximately the same amount of work/resources. In this case, you can configure the load balancer to simply proxy requests at random to a pool of backend servers. An example of such an application is a static web server.
Next, slightly less pure, are applications where the backend server doesn't need any particular state at the beginning of a "connection" or "session", but must maintain state while that session continues, so each client needs to be assigned to the same server for the duration of the session. This complicates things slightly: you then need "sticky" connections, and probably some way to route new connections to the least-loaded servers rather than picking at random (since sessions will be of different lengths). An SMTP server is an example of this type.
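To make those routing policies concrete, here is a hedged sketch (backend addresses and session counts are invented) of random, sticky, and least-loaded selection:

    import hashlib
    import random

    backends = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]
    active_sessions = {"10.0.0.1": 12, "10.0.0.2": 7, "10.0.0.3": 9}

    def pick_random():
        # Stateless backends: any server will do.
        return random.choice(backends)

    def pick_sticky(client_id):
        # Session state lives on the backend: hash the client identity
        # so the same client always lands on the same server.
        h = int(hashlib.md5(client_id.encode()).hexdigest(), 16)
        return backends[h % len(backends)]

    def pick_least_loaded():
        # Route *new* sessions to the server with the fewest sessions.
        return min(backends, key=lambda b: active_sessions[b])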
The worst kind of application in this sense is one in which the backend server needs to maintain global state in order to be useful. A database server is the classic example. This kind of application is essentially impossible to load-balance without lots of trade-offs, and such servers are usually the biggest, baddest servers that typical applications use, because it is often cheaper and easier, in engineering terms, to simply buy the meanest, most expensive hardware available than to deal with the harsh realities of distributed systems, particularly if there are dependent systems (years of accumulated application code) that implicitly make assumptions about data integrity, etc. which cannot be met under, for example, the CAP theorem.

Where exactly is REST authentication used?

I've been reading for hours, yet failed to find a clear and understandable explanation. Where exactly is REST authentication used?
Between browser and server (to replace something like PHP session/browser cookie combo)?
Between server and another server?
Between nodes/modules on the same server?
Let's say I'm developing a system from scratch, and instead of having some monolithic MVC on the server side I'd like to follow Twitter's example and make "all things REST": a system of distributed independent modules speaking to each other via REST. Can REST (authentication) then also be used between browser and server?
In order to further improve behavior for Internet-scale requirements, we add layered system constraints (Figure 5-7). As described in Section 3.4.2, the layered system style allows an architecture to be composed of hierarchical layers by constraining component behavior such that each component cannot "see" beyond the immediate layer with which they are interacting. By restricting knowledge of the system to a single layer, we place a bound on the overall system complexity and promote substrate independence. Layers can be used to encapsulate legacy services and to protect new services from legacy clients, simplifying components by moving infrequently used functionality to a shared intermediary. Intermediaries can also be used to improve system scalability by enabling load balancing of services across multiple networks and processors.
The primary disadvantage of layered systems is that they add overhead and latency to the processing of data, reducing user-perceived performance [32]. For a network-based system that supports cache constraints, this can be offset by the benefits of shared caching at intermediaries. Placing shared caches at the boundaries of an organizational domain can result in significant performance benefits [136]. Such layers also allow security policies to be enforced on data crossing the organizational boundary, as is required by firewalls [79].
The combination of layered system and uniform interface constraints induces architectural properties similar to those of the uniform pipe-and-filter style (Section 3.2.2). Although REST interaction is two-way, the large-grain data flows of hypermedia interaction can each be processed like a data-flow network, with filter components selectively applied to the data stream in order to transform the content as it passes [26]. Within REST, intermediary components can actively transform the content of messages because the messages are self-descriptive and their semantics are visible to intermediaries.
You should really read the layered system part of the Fielding dissertation.
Where exactly is REST authentication used?
It is used between a REST client and a REST service (the client sends requests, containing auth headers, to the service). A REST client can be in a browser, on another server, on your own server (e.g. a load balancer), etc. What counts as a REST client and what counts as a REST service depends on the current context. With REST you have a layer hierarchy in which the upper layer contains the clients that call the services of the next layer below, and so on... The components (clients, services) of this structure do not know of the existence of the layer hierarchy.
So, for example, it can happen that a proxy relays requests to the next layer without authorization, because authorization will be done by other components. It can also happen that you authenticate your clients and add a secondary auth header with the user's identity or permissions, so the layers below don't have to process the username and password again. There are many options...
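As a concrete sketch of those two situations (the URLs, token, and the X-Authenticated-User header are all made up for illustration):

    import requests

    # A client one layer up sends its credentials with each request:
    resp = requests.get(
        "https://api.example.com/orders/42",
        headers={"Authorization": "Bearer eyJ...token..."},
        timeout=5,
    )

    # A gateway that already authenticated the caller might forward the
    # request with a secondary header carrying the resolved identity,
    # so the layers below never see the original credentials:
    resp = requests.get(
        "https://internal.example.com/orders/42",
        headers={"X-Authenticated-User": "alice"},
        timeout=5,
    )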
Just to talk about OAuth: it is for authorizing access by 3rd-party (non-trusted) clients to user accounts. In that case the client runs on a different server and sends an access token (instead of a username and password) registered by a user. This 3rd-party client uses the allowed subset of that user's permissions. Many users can register the same 3rd-party client with different access tokens, of course.
REST is an architectural style, and REST has nothing to do with authentication/authorization. That said, there are authentication/authorization mechanisms that provide RESTful APIs for a REST style of consuming services: OpenID and OAuth/OAuth2. They are designed to be used between client and server, and more (you can read more about them).
Also you may be interested in reading 'whats-the-difference-between-openid-and-oauth'
Hope this helps!

Interprocess messaging - MSMQ, Service Broker,?

I'm in the planning stages of a .NET service that continually processes incoming messages, involving various transformations, database inserts and updates, etc. As a whole, the service is huge and complicated, but the individual tasks it performs are small, simple, and well-defined.
For this reason, and to allow for easy expansion in the future, I want to split the service into several smaller services which each perform part of the processing before passing it on to the next service in the chain.
To achieve this, I need some kind of intermediary messaging system that will pass messages from one service to another. I want this to happen in such a way that if a link in the chain crashes or is taken offline briefly, the messages will queue up and get processed once the destination comes back online.
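As a sketch of one link in such a chain, here is roughly what it looks like with RabbitMQ via the pika client standing in for the queuing layer (queue names are invented; MSMQ or Service Broker would play the same role):

    import pika

    conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    ch = conn.channel()
    # Durable queues mean messages accumulate while a downstream
    # service is offline and survive a broker restart.
    ch.queue_declare(queue="transform-in", durable=True)
    ch.queue_declare(queue="db-insert-in", durable=True)

    def handle(ch, method, properties, body):
        transformed = body.upper()  # placeholder for the real work
        ch.basic_publish(
            exchange="",
            routing_key="db-insert-in",  # next link in the chain
            body=transformed,
            properties=pika.BasicProperties(delivery_mode=2),  # persist
        )
        ch.basic_ack(delivery_tag=method.delivery_tag)  # ack after hand-off

    ch.basic_consume(queue="transform-in", on_message_callback=handle)
    ch.start_consuming()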
I've always used message queuing for this type of thing, but have recently been made aware of SQL Service Broker which appears to do something similar. Is SQLSB a viable alternative for this scenario and, if so, would I see any performance benefits by using that instead of standard Message Queuing?
Thanks
It sounds to me like you may be after a service bus architecture. This would provide you with the coordination and fault tolerance you are looking for. I'm most familiar and partial to NServiceBus, but there are others including Mass Transit and Rhino Service Bus.
If most of these steps initiate from a database state and end up in a database update, then merging your message storage with your data storage makes a lot of sense:
a single product to backup/restore
consistent state backups
a single high-availability/disaster recoverability solution (DB mirroring, clustering, log shipping etc)
database scale storage (IO capabilities, size and capacity limitations etc as per the database product characteristics, not the limits of message store products).
a single product to tune, troubleshoot, administer
In addition, there are serious performance considerations: having your message store be the same as the data store means you are not required to do a two-phase commit on every message interaction. Using a separate message store requires you to enrol the message store and the data store in a distributed transaction (even if both are on the same machine), which requires two-phase commit and is much slower than the single-phase commit of database-only transactions.
In addition, a message store in the database, as opposed to an external one, has advantages like queryability (run SELECT over the message queues).
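To make the single-transaction point concrete, here is a toy sketch with sqlite3 (tables invented for illustration): the business row and the queued message commit atomically together, and the queue is queryable like any other table:

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
    db.execute("CREATE TABLE message_queue (id INTEGER PRIMARY KEY, body TEXT)")

    with db:  # ONE local transaction: both rows commit or neither does
        db.execute("INSERT INTO orders (total) VALUES (?)", (99.5,))
        db.execute("INSERT INTO message_queue (body) VALUES (?)",
                   ('{"event": "order_created"}',))

    # No two-phase commit needed, and you can SELECT over the queue:
    print(db.execute("SELECT body FROM message_queue").fetchall())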
Now if we translate the abstract terms, with 'message store in the database' being Service Broker and 'non-database message store' being MSMQ, you can see my point about why SSB will run circles around MSMQ any time.
My recent experience with both approaches (starting with SQL Server Service Broker) led me to a situation in which I cry out to get my messages out of SQL Server. The problem is quasi-political, but you might want to consider it: in my organisation SQL Server is managed by a specialized DBA, while application servers (i.e. messaging like NServiceBus) are managed by developers and the network team. Any change to the database servers requires a painful performance analysis from the DBA and is immersed in fear that our queuing engine, living in the same space, might drag down standard SQL responsibilities.
SSSB is pretty difficult to manage (not unlike messaging middleware), but the difference is that I am more allowed to screw something up in the messaging world (the worst that may happen is some pile of messages building up somewhere and logs filling up), whereas I can't afford any mistakes in the SQL world, where customer transactional data lives and is vital for the business (including data from legacy systems). I really don't want to get those 'unexpected database growth', 'wait time alert', or 'why is my temp db growing without end' emails anymore.
I've learned that application servers are cheap. Just add message handlers, add machines... easy. Virtually no license costs. With SQL Server it is exactly the opposite. It now appears to me that using Service Broker for messaging is like using an expensive car to plough a potato field. It is much better at other things.

REST service with load balancing

I've been considering the advantages of REST services, the whole statelessness and session affinity "stuff". What strikes me is that if you have multiple deployed versions of your service on a number of machines in your infrastructure, and they all act on a given resource, where is the state of that resource stored?
Would it make sense to have a single host in the infrastructure that utilises a distributed cache, so that any state changed inside a service is simply fetched from/put to the cache? This would allow any number of services, deployed for load balancing reasons, to all see the same state views of resources.
If you're designing a system for high load (which usually implies high reliability), having a single point of failure is never a good idea. If the service providing the consistent view goes down, at best your performance decreases drastically as the database is queried for everything and at worst, your whole application stops working.
In your question, you seem to be worried about consistency. If there's something to be learned about eBay's architecture, it's that there is a trade-off to be made between availability/redundancy/performance vs consistency. You may find 100% consistency is not required and you can get away with a little "chaos".
A distributed cache (like memcache) can be used as a backing for a distributed hash table, which has been used extensively to create scalable infrastructures. If implemented correctly, caches can be redundant and can join and leave the ring dynamically.
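As a sketch of that setup with the pymemcache client (host and key scheme are assumptions), every service instance reads and writes shared resource state through the cache rather than holding it locally:

    from pymemcache.client.base import Client

    cache = Client(("localhost", 11211))  # assumed memcached host

    def update_resource(resource_id, state):
        # Any instance's write is immediately visible to all instances.
        cache.set(f"resource:{resource_id}", state)

    def get_resource(resource_id):
        return cache.get(f"resource:{resource_id}")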
REST is also inherently cacheable, as the HTTP layer can be cached with the appropriate use of headers (ETags) and software (e.g. Squid as a reverse proxy). The one drawback of specifying caching through headers is that it relies on the client interpreting and respecting them.
However, to paraphrase Phil Karlton, caching is hard. You really have to be selective about the data you cache, when you cache it, and how you invalidate that cache. Invalidation can be done in the following ways:
Through a timer based means (cache for 2 mins, then reload)
When an update comes in, invalidating all caches containing the relevant data.
I'm partial to the timer-based approach as it's simpler to implement, and you can say with relative certainty how long stale data will live in the system (e.g. company details will be updated within 2 hours, stock prices within 10 seconds).
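A minimal sketch of the timer-based approach (the price lookup is a placeholder): entries expire after a fixed TTL, so the maximum staleness is known by construction:

    import time

    def fetch_price(symbol):
        return 42.0  # placeholder for the real (slow) lookup

    class TTLCache:
        """Timer-based invalidation: entries expire after ttl seconds."""
        def __init__(self, ttl_seconds):
            self.ttl = ttl_seconds
            self.store = {}  # key -> (value, expiry timestamp)

        def get(self, key, reload):
            value, expires = self.store.get(key, (None, 0.0))
            if time.monotonic() >= expires:
                value = reload(key)  # stale or missing: fetch fresh
                self.store[key] = (value, time.monotonic() + self.ttl)
            return value

    # Stale stock prices live at most 10 seconds, by construction.
    prices = TTLCache(ttl_seconds=10)
    quote = prices.get("ACME", reload=fetch_price)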
Finally, high load also depends on your use case; depending on the volume of transactions, none of this may apply. A methodology (if you will) might be the following:
Make sure the system is functional without caching (Does it work)
Does it meet performance criteria (e.g. requests/sec, uptime goals)
Optimize the bottlenecks
Implement caching where required
After all, you may not have a performance problem in the first place, and you may be able to get away with a single database and a good backup strategy.
I think the more traditional view of load balancing web applications is that you would have your REST service on multiple application servers and they would retrieve resource data from single database server.
However, with the use of hypermedia, REST services can easily partition the application vertically so that some resources come from one service and some from another service on a different server. This would allow you to scale to some extent, depending on your domain, without having a single data store. Obviously with REST you would not be able to do transactional updates across these services, but there are definitely scenarios where this partitioning is valuable.
If you are looking at architectures that need to really scale then I would suggest looking at Greg Young's stuff on CQS Architecture (video) before attempting to tackle the problems of a distributed cache.