Difference between centralized and distributed computing

Can anyone tell me the differences between centralized and distributed computing?

Centralized
A system with a centralized multiprocessor parallel architecture. Since the late 1980s, centralized systems have been progressively replaced by distributed systems.
Characteristics of a centralized system:
Non-autonomous components
Usually homogeneous technology
Multiple users share the same resources at all times
Single point of control
Single point of failure
Distributed
A set of tightly coupled programs executing on one or more computers that are interconnected through a network and coordinate their actions. These programs know about one another and carry out tasks that none could carry out in isolation.
Characteristics of a distributed system:
Autonomous components
Mostly built using heterogeneous technology
System components may be used exclusively
Concurrent processes can execute
Multiple points of failure
Requirements of a distributed system:
Scalability - the possibility of adding new hosts
Openness - easily extended and modified
Heterogeneity - supports various hardware and software platforms
Resource sharing - hardware, software and data
Fault tolerance - the ability to function correctly even if faults occur

Centralized: all calculations are done on one particular computer (system). Example: you have a dedicated server for calculating data.
Distributed: the calculation is distributed across multiple computers. Example: when you have a large amount of data, you can divide it and send each part to a particular computer, which performs the calculation for its part.
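To make the contrast concrete, here is a minimal sketch of the same idea in code. It is only an illustration under stated assumptions: separate Python processes on one host stand in for separate computers, and the multiprocessing module stands in for the network.

    # Summing a large list either on one machine (centralized) or by splitting it
    # across several workers (distributed-style). Purely illustrative.
    from multiprocessing import Pool

    def partial_sum(chunk):
        # Each "computer" sums only the part of the data it was given.
        return sum(chunk)

    if __name__ == "__main__":
        data = list(range(1_000_000))

        # Centralized: one process does all the work.
        centralized_result = sum(data)

        # Distributed-style: split the data into 4 parts and combine the partial results.
        chunks = [data[i::4] for i in range(4)]
        with Pool(processes=4) as pool:
            distributed_result = sum(pool.map(partial_sum, chunks))

        assert centralized_result == distributed_result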

Main basic differences are:
distrib-systems have no global state
no shared memory
no shared variables
distrib-systems have no shared time clock
therefore the order of events is difficult to determine
distrib-systems can have race conditions
for race conditions, see http://en.wikipedia.org/wiki/Race_condition
So "computing" in a distributed environment is very difficult. Do you have a concrete question about programming models, or anything else?

Centralized Systems
"In Centralized Systems,several jobs are done on a particular central processing unit(CPU)"
Distributed Systems
"In Distributed Systems,jobs are distributed among several processor.The Processor are interconnected by a computer network"
(Sheheryar ,NUML)

Briefly, centralized computing, as the name itself suggests, is concerned with just a single server: the particular operation is performed at that server location and nowhere else.
Distributed computing is used where the system requirements are quite large: the job is distributed to several processors and the solutions are then combined, keeping in mind that the processors are interconnected by a computer network.

Centralized system: a system in which computing is done at a central location, using terminals attached to a central computer; in brief, a mainframe and dumb terminals, where all computation is done on the mainframe through the terminals.
Distributed system: a collection of independent computers that appears to its users as a single coherent system. Hardware is distributed, consisting of n processing elements (processor and memory); software is also distributed: there is no centralized OS, each processing element has its own OS, there is no physically centralized file system, and inter-process communication happens via message passing at the lowest level.
Big note: the main difference is reliability. In a distributed system, if one machine crashes, the system as a whole can still survive.

METHOD OF ARBITRATION: In all but the simplest systems, more than one module may need control of the bus.
In a centralized scheme, a single hardware device, referred to as a bus controller or arbiter, is responsible for allocating time on the bus.
In a distributed scheme, there is no central controller. Rather, each module contains access control logic and the modules act together to share the bus.
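The same centralized/distributed split shows up here at the hardware level. Below is a toy sketch (purely illustrative, not any real bus protocol) contrasting a single arbiter that grants the bus with a token-passing scheme in which the modules arbitrate among themselves; the function names and priority rules are made up for the example.

    def centralized_arbiter(requests):
        """A single controller picks one requesting module and grants it the bus."""
        return min(requests) if requests else None  # e.g. fixed priority: lowest id wins

    def distributed_token_passing(requests, token_holder, num_modules):
        """No controller: a token circulates and the next requesting module takes the bus."""
        for step in range(1, num_modules + 1):
            candidate = (token_holder + step) % num_modules
            if candidate in requests:
                return candidate
        return None

    print(centralized_arbiter({2, 5}))                                        # -> 2
    print(distributed_token_passing({2, 5}, token_holder=3, num_modules=8))   # -> 5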

In a centralized system, if the server fails it affects the whole system, because the server controls the whole operation.
In a distributed system, if one machine fails it doesn't affect the operations of the other computers, because they are independent and distributed in operation.

Let us try to understand this with an example.
Say you are carrying a large amount of money. You are in a crowded train, where your pocket may be picked and you might lose money. What is the ideal strategy for carrying money?
Put all money in a single pocket: In this case, it is easy for you to just put the money in the pocket and be done. When you go back home, you can simply take the money out of the pocket and count it. But wait: what if your pocket is picked? You lose ALL the money (bankrupt, eh?). It seems it is not the best idea to store all the money in a SINGLE pocket. Let us think about what else we can do.
Divide your money: Put some of it in the left pocket, put some in the right pocket and maybe put some in your bag (which has a limited capacity). You need to devise a strategy to divide the money you carry. Also, when you go back home, you will have to spend time collecting money from the different pockets and gathering it in one place. However, we are in a better situation now. If one of our pockets (or the bag) is picked, we do not lose ALL of the money. The chances of your bag, left pocket and right pocket all being picked are fairly low. With a little overhead of dividing money, you can now avoid losing all of it. Isn't that better?
This is how distributed systems work. They divide the information (money in our case) and keep it on different machines (pockets and bags for us). This way, if one of the machines goes down, we are not at a big loss. That is, we do not have a single point of failure.
Another important thing that distributed systems implement is data replication. They put replicas of the same information on multiple machines. This way, if one of the machines goes down, we do not lose the information. So we now have something called fault tolerance.
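A minimal sketch of the two ideas from the analogy, assuming plain Python dictionaries stand in for machines: partition data so a single failure does not lose everything, and replicate each piece so a failure loses nothing. All names and the replication rule are made up for the illustration.

    # Three "nodes", each key stored on two of them.
    NODES = {0: {}, 1: {}, 2: {}}
    REPLICAS = 2

    def put(key, value):
        primary = hash(key) % len(NODES)
        for i in range(REPLICAS):
            NODES[(primary + i) % len(NODES)][key] = value

    def get(key, down=()):
        primary = hash(key) % len(NODES)
        for i in range(REPLICAS):
            node = (primary + i) % len(NODES)
            if node not in down:           # skip crashed nodes
                return NODES[node][key]
        raise KeyError(key)

    put("balance:alice", 100)
    # Still readable even if the primary node for this key has gone down.
    print(get("balance:alice", down={hash("balance:alice") % 3}))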

Related

What is meant by Distributed System?

I am reading about distributed systems and getting confused about what it really means.
I understand at a high level that it means a set of different machines that work together to achieve a single goal.
But this definition seems too broad and loose. I would like to give some points to explain the reasons for my confusion:
I see a lot of people referring to microservices as a distributed system, where functionalities like Order, Payment etc. are distributed into different services, whereas others refer to multiple instances of the Order service that are trying to serve customers and possibly use some consensus algorithm to agree on shared state (e.g. the current inventory level).
When talking about distributed databases, I see a lot of people talk about different nodes used to store/serve part of the user requests, e.g. records with primary keys 'A-C' on the first node, 'D-F' on the second node, etc. At a high level it looks like sharding.
When talking about distributed rate limiting, some refer to multiple application nodes (so-called distributed application nodes) using a single rate limiter, while others mean that the rate limiter itself has multiple nodes with a shared cache (like Redis).
It feels like people use "distributed systems" to refer to microservices architecture, horizontal scaling, partitioning (sharding) and anything in between.
I am reading about distributed systems and getting confused about what it really means.
As commented by #ReinhardMänner, a good general definition of a distributed system (DS) is at https://en.wikipedia.org/wiki/Distributed_computing
A distributed system is a system whose components are located on different networked computers, which communicate and coordinate their actions by passing messages to one another from any system. The components interact with one another in order to achieve a common goal.
Anything that fits the above definition can be referred to as a DS. All the mentioned examples, such as microservices, distributed databases, etc., are specific applications of the concept or implementation details.
The statement "X is a distributed system" does not inherently imply any of these details; they must be specified explicitly for each DS. For example, a distributed database does not necessarily mean that sharding is used.
I'll also draw from Wikipedia, but I think that the second part of the quote is more important:
A distributed system is a system whose components are located on different networked computers, which communicate and coordinate their actions by passing messages to one another from any system. The components interact with one another in order to achieve a common goal. Three significant challenges of distributed systems are: maintaining concurrency of components, overcoming the lack of a global clock, and managing the independent failure of components. When a component of one system fails, the entire system does not fail.
A system that constantly has to overcome these problems, even if all services are on the same node, or if they communicate via pipes/streams/files, is effectively a distributed system.
Now, trying to clear up your confusion:
Horizontal scaling was there with monoliths before microservices. Horizontal scaling is basically achieved by division of compute resources.
Division of compute requires dealing with synchronization, node failure and multiple clocks. But that is still cheaper than scaling vertically. That's where you might turn to consensus: implementing consensus in the application, using a dedicated service such as ZooKeeper, or abusing a DB table for the purpose.
Monoliths present 2 problems that microservices solve: address-space dependency (i.e. someone's component may crash the whole process and thus your component) and long startup times.
While microservices solve these problems, these problems aren't what makes them into a "distributed system". It doesn't matter if the different processes/nodes run the same software (monolith) or not (microservices), it matters that they are different processes that can't easily communicate directly (e.g. via function calls that promise not to fail).
In databases, scaling horizontally is also cheaper than scaling vertically. The two components of horizontal DB scaling are division of compute (effectively, a distributed system) and division of storage (sharding), as you mentioned, e.g. A-C, D-F etc.
Sharding of storage does not define distributed systems - a single compute node can handle multiple storage nodes. It's just that it's much more useful for a database that divides compute to also shard its storage, so you often see them together.
Distributed rate limiting falls under "maintaining concurrency of components". If every node does its own rate limiting, and they don't communicate, then the system-wide rate cannot be enforced. If they wait for each other to coordinate enforcement, they aren't concurrent.
Usually the solution is "approximate" rate limiting where components synchronize "occasionally".
If your components can't easily (= no latency) agree on a global rate limit, that's usually because they can't easily agree on a global anything. In that case, you're effectively dealing with a distributed system, even if all components just threads in the same process.
(that could happen e.g. if you plan to scale out but haven't done so yet, so you don't allow your threads to communicate directly.)
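Here is a minimal sketch of that "approximate" rate limiting, assuming a fixed one-second window and a static number of nodes; the class, the numbers and the reconcile hook are all illustrative, not a real library's API.

    import time

    GLOBAL_LIMIT_PER_SEC = 100
    NUM_NODES = 4

    class NodeLimiter:
        def __init__(self):
            self.local_budget = GLOBAL_LIMIT_PER_SEC / NUM_NODES  # optimistic local share
            self.window_start = time.time()

        def allow(self):
            now = time.time()
            if now - self.window_start >= 1.0:        # new one-second window
                self.window_start = now
                self.local_budget = GLOBAL_LIMIT_PER_SEC / NUM_NODES
            if self.local_budget >= 1:
                self.local_budget -= 1
                return True
            return False

        def reconcile(self, unused_budget_from_peers):
            # Called "occasionally" (e.g. via Redis or gossip): pick up budget the other
            # nodes did not use, approaching the global limit without per-request sync.
            self.local_budget += unused_budget_from_peers

    limiter = NodeLimiter()
    print(sum(limiter.allow() for _ in range(30)))  # at most 25 allowed in this window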

How to properly define and differentiate between nodes, processes, transactions & operations?

As part of my research I need to provide the reader with a comprehensive introduction to distributed systems. I am currently struggling to properly define a number of concepts that recur in the literature on distributed systems and transactions. These are (a) nodes, (b) processes, (c) transactions and (d) operations. I could really use some help in understanding how they relate, as I keep mixing up nodes with processes and transactions with operations. Any input is appreciated!
I have already tried to grasp these concepts by researching the following literature:
Distributed Systems: Concepts and Design (G. Coulouris et al.)
A brief introduction to distributed systems (A.S. Tanenbaum)
I'm not sure what kind of ambiguity you perceive in these terms, so it's harder to give exactly the right answer. The terms have the same meaning in distributed systems terminology as in any other part of information technology.
To be more concrete.
A node is usually "a machine" which runs one or more processes. A process executes operations. Operations may be grouped into a transaction (a transaction is composed of operations).
I quickly searched the resources you referred to, and they say:
A computing element, which we will generally refer to as a node, can be either a hardware device or a software process.
A node runs processes. But the node itself can be real hardware (a machine) or a virtual machine, which is itself a process that runs on some machine (real hardware).
From the distributed system perspective you don't care what the node is in reality (real hardware or virtual software); it is a "container" for running processes.
A process is "a runtime". It processes something: numbers, data, messages... The chunks of work processed inside a process are operations. E.g. you save data to a database, and you do it as an operation.
A transaction defines a unit of work which consists of several operations. The transaction gives you guarantees over those operations. What those guarantees are depends on the model you use. If you think about ACID transactions (as defined in the 1983 paper Principles of Transaction-Oriented Database Recovery), then you are guaranteed that either all operations are successfully processed or none of them is (Atomicity), consistency is maintained (Consistency), parallel transactions do not interfere (Isolation), and the transaction outcome is persistent (Durability).
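A small sketch of that relationship using SQLite (chosen only because it ships with Python): two UPDATE operations form one transaction, and a simulated failure shows that either both take effect or neither does.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
    conn.executemany("INSERT INTO accounts VALUES (?, ?)", [("alice", 100), ("bob", 0)])
    conn.commit()

    try:
        with conn:  # the with-block is the transaction; its two operations commit together
            conn.execute("UPDATE accounts SET balance = balance - 60 WHERE name = 'alice'")
            conn.execute("UPDATE accounts SET balance = balance + 60 WHERE name = 'bob'")
            raise RuntimeError("simulated crash before the transaction commits")
    except RuntimeError:
        pass

    # The failed transaction was rolled back: Alice still has 100, Bob still has 0.
    print(dict(conn.execute("SELECT name, balance FROM accounts")))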

StarCounter and CAP

I have been reading about a database named Starcounter. It claims that it can handle loads that otherwise only a "NoSQL" database could handle, without dropping consistency. As far as I understand the CAP theorem, if you keep consistency, you lose availability or partition tolerance. So what trick makes StarCounter work?
I can imagine that StarCounter is fast, but the claim that NoSql needs to drop consistency to keep up seems a little bit strange to me. Can anyone please explain?
The short answer
The CAP theorem (aka Brewer's theorem) cannot be beaten for a single piece of information (like a consistent database). If you have a horizontally scaled database, you won't get both consistency and performance. This conclusion comes from the laws of physics and can be deduced from Brewer's theorem and Einstein's theories of relativity. You need to scale in/up, not out. Not very "cloudy", but as the enemies of Galileo would probably confess if they were alive today, nature does a poor job of honouring human fashion.
Scaling consistent data
I'm sure there are other approaches, but Starcounter works by hosting the database image in RAM. Instead of moving database data to the application code, parts of the application code are moved to the database. Only data in the final response gets moved from its original place in RAM (where the data was in the first place). This makes most of the data stay put even if there are millions of requests processed every second. The downside is that the database needs to know the programming language of your application logic. The upside, however, is obvious if you have ever tried to serve millions of HTTP requests/sec, each requiring extensive database access.
A more thorough answer
The question is a good one. It is no wonder you find it strange, as it was only a few years back that CAP was proven (turned into a theorem). Many developers are as disappointed as a kid would be when a theoretical physicist tells him to stop looking for the perpetual motion machine because it cannot work. We still want the scale-out consistent database, don't we?
The CAP theorem
The CAP theorem states that a single piece of information cannot have consistency (C), availability (A) and partition tolerance (P) all at once. It applies to a unit of information (such as a database). You can of course have independent pieces of information that operate differently. One piece could be AP, another could be CA and a third could be CP. You just can't have the same information being CAP.
The impossibility of the 'P' in a consistent and available database comes down to how a scaled-out database MUST signal between its nodes. The conclusion must be that, even a hundred years from now, CAP says a single piece of consistent data will have to live on hardware interconnected by hard wires or light beams.
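A toy illustration (not a proof) of the choice the theorem forces during a partition, with two Python dictionaries standing in for replicas that cannot reach each other: a write is either rejected (consistent but unavailable) or accepted on one side (available but divergent).

    replica_a = {"x": 1}
    replica_b = {"x": 1}
    partitioned = True  # the link between the replicas is down

    def write_cp(key, value):
        # Consistent-but-unavailable choice: refuse writes we cannot replicate.
        if partitioned:
            raise RuntimeError("write rejected: cannot reach the other replica")
        replica_a[key] = replica_b[key] = value

    def write_ap(key, value):
        # Available-but-inconsistent choice: accept locally; the replicas now disagree.
        replica_a[key] = value

    write_ap("x", 2)
    print(replica_a["x"], replica_b["x"])  # 2 1 -> readers may see stale data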
The problem with the P in CAP
The problem lies in performance if you apply horizontal scaling to an available, consistent database. Since good performance was the very reason to scale horizontally in the first place, this is a very bad thing. Because every node needs to communicate with the other nodes whenever there is database access in order to achieve consistency, and given that signalling is ultimately limited by the speed of light, you are left with the sad but true fact that database scientists (as well as CPU scientists) are not just being stubborn when they fail to see scale-out as a magical silver bullet. It will not happen because it cannot happen (however, parts of your database could be placed in an AP set, so remember, we are talking about consistent data here). Add Einstein's theories to the CAP theorem and the small box beats the cloudy data-center for consistent data.
Perpetual machines and CAP
The state of things in the database community is a little bit like the state of perpetual motion machines when horse and carriage was the way to get to work. Without any theoretical evidence against them, the patent offices granted hundreds of patents for impossible perpetual machines. Today we may laugh at this, but we have a similar situation in the database industry with consistent scale-out databases. When you hear somebody claim that they have a scale-out ACID database, be cautious. It was only after the dot-com crash that mathematicians at MIT proved Brewer right and the CAP theorem was officially born, so the hunt for the impossible has unfortunately not died off just yet. You can compare this, if you want, to the way laggards kept trying to invent the perpetual motion machine for years after modern theoretical physics should reasonably have put a stop to it. Old habits die hard (my apologies to anyone on Stack Overflow still making drawings of bearings and arms moving ad infinitum of their own accord - I don't mean to be offensive).
CAP and performance
All is not lost, however. Not all pieces of information need to be consistent. Not all pieces need to scale out. You just have to accept Brewer's theorem and make the best of it.
For applications such as Facebook, consistency is dropped. This is okay, as data is entered once and then manipulated by a single user. Still, we can experience the side effects in everyday Facebook usage, such as things popping in and out of existence for a while.
However, in most business applications, data needs to be correct. The sum of all accounts in your bookkeeping needs to amount to zero. Your stock inventory must equal 8 if you sold 2 out of 10 items, even if there are multiple users buying from the same stock.
The problem with scaling out available data is that you have to make do without partition tolerance. This fancy word simply means that you have to signal between the nodes in your cloud at all times. And as it takes light a few nanoseconds to travel a single meter, this becomes impossible without your scale-out resulting in less performance rather than more. Of course, this is only true for consistent data. The implications of this have been known by the engineers of Intel, AMD, Oracle et al. for a long time. It is not that their scientists haven't heard of scale-out. It is just that they have come to accept the world as Einstein described it.
Some comfort in the gloom
If you do the math, you find that a single PC has instructions to spare for each human being living on Earth for every second it is running (google 'modern CPU' and 'MIPS'). If you do some more math, like taking the total turnover of Amazon.com (you can find it at www.nasdaq.com) divided by the price of an average book, you will find that the total number of sales transactions fits in the RAM of a single modern PC. The cool thing is that the number of items, customers, orders, products etc. occupies the same amount of space in 2012 as it did in 1950. Images, video and audio have increased in size, but numeric and textual information does not grow per item. Sure, the number of transactions grows, but not at the same pace as computer power grows. So the logical solution is to scale out read-only and AP data and "scale in/up" business data.
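A hedged back-of-envelope version of that arithmetic; every input below is a rough assumption (circa-2012 figures, to match the answer), not data taken from the answer itself.

    world_population = 7e9
    cpu_instructions_per_sec = 1e11          # a fast multi-core CPU, order of magnitude
    print(cpu_instructions_per_sec / world_population)   # ~14 instructions per person per second

    amazon_revenue_2012_usd = 6e10           # rough assumption
    average_item_price_usd = 25              # rough assumption
    bytes_per_order_record = 200             # id, customer, product, amount, timestamp...
    orders = amazon_revenue_2012_usd / average_item_price_usd
    print(orders * bytes_per_order_record / 1e9)   # ~480 GB: large, but within a big server's RAM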
"Scale-in" instead of "scale-out"
Database engines and business logic running in a VM (like the Java VM or the .NET CLR) typically use fairly efficient machine code. This means that moving memory is the overshadowing bottleneck of total throughput for a consistent database. This is often referred to as the memory wall (Wikipedia has some useful information).
The trick is to transfer code to the database image instead of data from the database image to the code (if using an MVC or MVVM pattern). This means that the consuming code executes in the same address space as the database image and that data is never moved (the disk merely secures transactions and images). Data can stay in the original database image and does not have to be copied into the memory of the application. Instead of treating the database as a RAM database, the database is treated as primary memory. Everything stays put.
Only data that is part of the final user response is moved out of the database image. For a large-scale application with hundreds of millions of simultaneous users this typically amounts to only a few million requests per second, something that a single PC has no problem handling, given that the HTTP packaging is done on gateway servers. Fortunately, such servers scale out beautifully as they don't need to share data.
As it turns out, the disk is fast at sequential writes, so a RAIDed disk can persist terabytes of changes every minute.
Horizontal scaling in Starcounter
Normally you do not scale out a Starcounter node. It scales in rather than out. This works well for a few million simultaneous users. To go above that, you need to add more Starcounter nodes. They can be used to partition data (but then you lose consistency, and Starcounter is not designed for partitioning, so it is less elegant than solutions such as VoltDB). So a better alternative is to use the additional Starcounter nodes as gateway servers. These servers simply accumulate all incoming HTTP requests for a millisecond at a time. This might sound like a short amount of time, but it is enough to accumulate thousands of requests if you have decided you need to scale Starcounter. The batches of requests are then sent to the ZLATAN node (Zero LATency Atomicity Node) a thousand times a second. Each such batch can contain thousands of requests. In this way, a few hundred million user sessions can be served by a single ZLATAN node. Although you can have several ZLATAN nodes, only one ZLATAN node is active at a time. This is how the CAP theorem is honoured. To go above that, you need to consider the same trade-off as Facebook and others.
Another important note is that the ZLATAN node does not serve applications with data. Instead, the application's controller code is run by the ZLATAN node. The cost of serializing/deserializing and sending data to an application is far greater than the cost of processing the controller logic. I.e. the code is sent to the database instead of the other way around (the traditional approach is that the application asks for data or sends data).
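A minimal sketch of that batching idea (illustrative only, not Starcounter's actual implementation): a gateway buffers incoming requests for about a millisecond and forwards them to the single writer node as one batch. The class names and the one-millisecond window are assumptions made for the example.

    import time

    class Writer:  # stands in for the single active "ZLATAN"-style node
        def apply_batch(self, batch):
            print(f"applied {len(batch)} requests in one batch")

    class Gateway:
        def __init__(self, writer, window_seconds=0.001):
            self.writer = writer
            self.window = window_seconds
            self.buffer = []
            self.window_start = time.monotonic()

        def handle(self, request):
            self.buffer.append(request)
            if time.monotonic() - self.window_start >= self.window:
                self.flush()

        def flush(self):
            if self.buffer:
                self.writer.apply_batch(self.buffer)   # one round-trip for many requests
                self.buffer = []
            self.window_start = time.monotonic()

    gw = Gateway(Writer())
    for i in range(5000):
        gw.handle({"op": "buy", "item": i})
    gw.flush()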
Making the "shared-everything" node faster by doing less
Using the database as a "heap" for the programming language, instead of as a remote system for serialization and deserialization, is a trick that Starcounter calls VMDBMS. If the database is in RAM, you should not move data from one place in RAM to another place in RAM, which is what happens with most RAM databases.
There is no 'trick'. Starcounter is talking about speed, while CAP/NoSQL are talking about scalability. There is a trade-off between features+scalability vs speed.
Sometimes it's OK to ignore scalability if you can prove there are bottlenecks elsewhere. For instance, a new startup shouldn't worry about their website scaling to a million users, they should worry about getting their first hundred users. (Does anyone remember how often Twitter was down in the early days?) Starcounter can be useful if their transaction rate is much greater than your web page hit rate.
On the other hand, I don't trust anyone who lumps all "NoSQL" Databases together. The various NoSQL databases are more different than alike. They have radically different architectures and properties. Some of them scale to thousands of nodes, some of them don't scale beyond one node. Sometimes adding scalability slows you down. Sometimes removing features speeds you up.
http://strata.oreilly.com/2010/12/strata-gems-mysql-handlersocket.html

Basics difference between distributed computing and interprocess communication?

I know the theoretical definitions of distributed computing and interprocess communication.
But in practice I have not been able to work out when to go for distributed computing and when for interprocess communication.
Can you give me some example scenarios where we would go for distributed computing or for interprocess communication?
Interprocess communication basically means communication between processes.
Mostly this concept comes up when studying parallel programming and the working of operating systems.
The topic is too huge to explain here; it is a full subject in itself. Try googling interprocess communication and read the basic definitions.
2)
my initial understanding is:
Imagine an office: why does it have several employees in one department? Because many brains and much manpower are needed to bring one task to completion. One man could do the job, but it might take days, and what if he gets sick? So: distributed...
Now, how do the processes/people doing their independent parts of the job communicate, across different computers, different CPUs of the same computer, or different cabins of the same office building?
"Shout!! Hey, I have done my work, take the result and send me more!! Who is in charge here??"
No, right?
So here comes the INTER PROCESS COMMUNICATION subject.
Note: please note that I am also a learning person :-) so do not take the above as right without doing your own googling; I am not responsible for any .........
Interprocess communication is typically defined as communication between multiple processes on a single machine. Distributed computing is multiple processes distributed across a network and executed on the desired host boxes. To me it makes sense to implement the desired interprocess communication in the same fashion as the distributed processes transmit their results back to the distributor/host. That way a weaker machine can continue to process data while a more powerful box runs a greater load.
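A minimal sketch of the distinction, assuming Python's multiprocessing module: two processes on the same machine exchange a result through a queue (classic IPC), while the comment at the end notes what would change for the distributed case.

    from multiprocessing import Process, Queue

    def worker(q):
        q.put("result from a process on the SAME machine")

    if __name__ == "__main__":
        q = Queue()
        p = Process(target=worker, args=(q,))
        p.start()
        print(q.get())   # classic IPC: single-machine primitives (queues, pipes, shared memory)
        p.join()

        # For distributed computing the only option is the network (sockets, HTTP, RPC):
        # a socket bound to "0.0.0.0" could receive the same message from a different host.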

Can a shared ready queue limit the scalability of a multiprocessor system?

Can a shared ready queue limit the scalability of a multiprocessor system?
Simply put, most definitely. Read on for some discussion.
Tuning a service is an art form, or at least requires benchmarking (and the space of concepts you need to benchmark is huge). I believe that it depends on factors such as the following (this list is not exhaustive):
how much time an item picked up from the ready queue takes to process
how many worker threads there are
how many producers there are, and how often they produce
what type of wait concepts you are using: spin-locks or kernel waits (the latter being slower)
So, if items are produced often, the number of threads is large, and the processing time is low, the data structure could be locked for large windows, thus causing thrashing.
Other factors may include the data structure used and how long it is locked for. E.g., if you use a linked list to manage such a queue, the add and remove operations take constant time; a priority queue (heap) takes a few more operations on average when items are added.
If your system is for business processing, you could take this question out of the picture by just using:
a process-based architecture, spawning multiple producer/consumer processes and using the file system for communication,
a non-preemptive, collaborative threading language or runtime such as Stackless Python, Lua or Erlang.
Also note: synchronization primitives cause inter-processor cache-coherence floods, which are not good and should therefore be used sparingly.
The discussion could go on to fill a Ph.D dissertation :D
A per-CPU ready queue is a natural choice for the data structure. This is because most operating systems will try to keep a process on the same CPU, for many reasons you can google for. What does that imply? If a thread is ready and another CPU is idling, the OS will not quickly migrate the thread to that CPU; load balancing only kicks in over the long run.
Had the situation been different, that is, if it had not been a design goal to keep thread-CPU affinity and thread migration were frequent instead, then keeping separate per-CPU run queues would be costly.
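A minimal sketch of per-CPU ready queues with lazy load balancing, as described above; it is illustrative only and not modelled on any particular OS scheduler.

    from collections import deque

    NUM_CPUS = 4
    run_queues = [deque() for _ in range(NUM_CPUS)]

    def enqueue(task, preferred_cpu):
        # Affinity: a woken task goes back to the CPU it last ran on.
        run_queues[preferred_cpu].append(task)

    def pick_next(cpu):
        if run_queues[cpu]:
            return run_queues[cpu].popleft()       # the common, contention-free path
        # Only when idle does a CPU look elsewhere (occasional load balancing).
        busiest = max(range(NUM_CPUS), key=lambda c: len(run_queues[c]))
        if run_queues[busiest]:
            return run_queues[busiest].popleft()   # migrate a task, paying the affinity cost
        return None

    enqueue("t1", preferred_cpu=0)
    enqueue("t2", preferred_cpu=0)
    print(pick_next(0))   # t1 stays on CPU 0
    print(pick_next(3))   # CPU 3 is idle, so it steals t2 from CPU 0's queue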