
Monday, January 21, 2013

Distributed vs Parallel computing

In this post I will provide a very high-level overview of distributed versus parallel computing.

Distributed computing refers to the study of distributed systems, which solve complex or time-consuming problems by breaking them into small tasks spread across multiple computers (nodes), each of which has its own memory and disk.
In addition, a distributed system operates under extra constraints: fault tolerance (individual nodes may fail), unknown structure (the network topology may not be known or well defined), and decoupling (individual nodes may have no knowledge of the entire system). The key to distributed computing is that many small nodes process and execute tasks without knowing the broader system.
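The model above can be sketched in miniature: each "node" below is a process with its own private memory, and the only way information moves is by passing messages (here over `multiprocessing` queues). This is a toy single-machine stand-in for a real distributed system, and the names `node` and `run` are illustrative, not from any library.

```python
# Toy sketch of the distributed model: worker "nodes" with private
# memory pull small tasks from a queue and send results back as
# messages, without any knowledge of the broader system.
from multiprocessing import Process, Queue

def node(task_q: Queue, result_q: Queue) -> None:
    """Worker node: pull small tasks, compute, send results back."""
    while True:
        task = task_q.get()
        if task is None:           # sentinel: no more work
            break
        result_q.put(task * task)  # the "big problem": summing squares

def run(num_nodes: int = 3) -> int:
    task_q, result_q = Queue(), Queue()
    nodes = [Process(target=node, args=(task_q, result_q))
             for _ in range(num_nodes)]
    for p in nodes:
        p.start()
    for i in range(10):            # break the problem into small tasks
        task_q.put(i)
    for _ in nodes:                # one stop sentinel per node
        task_q.put(None)
    total = sum(result_q.get() for _ in range(10))
    for p in nodes:
        p.join()
    return total

print(run())  # sum of squares 0..9 = 285
```

If any node crashed here, its tasks would simply be lost; a real distributed system adds the fault-tolerance machinery (retries, reassignment) that this sketch omits.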

Parallel computing, on the other hand, is a form of computation in which many calculations are carried out simultaneously, operating on the principle that large problems can often be divided into smaller ones, which are then solved concurrently ("in parallel").
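That divide-and-combine principle can be shown with a process pool: split one large problem (a big sum) into chunks, solve each chunk concurrently, and combine the partial results. The helper names `chunk_sum` and `parallel_sum` are my own for illustration.

```python
# Parallel computing as divide-and-combine: partition a large range
# into chunks, sum each chunk concurrently in a process pool, then
# merge the partial sums.
from multiprocessing import Pool

def chunk_sum(bounds):
    lo, hi = bounds
    return sum(range(lo, hi))

def parallel_sum(n: int, workers: int = 4) -> int:
    step = n // workers
    chunks = [(i * step, (i + 1) * step if i < workers - 1 else n)
              for i in range(workers)]
    with Pool(workers) as pool:
        partials = pool.map(chunk_sum, chunks)  # solved "in parallel"
    return sum(partials)

print(parallel_sum(1_000_000))  # same answer as sum(range(1_000_000))
```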

The same system may be characterized as both "parallel" and "distributed"; the processors in a typical distributed system run concurrently in parallel. The key difference between the two is that in parallel computing, all processors may have access to a shared memory through which they exchange information, whereas in distributed computing each processor has its own private memory and information is exchanged by passing messages between the processors.
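That shared-memory versus message-passing distinction can be seen in miniature on one machine: threads within a process communicate through shared memory (one object that all threads read and write), while separate processes have private memory and must send messages instead. This is only an analogy for the two models, not a real distributed setup.

```python
# Shared memory vs. message passing in miniature.
import threading
from multiprocessing import Process, Queue

# --- shared memory: all threads mutate one shared counter, so
# access must be synchronized with a lock ---
counter = {"value": 0}
lock = threading.Lock()

def shared_worker():
    for _ in range(1000):
        with lock:
            counter["value"] += 1

threads = [threading.Thread(target=shared_worker) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter["value"])  # 4000: every thread saw the same memory

# --- message passing: each process has private memory, so it
# reports its result as a message instead of mutating shared state ---
def message_worker(q: Queue):
    q.put(1000)  # "I completed 1000 units of work"

q = Queue()
procs = [Process(target=message_worker, args=(q,)) for _ in range(4)]
for p in procs: p.start()
total = sum(q.get() for _ in range(4))
for p in procs: p.join()
print(total)  # 4000: same answer, reached by exchanging messages
```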


  1. This is very much an oversimplification, of course: it conflates parallel processing with the presence or absence of shared memory. Much of the largest-scale parallel computing uses MPI, which is exactly message passing, while shared-memory techniques such as OpenMP have seen more limited success.
