Parallel Processing: What is Parallel Processing?

Parallel processing is a method of breaking up program tasks and running them simultaneously on multiple microprocessors, thereby reducing processing time. A parallel processing system provides concurrent data processing to decrease execution time. Such a system may have two or more ALUs and should be able to execute two or more instructions at the same time. The purpose of parallel processing is to speed up the computer's processing capability and increase its throughput. Parallel processing is also called parallel computing.

In general, parallel processing means that at least two microprocessors handle parts of an overall task. The concept is simple: a computer scientist divides a complex problem into component parts using software specifically designed for the task, then assigns each part to a dedicated processor. Each processor solves its part of the overall computational problem, and the software reassembles the partial results into the solution of the original problem. Parallel processing can be viewed at various levels of complexity. At the lowest level, we distinguish between parallel and serial operations by the type of registers used. At a higher level of complexity, parallel processing can be achieved by using multiple functional units that perform many operations simultaneously.

Parallel computing is an evolution of serial computing in which a job is broken into discrete parts that can be executed concurrently. Each part is further broken down into a series of instructions, and instructions from each part execute simultaneously on different CPUs. Michael Flynn classified computer systems by two factors: the number of instruction streams and the number of data streams the computer handles. An instruction stream encodes an algorithm, a series of steps designed to solve a particular problem; a data stream is the information pulled from computer memory and used as input to that algorithm. The processor plugs the values from the data stream into the operations of the instruction stream and carries them out to obtain a result. Flynn's four classifications are based on the number of concurrent instruction (or control) streams and data streams available in the architecture:

SISD (Single Instruction Single Data) : SISD corresponds to the traditional mono-processor (von Neumann computer): a single data stream is processed by one instruction stream. In a single-processor (uniprocessor) computer, a single stream of instructions is generated from the program. A single control unit fetches one instruction stream from memory and generates the control signals that direct a single processing element to operate on a single data stream, i.e., one operation at a time. Examples of SISD architecture are traditional uniprocessor machines such as older personal computers and mainframe computers.
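SISD in miniature: one control flow applies one operation to one data item at a time, as a classic uniprocessor would. This is only a sketch of the execution model, not of any particular machine:

```python
def sisd_sum(data):
    total = 0
    for x in data:   # a single instruction stream...
        total += x   # ...operates on a single data stream, one item at a time
    return total

print(sisd_sum([1, 2, 3, 4]))  # 10
```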

Figure: SISD

SIMD (Single Instruction Multiple Data) : In SIMD, each instruction is executed on a different set of data by different processors, i.e., multiple processing units of the same type operate on multiple data streams. This group covers array-processing machines, and vector processors can sometimes also be seen as part of it. SIMD machines are the most cost-effective platforms for applications with abundant data parallelism. In these machines, a single control unit broadcasts instructions to many processing elements in parallel.
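SIMD in miniature: a single instruction ("add the two operands") is broadcast across many data elements. A real SIMD machine performs the lane-wise additions in hardware, in parallel; this sketch only mirrors the programming model, not the hardware parallelism:

```python
def simd_add(a, b):
    # One instruction, applied lane by lane to multiple data streams.
    return [x + y for x, y in zip(a, b)]

print(simd_add([1, 2, 3, 4], [10, 20, 30, 40]))  # [11, 22, 33, 44]
```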

Figure: SIMD

MISD (Multiple Instruction Single Data) : In MISD, each processor executes a different sequence of instructions, so multiple instruction streams operate on one data stream. This is an uncommon architecture, generally used for fault tolerance: heterogeneous systems operate on the same data stream and must agree on the result. An example is the Space Shuttle flight control computer.
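The fault-tolerance use of MISD can be sketched as voting: several heterogeneous routines (different instruction streams) process the same data stream and must agree on the result, as in redundant flight-control computers. The three routines below are hypothetical stand-ins:

```python
from collections import Counter

def route_a(x): return x * 2
def route_b(x): return x + x
def route_c(x): return x << 1   # same intent, different instruction sequence

def voted_result(x):
    # All routines receive the single data stream x.
    results = [route_a(x), route_b(x), route_c(x)]
    value, count = Counter(results).most_common(1)[0]
    if count < 2:
        raise RuntimeError("no majority agreement among redundant units")
    return value

print(voted_result(21))  # 42
```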

Figure: MISD

MIMD (Multiple Instruction Multiple Data) : In MIMD, each processor has a separate program; an instruction stream is generated from each program, and each instruction operates on different data. The MIMD type covers traditional multiprocessors: several processing units operate on multiple data streams. MIMD architectures include multi-core superscalar processors and distributed systems, using either one shared memory space or a distributed memory space. Most multiprocessors on the market today are (shared-memory) MIMD machines.
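MIMD in miniature: independent workers each run a different program on different data at the same time. Threads stand in here for the separate processors of a real shared-memory MIMD machine; the two worker functions are illustrative:

```python
from concurrent.futures import ThreadPoolExecutor

def count_evens(nums):      # program 1, operating on data stream 1
    return sum(1 for n in nums if n % 2 == 0)

def longest_word(words):    # program 2, operating on data stream 2
    return max(words, key=len)

with ThreadPoolExecutor(max_workers=2) as pool:
    f1 = pool.submit(count_evens, [1, 2, 3, 4, 5, 6])
    f2 = pool.submit(longest_word, ["mimd", "parallel", "cpu"])
print(f1.result(), f2.result())  # 3 parallel
```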

Figure: MIMD

When improving the performance of a computer system using parallel processing, the following main techniques are considered:

1. Pipeline processing or Pipelining
2. Vector processing
3. Array processing
