Parallel Computing
Parallel computing refers to running multiple computational tasks simultaneously. It rests on the idea that a big computational task can be divided into smaller tasks which can run concurrently.
Types of parallel computing
Parallel computing applies only to the last row of the table below:
|               | Single Instruction | Multiple Instructions | Single Program | Multiple Programs |
|---------------|--------------------|-----------------------|----------------|-------------------|
| Single Data   | SISD               | MISD                  |                |                   |
| Multiple Data | SIMD               | MIMD                  | SPMD           | MPMD              |
In more detail:
- Data parallel (SIMD): the same operation/instruction is carried out on different data items simultaneously.
- Task parallel (MIMD): different instructions are carried out on different data concurrently.
- SPMD: a single program runs on multiple data; execution is not synchronized at the level of individual operations.
SPMD and MIMD are essentially the same, since any MIMD program can be recast as an SPMD program. SIMD is equivalent as well, though in a less practical sense. MPI (Message Passing Interface) is primarily used for SPMD/MIMD programming.
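As a concrete sketch of the SPMD model (not part of the original article), the following C program assumes an MPI implementation such as MPICH or Open MPI: every process runs the same executable and branches on its rank to do different work.

```c
/* spmd_hello.c - minimal SPMD sketch using MPI (illustrative; file name is hypothetical) */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's id */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes */

    /* Every process executes the same program, but branches on its rank,
       so different processes can perform different work (SPMD). */
    if (rank == 0)
        printf("Rank 0 of %d: coordinating\n", size);
    else
        printf("Rank %d of %d: computing\n", rank, size);

    MPI_Finalize();
    return 0;
}
```

Compiled with something like `mpicc spmd_hello.c -o spmd_hello` and launched with `mpirun -np 4 ./spmd_hello`, the same program runs as four cooperating processes.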
Shared Memory vs. Distributed Memory

Shared Memory

Shared memory is memory that all the processors can access. From a hardware point of view, it means all the processors have direct access to a common physical memory, typically over a bus. The processors can work independently while they all use the same memory. Any change to a variable stored in that memory is visible to all processors, because they all address the same logical memory locations regardless of where the physical memory actually resides.
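A minimal illustration of the shared-memory model, assuming a C compiler with OpenMP support (e.g. `gcc -fopenmp`): all threads read and write the same array in one address space, with no explicit communication.

```c
/* shared_sum.c - shared-memory sketch using OpenMP (illustrative only) */
#include <omp.h>
#include <stdio.h>

#define N 1000000

int main(void)
{
    static double a[N];
    double sum = 0.0;

    /* All threads see the same array 'a'; no messages are needed. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++) {
        a[i] = i * 0.5;   /* each thread writes its share of the shared array */
        sum += a[i];      /* partial sums are combined by the reduction clause */
    }

    printf("sum = %f (max threads: %d)\n", sum, omp_get_max_threads());
    return 0;
}
```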
Uniform Memory Access (UMA):
- Most commonly represented today by Symmetric Multiprocessor (SMP) machines
- Identical processors
- Equal access and access times to memory
- Sometimes called CC-UMA (Cache Coherent UMA). Cache coherent means that if one processor updates a location in shared memory, all the other processors know about the update. Cache coherency is accomplished at the hardware level.
Non-Uniform Memory Access (NUMA):
- Often made by physically linking two or more SMPs
- One SMP can directly access the memory of another SMP
- Not all processors have equal access time to all memories
- Memory access across the link is slower
- If cache coherency is maintained, it may also be called CC-NUMA (Cache Coherent NUMA)
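On Linux NUMA machines, memory placement can also be controlled explicitly from a program. The sketch below is an assumption-laden illustration using the libnuma library (linked with `-lnuma`); it simply places a buffer on one node, where local threads see faster access than threads on other nodes.

```c
/* numa_alloc.c - NUMA placement sketch using libnuma (assumes Linux with libnuma installed) */
#include <numa.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    if (numa_available() < 0) {          /* negative means no NUMA support on this system */
        fprintf(stderr, "NUMA not available\n");
        return 1;
    }

    int nodes = numa_max_node() + 1;     /* number of NUMA nodes (separate memories) */
    printf("NUMA nodes: %d\n", nodes);

    size_t bytes = 64 * 1024 * 1024;
    double *buf = numa_alloc_onnode(bytes, 0);   /* place the buffer on node 0 */
    if (buf == NULL) {
        fprintf(stderr, "allocation failed\n");
        return 1;
    }

    /* Threads running on node 0 access 'buf' locally; threads on other nodes
       reach it across the link, which is slower (the NUMA effect described above). */

    numa_free(buf, bytes);
    return 0;
}
```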
Distributed Memory
In the hardware sense, distributed memory refers to systems in which a processor can access another processor's memory only through a network. In the software sense, it means each processor can directly see only its local memory and must communicate over the network to access the memory of the other processors.
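Since a process can only address its own local memory, data held by another process must be moved with explicit messages. A minimal MPI sketch (same assumptions as the SPMD example above; run with at least two processes, e.g. `mpirun -np 2`):

```c
/* dm_sendrecv.c - distributed-memory sketch: explicit message passing with MPI */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double value;
    if (rank == 0) {
        value = 3.14;
        /* Rank 1 cannot read rank 0's memory directly; the data travels over the network. */
        MPI_Send(&value, 1, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("Rank 1 received %f from rank 0\n", value);
    }

    MPI_Finalize();
    return 0;
}
```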
Hybrid
A combination of the two kinds of memory is what today's fast supercomputers usually use. A hybrid memory system is basically a network of shared-memory units: within each shared-memory unit the memory is accessible to all of its CPUs, and in addition each unit can access the tasks and information stored on the other units through the network.
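A common way to program such machines is to combine both models: MPI between the units and OpenMP threads within each shared-memory unit. A hedged sketch, assuming an MPI library built with thread support and an OpenMP-capable compiler:

```c
/* hybrid.c - hybrid MPI + OpenMP sketch (illustrative only) */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided;
    /* MPI_THREAD_FUNNELED: only the main thread of each process makes MPI calls. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* One MPI process per shared-memory unit communicates over the network;
       within the unit, OpenMP threads share that process's memory. */
    #pragma omp parallel
    {
        printf("process %d, thread %d of %d\n",
               rank, omp_get_thread_num(), omp_get_num_threads());
    }

    MPI_Finalize();
    return 0;
}
```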