BU/CLA CS-551
Parallel Computing: Models, Languages, and Architectures
Motives for offering CS-551
The emergence of different parallel computer architectures has led to
the adoption of various programming models such as dataflow, shared
memory, message passing, systolic, and data parallel models. These
different models pose a number of problems for parallel program
design. In particular:
- It is unclear whether the specification of parallel algorithms
transcends the differences between the architectures on which they
will be implemented and the application domains from which problems
are drawn.
- It is unclear whether the underlying architectural model
needs to be reflected through to the programmer, or whether the
programmer can write a machine-independent program that would perform
well on different architectures.
- It is unclear what kinds of linguistic constructs should be
provided to the programmer to help in constructing efficient yet
understandable programs that can run either on a particular
architecture or on parallel machines in general.
Two radically different approaches have been adopted in
dealing with the aforementioned problems. In the first approach, the
burden for obtaining an efficient implementation of a
machine-independent program on a particular architecture is placed
upon the compiler. The premise for this approach is that optimizing
compilers can effectively discover inherent parallelism, for example
through elaborate program transformations. While this approach offers the
potential for tailoring a given algorithm to a particular
architecture, it leaves the more difficult task of finding the
algorithm that would efficiently exploit the architecture to the
programmer. In the second approach, programming languages that expose
the underlying architecture are used and the burden for obtaining an
efficient implementation is placed on the programmer. The
premise for this approach is that by reflecting the architectural
features of the underlying hardware in the constructs of a programming
language, programmers can exploit far more parallelism than an
optimizing compiler can extract on its own.
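The contrast between the two approaches can be made concrete with a small
sketch (not part of the course materials; the function names and the use of
Python's thread pool are illustrative only). The first function is written
machine-independently, leaving any parallelism for a compiler or runtime to
discover; the second exposes the parallel structure explicitly, with the
programmer partitioning the data and combining partial results:

```python
from concurrent.futures import ThreadPoolExecutor

# Approach 1: machine-independent code. No parallelism is expressed;
# an optimizing compiler must discover it in the loop, if it can.
def machine_independent_sum(data):
    total = 0
    for x in data:
        total += x
    return total

# Approach 2: the programmer makes the parallel structure explicit,
# partitioning the input and reducing each partition concurrently.
def explicit_parallel_sum(data, workers=4):
    size = (len(data) + workers - 1) // workers          # chunk size
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = pool.map(sum, chunks)                 # one reduction per worker
    return sum(partials)                                 # combine partial sums
```

Both compute the same result; the difference is where the burden of finding
and expressing the parallelism lies.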
Topics covered in CS-551
The material covered will encompass topics in parallel computer
architectures, parallel programming models, and languages. Appropriate
examples for existing or proposed systems will be surveyed and
compared.
- Architecture-independent Programming:
  - A notation and a proof system: UNITY
  - Parallelizing compilers and program transformation
- The Data Flow Model:
  - Fine grain dataflow: The Monsoon architecture
  - Coarse grain dataflow: The Linda model and language
  - Functional (dataflow) programming languages: ID
- The Data Parallel Model:
  - High-dimensionality architectures: The CM-2 and CM-5
  - Low-dimensionality architectures: The MasPar
  - Low-level data parallel programming languages: C-Paris
  - Explicit data parallel programming languages: C* and MPL
  - Implicit data parallel programming languages: CM Fortran
- The Pipelined Model:
  - Vector processing: The Cray computer
  - Systolic architectures: The Warp computer
- The Shared Memory Model:
  - Fetch-and-op architectures: The NYU Ultracomputer and RP3
  - Cross-bar architectures: The BBN Butterfly
  - Language support: The Butterfly Uniform System
- The Message Passing Model:
  - Architectures: The J-Machine and the NCUBE
  - Object-oriented languages: Actors, ABCL/1, POOL
- Other Models:
  - Logic-based parallel languages: Parlog
  - Special-purpose architectures and silicon compilers
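Of the models above, Linda is perhaps the simplest to preview: processes
coordinate not by sending messages to named partners but by depositing tuples
into, and withdrawing matching tuples from, a shared tuple space, using
operations such as out (deposit) and in (withdraw). A minimal single-process
sketch in Python (the class, the wildcard convention, and the matching rule
are illustrative, not Linda's actual implementation):

```python
import threading

class TupleSpace:
    """Sketch of a Linda-style tuple space: a shared bag of tuples
    that processes use to communicate and synchronize."""
    def __init__(self):
        self._tuples = []
        self._cond = threading.Condition()

    def out(self, *t):
        """Deposit a tuple into the space (Linda's out)."""
        with self._cond:
            self._tuples.append(t)
            self._cond.notify_all()

    def _match(self, pattern, t):
        # None acts as a wildcard field in the pattern (illustrative choice).
        return len(pattern) == len(t) and all(
            p is None or p == v for p, v in zip(pattern, t))

    def in_(self, *pattern):
        """Withdraw a tuple matching the pattern, blocking until one
        is available (Linda's in; trailing underscore avoids the
        Python keyword)."""
        with self._cond:
            while True:
                for t in self._tuples:
                    if self._match(pattern, t):
                        self._tuples.remove(t)
                        return t
                self._cond.wait()
```

For example, one worker may execute `ts.out("result", 42)` while another
blocks in `ts.in_("result", None)` until that tuple appears; neither worker
names the other, which is the essence of the model's decoupling.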
This document has been prepared by Professor Azer Bestavros
<best@cs.bu.edu> as the WWW Home Page for CS-551, which is part of the
NSF-funded undergraduate curriculum on parallel computing at BU.
Date of last update: May 22, 1994.