Parallel Computing: Models, Languages, and Architectures

Motives for offering CS-551

The emergence of different parallel computer architectures has led to the adoption of various programming models, such as the dataflow, shared-memory, message-passing, systolic, and data-parallel models. These different models pose a number of problems for parallel program design.

Two radically different approaches have been adopted in dealing with the aforementioned problems. In the first approach, the burden of obtaining an efficient implementation of a machine-independent program on a particular architecture is placed upon the compiler. The premise for this approach is that optimizing compilers can effectively discover inherent parallelism, using elaborate program transformations for example. While this approach offers the potential for tailoring a given algorithm to a particular architecture, it leaves the more difficult task of finding an algorithm that would efficiently exploit the architecture to the programmer. In the second approach, programming languages that expose the underlying architecture are used, and the burden of obtaining an efficient implementation is placed on the programmer. The premise for this approach is that by reflecting the architectural features of the underlying hardware in the constructs of a programming language, programmers can exploit far more parallelism than an optimizing compiler can discover.
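The second approach can be illustrated with a minimal sketch (not taken from the course materials, and using a modern language for brevity) of the explicit message-passing style: the programmer, not the compiler, decides how the work is partitioned and how partial results are communicated between concurrent tasks.

```python
# Hypothetical sketch of explicit message-passing parallelism:
# the decomposition and communication are visible in the program text.
from multiprocessing import Process, Queue

def partial_sum(xs, out):
    # Each worker computes its share and sends the result as a message.
    out.put(sum(xs))

if __name__ == "__main__":
    data = list(range(1, 9))
    out = Queue()
    # Parallelism is explicit: two processes, an explicit data split.
    workers = [Process(target=partial_sum, args=(data[:4], out)),
               Process(target=partial_sum, args=(data[4:], out))]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    total = out.get() + out.get()
    print(total)  # 36
```

In a compiler-centric approach, by contrast, the same computation would be written as an ordinary sequential loop, and the compiler would be responsible for discovering and exploiting the parallelism.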

Topics covered in CS-551

The material covered will encompass topics in parallel computer architectures, parallel programming models, and parallel programming languages. Appropriate examples from existing or proposed systems will be surveyed and compared.

This document has been prepared by Professor Azer Bestavros <best@cs.bu.edu> as the WWW Home Page for CS-551, which is part of the NSF-funded undergraduate curriculum on parallel computing at BU.

Date of last update: May 22, 1994.