Libraries

Participants: David Padua, Maria Garzaran, James Brodman, Albert Sidelnik, Brandon Moore

Executive Summary

This project will investigate new ideas on data types (data structures and operators) that facilitate the representation of data parallel computations. We have chosen this approach because most parallel programs of importance can be expressed as a sequence of data parallel operations.

Goals - Extended Description

Research directions that we will explore in this project include:

  1. New classes of collections. Data parallelism has traditionally been applied to dense arrays; we will also consider sparse computations, operations on sets, and perhaps other objects such as graphs.
  2. Static and Dynamic Partitioning or Tiling. In our experience, tiling is a fundamental operation for expressing the granularity of parallelism and locality. Our goal is to demonstrate that algorithms described in terms of data parallel operators over tiled data structures provide a high-level abstraction for the programmer and result in programs that are portable across different architectures. We have applied the concept of tiling to dense arrays in the Hierarchically Tiled Arrays (HTAs), and we want to use tiling with other data structures as well (a minimal sketch of the idea appears after this list).
  3. Parallel operations with interactions. We will study patterns that have not traditionally been programmed using data parallel operators, such as operations on collections with interactions between the computations on different elements, which require locks or other forms of synchronization for correctness.
  4. Control of determinacy. Whenever the parallel operators are implemented as pure functions, the programs will be guaranteed to be determinate. Non-determinacy can be encapsulated by allowing interaction as explained in the previous point.
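
To make the tiling idea in item 2 above concrete, the following C++ sketch shows a minimal one-level tiled array with a data parallel map operator. It is only an illustration of the programming model: the TiledArray class, the hmap operator, and their interfaces are hypothetical and do not reproduce the actual HTA library API, which supports multiple levels of tiling and distributed layouts.

```cpp
#include <cstddef>
#include <iostream>
#include <vector>

// Hypothetical TiledArray: a flat array viewed as a sequence of fixed-size
// tiles. Tile boundaries expose both the unit of parallelism and the unit
// of locality.
template <typename T>
class TiledArray {
public:
    TiledArray(std::size_t num_tiles, std::size_t tile_size)
        : tile_size_(tile_size), data_(num_tiles * tile_size) {}

    std::size_t num_tiles() const { return data_.size() / tile_size_; }
    std::size_t tile_size() const { return tile_size_; }

    // Pointer to the first element of tile t.
    T* tile(std::size_t t) { return data_.data() + t * tile_size_; }

    T& operator[](std::size_t i) { return data_[i]; }

private:
    std::size_t tile_size_;
    std::vector<T> data_;
};

// Data-parallel map: apply a pure element-wise function tile by tile.
// Because f has no side effects, tiles can be processed in any order
// (or concurrently) and the result is determinate, as discussed in item 4.
template <typename T, typename F>
void hmap(TiledArray<T>& a, F f) {
    for (std::size_t t = 0; t < a.num_tiles(); ++t) {    // candidate parallel loop
        T* tile = a.tile(t);
        for (std::size_t i = 0; i < a.tile_size(); ++i)  // work within one tile
            tile[i] = f(tile[i]);
    }
}

int main() {
    TiledArray<double> a(/*num_tiles=*/4, /*tile_size=*/8);
    for (std::size_t i = 0; i < 32; ++i) a[i] = static_cast<double>(i);

    hmap(a, [](double x) { return 2.0 * x; });  // element-wise, determinate

    std::cout << a[31] << "\n";  // prints 62
    return 0;
}
```

The tiles serve double duty here: they mark the unit of parallel work and the unit of locality, which is what lets the same high-level code be mapped to different architectures.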

Deliverables: To drive this research, we plan to implement several symbolic and numerical algorithms and applications.

The numerical codes to be developed include:

  1. The Spike algorithm, developed by Ahmed Sameh of Purdue University. This algorithm is used in many areas of computational science and engineering, including computational mechanics (fluids, structures, and fluid-structure interaction) and computational nanoelectronics. These applications often give rise to very large, narrow-banded linear systems that can be dense or sparse within the band. To support this algorithm, we will extend HTAs to operate on sparse arrays (a sketch of the block-banded layout appears after this list).
  2. Codes generated by SPIRAL, which include the DFT, DCT, and many other signal transform algorithms. Together with researchers from the SPIRAL team at CMU (David Padua has been working with them for more than 10 years), we are studying how to specify the codes generated by SPIRAL using the notation we developed for HTAs, in order to generate codes that are portable across platforms. Currently, SPIRAL generates codes for uniprocessors, SIMD devices, shared memory multiprocessors, distributed memory machines, and GPU-type processors such as the Cell.
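
As a rough illustration of the data layout that the Spike algorithm operates on, the sketch below partitions a banded matrix into p diagonal blocks plus the small coupling blocks between them; each block could become a dense or sparse tile of an extended HTA. The BlockBandedMatrix and Block types are hypothetical, chosen only to show the partitioning, and do not describe the Spike implementation or the HTA interface.

```cpp
#include <cstddef>
#include <iostream>
#include <vector>

// One tile of the matrix. A sparse tile would store (row, col, value)
// triples instead of a dense value array.
struct Block {
    std::size_t rows, cols;
    std::vector<double> values;
};

// A banded matrix partitioned into p diagonal blocks A_1..A_p plus the
// small off-diagonal coupling blocks that the Spike factorization treats
// separately.
struct BlockBandedMatrix {
    std::vector<Block> diag;   // A_1 ... A_p, each roughly (n/p) x (n/p)
    std::vector<Block> upper;  // B_1 ... B_{p-1}, coupling to the next block
    std::vector<Block> lower;  // C_2 ... C_p, coupling to the previous block
};

// Compute the block structure for an n x n matrix of half-bandwidth b split
// into p partitions. Filling the tiles and performing the factorization
// itself are beyond this sketch.
BlockBandedMatrix partition(std::size_t n, std::size_t b, std::size_t p) {
    BlockBandedMatrix m;
    const std::size_t block = n / p;            // size of each diagonal block
    m.diag.assign(p, Block{block, block, {}});
    m.upper.assign(p - 1, Block{b, b, {}});     // coupling blocks are only b x b
    m.lower.assign(p - 1, Block{b, b, {}});
    return m;
}

int main() {
    BlockBandedMatrix m = partition(/*n=*/1000000, /*b=*/50, /*p=*/64);
    std::cout << m.diag.size() << " diagonal blocks, "
              << m.upper.size() + m.lower.size() << " coupling blocks\n";
    return 0;
}
```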

We will also work on non-numerical codes. For that purpose, we are extending our library to also operate on Tiled Sets. Applications that we will study include search algorithms for discrete optimization and planning, as well as data mining algorithms.
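
The sketch below suggests what a Tiled Set might look like: elements are hashed into a fixed number of disjoint tiles so that set-wide operations can be expressed tile by tile, in the same spirit as HTAs. The TiledSet class and its count_if operator are hypothetical names used only for illustration; they are not the interface of our library.

```cpp
#include <cstddef>
#include <functional>
#include <unordered_set>
#include <vector>

// Hypothetical TiledSet: elements are hashed into a fixed number of tiles
// (disjoint buckets), so set-wide operations can be written tile by tile,
// mirroring the way HTAs tile dense arrays.
template <typename T>
class TiledSet {
public:
    explicit TiledSet(std::size_t num_tiles) : tiles_(num_tiles) {}

    void insert(const T& x) {
        tiles_[std::hash<T>{}(x) % tiles_.size()].insert(x);
    }

    // Data-parallel style reduction: count elements satisfying a predicate.
    // Each tile is independent, so the per-tile loop is a candidate for
    // parallel execution with a single combine step at the end.
    template <typename Pred>
    std::size_t count_if(Pred p) const {
        std::size_t total = 0;
        for (const auto& tile : tiles_) {          // candidate parallel loop
            std::size_t local = 0;
            for (const auto& x : tile)
                if (p(x)) ++local;
            total += local;                        // combine partial results
        }
        return total;
    }

private:
    std::vector<std::unordered_set<T>> tiles_;
};

int main() {
    TiledSet<int> s(/*num_tiles=*/4);
    for (int i = 0; i < 100; ++i) s.insert(i);
    // 50 even elements expected; return 0 on success.
    return s.count_if([](int x) { return x % 2 == 0; }) == 50 ? 0 : 1;
}
```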

As part of this work, we will also deliver an HTA library for GPU-type processors and a library for other classes of aggregates, such as Tiled Sets.
