Illinois Impact

Parallel computing is in our blood.

The University of Illinois has been a leader in defining the landscape of parallel computing research for over four decades. Illinois has made, and continues to make, contributions that benefit the research agendas of industry, government labs/agencies, and global academic partner institutions. Illinois innovation in parallel computing began with generations of the ILLIAC, the CEDAR machine, and the Illinois Cache Coherence Protocol. Today, parallel computing at Illinois includes the world’s first petascale computer, a global cloud computing testbed, a powerful collection of supercomputers, and progressive efforts to usher in a new era of parallel computing for mobile, client, and desktop systems.

Impact of Illinois on Parallel Computing Advances

ILLIAC IV: The ILLIAC IV, designed at the University of Illinois by Daniel Slotnick, was one of the world's first supercomputers and the first to rely on parallel computation. A key feature of its design was a degree of parallelism that was remarkably high for its time: the machine applied the same operations across large data sets, a style of execution that would later be known as SIMD processing. For many years the ILLIAC IV was the most powerful machine in existence, and it remains one of the most influential systems in the history of parallel computing. The parallel programming languages IVTRAN, TRANQUIL, and Glypnir were developed at Illinois for the ILLIAC IV.

CEDAR: This experimental shared-memory multiprocessor prototype was built by a team of Illinois researchers led by David Kuck, Edward Davidson, Duncan Lawrie, and Ahmed Sameh. The project made seminal contributions to parallel system design. CEDAR embodied advances in interconnection networks, multiprocessor memory hierarchies, control unit support of parallelism, optimizing compilers, and parallel algorithms and applications.

Cedar Fortran: David Padua developed this predecessor to OpenMP for the CEDAR multiprocessor.

OpenMP: Efforts led by David Kuck to consolidate the parallel programming directives used by supercomputer vendors culminated in the design of OpenMP, the most widely used shared-memory API.

PATH Pascal: Developed by Roy Campbell in 1977, an early language for expressing concurrency in a disciplined manner.

Illinois Cache Coherence Protocol (MESI): The Illinois protocol, developed by Janak Patel in 1983, became the IEEE MESI standard and is used today by virtually all cache coherent shared memory multi-processors.

Autoparallelization: David Kuck and his students pioneered techniques to translate conventional code into parallel code. Autoparallelization systems such as the Analyzer (ca. 1970), Parafrase (ca. 1976), Parafrase 2 (led by Constantine Polychronopolous, ca. 1988), and Polaris (led by David Padua, ca. 1992) were instrumental in the development of dependence analysis, vectorization, parallelization, and locality enhancement techniques which are incorporated today in all widely-used compilers.

IMPACT: Wen-Mei Hwu's "superblock" and "hyperblock" structures enable compilers to parallelize code across complex control structures. His work has become part of the technology base of new compilers in major corporations.

MPI: Co-developed by Marc Snir and Bill Gropp, MPI (Message Passing Interface) became the leading paradigm for distributed memory computing.

ParaGraph: Michael Heath developed this graphical display tool for visualizing the behavior and performance of parallel systems that use MPI.

Chare Kernel: This message-driven parallel programming system developed by Laxmikant Kale is used for state-space search and other applications. It has been acknowledged as one of the influences on Intel's Threading Building Blocks (TBB).

CHARM++ and AMPI: CHARM++ is a parallel programming system developed by Laxmikant Kale that allows parallel programs to be expressed without reference to processors and automates resource management. It has been used by several scalable parallel computational science and engineering applications; in one year, applications using Charm++ (including NAMD) consumed 15% of the cycles at NCSA and 20% at the Pittsburgh Supercomputing Center. It also supports an Adaptive MPI implementation called AMPI. Charisma and MSA are new deterministic parallel languages developed by Kale that work with CHARM++'s automatic resource management.

Race Detection Techniques: David Padua developed these pioneering strategies to find programming defects and synchronization errors.

Thread-Level Speculation: Josep Torrellas' work on this architectural technology for parallelization and programmability has helped ensure that it is used in commercial systems, such as those from Sun and Azul Systems, and in a prototype from IBM.

IBM/DARPA PERCS: Josep Torrellas, David Padua, and Marc Snir made contributions from 2002 to 2006 to this DARPA-funded IBM multiprocessor. This design has evolved into IBM's Power 7. A petascale-level Power 7 multiprocessor will be installed at NCSA in 2010.

AVIO: Yuanyuan Zhou's innovative approach to detecting atomicity violations in parallel programs is based on a novel observation, the access-interleaving invariant, which allows violations to be detected without requiring programmers to annotate or specify synchronization. This technology is currently being transferred to Intel.

LLVM: Vikram Adve's group developed LLVM, a novel system for "lifelong compilation" of programs from any source language. LLVM is a versatile compiler infrastructure used to build diverse tools including static compilers, JIT optimizers of graphics shaders, hardware synthesis tools, bug finding tools, and many others. It is in active use by numerous groups across academia and industry for both research and teaching. It is also being used in commercial products by Apple, Adobe, Cray, and several other companies.

Java and C++ Memory Consistency Models: Sarita Adve co-developed the memory consistency models for the Java and C++ languages, building on the foundation of data-race-free models proposed in her PhD thesis.

High-speed Switching Networks: Janak Patel's 1981 paper on the performance of interconnections for multiprocessors, and Marc Snir's papers on this topic from 1982 to 1984, are considered "foundation" papers for the entire field of high-speed switching network performance. A large body of research on multiprocessor interconnections is based on the techniques presented in these papers.

Actors: Gul Agha's paradigm of concurrent computation, with rigorously defined formal semantics, has been realized in a number of parallel programming languages such as Erlang, E, Ptolemy, Thal, Scala and SALSA.