1996 Fiscal Year Final Research Report Summary
Auto-Parallelizing Compiler for Massively Parallel Computers
Project/Area Number | 06044147 |
Research Category | Grant-in-Aid for International Scientific Research |
Allocation Type | Single-year Grants |
Section | Joint Research |
Research Institution | Kyushu University (1996); Nara Institute of Science and Technology (1994-1995) |
Principal Investigator | ARAKI Keijiro Kyushu University, Graduate School of Information Science and Electrical Engineering, Professor (40117057) |
Co-Investigator(Kenkyū-buntansha) |
HAGHIGHAT Mohammad Intel Corporation, Senior Researcher
VEIDENBAUM Alex University of Illinois at Chicago, Department of Electrical Engineering and Computer Science, Associate Professor
POLYCHRONOPOULOS Constantine University of Illinois at Urbana-Champaign, Department of Electrical and Computer Engineering, Associate Professor
YAMAMOTO Kazuhiko Nara Institute of Science and Technology, Graduate School of Information Science, Research Associate (50263439)
SASAKURA Mariko Okayama University, Faculty of Engineering, Research Associate (30284087)
OKAMURA Kouji Kobe University, Information Processing Center, Research Associate (70252830)
SATO Syuuko Kyushu University, Graduate School of Information Science and Electrical Engineering, Associate Professor (20225999)
SAISSYO Keizo Nara Institute of Science and Technology, Graduate School of Information Science, Associate Professor (50170486)
HIRABARU Masaki Nara Institute of Science and Technology, Graduate School of Information Science, Associate Professor (10192717)
FUKUDA Akira Nara Institute of Science and Technology, Graduate School of Information Science, Professor (80165282)
|
Project Period (FY) | 1994 – 1996 |
Keywords | Parallelizing Compiler / Data Partitioning / Visualization / Scheduling / Performance Evaluation / Instruction Level Parallelism / Distributed Processing / Intermediate Representation |
Research Abstract |
During 1996, we had four research topics: 1) visualization for parallelizing compilers, 2) estimating parallel execution of loops with loop-carried dependences, 3) a parallelizing compiler for distributed-memory parallel computers, and 4) performance-oriented parallelizing compilers.
1. We have developed NaraView, a 3D visualization system that supports parallel programming and parallelizing compilers. NaraView provides 3D views of the structure of parallel programs in terms of program flow, parallelism, and loop nests, and represents data dependences by showing loop iterations and the memory allocation of shared data simultaneously.
2. We have proposed a method to estimate the parallel execution time of loops with loop-carried dependences, and validated it with a sufficient number of experiments. The main advantage of this method is that the computational cost of the estimation is independent of the number of iterations of the loops. We achieved this by reducing the problem to an integer linear programming problem (see the sketch following this summary).
3. We have developed a tool that translates parallel programs for shared-memory parallel computers into parallel forms for distributed systems. The combination of a parallelizing compiler, Parafrase-2, and our tool thus constitutes a parallel and distributed compiler. We are now investigating optimizations such as data distribution.
4. We have developed an analytic model, called the Semi-Markov Memory and Cache Coherence Interference model (SMCI model), which can predict the performance of cache-coherent parallel computers at extremely low computational cost. The model can be applied to both invalidation-based and update-broadcast-based cache coherence protocols. The SMCI model is a key technique for constructing a performance-oriented parallelizing compiler (see the sketch following this summary).
|
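To make the integer-programming reduction in topic 2 concrete, here is a minimal, hypothetical sketch rather than the report's actual formulation: it bounds a DOACROSS-style loop by an integer initiation interval derived from its loop-carried dependences, so the size of the optimization problem depends only on the number of dependences, never on the iteration count. The dependence latencies, distances, cycle counts, and the use of the PuLP solver are all illustrative assumptions.

```python
# Hypothetical sketch (not the report's method): estimate the parallel execution
# time of a loop with loop-carried dependences via a tiny integer program whose
# size is independent of the iteration count.  Requires the PuLP package.
from pulp import LpProblem, LpVariable, LpMinimize, value

# Assumed loop description: (latency in cycles, dependence distance in iterations).
dependences = [(4, 1), (6, 2), (3, 1)]
body_cycles = 10           # assumed cycles for one iteration body
n_iterations = 1_000_000   # appears only in the closed-form estimate below

prob = LpProblem("initiation_interval", LpMinimize)
ii = LpVariable("II", lowBound=1, cat="Integer")  # cycles between successive iteration starts
prob += ii  # objective: start iterations as early as the dependences allow
for latency, distance in dependences:
    # A value produced in iteration i must be ready before iteration i + distance uses it.
    prob += distance * ii >= latency

prob.solve()
estimate = (n_iterations - 1) * int(value(ii)) + body_cycles
print("II =", int(value(ii)), "estimated cycles =", estimate)
```

The point of the sketch is only that the solver sees three constraints regardless of whether the loop runs a thousand or a million iterations; the iteration count enters solely through the final closed-form estimate.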
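The SMCI model itself is not reproduced in this summary, so the following sketch only illustrates the general semi-Markov style of such performance models under invented parameters: cache-line states form a small Markov chain, the stationary distribution is weighted by assumed mean holding times, and a miss-rate estimate falls out of a few linear-algebra operations, which is why the prediction cost stays low. The states, transition probabilities, and holding times are assumptions for illustration, not values from the project.

```python
# Hypothetical sketch of a semi-Markov style cache performance model
# (not the SMCI model itself): all states and parameters are invented.
import numpy as np

states = ["Invalid", "Shared", "Modified"]
# Assumed per-reference transition probabilities between cache-line states.
P = np.array([
    [0.10, 0.60, 0.30],   # from Invalid
    [0.20, 0.70, 0.10],   # from Shared
    [0.25, 0.15, 0.60],   # from Modified
])
holding = np.array([1.0, 4.0, 6.0])  # assumed mean holding time per state (cycles)

# Stationary distribution of the embedded Markov chain: pi = pi P, sum(pi) = 1.
n = len(states)
A = np.vstack([P.T - np.eye(n), np.ones(n)])
b = np.concatenate([np.zeros(n), [1.0]])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

# Semi-Markov time-weighted occupancy of each state.
occupancy = pi * holding / np.dot(pi, holding)
miss_rate = occupancy[states.index("Invalid")]  # references that find the line Invalid
for s, p in zip(states, occupancy):
    print(f"{s:8s} occupancy = {p:.3f}")
print(f"estimated miss rate = {miss_rate:.3f}")
```

Changing the transition probabilities is how such a model would distinguish invalidation-based from update-broadcast protocols; the solve itself remains a constant-size linear system, which matches the summary's claim of inexpensive prediction.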