Project/Area Number | 12680327 |
Research Category | Grant-in-Aid for Scientific Research (C) |
Allocation Type | Single-year Grants |
Section | General |
Research Field | Computer Science |
Research Institution | University of Tsukuba |
Principal Investigator | BOKU Taisuke, University of Tsukuba, Institute of Information Sciences and Electronics, Associate Professor (90209346) |
Co-Investigator (Kenkyū-buntansha) | NAKAMURA Hiroshi, University of Tokyo, Research Center for Advanced Science and Technology, Associate Professor (20212102) |
Project Period (FY) | 2000 – 2001 |
Project Status | Completed (Fiscal Year 2001) |
Budget Amount | ¥3,800,000 (Direct Cost: ¥3,800,000)
Fiscal Year 2001: ¥1,800,000 (Direct Cost: ¥1,800,000)
Fiscal Year 2000: ¥2,000,000 (Direct Cost: ¥2,000,000) |
Keywords | hybrid programming / SMP cluster / MPI / OpenMP / parallelization paradigm |
Research Abstract |
In this research, we constructed a PC cluster connecting multiple SMP-based PC nodes with various network interfaces. Such an SMP cluster combines shared-memory and distributed-memory architectures, so several programming styles are possible: message passing, shared memory, or a mixture of the two. Our experimental cluster contains 4-way and 2-way SMP nodes of Pentium-III processors, and two types of interconnection network are available: Myrinet800 and Fast Ethernet.

We evaluated and analyzed the performance of hybrid programming, which mixes MPI and OpenMP, against message-passing programming with MPI alone, using the NAS Parallel Benchmarks as basic benchmarks and the SPAM (Smoothed Particle Applied Mechanics) particle code as an actual scientific program. Contrary to our preliminary estimation, MPI-only programming achieved better performance in most of these programs. To analyze these results, we measured cache hit ratios to study cache behavior under each programming style. As a result, we found that a hybrid program using MPI for inter-node communication and OpenMP for intra-node multithreading often breaks the well-tuned cache utilization of the MPI-only program.

This research concludes that the performance of a hybrid program is strongly affected by its data access pattern, and that applying OpenMP multithreading to a finished MPI-only program is not always the best scheme. In some cases, the communication-time advantage of direct access to shared memory on an SMP node is outweighed by the disadvantage of such cache inefficiency. However, when a program naturally exhibits load imbalance at a certain granularity, a hybrid program with dynamic load balancing can outperform the MPI-only one; the SPAM particle code is one such application.
|