Applying the Shared-Memory Parallel Programming Model to the Highly Parallel Cluster Computing Environment

Research Project

Project/Area Number 17500049
Research Category

Grant-in-Aid for Scientific Research (C)

Allocation Type Single-year Grants
Section General
Research Field Computer system/Network
Research Institution Kyoto Sangyo University

Principal Investigator

NIIMI Haruo  Kyoto Sangyo University, Faculty of Engineering, Professor (40144331)

Project Period (FY) 2005 – 2007
Project Status Completed (Fiscal Year 2007)
Budget Amount
¥3,500,000 (Direct Cost: ¥3,200,000, Indirect Cost: ¥300,000)
Fiscal Year 2007: ¥1,300,000 (Direct Cost: ¥1,000,000, Indirect Cost: ¥300,000)
Fiscal Year 2006: ¥900,000 (Direct Cost: ¥900,000)
Fiscal Year 2005: ¥1,300,000 (Direct Cost: ¥1,300,000)
Keywords Shared-memory Parallel Programs / Cluster Systems / Distributed-memory Parallel Systems / OpenMP / UPC / MPI
Research Abstract

The aim of this study was to reconcile two goals in highly parallel cluster computing environments by employing the shared-memory parallel programming paradigm: reducing the cost of parallel programming, and achieving sufficient efficiency in parallel execution.
First, we built a cluster system consisting of eight nodes as a platform for our experiments, and we studied OpenMP and MPI: the former is the representative shared-memory parallel programming model, while the latter is the most popular message-passing library. We then planned to develop a translator that converts an OpenMP program into an MPI distributed-memory parallel program. The essential difference between the OpenMP and MPI models lies in their data attributes. In OpenMP, data can be shared between threads, which is a major factor in reducing the programmer's burden; in MPI, all data are private and local to each process, so the greatest problem was how to bridge this gap. We also examined UPC as another example of a shared-memory parallel programming model, but concluded that UPC cannot surpass OpenMP in terms of standardization and popularity, in part because its default data attribute is private.
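
To make the data-attribute difference concrete, the fragment below is a minimal OpenMP example of the kind such a translator takes as input. This is an illustrative sketch, not code from this project: the arrays are shared by default, so every thread reads and writes the same memory with no explicit data movement.

    /* Illustrative OpenMP input (a sketch; not code from this project).
     * a[] and b[] are shared by default: every thread reads and writes
     * the same memory, so no explicit data movement is required. */
    #include <stdio.h>

    #define N 1024

    int main(void)
    {
        double a[N], b[N];
        for (int i = 0; i < N; i++)
            b[i] = (double)i;

        #pragma omp parallel for    /* i is private; a[] and b[] are shared */
        for (int i = 0; i < N; i++)
            a[i] = 2.0 * b[i];

        printf("a[%d] = %f\n", N - 1, a[N - 1]);
        return 0;
    }
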
We used the translator developed in this study to convert several kinds of OpenMP sample programs into MPI programs and evaluated their parallel execution efficiency. For programs with high data locality, we confirmed that the translated code achieved sufficient parallel execution efficiency, higher even than that of virtual (software-based) implementations of distributed-shared-memory systems.
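
For comparison, a hand-written MPI counterpart of the OpenMP loop above might look as follows. This is only a sketch of the kind of code such a translation produces, not the project's actual generated code: b[] is replicated on every process, a[] is block-distributed, and the owned blocks are gathered explicitly. Because each process touches only its own contiguous block during the computation, a loop with high locality needs no communication in the compute phase, which is why such programs translate efficiently.

    /* Sketch of an MPI counterpart to the OpenMP loop above (illustrative;
     * not the project's generated code). All data are private to each
     * process, so sharing must be expressed as replication plus explicit
     * message passing. Assumes the number of processes divides N evenly. */
    #include <stdio.h>
    #include <mpi.h>

    #define N 1024

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        int chunk = N / size;           /* block owned by each process */
        double b[N], a[N];
        double local_a[chunk];          /* this process's block of a[] */

        for (int i = 0; i < N; i++)     /* b[] is replicated everywhere */
            b[i] = (double)i;

        int lo = rank * chunk;          /* compute a[lo .. lo+chunk-1] locally */
        for (int i = 0; i < chunk; i++)
            local_a[i] = 2.0 * b[lo + i];

        /* Collect the distributed blocks into a full array on rank 0. */
        MPI_Gather(local_a, chunk, MPI_DOUBLE, a, chunk, MPI_DOUBLE,
                   0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("a[%d] = %f\n", N - 1, a[N - 1]);

        MPI_Finalize();
        return 0;
    }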

Report

(4 results)
  • 2007 Annual Research Report
  • Final Research Report Summary
  • 2006 Annual Research Report
  • 2005 Annual Research Report

Published: 2005-04-01   Modified: 2016-04-21  
