Budget Amount
¥13,900,000 (Direct Cost: ¥13,900,000)
Fiscal Year 2001: ¥3,500,000 (Direct Cost: ¥3,500,000)
Fiscal Year 2000: ¥4,800,000 (Direct Cost: ¥4,800,000)
Fiscal Year 1999: ¥5,600,000 (Direct Cost: ¥5,600,000)
Research Abstract
First, we built a heterogeneous parallel processing environment consisting of four workstations (WSs), one of which is an SMP machine with four processors, and ten personal computers (PCs), all interconnected via 100 Mbps Ethernet. The WSs use SPARC processors and run the Solaris operating system (OS), while the PCs use various models of Intel x86-family processors and run several different Linux distributions. To verify this parallel processing environment, we installed the PVM and MPI message-passing libraries and developed several parallel programs using them.

Since we primarily emphasized the performance of parallel programs, we preferred the message-passing paradigm using MPI functions. Through the development of several SPMD (Single Program, Multiple Data-streams) parallel programs, we analyzed how data should be distributed and how the resulting communication patterns should be optimized, and we investigated the effectiveness and adequacy of this programming paradigm. These investigations convinced us that SPMD parallel programming is efficient to a certain extent; however, the productivity of medium- and large-scale SPMD programs is intolerably low, and their readability is rather poor. On the other hand, we noted the effectiveness of the physically distributed, logically shared memory paradigm, with OpenMP as the strongest candidate for a de facto standard of this paradigm.

We also proposed novel schemes for dynamic load balancing (and/or optimization) among the many processes within a parallel program; these schemes exploit load information such as the load growth rate and its acceleration. Finally, we developed a new message-passing library that reduces the communication overheads within the TCP/IP protocol stack and in some of the communication functions specified by MPI. Simulation results showed that this new library performs fairly well compared with some existing MPI libraries.
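The data-distribution analysis mentioned above typically starts from a block decomposition, where each process in an SPMD program owns a contiguous slice of the data. The abstract does not give the actual distribution scheme, so the following is only a minimal illustrative sketch (in Python for brevity; the programs described were built on MPI) of a standard block partition that spreads any remainder over the lowest-ranked processes:

```python
def block_partition(n, nprocs, rank):
    """Return the half-open index range [start, stop) that process
    `rank` owns when n items are block-distributed over nprocs
    processes; the first (n % nprocs) ranks get one extra item."""
    base, extra = divmod(n, nprocs)
    start = rank * base + min(rank, extra)
    stop = start + base + (1 if rank < extra else 0)
    return start, stop

# Example: 10 items over 4 processes -> sizes 3, 3, 2, 2
ranges = [block_partition(10, 4, r) for r in range(4)]
```

In an SPMD program every process runs this same code and uses its own rank to pick its slice, which is what makes the communication pattern (who exchanges boundary data with whom) derivable from the distribution itself.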
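The load-balancing schemes are described as using the load growth rate and its acceleration. The concrete formulas are not given in the abstract, so the sketch below is a hypothetical illustration of the general idea: estimate rate and acceleration from finite differences of recent load samples, then extrapolate the load to decide whether a process is becoming overloaded.

```python
def load_trend(samples, dt=1.0):
    """Estimate current load, its growth rate (first difference), and
    its acceleration (second difference) from recent load samples
    taken at interval dt. Illustrative only."""
    if len(samples) < 3:
        raise ValueError("need at least three samples")
    load = samples[-1]
    rate = (samples[-1] - samples[-2]) / dt
    accel = (samples[-1] - 2 * samples[-2] + samples[-3]) / dt ** 2
    return load, rate, accel

def predicted_load(samples, horizon, dt=1.0):
    """Extrapolate the load `horizon` time units ahead using the
    estimated rate and acceleration (second-order extrapolation)."""
    load, rate, accel = load_trend(samples, dt)
    return load + rate * horizon + 0.5 * accel * horizon ** 2
```

The point of using the rate and acceleration rather than the instantaneous load alone is that migration can be triggered before a process actually saturates, while its load is still rising.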
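The abstract does not describe the mechanism by which the new library reduces TCP/IP and MPI communication overheads. As one generic, illustrative technique in this space (not the library's actual design), small messages can be coalesced into a single larger transmission to amortize per-message costs such as system calls and protocol headers; all names below are hypothetical:

```python
class CoalescingSender:
    """Illustrative sketch: buffer small messages and flush them as one
    batch. `transport` is any callable taking one bytes object (e.g. a
    socket send wrapper). Not the actual mechanism of the library
    described in the abstract."""

    def __init__(self, transport, threshold=4096):
        self.transport = transport
        self.threshold = threshold
        self.buffer = bytearray()

    def send(self, payload: bytes):
        # Length-prefix each message so the receiver can split the batch.
        self.buffer += len(payload).to_bytes(4, "big") + payload
        if len(self.buffer) >= self.threshold:
            self.flush()

    def flush(self):
        # Push the accumulated batch through the transport in one call.
        if self.buffer:
            self.transport(bytes(self.buffer))
            self.buffer.clear()
```

The trade-off is latency versus throughput: buffering delays individual messages, so a real library would bound the delay (or flush at synchronization points) rather than wait indefinitely for the threshold.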