A Study of a High-Speed and Highly-Functional Instruction Feeding Mechanism for the VLSI Architecture
Project/Area Number | 12680325 |
Research Category | Grant-in-Aid for Scientific Research (C) |
Allocation Type | Single-year Grants |
Section | General |
Research Field | Computer Science |
Research Institution | Miyagi National College of Technology (2001-2002); Tohoku University (2000) |
Principal Investigator |
SUZUKI Ken-ichi (2001-2002) Miyagi National College of Technology, Department of Information and Design, Lecturer (50300520)
KOBAYASHI Hiroaki (2000) Tohoku University, Graduate School of Information Sciences, Associate Professor (40205480)
|
Co-Investigator (Kenkyū-buntansha) |
NAKAMURA Tadao, Tohoku University, Graduate School of Information Sciences, Professor (80005454)
SUZUKI Ken-ichi, Miyagi National College of Technology, Department of Information and Design, Lecturer (50300520)
|
Project Period (FY) | 2000 – 2002 |
Project Status | Completed (Fiscal Year 2002) |
Budget Amount |
¥3,500,000 (Direct Cost: ¥3,500,000)
Fiscal Year 2002: ¥1,600,000 (Direct Cost: ¥1,600,000)
Fiscal Year 2001: ¥1,200,000 (Direct Cost: ¥1,200,000)
Fiscal Year 2000: ¥700,000 (Direct Cost: ¥700,000)
|
Keywords | Computer architecture / VLIW / Cache memory / Memory / VLIW architecture / Instruction issue mechanism / Instruction cache / MULHI cache |
Research Abstract |
The VLIW architecture, which is one of the most promising candidates for next-generation microprocessors, executes many instructions in parallel and therefore requires a high-performance memory system that can supply a huge number of instructions from the main memory to its functional units in a short time. We introduce a high-performance instruction cache mechanism devoted to the VLIW architecture, named the MULHI (MULtiple HIt) cache. The MULHI cache achieves a high cache hit ratio by eliminating unnecessary "nop" instructions from its cache memory array, which makes it possible to build a high-bandwidth memory system. The MULHI cache is based on the same concept as the COMPRESS cache and the SILO cache in that it eliminates nops from the data array. However, only the MULHI cache can apply cache associativity to its cache management policy to obtain a higher cache hit ratio. Using software simulations, we evaluate the MULHI cache miss ratio and show that it achieves a higher OPC (Operations Per Cycle) than the other cache mechanisms. Moreover, we carry out a detailed hardware design, which shows that the overhead of the MULHI cache control logic circuits is small. Consequently, the MULHI cache architecture is highly feasible for implementing a high-speed memory system for VLIW processors. Finally, as a new application of cache memory, we evaluate a real-time ray tracing system, which is remarkably powerful for rendering images.
|
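The core idea described in the abstract is that a VLIW word whose slots are mostly nops can be stored in compressed form and expanded on a hit, so the same cache data array holds more useful operations. The Python sketch below illustrates that idea only in the abstract's terms; the class names, line format, issue width, and LRU replacement policy are assumptions for illustration and are not taken from the project's actual hardware design.

# A minimal, hypothetical sketch of nop elimination in a MULHI-style
# set-associative VLIW instruction cache. All names and policies here
# are illustrative assumptions, not the project's real design.

NOP = "nop"
ISSUE_WIDTH = 4  # number of functional-unit slots per VLIW word (assumed)


def compress(vliw_word):
    """Keep only the non-nop operations plus a per-slot presence mask."""
    mask = [op != NOP for op in vliw_word]
    ops = [op for op in vliw_word if op != NOP]
    return mask, ops


def expand(mask, ops):
    """Rebuild the full VLIW word by re-inserting nops where the mask is False."""
    it = iter(ops)
    return [next(it) if present else NOP for present in mask]


class MulhiLikeSet:
    """One set of a set-associative cache whose ways hold compressed words."""

    def __init__(self, ways=2):
        self.ways = ways
        self.lines = []  # list of (tag, mask, ops); most recently used is last

    def lookup(self, tag):
        for i, (t, mask, ops) in enumerate(self.lines):
            if t == tag:
                self.lines.append(self.lines.pop(i))  # LRU update (assumed policy)
                return expand(mask, ops)              # hit: expand back to full word
        return None  # miss

    def fill(self, tag, vliw_word):
        if len(self.lines) == self.ways:
            self.lines.pop(0)  # evict the least recently used line
        mask, ops = compress(vliw_word)
        self.lines.append((tag, mask, ops))


# Usage: a word with two real operations occupies only two operation slots
# in the data array, yet the original four-slot word is returned on a hit.
cache_set = MulhiLikeSet(ways=2)
word = ["add r1,r2,r3", NOP, "ld r4,(r5)", NOP]
cache_set.fill(0x40, word)
assert cache_set.lookup(0x40) == word

The sketch models the two points the abstract emphasizes: the data array stores compressed words (no nops), and lookup is performed over multiple ways of a set, which is where the cache associativity mentioned in the abstract comes in.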