Budget Amount
¥2,500,000 (Direct Cost: ¥2,500,000)
Fiscal Year 2004: ¥600,000 (Direct Cost: ¥600,000)
Fiscal Year 2003: ¥600,000 (Direct Cost: ¥600,000)
Fiscal Year 2002: ¥1,300,000 (Direct Cost: ¥1,300,000)
Research Abstract
The overall subject of this project is the investigation of appropriate methods for the practical use of asymptotically fast algorithms in computer algebra and the development of efficient software, with major emphasis on the basic operations. More concretely, taking vectors and matrices as the computational targets, we developed efficient software by making use of various algorithms and programming techniques.

First, we investigated the Strassen-type fast algorithm for matrix multiplication, in order to clarify the conditions for and the reasons behind its speed. Even a detailed analysis based on counting arithmetic operations cannot completely explain the remarkable speed observed in some cases; we noticed instead that computing time is closely related to space complexity. As a rule of thumb, we learned that the fast algorithm pays off when the multiplication of matrix elements is much costlier than the additive operations, and when the additive operations do not change the sparseness of the expressions and hence their cost of multiplication. Modular arithmetic is a typical example of this kind.
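To make the trade-off concrete, the following is a minimal one-level sketch of the Strassen scheme for a 2x2 matrix, written in plain C with double entries; it merely restates the classical formulas and is not the project's code. In the setting discussed here the entries would be matrix blocks, residues, or symbolic expressions whose multiplication is expensive. Seven element multiplications replace the usual eight, at the price of eighteen additions and subtractions, which is why the scheme only pays off when multiplying entries is much costlier than adding them.

/* One level of Strassen's scheme on a 2x2 matrix [a b; c d].
 * In practice the entries would be blocks or symbolic/modular values;
 * double is used here only to keep the sketch self-contained. */
typedef struct { double a, b, c, d; } mat2;

static mat2 strassen2x2(mat2 X, mat2 Y)
{
    /* 7 multiplications ... */
    double m1 = (X.a + X.d) * (Y.a + Y.d);
    double m2 = (X.c + X.d) * Y.a;
    double m3 = X.a * (Y.b - Y.d);
    double m4 = X.d * (Y.c - Y.a);
    double m5 = (X.a + X.b) * Y.d;
    double m6 = (X.c - X.a) * (Y.a + Y.b);
    double m7 = (X.b - X.d) * (Y.c + Y.d);
    /* ... recombined using additions and subtractions only */
    mat2 Z = {
        m1 + m4 - m5 + m7,   /* top-left     */
        m3 + m5,             /* top-right    */
        m2 + m4,             /* bottom-left  */
        m1 - m2 + m3 + m6    /* bottom-right */
    };
    return Z;
}

Applied recursively to n-by-n matrices this yields the well-known O(n^2.81) bound, but, as noted above, the benefit disappears when element multiplications are as cheap as additions or when the extra additions destroy sparsity.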
Our experiments reinforced the need, of which we had long been aware, for an efficient library of basic linear-algebra operations for modular arithmetic, in the spirit of BLAS, which would have a wide variety of applications. Our second topic was therefore the development of such software, called MBLAS. We defined a set of subroutines for linear algebra, in analogy with BLAS, and designed an appropriate interface for various applications, including dense univariate polynomial arithmetic. With modular arithmetic, the removal of division is a key to speed-up, especially for matrix and vector operations. We explored several techniques to this end, such as the use of tables, the conversion of division into multiplication, and the use of overflow as a manageable value (one such reduction is sketched after this abstract). We also experimented with vector processing, using short-vector SIMD instructions on streaming data (also sketched below). Our extensive empirical study indicated that the effect of these techniques depends heavily on the hardware.

A further finding is that the fast matrix-multiplication algorithm is not suited to sparse matrices, even with a matrix representation tailored to them. Throughout these experiments we also noticed that, as far as symbolic computation is concerned, the use of plain arrays for vectors and matrices has almost no effect. The investigation of appropriate matrix representations thus became the third subject of our project and is left as a topic for future study.

Another investigator developed high-level algorithms and efficient software for algebraic computation, in addition to maintaining and improving the computer algebra system Risa/Asir as one of its chief developers. His work includes a continuing effort to improve the Groebner-basis package, the design and implementation of an algorithm for factoring multivariate polynomials over finite fields, and a modular method for dynamic evaluation.
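As an illustration of the division-removal idea mentioned in the abstract, here is a minimal sketch, not MBLAS itself: a word-size modular multiplication in C in which the hardware division in (a*b) % p is replaced by a multiplication with a reciprocal precomputed once per modulus (a Barrett-style reduction). The names, the bound p < 2^31, and the reliance on the __uint128_t extension of GCC/Clang are assumptions made only for this sketch.

#include <stdint.h>

typedef struct {
    uint64_t p;    /* modulus, assumed an odd word-size prime with p < 2^31 */
    uint64_t inv;  /* floor(2^64 / p), precomputed once per modulus */
} mod_ctx;

static mod_ctx mod_init(uint64_t p)
{
    /* UINT64_MAX / p equals floor(2^64 / p) for any odd p > 1 */
    mod_ctx c = { p, UINT64_MAX / p };
    return c;
}

static uint64_t mod_mul(const mod_ctx *c, uint64_t a, uint64_t b)
{
    uint64_t x = a * b;                                        /* a, b < p < 2^31, so x < 2^62 */
    uint64_t q = (uint64_t)(((__uint128_t)x * c->inv) >> 64);  /* q approximates x / p */
    uint64_t r = x - q * c->p;                                 /* 0 <= r < 2p                 */
    return (r >= c->p) ? r - c->p : r;                         /* one conditional correction  */
}

The precomputed reciprocal here plays the role of a one-entry table; for small moduli, a full table of products or of discrete logarithms, as mentioned among the techniques above, is another way to avoid the division entirely.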
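The short-vector SIMD experiments can likewise be illustrated by a small sketch, again not the project's code: componentwise addition of residue vectors modulo p with SSE2 intrinsics, four 32-bit elements per instruction and the reduction done without branches. It assumes an SSE2-capable x86 processor and p < 2^30, so that sums stay within the signed 32-bit range; as the abstract notes, whether such a kernel actually helps depends strongly on the hardware.

#include <emmintrin.h>   /* SSE2 intrinsics */
#include <stdint.h>

/* z[i] = (x[i] + y[i]) mod p, assuming 0 <= x[i], y[i] < p < 2^30 */
void vec_add_mod(int32_t *z, const int32_t *x, const int32_t *y,
                 int n, int32_t p)
{
    __m128i vp = _mm_set1_epi32(p);
    int i;
    for (i = 0; i + 4 <= n; i += 4) {
        __m128i a = _mm_loadu_si128((const __m128i *)(x + i));
        __m128i b = _mm_loadu_si128((const __m128i *)(y + i));
        __m128i t = _mm_add_epi32(a, b);          /* 0 <= t < 2p, no overflow     */
        __m128i lt = _mm_cmpgt_epi32(vp, t);      /* all-ones lanes where t < p   */
        __m128i corr = _mm_andnot_si128(lt, vp);  /* p in the lanes where t >= p  */
        _mm_storeu_si128((__m128i *)(z + i), _mm_sub_epi32(t, corr));
    }
    for (; i < n; i++) {                          /* scalar tail */
        int32_t t = x[i] + y[i];
        z[i] = (t >= p) ? t - p : t;
    }
}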