1996 Fiscal Year Final Research Report Summary

Rule extraction by a structural learning of neural networks

Research Project

Project/Area Number 07680404
Research Category

Grant-in-Aid for Scientific Research (C)

Allocation Type: Single-year Grants
Section: General
Research Field: Intelligent informatics
Research Institution: Kyushu Institute of Technology

Principal Investigator

ISHIKAWA Masumi, Kyushu Institute of Technology, Dept. of Control Engineering & Science, Professor (60222973)

Co-Investigator (Kenkyū-buntansha): ZHANG Hong, Kyushu Institute of Technology, Dept. of Control Engineering & Science, Research Assistant (30235709)
Project Period (FY) 1995 – 1996
Keywords: neural networks / rule extraction / structural learning / information criterion
Research Abstract

In extracting rules from continuous valued inputs, the balance between mean square output error and the complexity of rules is important. Information criteria such as AIC represent this trade-off. In structural learning with forgetting (SLF), the amount of forgetting is determined by minimizing AIC. However, SLF alone cannot produce rules of appropriate complexity. To overcome this difficulty, neural networks of various degrees of complexity are trained; the degree of complexity is defined here by the maximum number of incoming connections to each hidden unit. From among these networks, the one with the smallest AIC is selected as optimal. Since the outputs of hidden units are binary owing to learning with hidden unit clarification, the incoming connection weights to each hidden unit determine a corresponding discriminating hyperplane. A logical combination of these hyperplanes provides rules. Furthermore, the method is compared with C4.5, which is popular in machine learning, and with the KT method proposed by Fu. C4.5 and the KT method can only produce rules with a single attribute in each term, whereas the proposed method can produce rules of various complexities.
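The two core ingredients above — a forgetting (weight-decay) term during training and AIC-based selection among candidate networks — can be sketched as follows. This is a hypothetical minimal illustration under a Gaussian error model, not the authors' implementation; the function names are invented for exposition:

```python
import numpy as np

def slf_update(w, grad, lr=0.1, eps=0.001):
    # One structural-learning-with-forgetting step: a gradient-descent
    # update plus a constant decay toward zero. The decay drives
    # connections that do not help reduce the error to (near) zero,
    # effectively pruning the network's structure during training.
    return w - lr * grad - eps * np.sign(w)

def aic(mse, n_samples, n_params):
    # Akaike Information Criterion under a Gaussian error assumption:
    # n * ln(MSE) + 2k. Smaller is better; the 2k term penalizes
    # complexity (here, the number of surviving connections).
    return n_samples * np.log(mse) + 2 * n_params
```

In the selection step described in the abstract, networks of increasing complexity would each be trained with `slf_update`, and the one minimizing `aic` would be kept.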
The first task is to divide a two-dimensional plane into two categories. In this case, rules with only one attribute in each term are not a natural representation. C4.5 generates many simple rules, but the proposed method can explain all the data with 6 rules having two attributes. The second task is the classification of irises into 3 categories: setosa, versicolor, and virginica. Three rules with at most 3 attributes can explain 148 of the 150 samples. The third task is the diagnosis of thyroid function into 3 classes: normal, hypofunctioning, and hyperfunctioning. In this case, 4 rules with at most 2 attributes can explain all 215 samples. Furthermore, the number of classification errors is smaller than with C4.5 or the KT method.
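Because clarification training makes each hidden unit binary, reading a rule term off a trained unit amounts to thresholding its discriminating hyperplane w·x + b ≥ 0. A minimal sketch, assuming hypothetical attribute names in the style of the iris task (the functions are illustrative, not the authors' code):

```python
import numpy as np

def unit_fires(weights, bias, x):
    # Binary hidden-unit output: with hidden-unit clarification the unit
    # acts as a threshold on the discriminating hyperplane w.x + b >= 0.
    return float(np.dot(weights, x) + bias >= 0)

def hyperplane_rule(weights, bias, names):
    # Render the hyperplane as a readable rule term; connections pruned
    # to zero by forgetting simply drop out of the condition.
    terms = " ".join(f"{w:+.2f}*{n}" for w, n in zip(weights, names) if w != 0)
    return f"{terms} {bias:+.2f} >= 0"
```

A logical combination (AND/OR) of such conditions across hidden units yields the extracted rules; because several weights may be nonzero, each term can involve multiple attributes, unlike C4.5's single-attribute tests.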
Concerning rule extraction from mixed continuous and discrete inputs, and from continuous inputs and outputs, satisfactory results have not yet been obtained due to inherent difficulty. These are left for further study.

  • Research Products

    (14 results)


  • [Publications] Masumi Ishikawa: "Structural learning with forgetting" Neural Networks. 9. 509-521 (1996)

    • Description
      From the Final Research Report Summary (Japanese)
  • [Publications] Masumi Ishikawa: "Neural networks approach to rule extraction" ANNES'95. 6-9 (1995)

    • Description
      From the Final Research Report Summary (Japanese)
  • [Publications] Masumi Ishikawa: "Structural learning and knowledge acquisition" International Conference on Neural Networks (ICNN'96), Plenary, Panel and Special Sessions. 100-105 (1996)

    • Description
      From the Final Research Report Summary (Japanese)
  • [Publications] Masumi Ishikawa: "Rule extraction by successive regularization" International Conference on Neural Networks (ICNN'96). 1139-1143 (1996)

    • Description
      From the Final Research Report Summary (Japanese)
  • [Publications] Masumi Ishikawa: "Structural learning and Bayesian regularization in neural networks" Progress in Neural Information Processing (ICONIP'96). 1377-1380 (1996)

    • Description
      From the Final Research Report Summary (Japanese)
  • [Publications] Masumi Ishikawa, Hiroki Ueda: "Structural learning approach to rule discovery from data with continuous valued inputs" Progress in Connectionist-Based Information Systems, Proceedings of the 1997 International Conference on Neural Information Processing. 2. 898-901 (1997)

    • Description
      From the Final Research Report Summary (Japanese)
  • [Publications] Masumi Ishikawa: "Brain-Like Computing and Intelligent Information Systems" Springer, 20 (1998)

    • Description
      From the Final Research Report Summary (Japanese)
  • [Publications] Masumi Ishikawa: "Structural learning with forgetting" Neural Networks. Vol.9, No.3. 509-521 (1996)

    • Description
      From the Final Research Report Summary (English)
  • [Publications] Masumi Ishikawa: "Neural networks approach to rule extraction" ANNES'95. 6-9 (1995)

    • Description
      From the Final Research Report Summary (English)
  • [Publications] Masumi Ishikawa: "Structural learning and knowledge acquisition" International Conference on Neural Networks (ICNN'96), Plenary, Panel and Special Sessions. 100-105 (1996)

    • Description
      From the Final Research Report Summary (English)
  • [Publications] Masumi Ishikawa: "Rule extraction by successive regularization" International Conference on Neural Networks (ICNN'96). 1139-1143 (1996)

    • Description
      From the Final Research Report Summary (English)
  • [Publications] Masumi Ishikawa: "Structural learning and Bayesian regularization in neural networks" Progress in Neural Information Processing (ICONIP'96). 1377-1380 (1996)

    • Description
      From the Final Research Report Summary (English)
  • [Publications] Masumi Ishikawa, Hiroki Ueda: "Structural learning approach to rule discovery from data with continuous valued inputs" Progress in Connectionist-Based Information Systems, Proceedings of the 1997 International Conference on Neural Information Processing and Intelligent Information Systems. Vol.2. Springer, 898-901 (1997)

    • Description
      From the Final Research Report Summary (English)
  • [Publications] Masumi Ishikawa: "Structural Learning and Rule Discovery from Data" S.Amari and N.Kasabov (Eds.), Brain-Like Computing and Intelligent Information Systems. Springer, 396-415 (1998)

    • Description
      From the Final Research Report Summary (English)

Published: 1999-03-16  
