Project/Area Number | 26330287 |
Research Category | Grant-in-Aid for Scientific Research (C) |
Allocation Type | Multi-year Fund |
Section | General |
Research Field | Soft computing |
Research Institution | Shonan Institute of Technology |
Principal Investigator | |
Project Period (FY) | 2014-04-01 – 2017-03-31 |
Project Status | Completed (Fiscal Year 2016) |
Budget Amount | ¥3,640,000 (Direct Cost: ¥2,800,000, Indirect Cost: ¥840,000) |
Fiscal Year 2016 | ¥910,000 (Direct Cost: ¥700,000, Indirect Cost: ¥210,000) |
Fiscal Year 2015 | ¥1,170,000 (Direct Cost: ¥900,000, Indirect Cost: ¥270,000) |
Fiscal Year 2014 | ¥1,560,000 (Direct Cost: ¥1,200,000, Indirect Cost: ¥360,000) |
Keywords | Neural Networks / Learning Algorithms / Gradient Method / Quasi-Newton Method / Large-Scale Data / Gradient Learning Algorithm / Convexified Error Function / Partitioned Learning / Distributed Learning |
Outline of Final Research Achievements | The purpose of this research was to enable feedforward neural networks to approximate functions and systems that exhibit highly nonlinear behavior and involve huge amounts of data. Two lines of study were pursued to this end: partitioning of large-scale data containing highly nonlinear characteristics using statistical methods, and improvement of the robustness of the training algorithm through convexification of the error function and its decentralization. The developed algorithms aim to solve training problems whose complexity and scale are not feasible with conventional methods. Furthermore, this approach is useful for circuit modeling in design and optimization, where analytical formulas are not available or the original model is computationally too expensive. A neural model is trained once and can then be reused repeatedly, avoiding the repetitive circuit simulations that would otherwise be required whenever a change in a physical dimension forces re-simulation of the circuit structure. |
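The outline above describes partitioned training of feedforward networks with quasi-Newton (gradient-based) learning. The following is a minimal, hypothetical sketch of that general workflow, not the project's actual algorithm: the single-hidden-layer network, the toy target function, the use of SciPy's L-BFGS-B as the quasi-Newton step, and the simple prediction-averaging combination of per-partition models are all assumptions made for illustration.

```python
# Minimal sketch (assumptions, not the project's code): train one small
# feedforward network per data partition with a quasi-Newton optimizer,
# then combine the sub-models by averaging their predictions.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def unpack(theta, n_in, n_hid):
    """Split the flat parameter vector into layer weights and biases."""
    i = 0
    W1 = theta[i:i + n_in * n_hid].reshape(n_in, n_hid); i += n_in * n_hid
    b1 = theta[i:i + n_hid]; i += n_hid
    w2 = theta[i:i + n_hid]; i += n_hid
    b2 = theta[i]
    return W1, b1, w2, b2

def predict(theta, X, n_hid):
    """Forward pass: tanh hidden layer followed by a linear output unit."""
    W1, b1, w2, b2 = unpack(theta, X.shape[1], n_hid)
    return np.tanh(X @ W1 + b1) @ w2 + b2

def mse(theta, X, y, n_hid):
    """Squared-error training objective."""
    return np.mean((predict(theta, X, n_hid) - y) ** 2)

def train_partition(X, y, n_hid=10):
    """Fit one sub-model on a single data partition with L-BFGS-B
    (stands in here for the quasi-Newton learning mentioned above)."""
    n_params = X.shape[1] * n_hid + n_hid + n_hid + 1
    theta0 = 0.1 * rng.standard_normal(n_params)
    res = minimize(mse, theta0, args=(X, y, n_hid),
                   method="L-BFGS-B", options={"maxiter": 200})
    return res.x

# Toy data standing in for large-scale, highly nonlinear samples.
X = rng.uniform(-2, 2, size=(3000, 2))
y = np.sin(3 * X[:, 0]) * np.cos(2 * X[:, 1])

# Partition the data, train a sub-model per partition, average predictions.
n_hid = 10
models = [train_partition(Xp, yp, n_hid)
          for Xp, yp in zip(np.array_split(X, 3), np.array_split(y, 3))]
y_hat = np.mean([predict(m, X, n_hid) for m in models], axis=0)
print("ensemble training MSE:", np.mean((y_hat - y) ** 2))
```

Once trained, such a surrogate can be evaluated repeatedly at negligible cost, which is the reuse property the outline highlights for circuit design and optimization.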