2007 Fiscal Year Final Research Report Summary
Nonlinear dynamic optimization theory on stochastic model and its application to mathematical finance
Project/Area Number | 17540121 |
Research Category | Grant-in-Aid for Scientific Research (C) |
Allocation Type | Single-year Grants |
Section | General |
Research Field | General mathematics (including Probability theory/Statistical mathematics) |
Research Institution | Kochi University |
Principal Investigator | OHTSUBO Yoshio, Kochi University, Faculty of Science, Professor (20136360) |
Co-Investigators (Kenkyū-buntansha) | YASUDA Masami, Chiba University, Faculty of Science, Professor (00041244); IWAMOTO Seiichi, Kyushu University, Faculty of Economics, Professor (90037284); NOMAKUCHI Kentaro, Kochi University, Faculty of Science, Professor (60124806) |
Project Period (FY) | 2005 – 2007 |
Keywords | nonlinear criteria / optimization theory / Markov decision process / Fuzzy decision process / dynamic programming / shortest path problem / statistical inference / Golden ratio |
Research Abstract |
The summary of research results is as follows.
1. We consider multistage decision processes in which the criterion function is the expectation of a minimum function, and formulate them as Markov decision processes with imbedded parameters. The policy depends on a history including past imbedded parameters, and the reward at each stage is random, depending on the current state, the current action, and the next state. We give an optimality equation in terms of operators and show that there exists a right-continuous deterministic Markov policy depending on the current state and the imbedded parameter.
2. We consider Markov decision processes with a target set, where the criterion function is the expectation of a minimum function. We formulate the problem as an infinite-horizon case with a recurrent class. We show, under some conditions, that the optimal value function is the unique solution of an optimality equation and that a stationary optimal policy exists. We also give a policy improvement method.
3. We consider a stochastic shortest path problem with associative criteria, in which for each node of a graph we choose a probability distribution over the set of successor nodes so as to reach a given target node optimally. We formulate such a problem as an associative Markov decision process. We show that the optimal value function is the unique solution of an optimality equation and find an optimal stationary policy. We also give a value iteration method and a policy improvement method.
4. We consider utility-constrained Markov decision processes, in which the expected utility of the total discounted reward is maximized subject to multiple expected utility constraints. By introducing a corresponding Lagrange function, a saddle-point theorem for the utility-constrained optimization is derived. The existence of a constrained optimal policy is characterized by optimal action sets specified with a parametric utility.
5. We consider an inequality in which one side is greater than or equal to a multiple of the other side, with equality if and only if one variable is a multiple of the other. We show a cross-duality among four pairs of Golden inequalities for one-variable functions.
|
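Result 5 describes the Golden inequalities only in general terms; the four pairs themselves are not reproduced in this summary. A representative inequality of the stated pattern (an illustration only, not necessarily one of the paper's four pairs), with the golden ratio satisfying φ² = φ + 1:

```latex
% Illustrative inequality of the "Golden" pattern described in result 5.
\[
  x^{2} + \varphi^{2} y^{2} \;\ge\; 2\varphi\, x y,
  \qquad \varphi = \frac{1+\sqrt{5}}{2},
\]
% with equality if and only if $x = \varphi y$, since the difference of the
% two sides is exactly $(x - \varphi y)^{2} \ge 0$.
```

Here one side dominates a multiple (2φ) of the other side, and equality holds exactly when one variable is a golden multiple of the other, matching the abstract's description of the inequality condition.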
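Result 3 above mentions a value iteration method and an optimal stationary policy for a stochastic shortest path problem. As a rough sketch of how value iteration operates on such a problem, the following uses a hypothetical four-node graph and the classical expected-total-cost criterion (a simplification; the report's associative criteria are more general):

```python
# Value iteration on a toy stochastic shortest path problem.
# NOTE: the graph, costs, and probabilities are hypothetical; the
# expected-total-cost criterion here stands in for the report's
# associative criteria.

# States 0, 1, 2, with target node 3 (absorbing, zero cost).
# ACTIONS[s] is a list of (one-step cost, successor distribution).
ACTIONS = {
    0: [(1.0, {1: 0.8, 2: 0.2}),   # cheap but indirect edge
        (4.0, {3: 1.0})],          # expensive direct edge to the target
    1: [(1.0, {3: 0.9, 0: 0.1})],
    2: [(2.0, {3: 1.0})],
}
TARGET = 3


def q_value(action, v):
    """Expected one-step cost plus expected cost-to-go."""
    cost, dist = action
    return cost + sum(p * v[t] for t, p in dist.items())


def value_iteration(tol=1e-10, max_iter=10_000):
    """Iterate the optimality equation v(s) = min_a Q(s, a) to its fixed point."""
    v = {s: 0.0 for s in (*ACTIONS, TARGET)}
    for _ in range(max_iter):
        delta = 0.0
        for s, acts in ACTIONS.items():
            best = min(q_value(a, v) for a in acts)
            delta = max(delta, abs(best - v[s]))
            v[s] = best
        if delta < tol:
            break
    # Greedy stationary policy: index of the minimizing action per state.
    policy = {s: min(range(len(acts)), key=lambda i: q_value(acts[i], v))
              for s, acts in ACTIONS.items()}
    return v, policy
```

In this toy instance the optimality equation's unique solution gives v(0) = 2.2/0.92 ≈ 2.39 < 4, so the greedy stationary policy at node 0 prefers the indirect edge over the direct cost-4 edge.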
Research Products (51 results)