There are many methods by which we change the vector U from one iteration to the next, as follows:
1) First gradient method, and its modifications
2) Optimal gradient method, where the step size length is chosen optimally (for example by a one-dimensional line search); a minimal sketch of this update is given after this list.
3) Reduced optimal gradient
4) Optimal search direction
5) Newton hill-climbing method, which is also called the second-order gradient method
6) The reduced Hessian method
7) Conjugate direction method
8) P-Q decomposition method, where the update follows the negative reduced gradient vector, i.e. the direction of steepest descent.
9) Improved second-order method, which gives the best results.
10) The Han-Powell method with a quadratic programming problem, where the search direction S is obtained from the quadratic programming subproblem.
11) Decomposition and system reduction, where DV is obtained from a reduced quadratic programming problem.
12) Modified optimal gradient method (O.G.M.), where the gradient vector is used but the diagonal direction is substituted for it every six or seven iterations; hence the speed of convergence is faster.
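Most of the gradient-type updates in the list above share the same basic step. As a point of reference, here is a minimal sketch of the optimal gradient method of item 2, assuming the vector U is moved along the negative gradient with the step length chosen by a one-dimensional search; the objective f, its gradient grad_f and the use of scipy's scalar minimizer are illustrative assumptions, not part of the original text.

import numpy as np
from scipy.optimize import minimize_scalar

def optimal_gradient_method(f, grad_f, u0, tol=1e-6, max_iter=100):
    # Steepest descent on f(U) with an optimal step size length at each iteration.
    u = np.asarray(u0, dtype=float)
    for _ in range(max_iter):
        g = grad_f(u)
        if np.linalg.norm(g) < tol:      # gradient nearly zero: optimum reached
            break
        # optimal step size length: minimize f along the negative gradient
        alpha = minimize_scalar(lambda a: f(u - a * g)).x
        u = u - alpha * g                # move U from one iteration to the next
    return u

The other items in the list differ mainly in how the search direction and the step length are chosen at each iteration.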
1 - The algorithm of the conjugate gradient method
i) Start with an initial arbitrary point X1.
ii) Set the first search direction.
iii) Find the point X2 along this direction, where the step size length is chosen optimally.
If Xi+1 is optimal, stop the process; otherwise set i = i + 1 and repeat steps (iv), (v) and (vi) until convergence is achieved. This procedure is indicated in the flow chart shown in the following figure.
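Steps (ii)-(vi) above are the standard Fletcher-Reeves recursions discussed further below. A minimal sketch is given here, assuming the usual textbook form of the method; f, grad_f and the scalar line search are illustrative assumptions.

import numpy as np
from scipy.optimize import minimize_scalar

def fletcher_reeves(f, grad_f, x1, tol=1e-6, max_iter=200):
    # Conjugate gradient (Fletcher-Reeves) with an optimal step length per iteration.
    x = np.asarray(x1, dtype=float)
    g = grad_f(x)
    s = -g                                           # step ii): first search direction
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:                  # Xi+1 is optimal: stop the process
            break
        lam = minimize_scalar(lambda a: f(x + a * s)).x   # optimal step size length
        x = x + lam * s                              # step iii): next point along S
        g_new = grad_f(x)
        beta = (g_new @ g_new) / (g @ g)             # Fletcher-Reeves factor
        s = -g_new + beta * s                        # new conjugate search direction
        g = g_new
    return x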
13) Parallel tangent, for the minimization of f(x):
- Start with an initial point X1.
- Find the first search direction.
- Take the new search direction and obtain the new value of the point from it; a sketch of the usual scheme is given below.
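The search directions and point updates left unstated above are, in the usual textbook (gradient PARTAN) form, alternating steepest-descent steps and acceleration steps along the line joining the latest intermediate point and the point obtained two moves earlier. The sketch below assumes that standard form; f, grad_f and the scalar line search are illustrative assumptions.

import numpy as np
from scipy.optimize import minimize_scalar

def partan(f, grad_f, x1, tol=1e-6, max_iter=100):
    # Parallel tangent (gradient PARTAN): steepest-descent steps with acceleration moves.
    x_prev = np.asarray(x1, dtype=float)
    g = grad_f(x_prev)
    lam = minimize_scalar(lambda a: f(x_prev - a * g)).x
    x = x_prev - lam * g                             # first move: plain steepest descent
    for _ in range(max_iter):
        g = grad_f(x)
        if np.linalg.norm(g) < tol:
            break
        lam = minimize_scalar(lambda a: f(x - a * g)).x
        y = x - lam * g                              # gradient step from the current point
        mu = minimize_scalar(lambda a: f(y + a * (y - x_prev))).x
        x_prev, x = x, y + mu * (y - x_prev)         # accelerate along the parallel tangent
    return x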
In order to minimize the round-off error, the Newton method was used, since at the minimum solution X* of a continuously differentiable function f(x) the necessary condition grad f(X*) = 0 is satisfied, instead of the form used where m = n + 1 and n is the number of variables. In spite of this, the Fletcher-Reeves method is vastly superior to the steepest descent method and to pattern-search methods (parallel tangent), but it turns out to be rather less efficient than the Quasi-Newton and variable metric methods. However, the Newton and variable metric methods require storing a matrix of order n×n; hence, if storage is one of the main considerations, the Fletcher-Reeves method is not to be ignored.
15) Quasi-Newton method:
2 - Improved Newton Method
In some systems the improved Newton method is used, which has the following advantages:
- It reduces the number of iterations.
- It finds the optimal point for all systems.
But the disadvantages are:
- It requires storing a matrix [H] of order n×n.
- It requires computing the elements of H and its inverse at each iteration.
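To make the storage and computation costs listed above concrete, here is a minimal sketch of the basic Newton iteration: each step forms the full n×n matrix H (the Hessian) and solves with it, which is equivalent to using its inverse. The gradient and Hessian routines are illustrative assumptions.

import numpy as np

def newton_method(grad_f, hess_f, x0, tol=1e-6, max_iter=50):
    # Newton (second-order gradient) method: x <- x - H(x)^-1 * grad f(x).
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad_f(x)
        if np.linalg.norm(g) < tol:
            break
        H = hess_f(x)                    # the full n x n matrix [H] must be stored ...
        x = x - np.linalg.solve(H, g)    # ... and factorized (inverted) at every iteration
    return x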
16) Variable Metric method (Davidon-Fletcher-Powell method)
This is the best general-purpose optimization technique:
i) Start with an initial point and an n×n positive definite symmetric matrix H1 (which may be the unit matrix).
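The remaining steps of the Davidon-Fletcher-Powell iteration are not shown above; a minimal sketch assuming the standard textbook form of the method is given below, with f, grad_f and the scalar line search introduced purely for illustration.

import numpy as np
from scipy.optimize import minimize_scalar

def dfp(f, grad_f, x1, tol=1e-6, max_iter=100):
    # Davidon-Fletcher-Powell (variable metric) method.
    x = np.asarray(x1, dtype=float)
    H = np.eye(x.size)                               # step i): H1 positive definite (unit matrix)
    g = grad_f(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        s = -H @ g                                   # search direction from the current metric
        lam = minimize_scalar(lambda a: f(x + a * s)).x
        dx = lam * s                                 # step taken with the optimal step length
        x = x + dx
        g_new = grad_f(x)
        dg = g_new - g
        # DFP rank-two update of the metric H
        H = H + np.outer(dx, dx) / (dx @ dg) - (H @ np.outer(dg, dg) @ H) / (dg @ H @ dg)
        g = g_new
    return x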