
Optimization Algorithms: The Conjugate Gradient Method

Key features: superlinear convergence; requires only gradient evaluations, avoiding the computation of second derivatives.

Algorithm steps

\(step0:\)
Choose an initial point \(x_0\) and a tolerance \(\epsilon > 0\); set \(k=0\).

\(step1:\)
Compute the gradient \(g_k=\nabla f(x_k)\). If \(\|g_k\| \le \epsilon\), stop and output the current point \(x_k\); otherwise go to step 2.

\(step2:\)
Compute the search direction

\[d_k = \begin{cases} -g_k, & k = 0 \\ -g_k+\beta_{k-1}d_{k-1}, & k \ge 1 \end{cases} \]

where \(\beta_{k-1}\) is the Fletcher–Reeves coefficient

\[\beta_{k-1}=\frac{g_k^Tg_k}{g_{k-1}^Tg_{k-1}} \]
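As a minimal sketch of this update (assuming column vectors g and g_prev for the current and previous gradients, and d_prev for the previous direction; these variable names are illustrative, not from the implementation below):

% Fletcher–Reeves direction update (sketch with hypothetical names)
beta = (g'*g) / (g_prev'*g_prev);   % FR coefficient
d = -g + beta*d_prev;               % new conjugate direction
if g'*d >= 0                        % safeguard: if d is not a descent
    d = -g;                         % direction, fall back to steepest descent
end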

\(step3:\)
Use a line search to determine the step size \(\alpha_k\) (the implementation below uses Armijo backtracking), then update

\[x_{k+1}=x_k+\alpha_kd_k \]

Set \(k=k+1\) and go to step 1.
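The line search used in the implementation below is Armijo backtracking: starting from a unit step, the step is shrunk by a factor rho until a sufficient-decrease condition holds. A minimal sketch, assuming fun, the current point x0, direction d, and gradient g are already defined (rho and sigma are the backtracking and sufficient-decrease parameters):

% Armijo backtracking line search (sketch; rho, sigma assumed as below)
rho = 0.6; sigma = 0.4;
m = 0; mk = 0;
while m < 20                                 % cap the number of backtracking steps
    if fun(x0 + rho^m*d) < fun(x0) + sigma*rho^m*(g'*d)
        mk = m;                              % first m with sufficient decrease
        break;
    end
    m = m + 1;
end
alpha = rho^mk;                              % accepted step size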

MATLAB code

function [x,val,fun_t] = conjugate_gradient(fun,gfun,x0,max_ite)
%conjugate_gradient - Fletcher–Reeves conjugate gradient method
%
% Syntax: [x,val,fun_t] = conjugate_gradient(fun,gfun,x0,max_ite)
%
% fun     : objective function handle
% gfun    : gradient function handle
% x0      : initial point (column vector)
% max_ite : maximum number of iterations
    maxk=max_ite;
    rho=0.6; sigma=0.4;            % Armijo backtracking parameters
    k=0; epsilon=1e-4;             % iteration counter and tolerance
    n=length(x0);
    fun_t=zeros(1,max_ite);        % records f(x_k) at each iteration

    while k<maxk

        g=gfun(x0);
        itern=mod(k,n+1)+1;        % restart counter: reset every n+1 iterations
        if itern==1
            d=-g;                  % steepest-descent (restart) direction
        else
            beta=(g'*g)/(g0'*g0);  % Fletcher–Reeves coefficient
            d=-g+beta*d0;
            gd=g'*d;

            if gd>=0.0
                d=-g;              % safeguard: ensure a descent direction
            end
        end
        if norm(g)<epsilon         % convergence test on the gradient norm
            break;
        end
        m=0; mk=0;
        while m<20                 % Armijo backtracking: shrink step by rho
            if fun(x0+rho^m*d)<fun(x0)+sigma*rho^m*g'*d
                mk=m;
                break;
            end
            m=m+1;
        end
        x0=x0+rho^mk*d;            % take the accepted step
        g0=g; d0=d;                % store gradient and direction for next beta

        k=k+1;
        fun_t(1,k)=fun(x0);
    end

    fun_t=fun_t(1:k);              % drop unused entries if converged early
    x=x0;
    val=fun(x0);
end

Main code

%%%%%%%% conjugate gradient algorithm
clc;
close all;
fun=@(x) 100*(x(1)^2-x(2))^2+(x(1)-1)^2;   % Rosenbrock-type test function
gfun=@(x) [400*(x(1)^2-x(2))*x(1)+2*(x(1)-1);-200*(x(1)^2-x(2))];   % its gradient
x0=[0;0];
max_ite=200;   % maximum number of iterations

[x,val,fun_t] = conjugate_gradient(fun,gfun,x0,max_ite);

disp(x);
disp(val);
figure(1);
plot(1:length(fun_t),fun_t);
xlabel('number of iterations');
ylabel('function value');
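This test function attains its minimum value 0 at (1,1); a quick sanity check confirms the gradient vanishes there:

disp(gfun([1;1]));   % should print [0;0]: the gradient vanishes at the minimizer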

Result

[Figure: convergence plot of function value versus number of iterations, produced by the script above.]

Conclusion

The conjugate gradient method sits between gradient descent and Newton's method: it converges faster than linearly, requires only gradient information, and avoids computing second derivatives.

Reference

《最優化方法及其MATLAB程式設計》 (Optimization Methods and Their MATLAB Programming)