Adaptive gradient methods for some classes of nonsmooth optimization problems

Free access

We propose several adaptive algorithmic methods for non-smooth convex optimization problems. The first method is based on a specially constructed artificial inexactness; to implement it, we introduce a corresponding analogue of the concept of an abstract inexact model of the objective functional. For this concept, analogues of the gradient and fast gradient methods with adaptive tuning of some model parameters are proposed, and a bound on the quality of the resulting solution is obtained. We show that, for nonsmooth problems, the proposed method can be modified to guarantee convergence in function value at a rate close to optimal. A similar concept of an inexact model is introduced for variational inequalities and saddle point problems, and an estimate of the convergence rate of the corresponding adaptive version of the proximal mirror method is presented. Analogues of switching subgradient schemes are proposed for convex programming problems. Here we consider assumptions close to the recently proposed condition of relative Lipschitz continuity, which allows us to obtain a relative-accuracy bound on the solution quality for the problem of minimizing a homogeneous convex functional under fairly general assumptions.
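The adaptive tuning mentioned in the abstract can be illustrated by a standard backtracking scheme for the smoothness parameter: the method halves its working estimate of the Lipschitz-type constant each iteration and doubles it until a quadratic upper model is valid at the candidate point. This is a minimal sketch under simplifying assumptions (plain gradients of a smooth function, Euclidean setup), not the paper's algorithm, which operates on an abstract inexact model of the objective; the function names are illustrative.

```python
import numpy as np

def adaptive_gradient_method(f, grad, x0, L0=1.0, n_iters=100):
    """Gradient descent with adaptive (backtracking) estimation of the
    smoothness parameter L. Illustrative sketch only: the paper's methods
    work with an abstract inexact model, not exact gradients."""
    x, L = np.asarray(x0, dtype=float), L0
    for _ in range(n_iters):
        g = grad(x)
        L = max(L / 2.0, 1e-12)  # optimistically shrink the estimate of L
        while True:
            x_new = x - g / L    # candidate step with step size 1/L
            # accept once the quadratic upper model holds at x_new
            if f(x_new) <= f(x) + g @ (x_new - x) + 0.5 * L * np.sum((x_new - x) ** 2):
                break
            L *= 2.0             # otherwise the estimate was too small: double and retry
        x = x_new
    return x

# usage: minimize the simple quadratic f(x) = ||x||^2 / 2
f = lambda x: 0.5 * float(x @ x)
grad = lambda x: x
x_star = adaptive_gradient_method(f, grad, np.ones(3))
```

Because the inner loop at most doubles L until the upper model holds, the method never needs the true Lipschitz constant in advance, which is the practical appeal of adaptive tuning.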


Gradient method, fast gradient method, adaptive method, Lipschitz continuous gradient, nonsmooth optimization, mirror descent, relative Lipschitz continuity, relative accuracy

Short address: https://sciup.org/142223094

IDR: 142223094

Scientific article