SESOP_PACK: Matlab Tool for Sequential Subspace Optimization methods
SESOP [1,4] is a method for large-scale smooth unconstrained optimization, which can be viewed as a natural extension of the conjugate gradient method. At each iteration we minimize the objective function over a subspace spanned by the current gradient and by the directions of a few previous steps. Following [Nemirovski, 1982], we can also include in this subspace the direction from the starting point to the current point and the weighted sum of all previous gradients. This safeguard provides an optimal worst-case error decay of order 1/k^2 (for convex problems), where k is the iteration count.

There is an important class of problems where subspace optimization can be implemented extremely fast: when the objective function combines expensive linear mappings with cheap non-linear functions. This is a typical situation in applications such as tomography, signal and image denoising with basis pursuit, pattern recognition with support vector machines, and many others.
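To make the scheme concrete, here is a minimal MATLAB sketch of such an iteration on a toy least-squares problem. The toy data and the use of fminsearch for the inner low-dimensional problem are illustrative assumptions, not the interface of SESOP_PACK:

    % Minimal sketch of a SESOP-style loop (illustration only; the toy data
    % and fminsearch as the inner solver are assumptions).
    A = randn(50, 10);  b = randn(50, 1);     % toy problem data
    f      = @(x) 0.5*norm(A*x - b)^2;        % smooth objective
    grad_f = @(x) A'*(A*x - b);               % its gradient
    m = 7;                                    % keep up to m previous steps
    x = zeros(10, 1);  P = zeros(10, 0);      % current point, step history
    for k = 1:100
        g = grad_f(x);                        % current gradient
        D = [g, P];                           % subspace: gradient + past steps
        % inner low-dimensional problem: minimize f(x + D*a) over a
        a  = fminsearch(@(a) f(x + D*a), zeros(size(D, 2), 1));
        dx = D*a;                             % step in the full space
        x  = x + dx;
        P  = [dx, P(:, 1:min(end, m-1))];     % update the step history
    end

Here fminsearch only keeps the sketch self-contained; it is not how the inner problem is solved efficiently in practice.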
The SESOP-TN method [5] combines ideas of SESOP with those of the Truncated Newton (TN) method [2]. Replacing the TN line search with subspace optimization allows the Conjugate Gradient (CG) iterations to stay matched through consecutive TN steps, which resolves the problem of TN sensitivity to an early break of the CG process. For example, when the objective function is quadratic, the SESOP-TN trajectory coincides with the trajectory of CG applied directly to the objective; standard TN lacks this property and converges more slowly. Numerical experiments illustrate the effectiveness of the method.
MATLAB CODE: Newest version 07.06.2010
Old versions: 10.05.2010, 08.08.2008, 04.02.2009
Implemented optimization methods (directory SESOPTN2)
· Methods for L1-L2 optimization [4,6] (see directory Sesop_SPmagazine):
1. PCD (Parallel Coordinate Descent); PCD-SESOP; PCD-CG
2. SSF (Separable Surrogate Functions); SSF-SESOP; SSF-CG
3. FISTA (Fast Iterative Shrinkage-Thresholding Algorithm); a minimal shrinkage sketch is given after this list
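For orientation, the common building block of these shrinkage methods is a gradient step on the quadratic term followed by soft thresholding of the coefficients. A minimal ISTA-style MATLAB sketch on assumed toy data (an illustration of the shrinkage idea, not the package implementation):

    % Minimal iterative soft-thresholding (ISTA-style) sketch for
    %   min_x  0.5*||A*x - b||^2 + lambda*||x||_1
    % (toy data; illustration only, not the package code)
    A = randn(100, 256);  b = randn(100, 1);  lambda = 0.05;
    L = norm(A)^2;                               % Lipschitz constant of the gradient
    soft = @(z, t) sign(z).*max(abs(z) - t, 0);  % soft-thresholding operator
    x = zeros(256, 1);
    for k = 1:200
        x = soft(x - A'*(A*x - b)/L, lambda/L);  % gradient step, then shrinkage
    end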
Sub-directories in SESOPTN2
· Sesop_SPmagazine - Compressive Sensing, Image Deblurring and Tomography via L1-L2 optimization [6]
My collaborators at various stages of the work on SESOP and its applications
References:
6. Michael Zibulevsky and Michael Elad, "L1-L2 Optimization in Signal and Image Processing: Iterative Shrinkage and Beyond", IEEE Signal Processing Magazine, vol. 27, no. 3, May 2010.
*********************************************************************
APPENDIX:
Brief description of SESOP-TN algorithm
We minimize a function f(x) with gradient g(x) and Hessian H(x). Note that the Hessian is never computed explicitly; only Hessian-vector products are used.
Iteration of SESOP-TN
1. TN step: solve the Newton system Hd = -g approximately, using a limited number of Conjugate Gradient (CG) steps.
2. Subspace step: optimize f(x) over the affine subspace spanned by the obtained Newton-type direction d, the current gradient, and the directions of a few previous steps.
3. Go to Step 1 (TN), feeding the resulting subspace optimization step into the CG starting condition.
The presented procedure resolves the typical TN problem related to an early break of the CG process. For example, when f(x) is quadratic, the SESOP-TN trajectory is the same as that of CG applied directly to f(x); standard TN would converge more slowly in this case. A schematic sketch of the iteration follows.
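The following schematic MATLAB sketch follows Steps 1-3 on a toy quadratic. The fixed CG budget, the use of fminsearch for the subspace problem, and the single stored previous step are illustrative simplifications, not the package code:

    % Schematic SESOP-TN loop (illustration only; toy data and simplifications).
    A = randn(200, 80);  b = randn(200, 1);
    f      = @(x) 0.5*norm(A*x - b)^2;
    grad_f = @(x) A'*(A*x - b);
    Hv     = @(x, v) A'*(A*v);             % Hessian-vector product only
    x = zeros(80, 1);  prev = zeros(80, 0);  d0 = zeros(80, 1);
    for k = 1:20
        g = grad_f(x);
        % Step 1 (TN): a few CG iterations on H*d = -g, warm-started at d0
        d = d0;  r = -g - Hv(x, d);  p = r;
        for j = 1:5                        % limited number of CG steps
            Hp    = Hv(x, p);
            alpha = (r'*r) / (p'*Hp);
            d     = d + alpha*p;
            rnew  = r - alpha*Hp;
            p     = rnew + ((rnew'*rnew)/(r'*r))*p;
            r     = rnew;
        end
        % Step 2: minimize f over the affine subspace x + span{d, g, prev}
        D  = [d, g, prev];
        a  = fminsearch(@(a) f(x + D*a), zeros(size(D, 2), 1));
        dx = D*a;
        x  = x + dx;
        % Step 3: feed the subspace step back into the CG starting condition
        d0 = dx;  prev = dx;
    end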
Note that the subspace optimization in Step 2 can be carried out very efficiently when f(x) = f1(Ax) + f2(x), where the product Ax is computationally heavy and f1(.), f2(.) are inexpensive (see [1] for details).
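As an illustration of this point, A can be applied to the few subspace directions once per outer iteration, after which every inner evaluation of the subspace objective avoids the heavy product. The toy f1, f2 and data below are assumptions:

    % Cheap subspace objective when f(x) = f1(A*x) + f2(x) (toy sketch).
    A  = randn(2000, 500);  b = randn(2000, 1);  mu = 0.1;
    f1 = @(y) 0.5*norm(y - b)^2;             % expensive part acts on A*x
    f2 = @(x) 0.5*mu*norm(x)^2;              % cheap part
    x  = zeros(500, 1);  D = randn(500, 3);  % current point, subspace directions
    u  = A*x;  V = A*D;                      % heavy products, once per iteration
    phi = @(a) f1(u + V*a) + f2(x + D*a);    % inner objective: no A*(...) inside
    a   = fminsearch(phi, zeros(3, 1));      % low-dimensional minimization
    x   = x + D*a;                           % resulting step in the full space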