The basic algorithm that we discussed last time is the iterative shrinkage/thresholding algorithm (ISTA) that can be specified as follows

$$x^t = \eta\left(x^{t-1} + \gamma H^{\mathrm{T}}(y - H x^{t-1});\, \gamma\tau\right),$$

where $t$ is the iteration number, $y$ is the measurement vector, $H$ is the measurement matrix that models the acquisition system, $\gamma > 0$ is a step-size of the algorithm that we can always set to the inverse of the largest eigenvalue of $H^{\mathrm{T}} H$ to ensure convergence (i.e., set $\gamma = 1/L$ with $L = \lambda_{\max}(H^{\mathrm{T}} H)$), $\tau > 0$ is the regularization parameter that controls the sparsity of the final solution (larger $\tau$ leads to a sparser solution), and finally $\eta$ is a scalar thresholding function applied in a component-wise fashion. One of the most popular thresholding functions is the soft-thresholding defined as

$$\eta(z; \tau) = \mathrm{sgn}(z)\,(|z| - \tau)_{+},$$

where $(\cdot)_{+}$ returns the positive part of its argument, and $\mathrm{sgn}(\cdot)$ is a signum function that returns $+1$ if its argument is positive and $-1$ when it is negative.
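As a quick illustration, the soft-thresholding function can be written in a couple of lines of NumPy (a minimal sketch for this post, not the algorithm.py from the demo):

```python
import numpy as np

def soft_threshold(z, tau):
    """Component-wise soft-thresholding: sgn(z) * max(|z| - tau, 0)."""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)
```

Note that `np.sign` and `np.maximum` both operate element-wise, so the same function works for scalars and for whole vectors.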
ISTA is a very well understood method, and it is well known that its rate of convergence corresponds to that of the gradient-descent method, which is $O(1/t)$ in terms of the objective value.
Let us consider the following simple iteration

$$x^t = \eta\left(s^{t-1} + \gamma H^{\mathrm{T}}(y - H s^{t-1});\, \gamma\tau\right)$$
$$s^t = x^t + \left(\frac{q_t - 1}{q_{t+1}}\right)(x^t - x^{t-1}),$$

where $s^0 = x^0$. When $q_t = 1$ for all $t$, the iteration corresponds to ISTA; however, an appropriate choice of $\{q_t\}$ leads to a faster convergence, which is crucial for larger scale problems where one tries to reduce the number of matrix-vector products with $H$ and $H^{\mathrm{T}}$. The faster version of ISTA was originally proposed by Beck & Teboulle (2009) and is widely known as fast ISTA (FISTA).
So, what is that appropriate choice of $\{q_t\}$?
Beck & Teboulle proposed to initialize $q_1 = 1$ and then set the rest iteratively as follows

$$q_{t+1} = \frac{1 + \sqrt{1 + 4 q_t^2}}{2},$$

which improves the convergence rate to $O(1/t^2)$.
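Putting the pieces together, the two algorithms differ only in the momentum step, so a single function with an `accelerate` flag covers both. The sketch below is an illustrative NumPy implementation under the notation of this post; it is not the notebook's algorithm.py, and the function name and signature are my own:

```python
import numpy as np

def soft_threshold(z, tau):
    """Component-wise soft-thresholding: sgn(z) * max(|z| - tau, 0)."""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def fista(y, H, tau, num_iter=100, accelerate=True):
    """Solve min_x 0.5*||y - H x||^2 + tau*||x||_1.

    accelerate=False runs plain ISTA (q_t = 1 for all t);
    accelerate=True runs FISTA with the Beck-Teboulle q-sequence.
    """
    # Step-size: inverse of the largest eigenvalue of H^T H,
    # i.e., 1 over the squared spectral norm of H.
    gamma = 1.0 / np.linalg.norm(H, 2) ** 2
    x = np.zeros(H.shape[1])
    s = x.copy()  # s^0 = x^0
    q = 1.0       # q_1 = 1
    for _ in range(num_iter):
        # Gradient step on the data term, then shrinkage.
        x_next = soft_threshold(s + gamma * H.T @ (y - H @ s), gamma * tau)
        if accelerate:
            q_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * q * q))
            s = x_next + ((q - 1.0) / q_next) * (x_next - x)
            q = q_next
        else:
            s = x_next  # ISTA: no momentum
        x = x_next
    return x
```

For a sanity check, note that with $H = I$ the minimizer is exactly the soft-thresholded measurement vector, which both variants reach immediately.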
A short note on the Python demo: it was done as an IPython notebook in a fully self-contained way. The first two cells create and save two files, matrixmodel.py and algorithm.py, which are reusable as stand-alone files.
The switch from ISTA to FISTA is done in cell 8:
```python
# Reconstruct with ISTA
[fhatISTA, costISTA] = fistaEst(y, forwardObj, tau, numIter, accelerate=False)

# Reconstruct with FISTA
[fhatFISTA, costFISTA] = fistaEst(y, forwardObj, tau, numIter, accelerate=True)
```
The output of the demo is the following figure plotting the results of both algorithms within 100 iterations: