Stochastic Average Gradient Accelerated Method¶
The Stochastic Average Gradient Accelerated (SAGA) method [Defazio2014] follows the algorithmic framework of an iterative solver with one exception: the default method (defaultDense) of the SAGA algorithm is a particular case of the iterative solver method with batch size \(b = 1\).
Details¶
The algorithm-specific transformation \(T\), the set of intrinsic parameters \(S_t\) defined for the learning rate \(\eta\), and the algorithm-specific vector \(U\) and power \(d\) of the Lebesgue space are defined as follows:

\(S_t\) is a matrix of the gradients of the smooth terms at the point \(\theta_t\), where

- \(t\) is the number of iterations the solver has run
- \(G_i^t\) stores the gradient of \(f_i(\theta_t)\)
\(T(\theta_{t-1}, F_j'(\theta_{t-1}), S_{t-1}, M(\theta_{t-1}))\):
\(W_t = \theta_{t-1} - \eta_j \left[ F_j'(\theta_{t-1}) - G_j^{t-1} + \frac{1}{n} \sum_{i=1}^{n} G_i^{t-1}\right]\)
\(\theta_t = \mathrm{prox}_{\eta}^{M} (W_t)\)
Update of the set of intrinsic parameters \(S_t\): the \(j\)-th stored gradient is replaced with the newly computed one, \(G_j^t := F_j'(\theta_{t-1})\), while the remaining rows are unchanged, \(G_i^t := G_i^{t-1}\) for \(i \neq j\).
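For illustration only, here is a minimal, self-contained C++ sketch of one such step; it is not the library's implementation. It assumes the non-smooth term \(M\) is absent, so the proximal operator reduces to the identity, and `gradF` is a hypothetical callable returning \(F_j'(\theta)\).

```cpp
#include <cstddef>
#include <vector>

// One SAGA step following the transformation T above. theta, the stored
// gradient matrix G (n rows of length p), and the running column sum gSum
// of G are all updated in place. gradF(j, theta) returns F_j'(theta).
template <typename GradF>
void sagaStep(std::vector<double>& theta,            // theta_{t-1} -> theta_t
              std::vector<std::vector<double>>& G,   // rows G_i^{t-1} -> G_i^t
              std::vector<double>& gSum,             // sum_i G_i, kept in sync
              std::size_t j,                         // index of the sampled term
              double eta,                            // learning rate eta_j
              GradF gradF) {
    const std::size_t n = G.size();
    const std::vector<double> gNew = gradF(j, theta);  // F_j'(theta_{t-1})
    for (std::size_t k = 0; k < theta.size(); ++k) {
        // W_t = theta_{t-1} - eta * (F_j' - G_j^{t-1} + (1/n) sum_i G_i^{t-1});
        // with no non-smooth term M, theta_t = prox(W_t) = W_t.
        theta[k] -= eta * (gNew[k] - G[j][k] + gSum[k] / static_cast<double>(n));
        // Intrinsic parameter update: replace row j of G with the new gradient.
        gSum[k] += gNew[k] - G[j][k];
        G[j][k] = gNew[k];
    }
}
```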
Note

The algorithm enables automatic step-length selection if the learning rate \(\eta\) was not provided by the user. The automatic step length is computed as \(\eta = \frac{1}{L}\), where \(L\) is the Lipschitz constant returned by the objective function. If the objective function returns nullptr as the numeric table with the lipschitzConstant result ID, the library uses the default step size \(0.01\).
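A minimal sketch of that fallback logic in plain C++ (the `lipschitzConstant` argument stands in for the value the objective function may or may not return):

```cpp
#include <optional>

// eta = 1/L when the objective function supplies a Lipschitz constant L;
// otherwise fall back to the default step size 0.01.
double stepLength(std::optional<double> lipschitzConstant) {
    if (lipschitzConstant && *lipschitzConstant > 0.0) {
        return 1.0 / *lipschitzConstant;
    }
    return 0.01;
}
```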
Convergence checks:

- \(U = \theta_t - \theta_{t-1}\), \(d = \infty\)
- \(\|x\|_{\infty} = \max_{i \in [0, p]} |x^i|\), \(x \in R^p\)
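In code, this check amounts to comparing the infinity norm of the step against the solver's accuracy threshold; a plain C++ sketch:

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// |theta_t - theta_{t-1}|_inf = max_i |theta_t[i] - theta_{t-1}[i]|
bool converged(const std::vector<double>& thetaNew,
               const std::vector<double>& thetaOld,
               double accuracyThreshold) {
    double norm = 0.0;
    for (std::size_t i = 0; i < thetaNew.size(); ++i) {
        norm = std::max(norm, std::abs(thetaNew[i] - thetaOld[i]));
    }
    return norm < accuracyThreshold;
}
```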
Computation¶
The Stochastic Average Gradient Accelerated (SAGA) algorithm is a special case of an iterative solver. For parameters, input, and output of iterative solvers, see Iterative Solver > Computation.
Algorithm Input¶
In addition to the input of the iterative solver, the SAGA optimization solver has the following optional input:
| OptionalDataID | Default Value | Description |
|---|---|---|
| `gradientsTable` | Not applicable | A numeric table of size \(n \times p\) that represents the matrix \(G_0\) containing the gradients of \(F_i(\theta)\), \(i = 1, \ldots, n\), at the initial point \(\theta_0 \in R^p\). This input is optional: if the user does not provide the table of gradients for \(F_i(\theta)\), \(i = 1, \ldots, n\), the library computes it inside the SAGA algorithm. Note: this parameter can be an object of any class derived from `NumericTable`. |
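Conceptually, computing the missing table amounts to evaluating every per-term gradient at \(\theta_0\), as in this plain C++ sketch (`gradF` is again a hypothetical callable for \(F_i'\)):

```cpp
#include <cstddef>
#include <vector>

// Build the n x p matrix G_0 with rows G_i^0 = F_i'(theta_0).
template <typename GradF>
std::vector<std::vector<double>> initialGradients(std::size_t n,
                                                  const std::vector<double>& theta0,
                                                  GradF gradF) {
    std::vector<std::vector<double>> G;
    G.reserve(n);
    for (std::size_t i = 0; i < n; ++i) {
        G.push_back(gradF(i, theta0));  // gradient of the i-th term at theta_0
    }
    return G;
}
```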
Algorithm Parameters¶
In addition to parameters of the iterative solver, the SAGA optimization solver has the following parameters:
| Parameter | Default Value | Description |
|---|---|---|
| `algorithmFPType` | `float` | The floating-point type that the algorithm uses for intermediate computations. Can be `float` or `double`. |
| `method` | `defaultDense` | Performance-oriented computation method. |
| `batchIndices` | \(1\) | A numeric table of size \(\mathrm{nIterations} \times 1\) with 32-bit integer indices of terms in the objective function. If no indices are provided, the implementation generates a random index on each iteration. Note: this parameter can be an object of any class derived from `NumericTable`. |
| `learningRateSequence` | Not applicable | A numeric table of size \(1 \times \mathrm{nIterations}\) or \(1 \times 1\). In the first case, the table contains the learning rate for each iteration; in the second, a constant step length is used for all iterations. A diminishing learning rate sequence is recommended. If no learning rate sequence is provided, the step length is selected automatically (see the note above). Note: this parameter can be an object of any class derived from `NumericTable`. |
| `engine` | `SharedPtr<engines::mt19937::Batch<>>` | Pointer to the random number generator engine that is used internally to generate the 32-bit integer index of the term in the objective function. |
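When batchIndices is not set, the solver draws one term index per iteration from the engine (mt19937 by default). The following standard-C++ sketch mimics that behavior; the seed value is an arbitrary placeholder, not necessarily the library's default:

```cpp
#include <cstddef>
#include <cstdint>
#include <random>
#include <vector>

// Draw nIterations indices uniformly from [0, n-1], one per iteration,
// mirroring the default behavior when batchIndices is not provided.
std::vector<std::uint32_t> randomIndices(std::size_t nIterations,
                                         std::uint32_t n,
                                         std::uint32_t seed = 777) {
    std::mt19937 engine(seed);  // same generator family as the default engine
    std::uniform_int_distribution<std::uint32_t> dist(0, n - 1);
    std::vector<std::uint32_t> indices(nIterations);
    for (auto& idx : indices) idx = dist(engine);
    return indices;
}
```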
Algorithm Output¶
In addition to the output of the iterative solver, the SAGA optimization solver calculates the following optional result:
| OptionalDataID | Default Value | Description |
|---|---|---|
| `gradientsTable` | Not applicable | A numeric table of size \(n \times p\) that represents the matrix \(G_t\) updated after all iterations. This parameter can be an object of any class derived from `NumericTable`. |
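To tie the sketches together, here is a minimal driver loop reusing `initialGradients`, `randomIndices`, `sagaStep`, and `converged` from above. It returns the minimizer while leaving the updated matrix \(G_t\) in `G`, analogous to this optional result; like the earlier snippets, it is an illustration rather than the library's code path.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Assumes sagaStep, converged, and randomIndices from the sketches above
// are visible in this translation unit.
template <typename GradF>
std::vector<double> sagaSolve(std::vector<double> theta,            // theta_0
                              std::vector<std::vector<double>>& G,  // G_0 in, G_t out
                              std::size_t nIterations,
                              double eta,
                              double accuracyThreshold,
                              GradF gradF) {
    const std::size_t n = G.size();
    const std::size_t p = theta.size();
    // Running column sums of G, kept in sync by sagaStep.
    std::vector<double> gSum(p, 0.0);
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t k = 0; k < p; ++k) gSum[k] += G[i][k];

    const std::vector<std::uint32_t> indices =
        randomIndices(nIterations, static_cast<std::uint32_t>(n));
    for (std::size_t t = 0; t < nIterations; ++t) {
        const std::vector<double> thetaPrev = theta;
        sagaStep(theta, G, gSum, indices[t], eta, gradF);
        // Infinity-norm convergence check from the Details section.
        if (converged(theta, thetaPrev, accuracyThreshold)) break;
    }
    return theta;  // G now holds G_t, the matrix returned as the optional result
}
```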
Examples¶
C++ (CPU): Batch Processing

Java: Batch Processing

Note

There is no support for Java on GPU.

Python: Batch Processing