Algorithms
GCPDecompositions.AbstractGCPAlgorithm — Type
AbstractGCPAlgorithm

Abstract type for GCP algorithms.
Concrete types ConcreteAlgorithm <: AbstractGCPAlgorithm should implement _gcp!(rng, M, X, loss, constraints, algorithm::ConcreteAlgorithm) that modifies the initialization M and returns the modified version.
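The interface pattern might look like the following self-contained sketch. The algorithm name `MySimpleAlg` and the empty loop body are hypothetical stand-ins, and the abstract type is redefined locally here so the snippet runs without the package:

```julia
# Local stand-in for the package's abstract type, so this sketch is runnable.
abstract type AbstractGCPAlgorithm end

# Hypothetical concrete algorithm with a single parameter.
struct MySimpleAlg <: AbstractGCPAlgorithm
    maxiters::Int
end

# `_gcp!` mutates the initialization `M` and returns the modified version.
function _gcp!(rng, M, X, loss, constraints, algorithm::MySimpleAlg)
    for _ in 1:algorithm.maxiters
        # ... update the factors of M in place here ...
    end
    return M
end
```

The key contract is that `_gcp!` returns the same `M` it was given, after modifying it in place.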
GCPDecompositions.gcp_objective — Function
gcp_objective(M::CPD, X::AbstractArray, loss)

Compute the GCP objective function for the model tensor M, data tensor X, and loss function loss.
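As a stand-alone illustration (not calling the package API), the sketch below computes the least-squares GCP objective directly from the CP model: for weights `λ` and factor matrices `U = (U1, ..., UN)`, the model entry is `M[i1,...,iN] = Σ_r λ[r] U1[i1,r] ⋯ UN[iN,r]`, and the objective sums the elementwise loss `(m - x)^2` over all entries:

```julia
# Entry of the CP model at multi-index I for weights λ and factor matrices U.
function model_entry(λ, U, I)
    return sum(λ[r] * prod(U[n][I[n], r] for n in eachindex(U)) for r in eachindex(λ))
end

# Least-squares GCP objective: sum of (model - data)^2 over all entries.
function objective_lsq(λ, U, X)
    return sum((model_entry(λ, U, Tuple(I)) - X[I])^2 for I in CartesianIndices(X))
end
```

For an exact rank-R fit the objective is zero; other GCP losses simply replace `(m - x)^2` with a different elementwise loss.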
GCPDecompositions.gcp_grad_U! — Function
gcp_grad_U!(GU, M::CPD, X::AbstractArray, loss)

Compute the GCP gradient with respect to the factor matrices U = (U[1],...,U[N]) for the model tensor M, data tensor X, and loss function loss, and store the result in GU = (GU[1],...,GU[N]).
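For intuition, the N = 2 (matrix) case can be sketched in a few lines, assuming the least-squares loss: with `M = U1 * Diagonal(λ) * U2'` and `f = Σ (M - X)^2`, the gradients are `GU1 = 2 (M - X) U2 Diagonal(λ)` and `GU2 = 2 (M - X)' U1 Diagonal(λ)`:

```julia
using LinearAlgebra

# In-place gradients of the least-squares objective for a 2-way CP model.
function grad_U_lsq!(GU, λ, U1, U2, X)
    Δ = U1 * Diagonal(λ) * U2' .- X          # residual M - X
    GU[1] .= 2 .* (Δ * U2 * Diagonal(λ))     # gradient w.r.t. U1
    GU[2] .= 2 .* (Δ' * U1 * Diagonal(λ))    # gradient w.r.t. U2
    return GU
end
```

At an exact fit the residual is zero, so both gradients vanish; for N > 2 the `Δ * U2` product generalizes to an MTTKRP with the other factors.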
GCPDecompositions.gcp_stoch_objective — Function
gcp_stoch_objective([rng=default_rng()], M::CPD, X::AbstractArray, loss, sampler)

Compute stochastic estimate of the GCP objective function for the model tensor M, data tensor X, and loss function loss using the sampler sampler with random number generator rng.
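The idea behind such an estimate can be sketched for uniform sampling with the least-squares loss: draw `numsamples` entries uniformly with replacement, average their elementwise losses, and rescale by the total number of entries so the estimate is unbiased for the full objective (function and variable names here are illustrative, not the package's internals):

```julia
using Random

# Unbiased stochastic estimate of the least-squares objective by uniform
# sampling of entries with replacement.
function stoch_objective_lsq(rng, λ, U, X, numsamples)
    total = 0.0
    for _ in 1:numsamples
        I = Tuple(rand(rng, CartesianIndices(X)))  # uniform random entry
        m = sum(λ[r] * prod(U[n][I[n], r] for n in eachindex(U)) for r in eachindex(λ))
        total += (m - X[I...])^2
    end
    return length(X) * total / numsamples
end
```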
GCPDecompositions.gcp_stoch_grad_U! — Function
gcp_stoch_grad_U!([rng=default_rng()], GU, M::CPD, X::AbstractArray, loss, sampler)

Compute stochastic estimate of the GCP gradient with respect to the factor matrices U = (U[1],...,U[N]) for the model tensor M, data tensor X, and loss function loss using the sampler sampler with random number generator rng, and store the result in GU = (GU[1],...,GU[N]).
GCPDecompositions.AbstractGCPSampler — Type
AbstractGCPSampler

Abstract type for samplers to use in stochastic evaluation of the objective and gradients.
Concrete types ConcreteSampler <: AbstractGCPSampler should implement
gcp_stoch_objective(rng, M, X, loss, sampler::ConcreteSampler)
gcp_stoch_grad_U!(rng, GU, M, X, loss, sampler::ConcreteSampler)
GCPDecompositions.GCPSampleOnce — Type
GCPSampleOnce(X::AbstractArray, sampler::AbstractGCPSampler)

Wrapped sampler that samples entries from X using sampler only the first time, then reuses the same indices every time after that. For use with gcp_stoch_objective.
The internal field cache stores cached values; the particular choice of what is stored is an implementation detail defined by each sampler.
GCPDecompositions.UniformGCPSampler — Type
UniformGCPSampler(numsamples::Int)

Uniform sampling of numsamples entries with replacement.
GCPDecompositions.StratifiedGCPSampler — Type
StratifiedGCPSampler(num_nonzeros::Int, num_zeros::Int)

Stratified sampling of num_nonzeros nonzero entries and num_zeros zero entries with replacement. For SparseArrayCOO tensors, stored entries are all treated as nonzero.
GCPDecompositions.SemistratifiedGCPSampler — Type
SemistratifiedGCPSampler(num_nonzeros::Int, num_zeros::Int)

Semistratified sampling of num_nonzeros nonzero entries and num_zeros assumed "zero" entries with replacement. For SparseArrayCOO tensors, stored entries are all treated as nonzero.
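The difference between the two samplers' zero strata can be sketched as follows (illustrative helper names, not the package's internals): stratified sampling draws the zero stratum by rejection, resampling until the drawn entry really is zero, while semistratified sampling skips that check and treats any uniform draw as an assumed "zero":

```julia
using Random

# Stratified zero stratum: reject draws until the entry is actually zero.
function sample_zeros_stratified(rng, X, num_zeros)
    inds = CartesianIndices(X)
    out = Vector{CartesianIndex{ndims(X)}}()
    while length(out) < num_zeros
        I = rand(rng, inds)
        iszero(X[I]) && push!(out, I)
    end
    return out
end

# Semistratified zero stratum: uniform draws, no check (may hit nonzeros).
sample_zeros_semistratified(rng, X, num_zeros) =
    [rand(rng, CartesianIndices(X)) for _ in 1:num_zeros]
```

The semistratified variant avoids the rejection loop at the cost of occasionally sampling nonzero entries in the "zero" stratum, which the estimator must account for.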
GCPDecompositions.CP_ALS — Type
CP_ALS

Alternating Least Squares. Workhorse algorithm for LeastSquares loss with no constraints.
Algorithm parameters:
- maxiters::Int: max number of iterations (default: 200)
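The core idea can be sketched for the N = 2 (matrix) case: fix one factor, solve a linear least-squares problem for the other, and alternate. This is a simplified stand-alone illustration, not the package's implementation (which alternates over all N modes of a tensor):

```julia
using LinearAlgebra

# Rank-R alternating least squares for a matrix X ≈ U * V'.
function als_matrix(X, R; maxiters = 200)
    m, n = size(X)
    U = randn(m, R)
    V = randn(n, R)
    for _ in 1:maxiters
        U = X * V / (V'V)    # least-squares update for U with V fixed
        V = X' * U / (U'U)   # least-squares update for V with U fixed
    end
    return U, V
end
```

Each update solves a small R × R linear system; for tensors the `X * V` step becomes an MTTKRP with the remaining factors.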
GCPDecompositions.CP_FastALS — Type
CP_FastALS

Fast Alternating Least Squares.
Efficient ALS algorithm proposed in:
Fast Alternating LS Algorithms for High Order CANDECOMP/PARAFAC Tensor Factorizations. Anh-Huy Phan, Petr Tichavský, Andrzej Cichocki. IEEE Transactions on Signal Processing, 2013. DOI: 10.1109/TSP.2013.2269903
Algorithm parameters:
- maxiters::Int: max number of iterations (default: 200)
GCPDecompositions.GCP_Adam — Type
GCP_Adam

Stochastic gradient-based optimization with adaptive moment estimation.
Brief description of standard Adam parameters:
- α::Float64: step size (default: 0.001)
- β1::Float64: exponential decay rate for first moment estimate (default: 0.9)
- β2::Float64: exponential decay rate for second moment estimate (default: 0.999)
- ϵ::Float64: offset added to denominator for numerical stability (default: 1e-8)
We employ a few common modifications with the following parameters:
- epochiters::Int: iterations per epoch (default: 1000)
- maxepochs::Int: max number of epochs (default: 1000)
- faildecay::Float64: step size decay rate for failure to decrease (default: 0.1)
- maxfails::Int: max number of failures (default: 1)
Finally, two parameters specify the samplers to use:
- fsampler::AbstractGCPSampler: sampler to use for function values
- gsampler::AbstractGCPSampler: sampler to use for gradients
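A single Adam update using the standard parameters named above (α, β1, β2, ϵ) can be sketched as follows; the struct and function names are illustrative, not the package's internals:

```julia
# Moment estimates and step counter for Adam.
mutable struct AdamState
    m::Vector{Float64}   # first moment estimate
    v::Vector{Float64}   # second moment estimate
    t::Int               # step counter
end
AdamState(n::Int) = AdamState(zeros(n), zeros(n), 0)

# One Adam step on parameters x given gradient g, updating x in place.
function adam_step!(x, g, s::AdamState; α = 0.001, β1 = 0.9, β2 = 0.999, ϵ = 1e-8)
    s.t += 1
    @. s.m = β1 * s.m + (1 - β1) * g
    @. s.v = β2 * s.v + (1 - β2) * g^2
    m̂ = s.m ./ (1 - β1^s.t)   # bias-corrected first moment
    v̂ = s.v ./ (1 - β2^s.t)   # bias-corrected second moment
    @. x -= α * m̂ / (sqrt(v̂) + ϵ)
    return x
end
```

In the stochastic GCP setting, `g` would be the stochastic gradient estimate produced via `gsampler`, with the epoch/failure logic wrapped around these updates.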
GCPDecompositions.GCP_LBFGSB — Type
GCP_LBFGSB

Limited-memory BFGS with Box constraints.
Algorithm parameters:
- m::Int: max number of variable metric corrections (default: 10)
- factr::Float64: function tolerance in units of machine epsilon (default: 1e7)
- pgtol::Float64: (projected) gradient tolerance (default: 1e-5)
- maxfun::Int: max number of function evaluations (default: 15000)
- maxiter::Int: max number of iterations (default: 15000)
- iprint::Int: verbosity (default: -1)
  - iprint < 0 means no output
  - iprint = 0 prints only one line at the last iteration
  - 0 < iprint < 99 prints f and |proj g| every iprint iterations
  - iprint = 99 prints details of every iteration except n-vectors
  - iprint = 100 also prints the changes of active set and final x
  - iprint > 100 prints details of every iteration including x and g
Notes:
- this algorithm only supports Float64 numbers
See documentation of LBFGSB.jl for more details.
GCPDecompositions.CP_FastALS_iter! — Function
CP_FastALS_iter!(X, U, λ)

The algorithm for computing MTTKRP sequences is from "Fast Alternating LS Algorithms for High Order CANDECOMP/PARAFAC Tensor Factorizations" by Phan et al., specifically section III-C.
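For reference, a naive (unoptimized) mode-1 MTTKRP for a 3-way tensor looks like the sketch below; the Phan et al. scheme cited above gains its speed by reusing partial products across the sequence of modes rather than recomputing each MTTKRP from scratch:

```julia
# Naive mode-1 MTTKRP for a 3-way tensor:
# G[i, r] = Σ_{j,k} X[i, j, k] * U2[j, r] * U3[k, r]
function mttkrp1(X, U2, U3)
    I, J, K = size(X)
    R = size(U2, 2)
    G = zeros(I, R)
    for r in 1:R, k in 1:K, j in 1:J, i in 1:I
        G[i, r] += X[i, j, k] * U2[j, r] * U3[k, r]
    end
    return G
end
```

This direct triple loop costs O(IJKR) per mode; the fast ALS scheme amortizes much of that work across the modes of one sweep.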