Algorithms

Work-in-progress

This page of the docs is still a work-in-progress. Check back later!

GCPDecompositions.AbstractGCPAlgorithm (Type)
AbstractGCPAlgorithm

Abstract type for GCP algorithms.

Concrete types ConcreteAlgorithm <: AbstractGCPAlgorithm should implement _gcp!(rng, M, X, loss, constraints, algorithm::ConcreteAlgorithm), which modifies the initialization M and returns the modified version.

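For illustration, here is a minimal sketch of a custom algorithm; the name MyAlgorithm, its maxiters field, and the loop body are hypothetical and only show the shape of the required _gcp! method.

  # Hypothetical algorithm type; only the overall shape is meaningful here.
  struct MyAlgorithm <: GCPDecompositions.AbstractGCPAlgorithm
      maxiters::Int
  end

  function GCPDecompositions._gcp!(rng, M, X, loss, constraints, algorithm::MyAlgorithm)
      for _ in 1:algorithm.maxiters
          # ... update the factor matrices of M in place ...
      end
      return M    # return the modified initialization
  end
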
GCPDecompositions.gcp_objective (Function)
gcp_objective(M::CPD, X::AbstractArray, loss)

Compute the GCP objective function for the model tensor M, data tensor X, and loss function loss.

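A usage sketch, assuming the package exports a gcp entry point for fitting models and a GCPLosses.LeastSquares loss type (both names are assumptions here):

  using GCPDecompositions

  X = randn(10, 20, 30)
  M = gcp(X, 2)    # fit a rank-2 CPD model to X
  GCPDecompositions.gcp_objective(M, X, GCPLosses.LeastSquares())
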
GCPDecompositions.gcp_grad_U! (Function)
gcp_grad_U!(GU, M::CPD, X::AbstractArray, loss)

Compute the GCP gradient with respect to the factor matrices U = (U[1],...,U[N]) for the model tensor M, data tensor X, and loss function loss, and store the result in GU = (GU[1],...,GU[N]).

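Continuing the sketch above, the gradient buffers can be preallocated to match the factor matrices; that M.U holds the tuple of factor matrices is an assumption about the CPD type:

  GU = map(similar, M.U)    # one buffer per factor matrix U[1], ..., U[N]
  GCPDecompositions.gcp_grad_U!(GU, M, X, GCPLosses.LeastSquares())
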
GCPDecompositions.gcp_stoch_objective (Function)
gcp_stoch_objective([rng=default_rng()], M::CPD, X::AbstractArray, loss, sampler)

Compute stochastic estimate of the GCP objective function for the model tensor M, data tensor X, and loss function loss using the sampler sampler with random number generator rng.

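A sketch using the StratifiedGCPSampler documented below, assuming the sampler types are exported (the sample counts are arbitrary):

  using Random

  sampler = StratifiedGCPSampler(100, 100)    # 100 nonzero + 100 zero entries per estimate
  GCPDecompositions.gcp_stoch_objective(Random.default_rng(), M, X, GCPLosses.LeastSquares(), sampler)
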
GCPDecompositions.gcp_stoch_grad_U! (Function)
gcp_stoch_grad_U!([rng=default_rng()], GU, M::CPD, X::AbstractArray, loss, sampler)

Compute stochastic estimate of the GCP gradient with respect to the factor matrices U = (U[1],...,U[N]) for the model tensor M, data tensor X, and loss function loss using the sampler sampler with random number generator rng, and store the result in GU = (GU[1],...,GU[N]).

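The gradient analogue follows the same pattern, combining preallocated buffers with a sampler (same assumptions as the sketches above):

  GU = map(similar, M.U)
  sampler = SemistratifiedGCPSampler(100, 100)
  GCPDecompositions.gcp_stoch_grad_U!(Random.default_rng(), GU, M, X, GCPLosses.LeastSquares(), sampler)
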
GCPDecompositions.AbstractGCPSampler (Type)
AbstractGCPSampler

Abstract type for samplers to use in stochastic evaluation of the objective and gradients.

Concrete types ConcreteSampler <: AbstractGCPSampler should implement

  • gcp_stoch_objective(rng, M, X, loss, sampler::ConcreteSampler)
  • gcp_stoch_grad_U!(rng, GU, M, X, loss, sampler::ConcreteSampler)
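
A hypothetical sketch of this interface; the type UniformGCPSampler and the uniform-sampling idea are illustrative, not part of the package:

  # Hypothetical sampler that would draw entries uniformly at random.
  struct UniformGCPSampler <: GCPDecompositions.AbstractGCPSampler
      num_samples::Int
  end

  function GCPDecompositions.gcp_stoch_objective(rng, M, X, loss, sampler::UniformGCPSampler)
      # ... draw sampler.num_samples entries of X uniformly at random
      #     and average their elementwise losses ...
  end

  function GCPDecompositions.gcp_stoch_grad_U!(rng, GU, M, X, loss, sampler::UniformGCPSampler)
      # ... accumulate the matching stochastic gradient estimate into GU ...
  end
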
GCPDecompositions.GCPSampleOnce (Type)
GCPSampleOnce(X::AbstractArray, sampler::AbstractGCPSampler)

Wrapper sampler that samples entries from X using sampler the first time it is called, then reuses the same indices on every subsequent call. For use with gcp_stoch_objective.

The internal field cache stores the cached values; exactly what is stored is an implementation detail defined by each sampler.

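A sketch of freezing a stratified sample so that repeated objective evaluations are directly comparable across iterations:

  fsampler = GCPSampleOnce(X, StratifiedGCPSampler(100, 100))
  # Every call below evaluates the objective on the same frozen set of indices.
  GCPDecompositions.gcp_stoch_objective(Random.default_rng(), M, X, GCPLosses.LeastSquares(), fsampler)
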
GCPDecompositions.StratifiedGCPSampler (Type)
StratifiedGCPSampler(num_nonzeros::Int, num_zeros::Int)

Stratified sampling of num_nonzeros nonzero entries and num_zeros zero entries with replacement. For SparseArrayCOO tensors, stored entries are all treated as nonzero.

GCPDecompositions.SemistratifiedGCPSampler (Type)
SemistratifiedGCPSampler(num_nonzeros::Int, num_zeros::Int)

Semistratified sampling of num_nonzeros nonzero entries and num_zeros assumed "zero" entries with replacement. For SparseArrayCOO tensors, stored entries are all treated as nonzero.

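Both samplers are constructed the same way. Per the docstrings above, the semistratified variant does not verify that its "zero" draws are actually zero (they are only assumed zero), which makes the sampling itself cheaper:

  strat = StratifiedGCPSampler(500, 500)            # zero draws taken from actual zeros
  semistrat = SemistratifiedGCPSampler(500, 500)    # zero draws only assumed zero
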
GCPDecompositions.CP_ALS (Type)
CP_ALS

Alternating Least Squares. The workhorse algorithm for the LeastSquares loss with no constraints.

Algorithm parameters:

  • maxiters::Int : max number of iterations (default: 200)
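
A usage sketch, assuming gcp accepts the algorithm via an algorithm keyword and that the parameters above are keyword arguments of the constructor (both assumptions):

  M = gcp(X, 2; algorithm = CP_ALS(maxiters = 500))
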
GCPDecompositions.CP_FastALS (Type)
CP_FastALS

Fast Alternating Least Squares.

Efficient ALS algorithm proposed in:

Fast Alternating LS Algorithms for High Order CANDECOMP/PARAFAC Tensor Factorizations. Anh-Huy Phan, Petr Tichavský, Andrzej Cichocki. IEEE Transactions on Signal Processing, 2013. DOI: 10.1109/TSP.2013.2269903

Algorithm parameters:

  • maxiters::Int : max number of iterations (default: 200)
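
Usage mirrors CP_ALS, under the same assumptions as above:

  M = gcp(X, 2; algorithm = CP_FastALS(maxiters = 500))

The speedup comes from reusing partial MTTKRP computations across modes (see CP_FastALS_iter! below), which matters most for higher-order tensors.
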
GCPDecompositions.GCP_Adam (Type)
GCP_Adam

Stochastic gradient-based optimization with adaptive moment estimation.

Brief description of standard Adam parameters:

  • α::Float64 : step size (default: 0.001)
  • β1::Float64 : exponential decay rate for first moment estimate (default: 0.9)
  • β2::Float64 : exponential decay rate for second moment estimate (default: 0.999)
  • ϵ::Float64 : offset added to denominator for numerical stability (default: 1e-8)

We employ a few common modifications with the following parameters:

  • epochiters::Int : iterations per epoch (default: 1000)
  • maxepochs::Int : max number of epochs (default: 1000)
  • faildecay::Float64 : step size decay rate for failure to decrease (default: 0.1)
  • maxfails::Int : max number of failures (default: 1)

And the final two parameters define which samplers to use:

  • fsampler::AbstractGCPSampler : sampler to use for function value
  • gsampler::AbstractGCPSampler : sampler to use for gradients
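
A configuration sketch; the values are illustrative, and the keyword-constructor form is an assumption:

  algorithm = GCP_Adam(;
      α = 0.01,                                                        # larger step size
      maxepochs = 100,
      fsampler = GCPSampleOnce(X, StratifiedGCPSampler(1000, 1000)),   # frozen sample, comparable objective values
      gsampler = SemistratifiedGCPSampler(500, 500),                   # fresh sample per gradient step
  )
  M = gcp(X, 2; algorithm)
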
GCPDecompositions.GCP_LBFGSB (Type)
GCP_LBFGSB

Limited-memory BFGS with Box constraints.

Algorithm parameters:

  • m::Int : max number of variable metric corrections (default: 10)
  • factr::Float64 : function tolerance in units of machine epsilon (default: 1e7)
  • pgtol::Float64 : (projected) gradient tolerance (default: 1e-5)
  • maxfun::Int : max number of function evaluations (default: 15000)
  • maxiter::Int : max number of iterations (default: 15000)
  • iprint::Int : verbosity (default: -1)
    • iprint < 0 means no output
    • iprint = 0 prints only one line at the last iteration
    • 0 < iprint < 99 prints f and |proj g| every iprint iterations
    • iprint = 99 prints details of every iteration except n-vectors
    • iprint = 100 also prints the changes of active set and final x
    • iprint > 100 prints details of every iteration including x and g

Notes:

  • this algorithm only supports Float64 numbers

See documentation of LBFGSB.jl for more details.

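A sketch tightening the gradient tolerance and enabling periodic output, under the same interface assumptions as above; note the Float64 restriction:

  algorithm = GCP_LBFGSB(pgtol = 1e-8, iprint = 1)
  M = gcp(Float64.(X), 2; algorithm)    # convert to Float64 first
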
GCPDecompositions.CP_FastALS_iter! (Function)
CP_FastALS_iter!(X, U, λ)

The algorithm for computing MTTKRP sequences is from "Fast Alternating LS Algorithms for High Order CANDECOMP/PARAFAC Tensor Factorizations" by Phan et al. (cited above), specifically Section III-C.

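For reference, a naive single-mode MTTKRP (the quantity whose sequence this routine computes efficiently) can be written as follows; this is illustrative code, not the package's implementation:

  using LinearAlgebra

  # Naive mode-n MTTKRP for a third-order tensor: the mode-n matricization
  # of X times the column-wise Khatri-Rao product of the other two factors.
  function mttkrp_naive(X::AbstractArray{T,3}, U, n) where {T}
      R = size(U[1], 2)                # shared number of columns (rank)
      modes = setdiff(1:3, n)          # the two modes other than n
      KR = reduce(hcat, [kron(U[modes[2]][:, r], U[modes[1]][:, r]) for r in 1:R])
      Xn = reshape(permutedims(X, [n; modes]), size(X, n), :)
      return Xn * KR                   # size(X, n) x R result
  end

The fast scheme in Section III-C amortizes this work by sharing partial products between the MTTKRPs for consecutive modes, rather than computing each one from scratch as above.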