Loss functions
GCPDecompositions.GCPLosses — Module

Loss functions for Generalized CP Decomposition.
GCPDecompositions.GCPLosses.AbstractLoss — Type

`AbstractLoss`

Abstract type for GCP loss functions $f(x,m)$, where $x$ is the data entry and $m$ is the model entry.

Concrete types `ConcreteLoss <: AbstractLoss` should implement:

- `value(loss::ConcreteLoss, x, m)` that computes the value of the loss function $f(x,m)$
- `deriv(loss::ConcreteLoss, x, m)` that computes the value of the partial derivative $\partial_m f(x,m)$ with respect to $m$
- `domain(loss::ConcreteLoss)` that returns an `Interval` from IntervalSets.jl defining the domain for $m$
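The interface above can be sketched in plain Julia. This is a hypothetical illustration, not the package API: `RelativeError` is an invented example loss, the struct does not subtype `AbstractLoss`, and the domain is returned as a bare tuple of endpoints rather than an `Interval` from IntervalSets.jl, so the snippet is self-contained.

```julia
# Hypothetical concrete loss f(x, m) = (x - m)^2 / (x^2 + 1), implementing the
# three methods described above. In the package, the struct would subtype
# GCPLosses.AbstractLoss and domain would return an IntervalSets.jl Interval.
struct RelativeError end

value(::RelativeError, x, m) = (x - m)^2 / (x^2 + 1)
deriv(::RelativeError, x, m) = -2 * (x - m) / (x^2 + 1)  # ∂f/∂m
domain(::RelativeError) = (-Inf, Inf)                    # stand-in for Interval(-Inf, Inf)

loss = RelativeError()
value(loss, 2.0, 1.0)   # (2 - 1)^2 / 5 = 0.2
deriv(loss, 2.0, 1.0)   # -2 * (2 - 1) / 5 = -0.4
```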
GCPDecompositions.GCPLosses.BernoulliLogit — Type

`BernoulliLogit(eps::Real = 1e-10)`

Loss corresponding to the statistical assumption of Bernoulli data X with log odds of success given by the low-rank model tensor M.

- Distribution: $x_i \sim \operatorname{Bernoulli}(\rho_i)$
- Link function: $m_i = \log(\frac{\rho_i}{1 - \rho_i})$
- Loss function: $f(x, m) = \log(1 + e^m) - xm$
- Domain: $m \in \mathbb{R}$
GCPDecompositions.GCPLosses.BernoulliOdds — Type

`BernoulliOdds(eps::Real = 1e-10)`

Loss corresponding to the statistical assumption of Bernoulli data X with odds of success given by the low-rank model tensor M.

- Distribution: $x_i \sim \operatorname{Bernoulli}(\rho_i)$
- Link function: $m_i = \frac{\rho_i}{1 - \rho_i}$
- Loss function: $f(x, m) = \log(m + 1) - x\log(m + \epsilon)$
- Domain: $m \in [0, \infty)$
GCPDecompositions.GCPLosses.BetaDivergence — Type

`BetaDivergence(β::Real, eps::Real)`

β-divergence loss for a given β

- Loss function: $f(x, m; \beta) = \begin{cases} \frac{1}{\beta} m^{\beta} - \frac{1}{\beta - 1} x m^{\beta - 1} & \beta \in \mathbb{R} \setminus \{0, 1\} \\ m - x \log(m) & \beta = 1 \\ \frac{x}{m} + \log(m) & \beta = 0 \end{cases}$
- Domain: $m \in [0, \infty)$
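The piecewise definition can be written out directly. This is a plain-Julia sketch of the formula above, not the package implementation; adding `eps` to `m` before evaluating is an assumption, mirroring how the other nonnegative-domain losses guard $m = 0$.

```julia
# Plain-Julia sketch of the β-divergence above. The three branches are the
# generic case β ∉ {0, 1}, the β = 1 case (Poisson-like, m - x log m), and the
# β = 0 case (Itakura–Saito, x/m + log m). eps guards m = 0 (an assumption here).
function beta_divergence(x, m, β; eps = 1e-10)
    m = m + eps
    if β == 1
        return m - x * log(m)
    elseif β == 0
        return x / m + log(m)
    else
        return m^β / β - x * m^(β - 1) / (β - 1)
    end
end

beta_divergence(1.0, 1.0, 2.0)   # ≈ 0.5 - 1.0 = -0.5 (up to eps); β = 2 recovers
                                 # least squares up to a term constant in m
```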
GCPDecompositions.GCPLosses.Gamma — Type

`Gamma(eps::Real = 1e-10)`

Loss corresponding to a statistical assumption of Gamma-distributed data X with scale given by the low-rank model tensor M.
- Distribution: $x_i \sim \operatorname{Gamma}(k, \sigma_i)$
- Link function: $m_i = k \sigma_i$
- Loss function: $f(x,m) = \frac{x}{m + \epsilon} + \log(m + \epsilon)$
- Domain: $m \in [0, \infty)$
GCPDecompositions.GCPLosses.Huber — Type

`Huber(Δ::Real)`

Huber loss for a given Δ

- Loss function: $f(x, m) = \begin{cases} (x - m)^2 & |x - m| \leq \Delta \\ 2\Delta |x - m| - \Delta^2 & \text{otherwise} \end{cases}$
- Domain: $m \in \mathbb{R}$
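The two branches can be checked in a few lines. This is a plain-Julia sketch of the formula above, not the package implementation:

```julia
# Plain-Julia sketch of the Huber loss above: quadratic within Δ of the data,
# linear beyond, so outliers are penalized less harshly than under least
# squares. The two branches agree at |x - m| = Δ (both give Δ^2).
huber(x, m, Δ) = abs(x - m) <= Δ ? (x - m)^2 : 2Δ * abs(x - m) - Δ^2

huber(0.0, 0.5, 1.0)   # 0.25 (quadratic region, |x - m| = 0.5 ≤ Δ)
huber(0.0, 3.0, 1.0)   # 2*1*3 - 1 = 5.0 (linear region)
```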
GCPDecompositions.GCPLosses.LeastSquares — Type

`LeastSquares()`

Loss corresponding to conventional CP decomposition. Corresponds to a statistical assumption of Gaussian data X with mean given by the low-rank model tensor M.
- Distribution: $x_i \sim \mathcal{N}(\mu_i, \sigma)$
- Link function: $m_i = \mu_i$
- Loss function: $f(x,m) = (x-m)^2$
- Domain: $m \in \mathbb{R}$
GCPDecompositions.GCPLosses.NegativeBinomialOdds — Type

`NegativeBinomialOdds(r::Integer, eps::Real = 1e-10)`

Loss corresponding to the statistical assumption of Negative Binomial data X with odds of failure given by the low-rank model tensor M.
- Distribution: $x_i \sim \operatorname{NegativeBinomial}(r, \rho_i)$
- Link function: $m_i = \frac{\rho_i}{1 - \rho_i}$
- Loss function: $f(x, m) = (r + x) \log(1 + m) - x\log(m + \epsilon)$
- Domain: $m \in [0, \infty)$
GCPDecompositions.GCPLosses.NonnegativeLeastSquares — Type

`NonnegativeLeastSquares()`

Loss corresponding to nonnegative CP decomposition. Corresponds to a statistical assumption of Gaussian data X with nonnegative mean given by the low-rank model tensor M.
- Distribution: $x_i \sim \mathcal{N}(\mu_i, \sigma)$
- Link function: $m_i = \mu_i$
- Loss function: $f(x,m) = (x-m)^2$
- Domain: $m \in [0, \infty)$
GCPDecompositions.GCPLosses.Poisson — Type

`Poisson(eps::Real = 1e-10)`

Loss corresponding to a statistical assumption of Poisson data X with rate given by the low-rank model tensor M.
- Distribution: $x_i \sim \operatorname{Poisson}(\lambda_i)$
- Link function: $m_i = \lambda_i$
- Loss function: $f(x,m) = m - x \log(m + \epsilon)$
- Domain: $m \in [0, \infty)$
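The role of the `eps` parameter shared by the nonnegative-domain losses is easy to see here. A plain-Julia sketch of the formula above (not the package implementation):

```julia
# Plain-Julia sketch of the Poisson loss above. The eps term keeps log
# well-defined when a model entry m is exactly 0, which the domain m ∈ [0, ∞)
# permits.
poisson_loss(x, m; eps = 1e-10) = m - x * log(m + eps)

poisson_loss(3.0, 2.0)   # 2 - 3*log(2) ≈ -0.079 (up to eps)
poisson_loss(0.0, 0.0)   # 0.0 — finite thanks to eps
```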
GCPDecompositions.GCPLosses.PoissonLog — Type

`PoissonLog()`

Loss corresponding to a statistical assumption of Poisson data X with log-rate given by the low-rank model tensor M.
- Distribution: $x_i \sim \operatorname{Poisson}(\lambda_i)$
- Link function: $m_i = \log \lambda_i$
- Loss function: $f(x,m) = e^m - x m$
- Domain: $m \in \mathbb{R}$
GCPDecompositions.GCPLosses.Rayleigh — Type

`Rayleigh(eps::Real = 1e-10)`

Loss corresponding to the statistical assumption of Rayleigh data X with scale given by the low-rank model tensor M.
- Distribution: $x_i \sim \operatorname{Rayleigh}(\theta_i)$
- Link function: $m_i = \sqrt{\frac{\pi}{2}}\,\theta_i$
- Loss function: $f(x, m) = 2\log(m + \epsilon) + \frac{\pi}{4}(\frac{x}{m + \epsilon})^2$
- Domain: $m \in [0, \infty)$
GCPDecompositions.GCPLosses.UserDefined — Type

`UserDefined`

Type for user-defined loss functions $f(x,m)$, where $x$ is the data entry and $m$ is the model entry.

Contains three fields:

- `func::Function`: function that evaluates the loss function $f(x,m)$
- `deriv::Function`: function that evaluates the partial derivative $\partial_m f(x,m)$ with respect to $m$
- `domain::Interval`: `Interval` from IntervalSets.jl defining the domain for $m$

The constructor is `UserDefined(func; deriv, domain)`. If not provided,

- `deriv` is automatically computed from `func` using forward-mode automatic differentiation
- `domain` gets a default value of `Interval(-Inf, +Inf)`
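The default-filling behavior of the constructor can be sketched in plain Julia. This is a hypothetical stand-in, not the package type: `MyUserDefined` is an invented name, the default derivative uses a central finite difference instead of the forward-mode AD the package uses, and the domain is a bare tuple rather than an IntervalSets.jl `Interval`, so the snippet is self-contained.

```julia
# Hypothetical sketch of how a UserDefined-style constructor fills in defaults.
struct MyUserDefined
    func::Function
    deriv::Function
    domain::Tuple{Float64,Float64}   # stand-in for Interval(-Inf, +Inf)
end

function MyUserDefined(func;
    # Central finite difference as a stand-in for forward-mode AD:
    deriv = (x, m) -> (func(x, m + 1e-6) - func(x, m - 1e-6)) / 2e-6,
    domain = (-Inf, Inf),
)
    return MyUserDefined(func, deriv, domain)
end

# e.g. a user-supplied least-squares loss f(x, m) = (x - m)^2:
loss = MyUserDefined((x, m) -> (x - m)^2)
loss.func(1.0, 0.5)    # 0.25
loss.deriv(1.0, 0.5)   # ≈ -1.0, matching ∂m (x - m)^2 = -2(x - m)
```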
GCPDecompositions.GCPLosses.value — Function

`value(loss, x, m)`

Compute the value of the (entrywise) loss function `loss` for data entry `x` and model entry `m`.

GCPDecompositions.GCPLosses.deriv — Function

`deriv(loss, x, m)`

Compute the derivative of the (entrywise) loss function `loss` at the model entry `m` for the data entry `x`.

GCPDecompositions.GCPLosses.domain — Function

`domain(loss)`

Return the domain of the (entrywise) loss function `loss`.
GCPDecompositions.GCPLosses.objective — Function

`objective(M::CPD, X::AbstractArray, loss)`

Compute the GCP objective function for the model tensor M, data tensor X, and loss function loss.
GCPDecompositions.GCPLosses.grad_U! — Function

`grad_U!(GU, M::CPD, X::AbstractArray, loss)`

Compute the GCP gradient with respect to the factor matrices U = (U[1],...,U[N]) for the model tensor M, data tensor X, and loss function loss, and store the result in GU = (GU[1],...,GU[N]).
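The objective is built from the entrywise losses described above. The following plain-Julia sketch shows that accumulation over bare arrays; it is an illustration of the idea, not the package implementation (which takes a `CPD` model and may normalize differently), and `entrywise_objective` is an invented name.

```julia
# Sketch: a GCP-style objective accumulates the entrywise loss f(x, m) over
# all corresponding entries of the data tensor X and the model tensor M.
entrywise_objective(value, X, M) = sum(value(x, m) for (x, m) in zip(X, M))

X = [1.0 2.0; 3.0 4.0]
M = [1.0 1.0; 3.0 5.0]
entrywise_objective((x, m) -> (x - m)^2, X, M)   # 0 + 0 + 1 + 1 = 2.0
```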