$$
\newcommand{\R}{\mathbb{R}}
\newcommand{\E}{\mathbb{E}}
\newcommand{\x}{\mathbf{x}}
\newcommand{\y}{\mathbf{y}}
\newcommand{\wv}{\mathbf{w}}
\newcommand{\av}{\mathbf{\alpha}}
\newcommand{\bv}{\mathbf{b}}
\newcommand{\N}{\mathbb{N}}
\newcommand{\id}{\mathbf{I}}
\newcommand{\ind}{\mathbf{1}}
\newcommand{\0}{\mathbf{0}}
\newcommand{\unit}{\mathbf{e}}
\newcommand{\one}{\mathbf{1}}
\newcommand{\zero}{\mathbf{0}}
\newcommand\rfrac[2]{^{#1}\!/_{#2}}
\newcommand{\norm}[1]{\left\lVert#1\right\rVert}
$$
# Alternating Least Squares

## Description

The alternating least squares (ALS) algorithm factorizes a given matrix $R$ into two factors $U$ and $V$ such that $R \approx U^TV$. The unknown row dimension is given as a parameter to the algorithm and is called the number of latent factors. Since matrix factorization is typically used in the context of recommendation, the matrices $U$ and $V$ are called the user and item matrix, respectively. The $i$th column of the user matrix is denoted by $u_i$ and the $i$th column of the item matrix by $v_i$. The matrix $R$ is called the ratings matrix with $(R)_{i,j} = r_{i,j}$.

In order to find the user and item matrices, the following problem is solved:

$$\arg\min_{U,V} \sum_{\{i,j \mid r_{i,j} \neq 0\}} \left(r_{i,j} - u_i^T v_j\right)^2 + \lambda \left( \sum_i n_{u_i} \norm{u_i}^2 + \sum_j n_{v_j} \norm{v_j}^2 \right)$$

with $\lambda$ being the regularization factor, $n_{u_i}$ being the number of items the user $i$ has rated and $n_{v_j}$ being the number of times the item $j$ has been rated. This regularization scheme to avoid overfitting is called weighted-$\lambda$-regularization. Details can be found in the work of Zhou et al.

By fixing one of the matrices $U$ or $V$, we obtain a quadratic form which can be solved directly. The solution of the modified problem is guaranteed to monotonically decrease the overall cost function. By applying this step alternately to the matrices $U$ and $V$, we can iteratively improve the matrix factorization.
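As an illustration of this direct solve (a sketch following the weighted-$\lambda$-regularization scheme; the index set $I_i$ and vector $r_i$ below are notation introduced here for the sketch): for a fixed item matrix $V$, setting the gradient of the cost with respect to $u_i$ to zero yields

$$u_i = \left(V_{I_i} V_{I_i}^T + \lambda n_{u_i} \id\right)^{-1} V_{I_i} r_i$$

where $V_{I_i}$ contains the columns $v_j$ for the items $j$ rated by user $i$, and $r_i$ collects the corresponding ratings. The update for $v_j$ with $U$ fixed is analogous.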

The matrix $R$ is given in its sparse representation as a tuple of $(i, j, r)$ where $i$ denotes the row index, $j$ the column index and $r$ is the matrix value at position $(i,j)$.
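To make the alternating scheme concrete, here is a toy sketch in plain Scala (not the Flink implementation): it uses a single latent factor, so each closed-form solve collapses to a scalar division, and it works directly on the sparse $(i, j, r)$ tuples.

```scala
object AlsSketch {
  /** One-factor ALS sweeps over sparse (i, j, r) tuples.
    * Returns (userFactors, itemFactors). Toy sketch, not the Flink code. */
  def factorize(ratings: Seq[(Int, Int, Double)],
                numUsers: Int, numItems: Int,
                lambda: Double, sweeps: Int): (Array[Double], Array[Double]) = {
    var u = Array.fill(numUsers)(1.0) // user factors u_i
    var v = Array.fill(numItems)(1.0) // item factors v_j
    for (_ <- 0 until sweeps) {
      // Fix v, solve each u(i): the 1x1 quadratic solve is a scalar division,
      // with weighted-lambda regularization lambda * n_{u_i}.
      u = Array.tabulate(numUsers) { i =>
        val rated = ratings.filter(_._1 == i)
        val num = rated.map { case (_, j, r) => r * v(j) }.sum
        val den = rated.map { case (_, j, _) => v(j) * v(j) }.sum + lambda * rated.size
        num / den
      }
      // Fix u, solve each v(j) analogously.
      v = Array.tabulate(numItems) { j =>
        val rated = ratings.filter(_._2 == j)
        val num = rated.map { case (i, _, r) => r * u(i) }.sum
        val den = rated.map { case (i, _, _) => u(i) * u(i) }.sum + lambda * rated.size
        num / den
      }
    }
    (u, v)
  }

  def main(args: Array[String]): Unit = {
    // R = [[4, 2], [2, 1]] is rank one, so a single latent factor suffices.
    val ratings = Seq((0, 0, 4.0), (0, 1, 2.0), (1, 0, 2.0), (1, 1, 1.0))
    val (u, v) = factorize(ratings, numUsers = 2, numItems = 2, lambda = 0.01, sweeps = 10)
    ratings.foreach { case (i, j, r) =>
      println(f"r($i,$j) = $r%.1f, prediction = ${u(i) * v(j)}%.2f")
    }
  }
}
```

With one factor per user and item, each update divides the rating-weighted sum of the opposite factors by their regularized squared sum, which is exactly the $1 \times 1$ case of the quadratic solve described above.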

## Operations

`ALS` is a `Predictor`. As such, it supports the `fit` and `predict` operations.

### Fit

ALS is trained on the sparse representation of the rating matrix:

`fit: DataSet[(Int, Int, Double)] => Unit`

### Predict

ALS predicts for each tuple of row and column index the rating:

`predict: DataSet[(Int, Int)] => DataSet[(Int, Int, Double)]`
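The shape of these two operations can be mirrored in a hypothetical plain-Scala sketch (Flink's `DataSet` stood in by `Seq`). It only memorizes training ratings rather than factorizing, so it illustrates the signatures, not the algorithm:

```scala
// Hypothetical mirror of the fit/predict shapes above; NOT the Flink API.
class NaiveAls {
  private var known = Map.empty[(Int, Int), Double]

  // fit: Seq[(Int, Int, Double)] => Unit
  def fit(training: Seq[(Int, Int, Double)]): Unit =
    known = training.map { case (i, j, r) => (i, j) -> r }.toMap

  // predict: Seq[(Int, Int)] => Seq[(Int, Int, Double)]
  // Unknown (i, j) pairs get a placeholder rating of 0.0 here;
  // real ALS would return the inner product of u_i and v_j instead.
  def predict(test: Seq[(Int, Int)]): Seq[(Int, Int, Double)] =
    test.map { case (i, j) => (i, j, known.getOrElse((i, j), 0.0)) }
}
```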

## Parameters

The alternating least squares implementation can be controlled by the following parameters:

Parameters | Description |
---|---|
NumFactors | The number of latent factors to use for the underlying model. It is equivalent to the dimension of the calculated user and item vectors. (Default value: 10) |
Lambda | Regularization factor. Tune this value in order to avoid overfitting or poor performance due to strong generalization. (Default value: 1) |
Iterations | The maximum number of iterations. (Default value: 10) |
Blocks | The number of blocks into which the user and item matrix are grouped. The fewer blocks one uses, the less data is sent redundantly. However, bigger blocks entail bigger update messages which have to be stored on the heap. If the algorithm fails because of an OutOfMemoryException, then try to increase the number of blocks. (Default value: None) |
Seed | Random seed used to generate the initial item matrix for the algorithm. (Default value: 0) |
TemporaryPath | Path to a temporary directory into which intermediate results are stored. If this value is set, then the algorithm is split into two preprocessing steps, the ALS iteration and a post-processing step which calculates a last ALS half-step. The preprocessing steps calculate the `OutBlockInformation` and `InBlockInformation` required for the ALS iteration, and the results of the individual steps are stored in the specified directory. (Default value: None) |
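For orientation, the parameters above can be bundled in a hypothetical plain-Scala case class (a sketch only, not Flink's actual parameter API; the defaults mirror the table):

```scala
// Hypothetical bundle of the ALS parameters from the table above.
// Plain Scala sketch, not Flink's actual parameter API.
case class AlsParams(
  numFactors: Int = 10,        // dimension of the user/item vectors
  lambda: Double = 1.0,        // regularization factor
  iterations: Int = 10,        // maximum number of ALS iterations
  blocks: Option[Int] = None,  // blocking of the user/item matrices
  seed: Long = 0L,             // seed for the initial item matrix
  temporaryPath: Option[String] = None // directory for intermediate results
)
```

Individual settings would then be overridden by named arguments, e.g. `AlsParams(numFactors = 20, lambda = 0.9)`.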