using AlgebraOfGraphics
using CairoMakie
using CategoricalArrays
using DataFrames
using EmbraceUncertainty: dataset
using GLM           # for the lm function
using MixedModels
using MixedModelsMakie
using SparseArrays  # for the nnz function
using Statistics    # for the mean function
using TidierPlots
using TypedTables
In the previous chapter we explored and fit models to data from a large-scale designed experiment. Here we consider an observational study: ratings of movies by users of movielens.org, made available at the grouplens.org download site (Harper & Konstan, 2016).
We analyze the MovieLens 25M Dataset: roughly 25 million ratings of about 59,000 movies by over 162,000 users.
Because of constraints on the redistribution of these data, the first time that dataset(:ratings) or dataset(:movies) is executed, the 250 MB zip file is downloaded from the grouplens data site, expanded, and the tables are created as Arrow files.
This operation can take a couple of minutes.
5.1 Structure of the data
One of the purposes of this chapter is to study which dimensions of the data have the greatest effect on the amount of memory used to represent the model and the time required to fit a model.
The two datasets examined in Chapter 4, from the English Lexicon Project (Balota et al., 2007), consist of trials or instances associated with subject and item factors. The subject and item factors are “incompletely crossed” in that each item occurred in trials with several subjects, but not all subjects, and each subject responded to several different items, but not to all items.
Similarly in the movie ratings, each instance of a rating is associated with a user and with a movie, and these factors are incompletely crossed.
ratings = dataset(:ratings)
Arrow.Table with 25000095 rows, 4 columns, and schema:
:userId String
:movieId String
:rating Float32
:timestamp Int32
with metadata given by a Base.ImmutableDict{String, String} with 1 entry:
"url" => "https://files.grouplens.org/datasets/movielens/ml-25m.zip"
Convert this Arrow table to a Table and drop the timestamp column, which we won't be using. (For small datasets dropping such columns is not important, but with over 25 million ratings it does help to drop unnecessary columns to save memory.)
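A minimal sketch of that conversion, together with loading the movies table used below (the three column names follow the Arrow schema shown above; the actual code used for the analysis may differ):

ratings = Table(;                 # keep only the columns we need
    userId=ratings.userId,
    movieId=ratings.movieId,
    rating=ratings.rating,
)
movies = dataset(:movies)         # per-movie table, including the nrtngs column described below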
In contrast to data from a designed experiment, like the English Lexicon Project, the data from this observational study are extremely unbalanced with respect to the observational grouping factors, userId and movieId. The movies table includes an nrtngs column that gives the number of ratings for each movie, which varies from 1 to over 80,000.
extrema(movies.nrtngs)
(1, 81491)
The number of ratings per user is also highly skewed.
users = Table(
    combine(
        groupby(DataFrame(ratings), :userId),
        nrow => :urtngs,
        :rating => mean => :umnrtng,
    ),
)
This selection of ratings was limited to users who had provided at least 20 ratings.
One way of visualizing the imbalance in the number of ratings per movie or per user is as an empirical cumulative distribution function (ecdf) plot, which is a “stair-step” plot where the vertical axis is the proportion of observations less than or equal to the corresponding value on the horizontal axis. Because the distribution of the number of ratings per movie or per user is so highly skewed in the low range we use a logarithmic horizontal axis in Figure 5.1.
Code
let f = Figure(; size=(700, 350))
    xscale = log10
    xminorticksvisible = true
    xminorgridvisible = true
    yminorticksvisible = true
    xminorticks = IntervalsBetween(10)
    ylabel = "Relative cumulative frequency"
    nrtngs = sort(movies.nrtngs)
    ecdfplot(
        f[1, 1],
        nrtngs;
        npoints=last(nrtngs),
        axis=(
            xlabel="Number of ratings per movie (logarithmic scale)",
            xminorgridvisible,
            xminorticks,
            xminorticksvisible,
            xscale,
            ylabel,
            yminorticksvisible,
        ),
    )
    urtngs = sort(users.urtngs)
    ecdfplot(
        f[1, 2],
        urtngs;
        npoints=last(urtngs),
        axis=(
            xlabel="Number of ratings per user (logarithmic scale)",
            xminorgridvisible,
            xminorticks,
            xminorticksvisible,
            xscale,
            yminorticksvisible,
        ),
    )
    f
end
In this collection of about 25 million ratings, nearly 20% of the movies are rated only once and over half of the movies were rated 6 or fewer times.
count(≤(6), movies.nrtngs) / length(movies.nrtngs)
0.5205344894744864
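The claim about movies rated only once can be checked in the same way (a quick sketch; output not shown here):

count(==(1), movies.nrtngs) / length(movies.nrtngs)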
The ecdf plot of the number of ratings per user shows a similar pattern to that of the movies — a few users with a very large number of ratings and many users with just a few ratings.
For example, the minimum number of movies rated is 20 (due to the inclusion constraint); the median number of movies rated is around 70; but the maximum is over 32,000 (which is a lot of movies: over 30 years of 3 movies per day, every day, if this user actually watched all of them).
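These figures can be checked from the users table (a quick sketch; median is provided by the Statistics standard library imported above):

extrema(users.urtngs), median(users.urtngs)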
Movies with very few ratings provide little information about overall trends or even about the movie being rated. We can imagine that the “shrinkage” of random effects for movies with just a few ratings pulls their adjusted rating strongly towards the overall average.
Furthermore, the distribution of ratings for movies with only one rating is systematically lower than the distribution of ratings for movies rated at least five times.
First, add nrtngs and urtngs as columns of the ratings table.
Similarly, users who rate very few movies add little information, even about the movies that they rated, because there isn’t sufficient information to contrast a specific rating with a typical rating for the user.
One way of dealing with the extreme imbalance in the number of observations per user or per movie is to set a threshold on the number of observations for a user or a movie to be included in the data used to fit the model.
To be able to select ratings according to the number of ratings per user and the number of ratings per movie, we left-joined the movies.nrtngs and users.urtngs columns into the ratings data frame.
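A sketch of that join using DataFrames; the joined result is given the hypothetical name ratings_df here, and the movies table is assumed to carry a movieId key column:

ratings_df = leftjoin(
    leftjoin(DataFrame(ratings), select(DataFrame(movies), :movieId, :nrtngs); on=:movieId),
    select(DataFrame(users), :userId, :urtngs);
    on=:userId,
)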
Seemingly inconsistent medians of “nrtngs” and “urtngs”
The medians of nrtngs and urtngs in this table are much higher than the corresponding values from the movies and users tables because a movie with, say, 80,000 ratings occurs 80,000 times in this table, whereas it occurs only once in the movies table.
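In other words, each movie's nrtngs value is repeated once per rating, so the median over rating rows is effectively weighted by the number of ratings. With the hypothetical ratings_df from the sketch above, the contrast could be computed as:

# skipmissing guards against the Union{Missing, ...} eltype that leftjoin produces
median(skipmissing(ratings_df.nrtngs)), median(movies.nrtngs)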
5.2 Models fit with lower bounds on ratings per user and per movie
We fit a simple model to this dataset using different thresholds on the number of ratings per movie and the number of ratings per user. These fits were performed on compute servers with generous amounts of memory (128 GiB/node) and numbers of compute cores (48/node). A sample fit is shown in Section 5.2.2.
The results are summarized in the following table.
Code
sizespeed = DataFrame(dataset(:sizespeed))
sizespeed.ucutoff = collect(sizespeed.ucutoff)  # temporary fix to get TidierPlots to work
sizespeed
21×11 DataFrame
 Row │ mc    uc    nratings  nusers  nmvie  modelsz  L22sz    nv    fittime  evtime   ucutoff
     │ Int8  Int8  Int32     Int32   Int32  Float64  Float64  Int8  Float64  Float64  String
─────┼───────────────────────────────────────────────────────────────────────────────────────
   1 │    1    20  25000095  162541  59047  28.4216  25.9768    24  9293.91  372.338  U20
   2 │    2    20  24989797  162541  48749  20.1491  17.706     36  11872.9  316.237  U20
   3 │    5    20  24945870  162541  32720  10.4133  7.97658    21  3299.55  151.666  U20
   4 │   10    20  24890583  162541  24330  6.84118  4.41036    26  2484.71  80.4182  U20
   5 │   15    20  24846701  162541  20590  5.58446  3.15866    22  1810.84  70.8015  U20
   6 │   20    20  24810483  162541  18430  4.95285  2.5307     22  1666.03  75.6806  U20
   7 │   50    20  24644928  162540  13176  3.69924  1.29347    20  880.731  42.1572  U20
   8 │    1    40  23700163  115891  58930  28.189   25.874     23  8835.84  369.294  U40
   9 │    2    40  23689979  115891  48746  20.0173  17.7039    35  11336.9  324.62   U40
  10 │    5    40  23646529  115891  32720  10.2837  7.97658    27  3001.03  107.42   U40
  11 │   10    40  23591948  115891  24330  6.71161  4.41036    28  2235.33  70.2365  U40
  12 │   15    40  23548590  115891  20590  5.45494  3.15866    22  1554.82  67.2297  U40
  13 │   20    40  23512852  115891  18430  4.82338  2.5307     22  1446.86  59.8116  U40
  14 │   50    40  23349895  115891  13176  3.57002  1.29347    24  1045.58  41.9256  U40
  15 │    1    80  21374218   74894  58755  27.8049  25.7205    23  10270.7  372.721  U80
  16 │    2    80  21364203   74894  48740  19.7822  17.6995    21  6473.96  292.296  U80
  17 │    5    80  21321452   74894  32720  10.0531  7.97658    22  2478.71  107.91   U80
  18 │   10    80  21267814   74894  24330  6.48111  4.41036    22  1808.68  69.0953  U80
  19 │   15    80  21225289   74894  20590  5.22452  3.15866    24  1685.01  66.9154  U80
  20 │   20    80  21190291   74894  18430  4.59303  2.5307     18  1117.07  52.4674  U80
  21 │   50    80  21031261   74894  13176  3.34005  1.29347    21  991.497  45.3183  U80
In this table, mc is the "movie cutoff" (i.e. the threshold on the number of ratings per movie); uc is the user cutoff (threshold on the number of ratings per user); nratings, nusers and nmvie are the number of ratings, users and movies in the resulting trimmed data set; modelsz is the size (in GiB) of the model representation; L22sz is the size (also in GiB) of the [2,2] block of the L matrix in that model; nv is the number of function evaluations until convergence; fittime is the time (in seconds) required to fit the model; and evtime is the time (in seconds) per function evaluation. The ucutoff column is a string label for the user cutoff, used for coloring the figures below.
The “[2,2] block of the L matrix” is described in Section 5.2.2.
5.2.1 Dimensions of the model versus cut-off values
As shown in Figure 5.3, the number of ratings varies from a little over 21 million to 25 million and is mostly associated with the user cutoff.
Code
ggplot(sizespeed, aes(; x=:mc, y=:nratings, color=:ucutoff)) +
    geom_point() +
    geom_line() +
    labs(x="Minimum number of ratings per movie", y="Number of ratings")
For this range of choices of cutoffs, the user cutoff has more impact on the number of ratings in the reduced dataset than does the movie cutoff.
A glance at the table shows that the number of users, nusers, is essentially a function of only the user cutoff, uc (the one exception being at mc = 50 and uc = 20).
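This can be confirmed directly from the sizespeed table (a quick sketch):

unique(select(sizespeed, :uc, :nusers))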
Figure 5.4 shows the similarly unsurprising result that the number of movies in the reduced table is essentially determined by the movie cutoff.
Code
ggplot(sizespeed, aes(; x=:mc, y=:nmvie, color=:ucutoff)) +
    geom_point() +
    geom_line() +
    labs(x="Minimum number of ratings per movie", y="Number of movies in table")
5.2.2 Memory footprint of the model representation
To explain what “the [2,2] block of the L matrix” is and why its size is important, we provide a brief overview of the evaluation of the “profiled” log-likelihood for a LinearMixedModel representation.
To make the discussion concrete we consider one of the models represented in this table, with cut-offs of 80 ratings per user and 50 ratings per movie. This, and any of the models shown in the table, can be restored in a few minutes from the saved optsum values, as opposed to taking up to two hours to perform the fit.
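The restoration code looks roughly like the sketch below; the trimmed ratings table ratings5080 and the file name "mvm50u80_optsum.json" are hypothetical placeholders for the actual names used to produce the fit shown next:

mvm50u80 = let
    model = LinearMixedModel(
        @formula(rating ~ 1 + (1 | userId) + (1 | movieId)),
        ratings5080,                               # hypothetical: ratings trimmed to mc = 50, uc = 80
    )
    restoreoptsum!(model, "mvm50u80_optsum.json")  # hypothetical saved optsum file
    model
end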
┌ Warning: optsum was saved with an older version of MixedModels.jl: consider resaving.
└ @ MixedModels ~/.julia/packages/MixedModels/hs2Ke/src/serialization.jl:28
Linear mixed model fit by maximum likelihood
rating ~ 1 + (1 | userId) + (1 | movieId)
logLik -2 logLik AIC AICc BIC
-26488585.4152 52977170.8304 52977178.8304 52977178.8304 52977238.2765
Variance components:
Column Variance Std.Dev.
userId (Intercept) 0.178157 0.422086
movieId (Intercept) 0.253558 0.503545
Residual 0.714548 0.845310
Number of obs: 21031261; levels of grouping factors: 74894, 13176
Fixed-effects parameters:
──────────────────────────────────────────────────
Coef. Std. Error z Pr(>|z|)
──────────────────────────────────────────────────
(Intercept) 3.40329 0.00469215 725.31 <1e-99
──────────────────────────────────────────────────
Creating the model representation and restoring the optimal parameter values can take a couple of minutes because the objective is evaluated twice — at the initial parameter values and at the final parameter values — during the call to restoreoptsum!.
Each evaluation of the objective, which requires setting the value of the parameter \({\boldsymbol\theta}\) in the numerical representation of the model, updating the blocked Cholesky factor, \(\mathbf{L}\), and evaluating the scalar objective value from this factor, takes a little over a minute (71 seconds) on a server node and probably longer on a laptop.
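That evaluation can be timed directly (a sketch using the restored model; setθ! installs the parameter vector, updateL! updates the blocked Cholesky factor, and objective computes the profiled criterion from it):

θfinal = copy(mvm50u80.optsum.final)     # the converged parameter vector
@time begin                              # one full evaluation of the objective
    MixedModels.setθ!(mvm50u80, θfinal)
    MixedModels.updateL!(mvm50u80)
    objective(mvm50u80)
end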
The lower triangular L factor is large but sparse. It is stored in six blocks, whose dimensions and types are shown in the display below.
BlockDescription(mvm50u80)
rows    userId      movieId     fixed
74894   Diagonal
13176   Sparse      Diag/Dense
    2   Dense       Dense       Dense
This display gives the types of two blocked matrices: A, which is derived from the data and does not depend on the parameters, and L, which is derived from A and the \({\boldsymbol\theta}\) parameter. The only difference in their structures is in the [2,2] block, which is diagonal in A and a dense, lower-triangular matrix in L.
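The difference can be inspected directly. Both A and L store their blocks in the same one-dimensional, row-major, lower-triangular order used in the next code block, so index 3 corresponds to the [2,2] block (a sketch):

typeof(mvm50u80.A[3]), typeof(mvm50u80.L[3])   # the [2,2] blocks of A and L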
The memory footprint (bytes) of each of the blocks is
Code
let block = String[]
    for i in 1:3
        for j in 1:i
            push!(block, "[$i,$j]")
        end
    end
    Table((;
        block,
        Abytes=Base.summarysize.(mvm50u80.A),
        Lbytes=Base.summarysize.(mvm50u80.L),
    ))
end
That is, L requires roughly 10 times the amount of storage as does A, and that difference is entirely due to the different structure of the [2,2] block.
This phenomenon of the Cholesky factor requiring more storage than the sparse matrix being factored is described as fill-in.
Note that although the dimensions of the [2,1] block are larger than those of the [2,2] block its memory footprint is smaller because it is a sparse matrix. The matrix is about 98% zeros or, equivalently, a little over 2% nonzeros,
let L21 = mvm50u80.L[2]  # blocks are stored in a one-dimensional array
    nnz(L21) / length(L21)
end
0.021312514927999675
which makes the sparse representation much smaller than the dense representation.
This fill-in of the [2,2] block leads to a somewhat unintuitive conclusion: the memory footprint of the model representation depends strongly on the number of movies, less strongly on the number of users, and almost not at all on the number of ratings (Figure 5.5).
Code
ggplot(sizespeed, aes(x=:mc, y=:modelsz, color=:ucutoff)) +
    geom_point() +
    geom_line() +
    labs(x="Minimum number of ratings per movie", y="Size of model (GiB)")
Figure 5.6 shows the dominance of the [2,2] block of L in the overall memory footprint of the model.
Code
ggplot(sizespeed, aes(; x=:L22sz, y=:modelsz, color=:ucutoff)) +
    geom_point() +
    geom_line() +
    labs(y="Size of model representation (GiB)", x="Size of [2,2] block of L (GiB)")
Figure 5.5 shows that when all the movies are included in the data to which the model is fit (i.e. mc == 1) the total memory footprint is over 20 GiB, and nearly 90% of that memory is that required for the [2,2] block of L. Even when requiring a minimum of 50 ratings per movie, the [2,2] block of L is over 30% of the memory footprint.
In a sense this is good news because the amount of storage required for the [2,2] block can be nearly cut in half by taking advantage of the fact that it is a triangular matrix. The rectangular full packed format looks especially promising for this purpose.
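A back-of-the-envelope sketch of that saving for the mc = 50 models, where the [2,2] block is a 13176 × 13176 matrix of Float64 values:

let k = 13176                               # number of movies at the mc = 50 cutoff
    dense = 8 * k^2 / 2^30                  # full square storage, in GiB
    packed = 8 * (k * (k + 1) ÷ 2) / 2^30   # lower triangle only, in GiB
    (dense, packed)
end

The dense value agrees with the L22sz column of the table for the mc = 50 rows; storing only the lower triangle would roughly halve it.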
In general, for models with scalar random effects for two incompletely crossed grouping factors, the memory footprint depends strongly on the smaller of the number of levels of the grouping factors, less strongly on the larger number, and almost not at all on the number of observations.
5.2.3 Speed of log-likelihood evaluation
The time required to fit a model to large datasets is dominated by the time required to evaluate the log-likelihood during the optimization of the parameter estimates. The time for one evaluation is given in the evtime column of sizespeed, along with the number of evaluations to convergence, nv, and the time to fit the model, fittime. The reason for considering evtime in addition to fittime and nv is that the evtime for one model, relative to other models, is reasonably stable across computers, whereas nv, and hence fittime, can be affected by seemingly trivial variations in function values resulting from different implementations of low-level calculations, such as the BLAS (Basic Linear Algebra Subroutines).
That is, we can't expect to reproduce nv exactly when fitting the same model on different computers or with slightly different versions of software, but the pattern in evtime with respect to uc and mc can be expected to be reproducible.
As shown in Figure 5.7 the evaluation time for the objective is predominantly a function of the size of the [2, 2] block of L.
Code
ggplot(sizespeed, aes(x=:L22sz, y=:evtime, color=:ucutoff)) +
    geom_point() +
    geom_line() +
    labs(x="Size of [2,2] block of L (GiB)", y="Time for one evaluation of objective (s)")
However, the number of evaluations to convergence is highly variable: most of these models required between 20 and 25 evaluations, but some required 35 or more.
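The spread can be read from the nv column of sizespeed (a quick sketch):

extrema(sizespeed.nv)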
The derivation of the log-likelihood for linear mixed-effects models is given in Section B.7, which provides a rather remarkable result: the profiled log-likelihood for a linear mixed-effects model can be evaluated from the Cholesky factor of a blocked, positive-definite, symmetric matrix.
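Stated compactly (a sketch of that result, in standard notation where \(\mathbf{Z}\) is the random-effects model matrix, \(\boldsymbol\Lambda_{\boldsymbol\theta}\) is the relative covariance factor, \(n\) is the number of observations, and \(r^2({\boldsymbol\theta})\) is the penalized residual sum of squares at the conditionally optimal fixed effects and random effects), the objective, which is negative twice the profiled log-likelihood, is

\[
-2\,\tilde{\ell}({\boldsymbol\theta}) =
\log\left|\boldsymbol\Lambda_{\boldsymbol\theta}^\top\mathbf{Z}^\top\mathbf{Z}\boldsymbol\Lambda_{\boldsymbol\theta}+\mathbf{I}\right|
+ n\left[1+\log\left(\frac{2\pi\, r^2({\boldsymbol\theta})}{n}\right)\right] .
\]

Both terms are available directly from the blocked Cholesky factor: the log-determinant is twice the sum of the logarithms of the diagonal elements of the blocks of \(\mathbf{L}\) associated with the random effects, and \(r^2({\boldsymbol\theta})\) is obtained from the final diagonal element of \(\mathbf{L}\).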
This page was rendered from git revision 57a0584.
Balota, D. A., Yap, M. J., Hutchison, K. A., Cortese, M. J., Kessler, B., Loftis, B., Neely, J. H., Nelson, D. L., Simpson, G. B., & Treiman, R. (2007). The English lexicon project. Behavior Research Methods, 39(3), 445–459. https://doi.org/10.3758/bf03193014
Harper, F. M., & Konstan, J. A. (2016). The MovieLens datasets. ACM Transactions on Interactive Intelligent Systems, 5(4), 1–19. https://doi.org/10.1145/2827872