5  A large-scale observational study

Load the packages to be used,

Code
using AlgebraOfGraphics
using CairoMakie
CairoMakie.activate!(; type="svg")
using CategoricalArrays
using DataFrames
using EmbraceUncertainty: dataset
using GLM                  # for the lm function
using MixedModels
using MixedModelsMakie
using SparseArrays         # for the nnz function
using Statistics           # for the mean function
using TypedTables

and define some constants and a utility function

Code
optsumdir(paths::AbstractString...) =
  joinpath(@__DIR__, "optsums", paths...)
@isdefined(contrasts) || const contrasts = Dict{Symbol, Any}()
@isdefined(progress) || const progress = false

In the previous chapter we explored and fit models to data from a large-scale designed experiment. Here we consider an observational study: ratings of movies by users of movielens.org, made available at the grouplens.org download site (Harper & Konstan, 2016).

The version of the data that we analyze is the full movielens data set in the ml-latest.zip archive at that site, consisting of over 27,000,000 ratings of more than 58,000 movies by over 280,000 users.

Because the name ml-latest.zip refers to whichever version of the data was most recent at the time of download, we provide the table of contents of the zip file we used, to identify the dataset precisely.

Archive:  /home/bates/Downloads/ml-latest.zip
  Length      Date    Time    Name
---------  ---------- -----   ----
        0  2018-09-26 16:46   ml-latest/
  1267039  2018-09-26 16:37   ml-latest/links.csv
 39744990  2018-09-26 16:11   ml-latest/tags.csv
    18103  2018-09-26 16:44   ml-latest/genome-tags.csv
759200511  2018-09-26 16:31   ml-latest/ratings.csv
     9784  2018-09-26 16:46   ml-latest/README.txt
414851573  2018-09-26 16:44   ml-latest/genome-scores.csv
  2858223  2018-09-26 16:35   ml-latest/movies.csv
---------                     -------
1217950223                     8 files

5.1 Structure of the data

One of the purposes of this chapter is to study which dimensions of the data have the greatest effect on the amount of memory used to represent the model and the time required to fit a model.

The two datasets examined in Chapter 4, from the English Lexicon Project (Balota et al., 2007), consist of trials or instances associated with subject and item factors. The subject and item factors are “incompletely crossed” in that each item occurred in trials with several subjects, but not all subjects, and each subject responded to several different items, but not to all items.

Similarly in the movie ratings, each instance of a rating is associated with a user and with a movie, and these factors are incompletely crossed.

ratings = dataset(:ratings)
Arrow.Table with 27753444 rows, 4 columns, and schema:
 :userId     Int32
 :movieId    Int32
 :rating     Float32
 :timestamp  Int32

with metadata given by a Base.ImmutableDict{String, String} with 1 entry:
  "url" => "https://files.grouplens.org/datasets/movielens/ml-latest.zip"

Convert this Arrow table to a Table and drop the timestamp column, which we won’t be using. (For small data sets dropping such columns is unimportant, but with over 27 million ratings it is worth dropping unnecessary columns to save memory.)

ratings =
  Table(getproperties(ratings, (:userId, :movieId, :rating)))
Table with 3 columns and 27753444 rows:
      userId  movieId  rating
    ┌────────────────────────
 1  │ 1       307      3.5
 2  │ 1       481      3.5
 3  │ 1       1091     1.5
 4  │ 1       1257     4.5
 5  │ 1       1449     4.5
 6  │ 1       1590     2.5
 7  │ 1       1591     1.5
 8  │ 1       2134     4.5
 9  │ 1       2478     4.0
 10 │ 1       2840     3.0
 11 │ 1       2986     2.5
 12 │ 1       3020     4.0
 13 │ 1       3424     4.5
 14 │ 1       3698     3.5
 15 │ 1       3826     2.0
 16 │ 1       3893     3.5
 17 │ 2       170      3.5
 18 │ 2       849      3.5
 19 │ 2       1186     3.5
 20 │ 2       1235     3.0
 21 │ 2       1244     3.0
 22 │ 2       1296     4.5
 23 │ 2       1663     3.0
 ⋮  │   ⋮        ⋮       ⋮
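
Only a small fraction of the possible user-movie combinations occur in these data. A rough check of how incomplete the crossing is (a sketch, assuming each user rates a given movie at most once):

let nusers = length(unique(ratings.userId)),
    nmovies = length(unique(ratings.movieId))
  length(ratings) / (nusers * nmovies)  # fraction of the full user-movie cross that is observed
end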

Information on the movies is available in the movies dataset, which we supplement with the number of ratings and the mean rating for each movie, computed from the ratings table.

movies = Table(
  disallowmissing!(
    leftjoin!(
      combine(
        groupby(DataFrame(ratings), :movieId),
        :rating => mean => :mnrtng,
        nrow => :nrtngs,
      ),
      DataFrame(dataset(:movies));
      on=:movieId,
    );
    error=false,
  )
)
Table with 25 columns and 53889 rows:
      movieId  mnrtng   nrtngs  title                 imdbId  tmdbId  Action  ⋯
    ┌──────────────────────────────────────────────────────────────────────────
 1  │ 1        3.88665  68469   Toy Story (1995)      114709  862     false   ⋯
 2  │ 2        3.24658  27143   Jumanji (1995)        113497  8844    false   ⋯
 3  │ 3        3.17398  15585   Grumpier Old Men (1…  113228  15602   false   ⋯
 4  │ 4        2.87454  2989    Waiting to Exhale (…  114885  31357   false   ⋯
 5  │ 5        3.07729  15474   Father of the Bride…  113041  11862   false   ⋯
 6  │ 6        3.84421  28683   Heat (1995)           113277  949     true    ⋯
 7  │ 7        3.37135  15301   Sabrina (1995)        114319  11860   false   ⋯
 8  │ 8        3.12248  1539    Tom and Huck (1995)   112302  45325   false   ⋯
 9  │ 9        3.00753  4449    Sudden Death (1995)   114576  9091    true    ⋯
 10 │ 10       3.43163  33086   GoldenEye (1995)      113189  710     true    ⋯
 11 │ 11       3.66028  19669   American President,…  112346  9087    false   ⋯
 12 │ 12       2.66965  4524    Dracula: Dead and L…  112896  12110   false   ⋯
 13 │ 13       3.33965  1952    Balto (1995)          112453  21032   false   ⋯
 14 │ 14       3.429    6838    Nixon (1995)          113987  10858   false   ⋯
 15 │ 15       2.73098  3154    Cutthroat Island (1…  112760  1408    true    ⋯
 16 │ 16       3.80236  21165   Casino (1995)         112641  524     false   ⋯
 17 │ 17       3.95013  24552   Sense and Sensibili…  114388  4584    false   ⋯
 18 │ 18       3.41207  6255    Four Rooms (1995)     113101  5       false   ⋯
 19 │ 19       2.64201  24913   Ace Ventura: When N…  112281  9273    false   ⋯
 20 │ 20       2.89448  4658    Money Train (1995)    113845  11517   true    ⋯
 21 │ 21       3.56837  25699   Get Shorty (1995)     113161  8012    false   ⋯
 22 │ 22       3.30114  11136   Copycat (1995)        112722  1710    false   ⋯
 23 │ 23       3.1588   4871    Assassins (1995)      112401  9691    true    ⋯
 ⋮  │    ⋮        ⋮       ⋮              ⋮              ⋮       ⋮       ⋮     ⋱

In contrast to data from a designed experiment, like the English Lexicon Project, the data from this observational study are extremely unbalanced with respect to the observational grouping factors, userId and movieId. The movies table includes an nrtngs column that gives the number of ratings for each movie, which varies from 1 to nearly 100,000.

extrema(movies.nrtngs)
(1, 97999)

The number of ratings per user is also highly skewed

users = Table(
  combine(
    groupby(DataFrame(ratings), :userId),
    nrow => :urtngs,
    :rating => mean => :umnrtng,
  ),
)
Table with 3 columns and 283228 rows:
      userId  urtngs  umnrtng
    ┌────────────────────────
 1  │ 1       16      3.3125
 2  │ 2       15      3.66667
 3  │ 3       11      3.54545
 4  │ 4       736     3.39742
 5  │ 5       72      4.26389
 6  │ 6       42      3.54762
 7  │ 7       15      3.2
 8  │ 8       31      3.35484
 9  │ 9       1       5.0
 10 │ 10      121     4.19008
 11 │ 11      19      3.73684
 12 │ 12      18      2.63889
 13 │ 13      20      4.15
 14 │ 14      174     3.94828
 15 │ 15      162     4.0
 16 │ 16      41      4.34146
 17 │ 17      2       4.0
 18 │ 18      92      3.68478
 19 │ 19      262     3.43893
 20 │ 20      4       2.75
 21 │ 21      15      3.13333
 22 │ 22      26      3.5
 23 │ 23      17      3.58824
 ⋮  │   ⋮       ⋮        ⋮
extrema(users.urtngs)
(1, 23715)

One way of visualizing the imbalance in the number of ratings per movie or per user is as an empirical cumulative distribution function (ecdf) plot, which is a “stair-step” plot where the vertical axis is the proportion of observations less than or equal to the corresponding value on the horizontal axis. Because the distribution of the number of ratings per movie or per user is so highly skewed toward small counts, we use a logarithmic horizontal axis in Figure 5.1.

Code
let
  f = Figure(; resolution=(1000, 400))
  xscale = log10
  xminorticksvisible = true
  xminorgridvisible = true
  yminorticksvisible = true
  xminorticks = IntervalsBetween(10)
  ylabel = "Relative cumulative frequency"
  nrtngs = sort(movies.nrtngs)
  ecdfplot(
    f[1, 1],
    nrtngs;
    npoints=last(nrtngs),
    axis=(
      xlabel="Number of ratings per movie (logarithmic scale)",
      xminorgridvisible,
      xminorticks,
      xminorticksvisible,
      xscale,
      ylabel,
      yminorticksvisible,
    ),
  )
  urtngs = sort(users.urtngs)
  ecdfplot(
    f[1, 2],
    urtngs;
    npoints=last(urtngs),
    axis=(
      xlabel="Number of ratings per user (logarithmic scale)",
      xminorgridvisible,
      xminorticks,
      xminorticksvisible,
      xscale,
      yminorticksvisible,
    ),
  )
  f
end
Figure 5.1: Empirical distribution plots of the number of ratings per movie and per user. The horizontal axes are on a logarithmic scale.

In this collection of over 27 million ratings, nearly 20% of the movies were rated only once and nearly half of the movies were rated 6 or fewer times.

count(≤(6), movies.nrtngs) / length(movies.nrtngs)
0.4967989756722151
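
The proportion of movies rated exactly once can be checked in the same way.

count(isone, movies.nrtngs) / length(movies.nrtngs)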

The ecdf plot of the number of ratings per user shows a similar pattern to that of the movies — a few users with a very large number of ratings and many users with just a few ratings.

For example, about 20% of the users rated 10 movies or fewer; the median number of movies rated is around 30; but the maximum is close to 24,000 (which is a lot of movies: over 20 years of watching 3 movies per day, every day, if this user actually watched all of them).
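
These summaries can be checked directly from the users table (median, like mean, is provided by the Statistics standard library).

(; prop10=count(≤(10), users.urtngs) / length(users.urtngs),
   med=median(users.urtngs),
   maxrtngs=maximum(users.urtngs))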

Movies with very few ratings provide little information about overall trends or even about the movie being rated. We can imagine that the “shrinkage” of random effects for movies with just a few ratings pulls their adjusted rating strongly towards the overall average.

Furthermore, the distribution of ratings for movies with only one rating is systematically lower than the distribution of ratings for movies rated at least five times.

First, add nrtngs and urtngs as columns of the ratings table.

ratings = Table(
  disallowmissing!(
    leftjoin!(
      leftjoin!(
        DataFrame(ratings),
        DataFrame(getproperties(movies, (:movieId, :nrtngs)));
        on=:movieId,
      ),
      DataFrame(getproperties(users, (:userId, :urtngs)));
      on=:userId,
    ),
  ),
)
Table with 5 columns and 27753444 rows:
      userId  movieId  rating  nrtngs  urtngs
    ┌────────────────────────────────────────
 1  │ 1       307      3.5     7958    16
 2  │ 1       481      3.5     6037    16
 3  │ 1       1091     1.5     6138    16
 4  │ 1       1257     4.5     5902    16
 5  │ 1       1449     4.5     6867    16
 6  │ 1       1590     2.5     8511    16
 7  │ 1       1591     1.5     6508    16
 8  │ 1       2134     4.5     7020    16
 9  │ 1       2478     4.0     7797    16
 10 │ 1       2840     3.0     6047    16
 11 │ 1       2986     2.5     6060    16
 12 │ 1       3020     4.0     7783    16
 13 │ 1       3424     4.5     7265    16
 14 │ 1       3698     3.5     8269    16
 15 │ 1       3826     2.0     8898    16
 16 │ 1       3893     3.5     5259    16
 17 │ 2       170      3.5     9574    15
 18 │ 2       849      3.5     9878    15
 19 │ 2       1186     3.5     8643    15
 20 │ 2       1235     3.0     9937    15
 21 │ 2       1244     3.0     10249   15
 22 │ 2       1296     4.5     7470    15
 23 │ 2       1663     3.0     9581    15
 ⋮  │   ⋮        ⋮       ⋮       ⋮       ⋮

then create a bar chart of the two distributions

Code
let
  fiveplus = zeros(Int, 10)
  onlyone = zeros(Int, 10)
  for (i, n) in
      zip(round.(Int, Float32(2) .* ratings.rating), ratings.nrtngs)
    if n > 4
      fiveplus[i] += 1
    elseif isone(n)
      onlyone[i] += 1
    end
  end
  fiveprop = fiveplus ./ sum(fiveplus)
  oneprop = onlyone ./ sum(onlyone)
  draw(
    data((;
      props=vcat(fiveprop, oneprop),
      rating=repeat(0.5:0.5:5.0; outer=2),
      nratings=repeat(["≥5", "only 1"]; inner=10),
    )) *
    mapping(
      :rating => nonnumeric,
      :props => "Proportion of ratings";
      color=:nratings => "Ratings/movie",
      dodge=:nratings,
    ) *
    visual(BarPlot),
  )
end
Figure 5.2: Distribution of ratings for movies with only one rating compared to movies with at least 5 ratings.

Similarly, users who rate very few movies add little information, even about the movies that they rated, because there isn’t sufficient information to contrast a specific rating with a typical rating for the user.

One way of dealing with the extreme imbalance in the number of observations per user or per movie is to set a threshold on the number of observations for a user or a movie to be included in the data used to fit the model. For example, a companion data set on grouplens.org, available in the ml-25m.zip archive, included only users who had rated at least 20 movies.

To be able to select ratings according to the number of ratings per user and the number of ratings per movie, we left-joined the movies.nrtngs and users.urtngs columns into the ratings data frame.

describe(DataFrame(ratings))
5×7 DataFrame
 Row │ variable  mean       min   median    max     nmissing  eltype
     │ Symbol    Abstract…  Real  Float64   Real    Int64     DataType
─────┼───────────────────────────────────────────────────────────────
   1 │ userId    141942.0   1     142022.0  283228         0  Int32
   2 │ movieId   18488.0    1     2716.0    193886         0  Int32
   3 │ rating    3.53045    0.5   3.5       5.0            0  Float32
   4 │ nrtngs    17238.2    1     10633.0   97999          0  Int32
   5 │ urtngs    559.946    1     294.0     23715          0  Int64

The medians of nrtngs and urtngs in this table are much higher than the values computed from the movies and users tables because a movie with nearly 98,000 ratings occurs that many times in this table whereas it occurs only once in the movies table.
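
With these columns in place, a trimmed subset such as those used in the next section can be extracted with a row filter. For example, keeping only movies and users with at least 20 ratings each (a sketch; the same filter appears inside the ratingsoptsum function in Section 5.2.2):

trimmed = filter(r -> (r.nrtngs ≥ 20) & (r.urtngs ≥ 20), ratings)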

5.2 Models fit with lower bounds on ratings per user and per movie

We fit a simple model to this dataset using different thresholds on the number of ratings per movie and the number of ratings per user. These fits were performed on compute servers with generous amounts of memory (128 GiB per node) and many compute cores (48 per node). A sample fit is shown in Section 5.2.2.

The results are summarized in the following table.

Code
sizespeed = Table(dataset(:sizespeed))
Table with 10 columns and 21 rows:
      mc  uc  nratings  nusers  nmvie  modelsz  L22sz    nv  fittime  evtime
    ┌────────────────────────────────────────────────────────────────────────
 1  │ 1   5   27716748  266980  53883  24.3439  21.6318  20  7352.56  303.921
 2  │ 1   10  27565774  243658  53852  24.3029  21.607   23  7463.83  297.491
 3  │ 1   20  26544038  174605  53781  24.1394  21.55    26  8321.86  305.485
 4  │ 2   5   27706599  266980  43734  16.961   14.2504  20  4938.6   245.938
 5  │ 2   10  27555656  243658  43734  16.9449  14.2504  23  5594.63  202.176
 6  │ 2   20  26533987  174605  43730  16.8358  14.2478  37  7451.41  187.425
 7  │ 5   5   27671580  266979  30824  9.78469  7.07894  23  2703.42  113.091
 8  │ 5   10  27520713  243658  30824  9.7686   7.07894  26  3523.98  139.028
 9  │ 5   20  26499391  174605  30823  9.66167  7.07848  18  2255.44  123.423
 10 │ 10  5   27624556  266979  23716  6.89142  4.19057  23  2016.23  81.2037
 11 │ 10  10  27473781  243658  23716  6.87534  4.19057  23  1937.04  77.2586
 12 │ 10  20  26452968  174605  23716  6.76892  4.19057  18  1560.85  81.4125
 13 │ 15  5   27585626  266979  20400  5.79716  3.10063  23  1640.61  68.2099
 14 │ 15  10  27434948  243658  20400  5.78108  3.10063  24  1773.55  70.9519
 15 │ 15  20  26414555  174605  20400  5.6747   3.10063  19  1358.37  61.5033
 16 │ 20  5   27551352  266978  18366  5.20626  2.51315  19  1266.38  63.8407
 17 │ 20  10  27400762  243658  18366  5.1902   2.51315  35  2438.19  70.5935
 18 │ 20  20  26380766  174605  18366  5.08385  2.51315  21  1489.64  70.6748
 19 │ 50  5   27394411  266974  13360  4.00751  1.32985  26  1425.35  55.2993
 20 │ 50  10  27244313  243656  13360  3.9915   1.32985  30  1660.57  52.5368
 21 │ 50  20  26226280  174604  13360  3.88534  1.32985  20  1196.71  52.6929

In this table, mc is the “movie cutoff” (i.e. the threshold on the number of ratings per movie); uc is the user cutoff (threshold on the number of ratings per user); nratings, nusers and nmvie are the number of ratings, users and movies in the resulting trimmed data set; modelsz is the memory footprint (GiB) of the fitted model representation; L22sz is the size (GiB) of the [2,2] block of the L matrix in that model; nv is the number of objective evaluations until convergence; fittime is the time (in seconds) required to fit the model; and evtime is the time (in seconds) per evaluation of the objective.

The “[2,2] block of the L matrix” is described in Section 5.2.2.
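
The bar plots in Figure 5.4 below also use an L22prop column: the fraction of the total model footprint accounted for by that block. If the dataset does not already contain such a column, it can be derived as follows (a sketch that assumes L22prop = L22sz / modelsz; the column name is chosen to match the later plotting code).

sizespeed = Table(
  transform(
    DataFrame(sizespeed),
    [:L22sz, :modelsz] => ByRow(/) => :L22prop,  # fraction of the model footprint in L[2,2]
  ),
)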

5.2.1 Dimensions of the model versus cut-off values

First we consider the effects of the minimum number of ratings per user and per movie on the dimensions of the data set, as shown in Figure 5.3.

Code
let
  f = Figure(resolution=(800, 1000))
  xlabel = "Minimum number of ratings per movie"
  mc = refarray(sizespeed.mc)
  xticks = (1:7, string.(refpool(sizespeed.mc)))
  uc = refarray(sizespeed.uc)
  # assumes lelements, llabels, and ltitle (legend elements, labels, and title
  # for the user-cutoff colors) are defined in an earlier, unshown code block
  Legend(
    f[1, 1],
    lelements,
    llabels,
    ltitle;
    orientation=:horizontal,
    tellwidth=false,
    tellheight=true,
  )
  barplot!(
    Axis(f[2, 1]; xticks, ylabel="Number of users"),
    mc,
    sizespeed.nusers;
    xticks,
    dodge=uc,
    color=uc,
  )
  barplot!(
    Axis(f[3, 1]; xticks, ylabel="Number of movies"),
    mc,
    sizespeed.nmvie;
    xticks,
    dodge=uc,
    color=uc,
  )
  barplot!(
    Axis(f[4, 1]; xlabel, xticks, ylabel="Number of observations"),
    mc,
    sizespeed.nratings;
    dodge=uc,
    color=uc,
  )
  f
end
Figure 5.3: Bar plot of data set dimensions by minimum number of ratings per user and per movie.

Unsurprisingly, Figure 5.3 shows that increasing the minimum number of ratings per user decreases the number of users, does not noticeably affect the number of movies, and results in small decreases in the total number of ratings. Conversely, increasing the minimum number of ratings per movie does not noticeably affect the number of users, causes dramatic reductions in the number of movies and has very little effect on the total number of ratings.

5.2.2 Memory footprint of the model representation

To explain what “the [2,2] block of the L matrix” is and why its size is important, we provide a brief overview of the evaluation of the “profiled” log-likelihood for a LinearMixedModel representation.

To make the discussion concrete we consider one of the models represented in this table, with cut-offs of 20 ratings per user and 20 ratings per movie. This, and any of the models shown in the table, can be restored in a few minutes from the saved optsum values, as opposed to taking up to two hours to perform the fit.

Code
function ratingsoptsum(
  mcutoff::Integer,
  ucutoff::Integer;
  data=Table(ratings),
  form=@formula(rating ~ 1 + (1 | userId) + (1 | movieId)),
  contrasts=Dict(:movieId => Grouping(), :userId => Grouping()),
)
  optsumfnm = optsumdir(
    "mvm$(lpad(mcutoff, 2, '0'))u$(lpad(ucutoff, 2, '0')).json",
  )
  isfile(optsumfnm) ||
    throw(ArgumentError("File $optsumfnm is not available"))
  return restoreoptsum!(
    LinearMixedModel(
      form,
      filter(
        r -> (r.nrtngs ≥ mcutoff) & (r.urtngs ≥ ucutoff),
        data,
      );
      contrasts,
    ),
    optsumfnm,
  )
end
mvm20u20 = ratingsoptsum(20, 20)
print(mvm20u20)
Linear mixed model fit by maximum likelihood
 rating ~ 1 + (1 | userId) + (1 | movieId)
     logLik       -2 logLik         AIC           AICc            BIC      
 -33644689.0320  67289378.0639  67289386.0639  67289386.0639  67289446.4165

Variance components:
            Column   Variance Std.Dev. 
userId   (Intercept)  0.184529 0.429568
movieId  (Intercept)  0.242959 0.492909
Residual              0.732964 0.856133
 Number of obs: 26380766; levels of grouping factors: 174605, 18366

  Fixed-effects parameters:
──────────────────────────────────────────────────
               Coef.  Std. Error       z  Pr(>|z|)
──────────────────────────────────────────────────
(Intercept)  3.43345   0.0038669  887.91    <1e-99
──────────────────────────────────────────────────

Creating the model representation and restoring the optimal parameter values can take a couple of minutes because the objective is evaluated twice — at the initial parameter values and at the final parameter values — during the call to restoreoptsum!.

Each evaluation of the objective, which requires setting the value of the parameter \({\boldsymbol\theta}\) in the numerical representation of the model, updating the blocked Cholesky factor, \(\mathbf{L}\), and evaluating the scalar objective value from this factor, takes a little over a minute (71 seconds) on a server node and probably longer on a laptop.
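
A single evaluation can be timed directly. The following is a minimal sketch using the setθ!, updateL!, and objective functions from MixedModels to re-evaluate the objective at the converged value of \({\boldsymbol\theta}\) stored in the model.

@time objective(updateL!(setθ!(mvm20u20, mvm20u20.θ)))  # one full objective evaluation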

The lower triangular L factor is large but sparse. It is stored in six blocks, whose dimensions and types are shown below.

BlockDescription(mvm20u20)
rows     userId      movieId      fixed
174605   Diagonal
18366    Sparse      Diag/Dense
2        Dense       Dense        Dense

This display gives the types of the blocks in two blocked matrices: A, which is derived from the data and does not depend on the parameters, and L, which is derived from A and the \({\boldsymbol\theta}\) parameter. The only difference in their structures is in the [2,2] block, which is diagonal in A and a dense, lower-triangular matrix in L.
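
The difference in the [2,2] block can also be seen directly from the types of the stored blocks. The blocks of A and L are kept in one-dimensional arrays ordered over the lower triangle ([1,1], [2,1], [2,2], …), so the [2,2] block is the third element.

typeof(mvm20u20.A[3]), typeof(mvm20u20.L[3])  # expect a Diagonal block in A and a dense Matrix in L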

The memory footprint (bytes) of each of the blocks is

Code
let
  block = String[]
  for i in 1:3
    for j in 1:i
      push!(block, "[$i,$j]")
    end
  end
  Table((;
    block,
    Abytes=Base.summarysize.(mvm20u20.A),
    Lbytes=Base.summarysize.(mvm20u20.L),
  ))
end
Table with 3 columns and 6 rows:
     block  Abytes     Lbytes
   ┌─────────────────────────────
 1 │ [1,1]  1396888    1396888
 2 │ [2,1]  317267776  317267776
 3 │ [2,2]  146976     2698479688
 4 │ [3,1]  2793720    2793720
 5 │ [3,2]  293896     293896
 6 │ [3,3]  72         72

resulting in total memory footprints (GiB) of

Code
NamedTuple{(:A, :L)}(
  Base.summarysize.(getproperty.(Ref(mvm20u20), (:A, :L))) ./ 2^30,
)
(A = 0.29979219287633896, L = 2.8128103613853455)

That is, L requires roughly 10 times as much storage as A, and that difference is entirely due to the different structure of the [2,2] block.

This phenomenon of the Cholesky factor requiring more storage than the sparse matrix being factored is described as fill-in.

Note that although the dimensions of the [2,1] block are larger than those of the [2,2] block, its memory footprint is smaller because it is stored as a sparse matrix. This block is over 99% zeros or, equivalently, less than 1% nonzeros,

let
  L21 = mvm20u20.L[2]  # blocks are stored in a one-dimensional array
  nnz(L21) / length(L21)
end
0.008226519768989443

which makes the sparse representation much smaller than the dense representation.
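
To quantify the savings, compare the actual footprint of the [2,1] block, reported above, with the storage that a dense Float64 matrix of the same dimensions would require. This is a quick sketch; Base.summarysize includes the index vectors of the sparse representation.

let L21 = mvm20u20.L[2]
  dense = prod(size(L21)) * sizeof(eltype(L21)) / 2^30   # GiB if stored as a dense matrix
  actual = Base.summarysize(L21) / 2^30                  # GiB for the sparse representation
  (; dense, actual)
end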

This fill-in of the [2,2] block leads to a somewhat unintuitive conclusion. The memory footprint of the model representation depends strongly on the number of movies, less strongly on the number of users and almost not at all on the number of ratings. The first two parts of this conclusion are illustrated in Figure 5.4.

Code
let
  f = Figure(resolution=(800, 1000))
  xlabel = "Minimum number of ratings per movie"
  mc = refarray(sizespeed.mc)
  xticks = (1:7, string.(refpool(sizespeed.mc)))
  uc = refarray(sizespeed.uc)
  Legend(
    f[1, 1],
    lelements,
    llabels,
    ltitle;
    orientation=:horizontal,
    tellwidth=false,
    tellheight=true,
  )
  barplot!(
    Axis(f[2, 1]; xticks, ylabel="Memory requirement (GiB)"),
    mc,
    sizespeed.modelsz;
    xticks,
    dodge=uc,
    color=uc,
  )
  barplot!(
    Axis(f[3, 1]; xticks, ylabel="Size of L[2,2] (GiB)"),
    mc,
    sizespeed.L22sz;
    xticks,
    dodge=uc,
    color=uc,
  )
  barplot!(
    Axis(f[4, 1]; xlabel, xticks, ylabel="L[2,2] memory fraction"),
    mc,
    sizespeed.L22prop;
    xticks,
    dodge=uc,
    color=uc,
  )
  f
end
Figure 5.4: Bar plot of memory footprint of the model representation by minimum number of ratings per user and per movie.

Figure 5.4 shows that when all the movies are included in the data to which the model is fit (i.e. mc == 1) the total memory footprint is over 20 GiB, and nearly 90% of that memory is that required for the [2,2] block of L. Even when requiring a minimum of 50 ratings per movie, the [2,2] block of L is over 30% of the memory footprint.

In a sense this is good news because the amount of storage required for the [2,2] block can be nearly cut in half by taking advantage of the fact that it is a triangular matrix. The rectangular full packed format looks especially promising for this purpose.
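
A back-of-the-envelope calculation shows the potential saving. For the fit with 18,366 movies the dense [2,2] block holds \(n^2\) Float64 values, whereas only the lower triangle, \(n(n+1)/2\) values, carries information. (A sketch; it ignores any padding a packed format might add.)

let n = 18366                            # number of movies in the mc = 20, uc = 20 fit
  full = n^2 * 8 / 2^30                  # dense Float64 storage in GiB (matches the L22sz reported above)
  packed = (n * (n + 1) ÷ 2) * 8 / 2^30  # lower triangle only
  (; full, packed)
end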

In general, for models with scalar random effects for two incompletely crossed grouping factors, the memory footprint depends strongly on the smaller of the number of levels of the grouping factors, less strongly on the larger number, and almost not at all on the number of observations.

5.2.3 Speed of log-likelihood evaluation

The time required to fit a model to large data sets is dominated by the time required to evaluate the log-likelihood during the optimization of the parameter estimates. The time for one evaluation is given in the evtime column of sizespeed, along with the number of evaluations to convergence, nv, and the total time to fit the model, fittime. The reason for considering evtime in addition to fittime and nv is that the evtime for one model, relative to other models, is reasonably stable across computers, whereas nv, and hence fittime, can be affected by seemingly trivial variations in function values resulting from different implementations of low-level calculations, such as the BLAS (Basic Linear Algebra Subroutines).

That is, we can’t expect to reproduce nv exactly when fitting the same model on different computers or with slightly different versions of the software, but the pattern in evtime with respect to uc and mc can be expected to be reproducible.

Code
let
  f = Figure(resolution=(800, 1000))
  xlabel = "Minimum number of ratings per movie"
  mc = refarray(sizespeed.mc)
  xticks = (1:7, string.(refpool(sizespeed.mc)))
  uc = refarray(sizespeed.uc)
  Legend(
    f[1, 1],
    lelements,
    llabels,
    ltitle;
    orientation=:horizontal,
    tellwidth=false,
    tellheight=true,
  )
  barplot!(
    Axis(
      f[2, 1];
      xticks,
      ylabel="Log-likelihood evaluation time (m)",
    ),
    mc,
    sizespeed.evtime ./ 60;
    xticks,
    dodge=uc,
    color=uc,
  )
  barplot!(
    Axis(
      f[3, 1];
      xticks,
      ylabel="Number of evaluations to convergence",
    ),
    mc,
    sizespeed.nv;
    xticks,
    dodge=uc,
    color=uc,
  )
  barplot!(
    Axis(
      f[4, 1];
      xlabel,
      xticks,
      ylabel="Time (hr) to fit the model",
    ),
    mc,
    sizespeed.fittime ./ 3600;
    xticks,
    dodge=uc,
    color=uc,
  )
  f
end
Figure 5.5: Bar plots of log-likelihood evaluation time, number of evaluations to convergence, and total time to fit the model, by minimum number of ratings per user and per movie.

Figure 5.5 shows that the average evaluation time for the log-likelihood function depends strongly on the number of movies and less strongly on the number of users.

However, the middle panel shows that the number of evaluations to convergence is highly variable. Most of these fits required roughly 20 to 26 evaluations, but a few required considerably more (up to 37 in this table).

The derivation of the log-likelihood for linear mixed-effects models is given in Section B.7, which provides a rather remarkable result: the profiled log-likelihood for a linear mixed-effects model can be evaluated from the Cholesky factor of a blocked, positive-definite, symmetric matrix.
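
To give a concrete sense of that result, the profiled deviance (negative twice the profiled log-likelihood) can be written, in a notation close to (though possibly differing in detail from) that of Section B.7, as

\[
-2\,\tilde{\ell}({\boldsymbol\theta}) =
  \log\left(\left|\mathbf{L}_{\boldsymbol\theta}\right|^2\right) +
  n\left[1 + \log\!\left(\frac{2\pi\, r^2({\boldsymbol\theta})}{n}\right)\right] ,
\]

where \(n\) is the number of observations, \(\mathbf{L}_{\boldsymbol\theta}\) is the Cholesky factor of \(\boldsymbol\Lambda_{\boldsymbol\theta}'\mathbf{Z}'\mathbf{Z}\boldsymbol\Lambda_{\boldsymbol\theta}+\mathbf{I}\), and \(r^2({\boldsymbol\theta})\) is the minimized penalized residual sum of squares. Both quantities are obtained directly from the blocked Cholesky factor, which is why each evaluation of the objective amounts to updating \(\mathbf{L}\).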

There are two blocked matrices, A and L, stored in the model representation and, for large models such as those we are considering, these are by far the largest fields in that representation.

5.3 Model size and speed for different thresholds

Code
draw(
  data(sizespeed) *
  mapping(
    :L22sz => "Size (GiB) of 2,2 block of L",
    :modelsz => "Size (GiB) of model representation";
    color=:uc,
  ) *
  visual(Scatter),
)
Figure 5.6: Amount of storage for L[2,2] versus amount of storage for the entire model. Colors are determined by the minimum number of ratings per user in the data to which the model was fit.

A linear regression of modelsz on the number of users (nusers) and on L22sz

sizemod =
  lm(@formula(Float64(modelsz) ~ 1 + nusers + L22sz), sizespeed)
coeftable(sizemod)
───────────────────────────────────────────────────────────────────────────────
                  Coef.   Std. Error        t  Pr(>|t|)   Lower 95%   Upper 95%
───────────────────────────────────────────────────────────────────────────────
(Intercept)  2.32966     0.0134151     173.66    <1e-29  2.30147     2.35784
nusers       1.3747e-6   5.68925e-8     24.16    <1e-14  1.25517e-6  1.49423e-6
L22sz        1.00125     0.000321437  3114.92    <1e-52  1.00058     1.00193
───────────────────────────────────────────────────────────────────────────────

provides an \(r^2\) value very close to one, indicating an almost perfect fit.

r²(sizemod)
0.9999981449902416
