5  A large-scale observational study

Load the packages to be used,

Code
using AlgebraOfGraphics
using CairoMakie
using CategoricalArrays
using DataFrames
using EmbraceUncertainty: dataset
using GLM                  # for the lm function
using MixedModels
using MixedModelsMakie
using SparseArrays         # for the nnz function
using Statistics           # for the mean function
using TidierPlots
using TypedTables

and define some constants and a utility function

Code
optsumdir(paths::AbstractString...) =
  joinpath(@__DIR__, "optsums", paths...)
@isdefined(contrasts) || const contrasts = Dict{Symbol,Any}()
@isdefined(progress) || const progress = false

In the previous chapter we explored and fit models to data from a large-scale designed experiment. Here we consider an observational study - ratings of movies by users of movielens.org made available at the grouplens.org download site (Harper & Konstan, 2016).

We analyze the MovieLens 25M Dataset, which consists of roughly 25 million ratings of about 59,000 movies by over 162,000 users.

Because of constraints on the redistribution of these data, the first time that dataset(:ratings) or dataset(:movies) is executed, the 250 MB zip file is downloaded from the grouplens data site, expanded, and the tables created as Arrow files.

This operation can take a couple of minutes.

5.1 Structure of the data

One of the purposes of this chapter is to study which dimensions of the data have the greatest effect on the amount of memory used to represent the model and the time required to fit a model.

The two datasets examined in Chapter 4, from the English Lexicon Project (Balota et al., 2007), consist of trials or instances associated with subject and item factors. The subject and item factors are “incompletely crossed” in that each item occurred in trials with several subjects, but not all subjects, and each subject responded to several different items, but not to all items.

Similarly in the movie ratings, each instance of a rating is associated with a user and with a movie, and these factors are incompletely crossed.

ratings = dataset(:ratings)
Arrow.Table with 25000095 rows, 4 columns, and schema:
 :userId     String
 :movieId    String
 :rating     Float32
 :timestamp  Int32

with metadata given by a Base.ImmutableDict{String, String} with 1 entry:
  "url" => "https://files.grouplens.org/datasets/movielens/ml-25m.zip"

Convert this Arrow table to a Table and drop the timestamp column, which we won’t be using. (For small data sets dropping such columns is not important but with over 25 million ratings it does help to drop unnecessary columns to save some memory space.)

ratings =
  Table(getproperties(ratings, (:userId, :movieId, :rating)))
Table with 3 columns and 25000095 rows:
      userId   movieId  rating
    ┌─────────────────────────
 1  │ U000001  M000296  5.0
 2  │ U000001  M000306  3.5
 3  │ U000001  M000307  5.0
 4  │ U000001  M000665  5.0
 5  │ U000001  M000899  3.5
 6  │ U000001  M001088  4.0
 7  │ U000001  M001175  3.5
 8  │ U000001  M001217  3.5
 9  │ U000001  M001237  5.0
 10 │ U000001  M001250  4.0
 11 │ U000001  M001260  3.5
 12 │ U000001  M001653  4.0
 13 │ U000001  M002011  2.5
 14 │ U000001  M002012  2.5
 15 │ U000001  M002068  2.5
 16 │ U000001  M002161  3.5
 17 │ U000001  M002351  4.5
 ⋮  │    ⋮        ⋮       ⋮
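
As a rough check on the memory saved, the timestamp column stores one Int32 per rating, so dropping it avoids materializing roughly 4 bytes per row, or about 95 MiB for 25 million rows (a back-of-the-envelope calculation that ignores container overhead):

25_000_095 * sizeof(Int32) / 2^20   # approximate MiB occupied by the timestamp column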

Information on the movies is available in the movies dataset.

movies = Table(dataset(:movies))
Table with 24 columns and 59047 rows:
      movieId  nrtngs  title                 imdbId  tmdbId  Action  ⋯
    ┌─────────────────────────────────────────────────────────────────
 1  │ M000356  81491   Forrest Gump (1994)   109830  13      false   ⋯
 2  │ M000318  81482   Shawshank Redemptio…  111161  278     false   ⋯
 3  │ M000296  79672   Pulp Fiction (1994)   110912  680     false   ⋯
 4  │ M000593  74127   Silence of the Lamb…  102926  274     false   ⋯
 5  │ M002571  72674   Matrix, The (1999)    133093  603     true    ⋯
 6  │ M000260  68717   Star Wars: Episode …  76759   11      true    ⋯
 7  │ M000480  64144   Jurassic Park (1993)  107290  329     true    ⋯
 8  │ M000527  60411   Schindler's List (1…  108052  424     false   ⋯
 9  │ M000110  59184   Braveheart (1995)     112573  197     true    ⋯
 10 │ M002959  58773   Fight Club (1999)     137523  550     true    ⋯
 11 │ M000589  57379   Terminator 2: Judgm…  103064  280     true    ⋯
 12 │ M001196  57361   Star Wars: Episode …  80684   1891    true    ⋯
 13 │ M000001  57309   Toy Story (1995)      114709  862     false   ⋯
 14 │ M004993  55736   Lord of the Rings: …  120737  120     false   ⋯
 15 │ M000050  55366   Usual Suspects, The…  114814  629     false   ⋯
 16 │ M001210  54917   Star Wars: Episode …  86190   1892    true    ⋯
 17 │ M001198  54675   Raiders of the Lost…  82971   85      true    ⋯
 ⋮  │    ⋮       ⋮              ⋮              ⋮       ⋮       ⋮     ⋱

In contrast to data from a designed experiment, like the English Lexicon Project, the data from this observational study are extremely unbalanced with respect to the observational grouping factors, userId and movieId. The movies table includes an nrtngs column that gives the number of ratings for each movie, which varies from 1 to over 80,000.

extrema(movies.nrtngs)
(1, 81491)

The number of ratings per user is also highly skewed

users = Table(
  combine(
    groupby(DataFrame(ratings), :userId),
    nrow => :urtngs,
    :rating => mean => :umnrtng,
  ),
)
Table with 3 columns and 162541 rows:
      userId   urtngs  umnrtng
    ┌─────────────────────────
 1  │ U000001  70      3.81429
 2  │ U000002  184     3.63043
 3  │ U000003  656     3.69741
 4  │ U000004  242     3.3781
 5  │ U000005  101     3.75248
 6  │ U000006  26      4.15385
 7  │ U000007  25      3.64
 8  │ U000008  155     3.6129
 9  │ U000009  178     3.86517
 10 │ U000010  53      3.45283
 11 │ U000011  24      3.14583
 12 │ U000012  736     3.2962
 13 │ U000013  412     3.55947
 14 │ U000014  31      4.59677
 15 │ U000015  70      4.22143
 16 │ U000016  24      4.47917
 17 │ U000017  29      3.65517
 ⋮  │    ⋮       ⋮        ⋮
extrema(users.urtngs)
(20, 32202)

This selection of ratings was limited to users who had provided at least 20 ratings.

One way of visualizing the imbalance in the number of ratings per movie or per user is as an empirical cumulative distribution function (ecdf) plot, which is a “stair-step” plot where the vertical axis is the proportion of observations less than or equal to the corresponding value on the horizontal axis. Because the distribution of the number of ratings per movie or per user is so highly skewed in the low range we use a logarithmic horizontal axis in Figure 5.1.
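
The quantity on the vertical axis of such a plot can be expressed as a one-line helper (ecdf_at is a hypothetical name used only for this illustration):

ecdf_at(x, v) = count(≤(v), x) / length(x)  # proportion of values in x that are ≤ v
ecdf_at(movies.nrtngs, 6)                   # e.g. the proportion of movies with at most 6 ratings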

Code
let
  f = Figure(; size=(700, 350))
  xscale = log10
  xminorticksvisible = true
  xminorgridvisible = true
  yminorticksvisible = true
  xminorticks = IntervalsBetween(10)
  ylabel = "Relative cumulative frequency"
  nrtngs = sort(movies.nrtngs)
  ecdfplot(
    f[1, 1],
    nrtngs;
    npoints=last(nrtngs),
    axis=(
      xlabel="Number of ratings per movie (logarithmic scale)",
      xminorgridvisible,
      xminorticks,
      xminorticksvisible,
      xscale,
      ylabel,
      yminorticksvisible,
    ),
  )
  urtngs = sort(users.urtngs)
  ecdfplot(
    f[1, 2],
    urtngs;
    npoints=last(urtngs),
    axis=(
      xlabel="Number of ratings per user (logarithmic scale)",
      xminorgridvisible,
      xminorticks,
      xminorticksvisible,
      xscale,
      yminorticksvisible,
    ),
  )
  f
end
Figure 5.1: Empirical distribution plots of the number of ratings per movie and per user. The horizontal axes are on a logarithmic scale.

In this collection of about 25 million ratings, nearly 20% of the movies are rated only once and over half of the movies were rated 6 or fewer times.

count(≤(6), movies.nrtngs) / length(movies.nrtngs)
0.5205344894744864
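
The corresponding proportion of movies rated exactly once, the other claim above, can be computed the same way and is close to 20%.

count(isone, movies.nrtngs) / length(movies.nrtngs)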

The ecdf plot of the number of ratings per user shows a similar pattern to that of the movies — a few users with a very large number of ratings and many users with just a few ratings.

For example, the minimum number of movies rated is 20 (due to the inclusion constraint); the median number of movies rated is around 70; but the maximum is over 32,000 (which is a lot of movies: over 30 years of watching 3 movies per day, every day, if this user actually watched all of them).

Movies with very few ratings provide little information about overall trends or even about the movie being rated. We can imagine that the “shrinkage” of random effects for movies with just a few ratings pulls their adjusted rating strongly towards the overall average.
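
To make the idea of shrinkage slightly more concrete, for a model with a single, scalar grouping factor the conditional mean of the random effect for a level with \(n_i\) observations has the form

\[
\tilde{b}_i = \frac{n_i\,\sigma_b^2}{n_i\,\sigma_b^2 + \sigma^2}\,
  \left(\bar{y}_{i\cdot} - \hat{\mu}\right) ,
\]

so a movie with \(n_i = 1\) is pulled much more strongly towards zero (that is, towards the overall average) than a movie with hundreds of ratings. This one-factor expression is only a heuristic here; the models below have two crossed grouping factors, for which the shrinkage is more complicated but qualitatively similar.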

Furthermore, the distribution of ratings for movies with only one rating is systematically lower than the distribution of ratings for movies rated at least five times.

First, add nrtngs and urtngs as columns of the ratings table.

ratings = Table(
  disallowmissing!(
    leftjoin!(
      leftjoin!(
        DataFrame(ratings),
        select!(DataFrame(movies), :movieId, :nrtngs);
        on=:movieId,
      ),
      select!(DataFrame(users), :userId, :urtngs);
      on=:userId,
    ),
  ),
)
Table with 5 columns and 25000095 rows:
      userId   movieId  rating  nrtngs  urtngs
    ┌─────────────────────────────────────────
 1  │ U000001  M000296  5.0     79672   70
 2  │ U000001  M000306  3.5     7058    70
 3  │ U000001  M000307  5.0     6616    70
 4  │ U000001  M000665  5.0     1269    70
 5  │ U000001  M000899  3.5     10895   70
 6  │ U000001  M001088  4.0     11935   70
 7  │ U000001  M001175  3.5     7301    70
 8  │ U000001  M001217  3.5     5220    70
 9  │ U000001  M001237  5.0     5063    70
 10 │ U000001  M001250  4.0     12634   70
 11 │ U000001  M001260  3.5     4636    70
 12 │ U000001  M001653  4.0     21890   70
 13 │ U000001  M002011  2.5     22990   70
 14 │ U000001  M002012  2.5     21881   70
 15 │ U000001  M002068  2.5     1937    70
 16 │ U000001  M002161  3.5     10555   70
 17 │ U000001  M002351  4.5     1290    70
 ⋮  │    ⋮        ⋮       ⋮       ⋮       ⋮

then create a bar chart of the two distributions

Code
let
  fiveplus = zeros(Int, 10)
  onlyone = zeros(Int, 10)
  for (i, n) in
      zip(round.(Int, Float32(2) .* ratings.rating), ratings.nrtngs)
    if n > 4
      fiveplus[i] += 1
    elseif isone(n)
      onlyone[i] += 1
    end
  end
  fiveprop = fiveplus ./ sum(fiveplus)
  oneprop = onlyone ./ sum(onlyone)
  draw(
    data((;
      props=vcat(fiveprop, oneprop),
      rating=repeat(0.5:0.5:5.0; outer=2),
      nratings=repeat(["≥5", "only 1"]; inner=10),
    )) *
    mapping(
      :rating => nonnumeric,
      :props => "Proportion of ratings";
      color=:nratings => "Ratings/movie",
      dodge=:nratings,
    ) *
    visual(BarPlot);
    figure=(; size=(600, 400)),
  )
end
Figure 5.2: Distribution of ratings for movies with only one rating compared to movies with at least 5 ratings

Similarly, users who rate very few movies add little information, even about the movies that they rated, because there isn’t sufficient information to contrast a specific rating with a typical rating for the user.

One way of dealing with the extreme imbalance in the number of observations per user or per movie is to set a threshold on the number of observations for a user or a movie to be included in the data used to fit the model.

To be able to select ratings according to the number of ratings per user and the number of ratings per movie, we left-joined the movies.nrtngs and users.urtngs columns into the ratings data frame.
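
For example, a selection requiring at least 5 ratings per movie and at least 20 per user could be written as follows (a sketch; these particular cutoff values are for illustration only, and the same filtering expression reappears in the ratingsoptsum function of Section 5.2.2):

mcutoff, ucutoff = 5, 20   # illustrative cutoff values
trimmed = filter(
  r -> (r.nrtngs ≥ mcutoff) & (r.urtngs ≥ ucutoff),
  ratings,
)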

describe(DataFrame(ratings), :mean, :min, :median, :max, :nunique, :nmissing, :eltype)
5×8 DataFrame
 Row │ variable  mean     min       median   max       nunique  nmissing  eltype
     │ Symbol    Union…   Any       Union…   Any       Union…   Int64     DataType
─────┼───────────────────────────────────────────────────────────────────────────
   1 │ userId             U000001            U162541    162541         0  String
   2 │ movieId            M000001            M209171     59047         0  String
   3 │ rating    3.53385  0.5           3.5  5.0                       0  Float32
   4 │ nrtngs    14924.8  1          9152.0  81491                     0  Int32
   5 │ urtngs    620.943  20          319.0  32202                     0  Int64

The medians of nrtngs and urtngs in this table are much higher than the values from the movies and users tables because, for example, a movie with 80,000 ratings contributes 80,000 rows to this table whereas it contributes only one row to the movies table.
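
This difference is easy to confirm directly: the median of nrtngs taken over movies is small, whereas the median taken over ratings, where each movie is counted once per rating it received, is in the thousands.

(; per_movie=median(movies.nrtngs), per_rating=median(ratings.nrtngs))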

5.2 Models fit with lower bounds on ratings per user and per movie

We fit a simple model to this dataset using different thresholds on the number of ratings per movie and the number of ratings per user. These fits were performed on compute servers with generous amounts of memory (128 GiB/node) and numbers of compute cores (48/node). A sample fit is shown in Section 5.2.2.

The results are summarized in the following table

Code
sizespeed = DataFrame(dataset(:sizespeed))
sizespeed.ucutoff = collect(sizespeed.ucutoff)  # temporary fix to get TidierPlots to work
sizespeed
21×11 DataFrame
 Row │ mc    uc    nratings  nusers  nmvie  modelsz  L22sz    nv    fittime  evtime   ucutoff
     │ Int8  Int8  Int32     Int32   Int32  Float64  Float64  Int8  Float64  Float64  String
─────┼──────────────────────────────────────────────────────────────────────────────────────
   1 │    1    20  25000095  162541  59047  28.4216  25.9768    24  9293.91  372.338  U20
   2 │    2    20  24989797  162541  48749  20.1491   17.706    36  11872.9  316.237  U20
   3 │    5    20  24945870  162541  32720  10.4133  7.97658    21  3299.55  151.666  U20
   4 │   10    20  24890583  162541  24330  6.84118  4.41036    26  2484.71  80.4182  U20
   5 │   15    20  24846701  162541  20590  5.58446  3.15866    22  1810.84  70.8015  U20
   6 │   20    20  24810483  162541  18430  4.95285   2.5307    22  1666.03  75.6806  U20
   7 │   50    20  24644928  162540  13176  3.69924  1.29347    20  880.731  42.1572  U20
   8 │    1    40  23700163  115891  58930   28.189   25.874    23  8835.84  369.294  U40
   9 │    2    40  23689979  115891  48746  20.0173  17.7039    35  11336.9   324.62  U40
  10 │    5    40  23646529  115891  32720  10.2837  7.97658    27  3001.03   107.42  U40
  11 │   10    40  23591948  115891  24330  6.71161  4.41036    28  2235.33  70.2365  U40
  12 │   15    40  23548590  115891  20590  5.45494  3.15866    22  1554.82  67.2297  U40
  13 │   20    40  23512852  115891  18430  4.82338   2.5307    22  1446.86  59.8116  U40
  14 │   50    40  23349895  115891  13176  3.57002  1.29347    24  1045.58  41.9256  U40
  15 │    1    80  21374218   74894  58755  27.8049  25.7205    23  10270.7  372.721  U80
  16 │    2    80  21364203   74894  48740  19.7822  17.6995    21  6473.96  292.296  U80
  17 │    5    80  21321452   74894  32720  10.0531  7.97658    22  2478.71   107.91  U80
  18 │   10    80  21267814   74894  24330  6.48111  4.41036    22  1808.68  69.0953  U80
  19 │   15    80  21225289   74894  20590  5.22452  3.15866    24  1685.01  66.9154  U80
  20 │   20    80  21190291   74894  18430  4.59303   2.5307    18  1117.07  52.4674  U80
  21 │   50    80  21031261   74894  13176  3.34005  1.29347    21  991.497  45.3183  U80

In this table, mc is the “movie cutoff” (i.e. the threshold on the number of ratings per movie); uc is the user cutoff (threshold on the number of ratings per user); nratings, nusers and nmvie are the number of ratings, users, and movies in the resulting trimmed data set; modelsz is the size (in GiB) of the fitted model representation; L22sz is the size (also in GiB) of the [2,2] block of the L matrix in that model; nv is the number of function evaluations until convergence; fittime is the time (in seconds) required to fit the model; and evtime is the time (in seconds) per function evaluation.

The “[2,2] block of the L matrix” is described in Section 5.2.2.

5.2.1 Dimensions of the model versus cut-off values

As shown in Figure 5.3, the number of ratings varies from a little over 21 million to 25 million and is mostly associated with the user cutoff.

Code
ggplot(sizespeed, aes(; x=:mc, y=:nratings, color=:ucutoff)) +
    geom_point() +
    geom_line() +
    labs(x="Minimum number of ratings per movie", y="Number of ratings")
Figure 5.3: Number of ratings in reduced table by movie cutoff value

For this range of choices of cutoffs, the user cutoff has more impact on the number of ratings in the reduced dataset than does the movie cutoff.

A glance at the table shows that the number of users, nusers, is essentially a function of only the user cutoff, uc (the one exception being at mc = 50 and uc=20).

Figure 5.4 shows the similarly unsurprising result that the number of movies in the reduced table is essentially determined by the movie cutoff.

Code
ggplot(sizespeed, aes(; x=:mc, y=:nmvie, color=:ucutoff)) +
    geom_point() +
    geom_line() +
    labs(x="Minimum number of ratings per movie", y="Number of movies in table")
Figure 5.4: Number of movies in reduced table by movie cutoff value
Code
describe(DataFrame(sizespeed), :mean, :min, :median, :max, :nunique, :eltype)
11×7 DataFrame
 Row │ variable  mean       min       median     max       nunique  eltype
     │ Symbol    Union…     Any       Union…     Any       Union…   DataType
─────┼────────────────────────────────────────────────────────────────────────
   1 │ mc          14.7143  1              10.0  50                 Int8
   2 │ uc          46.6667  20             40.0  80                 Int8
   3 │ nratings  2.32354e7  21031261  2.35919e7  25000095           Int32
   4 │ nusers    1.17775e5  74894      115891.0  162541             Int32
   5 │ nmvie       30986.0  13176       24330.0  59047              Int32
   6 │ modelsz     11.2567  3.34005     6.71161  28.4216            Float64
   7 │ L22sz          8.99  1.29347     4.41036  25.9768            Float64
   8 │ nv          23.9524  18             22.0  36                 Int8
   9 │ fittime     4075.74  880.731     2235.33  11872.9            Float64
  10 │ evtime      150.312  41.9256     75.6806  372.721            Float64
  11 │ ucutoff               U20                  U80            3  String

5.2.2 Memory footprint of the model representation

To explain what “the [2,2] block of the L matrix” is and why its size is important, we provide a brief overview of the evaluation of the “profiled” log-likelihood for a LinearMixedModel representation.

To make the discussion concrete we consider one of the models represented in this table, with cut-offs of 80 ratings per user and 50 ratings per movie. This, and any of the models shown in the table, can be restored in a few minutes from the saved optsum values, as opposed to taking up to two hours to perform the fit.

Code
function ratingsoptsum(
  mcutoff::Integer,
  ucutoff::Integer;
  data=Table(ratings),
  form=@formula(rating ~ 1 + (1 | userId) + (1 | movieId)),
  contrasts=contrasts,
)
  optsumfnm = optsumdir(
    "mvm$(lpad(mcutoff, 2, '0'))u$(lpad(ucutoff, 2, '0')).json",
  )
  model = LinearMixedModel(
      form,
      filter(
        r -> (r.nrtngs ≥ mcutoff) & (r.urtngs ≥ ucutoff),
        data,
      );
      contrasts,
    )
  isfile(optsumfnm) && return restoreoptsum!(model, optsumfnm)

  @warn "File $optsumfnm is not available, fitting model."
  model.optsum.initial .= 0.5
  fit!(model; thin=1)
  saveoptsum(optsumfnm, model)
  return model
end
mvm50u80 = ratingsoptsum(50, 80)
print(mvm50u80)
┌ Warning: optsum was saved with an older version of MixedModels.jl: consider resaving.
└ @ MixedModels ~/.julia/packages/MixedModels/hs2Ke/src/serialization.jl:28
Linear mixed model fit by maximum likelihood
 rating ~ 1 + (1 | userId) + (1 | movieId)
     logLik       -2 logLik         AIC           AICc            BIC      
 -26488585.4152  52977170.8304  52977178.8304  52977178.8304  52977238.2765

Variance components:
            Column   Variance Std.Dev. 
userId   (Intercept)  0.178157 0.422086
movieId  (Intercept)  0.253558 0.503545
Residual              0.714548 0.845310
 Number of obs: 21031261; levels of grouping factors: 74894, 13176

  Fixed-effects parameters:
──────────────────────────────────────────────────
               Coef.  Std. Error       z  Pr(>|z|)
──────────────────────────────────────────────────
(Intercept)  3.40329  0.00469215  725.31    <1e-99
──────────────────────────────────────────────────

Creating the model representation and restoring the optimal parameter values can take a couple of minutes because the objective is evaluated twice — at the initial parameter values and at the final parameter values — during the call to restoreoptsum!.

Each evaluation of the objective, which requires setting the value of the parameter \({\boldsymbol\theta}\) in the numerical representation of the model, updating the blocked Cholesky factor, \(\mathbf{L}\), and evaluating the scalar objective value from this factor, takes a little over a minute (71 seconds) on a server node and probably longer on a laptop.
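
These steps can be timed directly on the restored model (a sketch using the setθ!, updateL!, and objective functions from MixedModels; the timing will, of course, depend on the hardware):

θ = mvm50u80.optsum.final   # the converged parameter vector
@time objective(updateL!(setθ!(mvm50u80, θ)))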

The lower triangular L factor is large but sparse. It is stored in six blocks whose dimensions and types are shown in the display below.

BlockDescription(mvm50u80)
rows   userId    movieId     fixed
74894  Diagonal
13176  Sparse    Diag/Dense
2      Dense     Dense       Dense

This display gives the types of the blocks in two blocked matrices: A, which is derived from the data and does not depend on the parameters, and L, which is derived from A and the \({\boldsymbol\theta}\) parameter. The only difference in their structures is in the [2,2] block, which is diagonal in A and a dense, lower triangular matrix in L.
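
Because the blocks of both A and L are stored in one-dimensional arrays in row-major, lower-triangular order, the [2,2] block is the third element of each, and the contrast in structure can be checked directly (a sketch):

(typeof(mvm50u80.A[3]), typeof(mvm50u80.L[3]))  # Diagonal in A, dense Matrix in L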

The memory footprint (bytes) of each of the blocks is

Code
let
  block = String[]
  for i in 1:3
    for j in 1:i
      push!(block, "[$i,$j]")
    end
  end
  Table((;
    block,
    Abytes=Base.summarysize.(mvm50u80.A),
    Lbytes=Base.summarysize.(mvm50u80.L),
  ))
end
Table with 3 columns and 6 rows:
     block  Abytes     Lbytes
   ┌─────────────────────────────
 1 │ [1,1]  599200     599200
 2 │ [2,1]  252674872  252674872
 3 │ [2,2]  105456     1388855848
 4 │ [3,1]  1198344    1198344
 5 │ [3,2]  210856     210856
 6 │ [3,3]  72         72

resulting in total memory footprints (GiB) of

Code
NamedTuple{(:A, :L)}(
  Base.summarysize.(getproperty.(Ref(mvm50u80), (:A, :L))) ./ 2^30,
)
(A = 0.2372906431555748, L = 1.5306652337312698)

That is, L requires roughly six and a half times as much storage as A, and that difference is entirely due to the different structure of the [2,2] block.

This phenomenon of the Cholesky factor requiring more storage than the sparse matrix being factored is described as fill-in.

Note that although the dimensions of the [2,1] block are larger than those of the [2,2] block, its memory footprint is smaller because it is stored as a sparse matrix. The matrix is about 98% zeros or, equivalently, a little over 2% nonzeros,

let
  L21 = mvm50u80.L[2]  # blocks are stored in a one-dimensional array
  nnz(L21) / length(L21)
end
0.021312514927999675

which makes the sparse representation much smaller than the dense representation.
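
For comparison, a dense Float64 matrix with the dimensions of the [2,1] block (13176 × 74894) would occupy over 7 GiB, compared to roughly 0.24 GiB for the sparse form shown above (a quick check):

let L21 = mvm50u80.L[2]
  prod(size(L21)) * sizeof(Float64) / 2^30   # GiB if this block were stored densely
end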

This fill-in of the [2,2] block leads to a somewhat unintuitive conclusion: the memory footprint of the model representation depends strongly on the number of movies, less strongly on the number of users, and almost not at all on the number of ratings (Figure 5.5).

Code
ggplot(sizespeed, aes(x=:mc, y=:modelsz, color=:ucutoff)) +
    geom_point() +
    geom_line() +
    labs(x="Minimum number of ratings per movie", y="Size of model (GiB)")
Figure 5.5: Memory footprint of the model representation by minimum number of ratings per user and per movie

Figure 5.6 shows the dominance of the [2,2] block of L in the overall memory footprint of the model.

Code
ggplot(sizespeed, aes(; x=:L22sz, y=:modelsz, color=:ucutoff)) +
    geom_point() +
    geom_line() +
    labs(y="Size of model representation (GiB)", x="Size of [2,2] block of L (GiB)")
Figure 5.6: Memory footprint of the model representation (GiB) versus the size of the [2, 2] block of L (GiB)

Figure 5.5 shows that when all the movies are included in the data to which the model is fit (i.e. mc == 1) the total memory footprint is over 20 GiB and, as the table shows, nearly 90% of that memory is required for the [2,2] block of L. Even when requiring a minimum of 50 ratings per movie, the [2,2] block of L accounts for over 30% of the memory footprint.

In a sense this is good news because the amount of storage required for the [2,2] block can be nearly cut in half by taking advantage of the fact that it is a triangular matrix. The rectangular full packed format looks especially promising for this purpose.
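
A quick calculation for the model above shows the potential savings: the [2,2] block is 13176 × 13176 and storing only its lower triangle requires a little over half the memory of the full square (rectangular full packed storage adds only a small amount of bookkeeping beyond the triangle itself).

let n = 13176   # number of movies at the 50-rating cutoff
  full = n^2 * sizeof(Float64) / 2^30                     # GiB for the full square
  lowertri = (n * (n + 1) ÷ 2) * sizeof(Float64) / 2^30   # GiB for the lower triangle only
  (; full, lowertri)
end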

In general, for models with scalar random effects for two incompletely crossed grouping factors, the memory footprint depends strongly on the smaller of the number of levels of the grouping factors, less strongly on the larger number, and almost not at all on the number of observations.

5.2.3 Speed of log-likelihood evaluation

The time required to fit a model to large data sets is dominated by the time required to evaluate the log-likelihood during the optimization of the parameter estimates. The time for one evaluation is given in the evtime column of sizespeed, along with the number of evaluations to convergence, nv, and the total time to fit the model, fittime. The reason for considering evtime in addition to fittime and nv is that the evtime for one model, relative to other models, is reasonably stable across computers, whereas nv, and hence fittime, can be affected by seemingly trivial variations in function values resulting from different implementations of low-level calculations, such as the BLAS (Basic Linear Algebra Subroutines).

That is, we can’t expect to reproduce nv exactly when fitting the same model on different computers or with slightly different versions of software, but the pattern in evtime with respect to uc and mc can be expected to be reproducible.

As shown in Figure 5.7 the evaluation time for the objective is predominantly a function of the size of the [2, 2] block of L.

Code
ggplot(sizespeed, aes(x=:L22sz, y=:evtime, color=:ucutoff)) +
    geom_point() +
    geom_line() +
    labs(x="Size of [2,2] block of L (GiB)", y="Time for one evaluation of objective (s)")
Figure 5.7: Evaluation time for the objective (s) versus size of the [2, 2] block of L (GiB)

However, the nv column of the table shows that the number of evaluations to convergence is highly variable. Most of these models required between 20 and 25 evaluations, but some required considerably more, up to 36.

The derivation of the log-likelihood for linear mixed-effects models is given in Section B.7, which provides a rather remarkable result: the profiled log-likelihood for a linear mixed-effects model can be evaluated from the Cholesky factor of a blocked, positive-definite symmetric matrix.
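
In outline, if \(r^2(\boldsymbol\theta)\) denotes the minimized penalized residual sum of squares and \(|\mathbf{L}_{\boldsymbol\theta}|\) the determinant of the part of the Cholesky factor associated with the random effects, the profiled objective (negative twice the log-likelihood) for \(n\) observations can be written as

\[
-2\tilde{\ell}(\boldsymbol\theta) =
  \log\left(|\mathbf{L}_{\boldsymbol\theta}|^2\right) +
  n\left[1 + \log\left(\frac{2\pi\,r^2(\boldsymbol\theta)}{n}\right)\right] ,
\]

stated here only as a heuristic summary; Section B.7 gives the precise definitions and the derivation. The practical consequence is that each evaluation of the objective reduces to updating \(\mathbf{L}\) and then computing a determinant and a residual sum of squares from it.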
