Bayesian Structural Equation Modeling using blavaan

2023-11-09 17:46:40

In this tutorial, we aim to demonstrate how to use
blavaan (Merkle and Rosseel
2015)
for structural equation models (SEMs) and the corresponding
model assessment. The most recent version of blavaan is equipped
with an efficient approach based on Stan (Merkle et al. 2020).

As a measurement model and perhaps one of the most common special
cases of a SEM, CFA is often used to 1) validate a hypothesized factor
structure among multiple variables, 2) estimate the correlations between
factors, and 3) obtain factor scores. For example, consider a two-factor
\((\eta_{1j}, \eta_{2j})\) model with
each factor measured by six items \((y_{1j},
\dots, y_{6j})\) for person \(j\): \[
\underbrace{\left[\begin{array}{l}
y_{1j} \\
y_{2j} \\
y_{3j} \\
y_{4j} \\
y_{5j} \\
y_{6j}
\end{array}\right]}_{\boldsymbol{y}_{j}}=\underbrace{\left[\begin{array}{c}
\beta_{1} \\
\beta_{2} \\
\beta_{3} \\
\beta_{4} \\
\beta_{5} \\
\beta_{6}
\end{array}\right]}_{\boldsymbol{\beta}}+\underbrace{\left[\begin{array}{cc}
1 & 0 \\
\lambda_{21} & 0 \\
\lambda_{31} & 0 \\
0 & 1 \\
0 & \lambda_{52} \\
0 & \lambda_{62}
\end{array}\right]}_{\Lambda}\underbrace{\left[\begin{array}{l}
\eta_{1j} \\
\eta_{2j}
\end{array}\right]}_{\boldsymbol{\eta}_j}+\underbrace{\left[\begin{array}{c}
\epsilon_{1j} \\
\epsilon_{2j} \\
\epsilon_{3j} \\
\epsilon_{4j} \\
\epsilon_{5j} \\
\epsilon_{6j}
\end{array}\right]}_{\boldsymbol{\epsilon}_j}
\]

\[
\boldsymbol{\epsilon}_{j} \sim N_{I}(\mathbf{0}, \mathbf{\Theta})
\]

\[
\boldsymbol{\eta}_{j} \sim N_{K}(\mathbf{0}, \boldsymbol{\Psi}),
\]
where the number of items or variables is \(I = 6\), the number of factors is \(K = 2\), and \(\mathbf{\Theta}\) is often assumed to be a
diagonal matrix.

  • \(y_{ij}\) is the response of
    person \(j\) \((j=1,\dots,J)\) on item \(i\) \((i=1,\dots,I)\).

  • \(\beta_i\) is the intercept for
    item \(i\).

  • \(\eta_{jk}\) is the \(k\)th common factor for person \(j\).

  • \(\lambda_{ik}\) is the factor
    loading of item \(i\) on factor \(k\).

  • \(\epsilon_{ij}\) is the random
    error term for person \(j\) on item
    \(i\).

  • \(\boldsymbol{\Psi}\) is the
    variance-covariance matrix of the common factors \(\boldsymbol{\eta}_{j}\).

  • \(\mathbf{\Theta}\) is the
    variance-covariance matrix of the residuals (or unique factors) \(\boldsymbol{\epsilon}_{j}\).

Suppose the errors or residuals \(\epsilon_{ij}\) are independent of each
other. Then:

\(\psi_{kk}\) is the variance of
the \(k\)th factor, \(\psi_{jk}\) is the covariance between the
\(j\)th and \(k\)th factors, \(\theta_{ii}\) is the variance of the \(i\)th residual, and \(\theta_{ii'}=0\) if and only if \(i \neq i'\). Specifically,

\[
\boldsymbol{\Psi}=\mathrm{Cov}\begin{pmatrix}
\eta_{1j} \\
\eta_{2j}
\end{pmatrix}=
\left[\begin{array}{cc}
\psi_{11} & \psi_{12} \\
\psi_{21} & \psi_{22}
\end{array}\right]
\]

\[
\mathbf{\Theta}=\mathrm{Cov}\begin{pmatrix}
\epsilon_{1j} \\
\epsilon_{2j} \\
\epsilon_{3j} \\
\epsilon_{4j} \\
\epsilon_{5j} \\
\epsilon_{6j}
\end{pmatrix}=
\left[\begin{array}{cccccc}
\theta_{11} & 0 & 0 & 0 & 0 & 0 \\
0 & \theta_{22} & 0 & 0 & 0 & 0 \\
0 & 0 & \theta_{33} & 0 & 0 & 0 \\
0 & 0 & 0 & \theta_{44} & 0 & 0 \\
0 & 0 & 0 & 0 & \theta_{55} & 0 \\
0 & 0 & 0 & 0 & 0 & \theta_{66}
\end{array}\right]
\]
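Taken together, these assumptions imply that the responses are marginally multivariate normal with a structured covariance matrix (a standard SEM identity, added here for reference):

\[
\boldsymbol{y}_{j} \sim N_{I}\left(\boldsymbol{\beta},\; \Lambda \boldsymbol{\Psi} \Lambda^{\top} + \mathbf{\Theta}\right),
\]

so fitting the CFA amounts to matching this model-implied mean vector and covariance matrix to their sample counterparts.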

To get started, we load the following R packages:

#knitr::opts_chunk$set(eval = FALSE)
knitr::opts_chunk$set(echo = TRUE, comment = "", cache = TRUE)
library(rstan)
library(knitr)
library(blavaan)
library(lavaan)
library(MASS)
library(mvtnorm)
library(tidyverse)
library(semPlot)
library(magrittr)
library(Matrix)
options(mc.cores = parallel::detectCores())

Simulation

To better illustrate the use of blavaan, we simulate
data so that we know the data-generating parameters. In our simulation,
we set \(\beta_i = 0\), \(\psi_{11} = 1\), \(\psi_{12} = \psi_{21} = .5\), \(\psi_{22} = .8\), \(\lambda_{21} = 1.5\), \(\lambda_{31} = 2\), \(\lambda_{52} = 1.5\), \(\lambda_{62} = 2\), and \(\theta_{ii} = .3\). We simulate data from
the above model for \(J = 1000\)
units.

# setup
J <- 1000
I <- 6
K <- 2
psi <- matrix(c(1, 0.5,
                0.5, 0.8), nrow = K)
beta <- seq(1, 2, by = .2)  # not used below: the generated data has beta_i = 0

# loading matrix
Lambda <- cbind(c(1, 1.5, 2, 0, 0, 0), c(0, 0, 0, 1, 1.5, 2))

# error covariance
Theta <- diag(0.3, nrow = I)

# factor scores
eta <- mvrnorm(J, mu = c(0, 0), Sigma = psi)

# error term
epsilon <- mvrnorm(J, mu = rep(0, ncol(Theta)), Sigma = Theta)

dat <- tcrossprod(eta, Lambda) + epsilon
dat_cfa <- dat %>% as.data.frame() %>% setNames(c("Y1", "Y2", "Y3", "Y4", "Y5", "Y6"))
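As a quick sanity check (not part of the original tutorial), we can compare the sample covariance matrix of the simulated data with the model-implied covariance matrix \(\Lambda \boldsymbol{\Psi} \Lambda^{\top} + \mathbf{\Theta}\); with \(J = 1000\) the two should agree closely. The seed below is arbitrary, chosen only to make this check reproducible:

```r
library(MASS)  # for mvrnorm

set.seed(1)  # arbitrary seed for reproducibility of this check
J <- 1000
Lambda <- cbind(c(1, 1.5, 2, 0, 0, 0), c(0, 0, 0, 1, 1.5, 2))
Psi <- matrix(c(1, 0.5, 0.5, 0.8), nrow = 2)
Theta <- diag(0.3, nrow = 6)

# model-implied covariance of the responses: Lambda Psi Lambda' + Theta
Sigma_implied <- Lambda %*% Psi %*% t(Lambda) + Theta

# regenerate data exactly as in the simulation above
eta <- mvrnorm(J, mu = c(0, 0), Sigma = Psi)
epsilon <- mvrnorm(J, mu = rep(0, 6), Sigma = Theta)
y <- tcrossprod(eta, Lambda) + epsilon

# largest absolute discrepancy between sample and implied covariances
max(abs(cov(y) - Sigma_implied))
```

With 1000 units, every entry of the sample covariance matrix should be within sampling error of its implied value.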

We define the model for lavaan as follows:

lavaan_cfa <- 'eta1 =~ Y1 + Y2 + Y3
               eta2 =~ Y4 + Y5 + Y6'

Two latent variables eta1 and eta2 are
specified to be measured by three items each, denoted as
eta1 =~ Y1 + Y2 + Y3, and similarly for eta2. By not
specifying other parts of the model, we assume by default that the error
terms for items are uncorrelated with each other while the covariance
between the latent variables is free to be estimated.

We represent the CFA model in a path diagram and then fit the model
by maximum likelihood estimation using the cfa function in
the lavaan package. By convention, latent variables \(\eta_1\) and \(\eta_2\) are represented by circles, and
observed variables \(Y_{1}\) to \(Y_{6}\) by rectangles. Straight arrows
represent linear relations (here with coefficients given by the factor
loadings \(\lambda\)), and
double-headed arrows represent variances and covariances. We could make
the diagram simply by using the function call
semPaths(semPlotModel_lavaanModel(lavaan_cfa)). Below is
the more complex syntax to display Greek letters, subscripts, etc.

FIT <-
  semPlotModel_lavaanModel(
    lavaan_cfa,
    auto.var = TRUE,
    auto.fix.first = TRUE,
    auto.cov.lv.x = TRUE
  )
semPaths(
  FIT,
  what = "paths",
  whatLabels = "par",
  nodeLabels = c(
    expression(paste(Y[1])),
    expression(paste(Y[2])),
    expression(paste(Y[3])),
    expression(paste(Y[4])),
    expression(paste(Y[5])),
    expression(paste(Y[6])),
    expression(paste(eta[1])),
    expression(paste(eta[2]))
  ),
  edge.label.cex = 0.8,
  edgeLabels = c(
    expression(paste(lambda[1])),
    expression(paste(lambda[2])),
    expression(paste(lambda[3])),
    expression(paste(lambda[4])),
    expression(paste(lambda[5])),
    expression(paste(lambda[6])),
    expression(paste("Covariance")),
    expression(paste(epsilon[1])),
    expression(paste(epsilon[2])),
    expression(paste(epsilon[3])),
    expression(paste(epsilon[4])),
    expression(paste(epsilon[5])),
    expression(paste(epsilon[6])),
    expression(paste(psi[1])),
    expression(paste(psi[2]))
  )
)

# lavaan
lav_cfa_fit <- cfa(lavaan_cfa, data = dat_cfa, meanstructure = TRUE)
summary(lav_cfa_fit, fit.measures = TRUE)
lavaan 0.6.15 ended normally after 29 iterations

  Estimator                                         ML
  Optimization method                           NLMINB
  Number of model parameters                        19

  Number of observations                          1000

Model Test User Model:
                                                      
  Test statistic                                 1.553
  Degrees of freedom                                 8
  P-value (Chi-square)                           0.992

Model Test Baseline Model:

  Test statistic                              6005.574
  Degrees of freedom                                15
  P-value                                        0.000

User Model versus Baseline Model:

  Comparative Fit Index (CFI)                    1.000
  Tucker-Lewis Index (TLI)                       1.002

Loglikelihood and Information Criteria:

  Loglikelihood user model (H0)              -7909.932
  Loglikelihood unrestricted model (H1)      -7909.155
                                                      
  Akaike (AIC)                               15857.863
  Bayesian (BIC)                             15951.111
  Sample-size adjusted Bayesian (SABIC)      15890.766

Root Mean Square Error of Approximation:

  RMSEA                                          0.000
  90 Percent confidence interval - lower         0.000
  90 Percent confidence interval - upper         0.000
  P-value H_0: RMSEA <= 0.050                    1.000
  P-value H_0: RMSEA >= 0.080                    0.000

Standardized Root Mean Square Residual:

  SRMR                                           0.002

Parameter Estimates:

  Standard errors                             Standard
  Information                                 Expected
  Information saturated (h1) model          Structured

Latent Variables:
                   Estimate  Std.Err  z-value  P(>|z|)
  eta1 =~                                             
    Y1                1.000                           
    Y2                1.448    0.032   45.487    0.000
    Y3                1.964    0.042   46.845    0.000
  eta2 =~                                             
    Y4                1.000                           
    Y5                1.513    0.036   41.878    0.000
    Y6                1.969    0.045   43.298    0.000

Covariances:
                   Estimate  Std.Err  z-value  P(>|z|)
  eta1 ~~                                             
    eta2              0.523    0.037   14.100    0.000

Intercepts:
                   Estimate  Std.Err  z-value  P(>|z|)
   .Y1               -0.016    0.037   -0.435    0.664
   .Y2               -0.010    0.049   -0.194    0.846
   .Y3               -0.004    0.066   -0.055    0.956
   .Y4               -0.086    0.033   -2.581    0.010
   .Y5               -0.078    0.047   -1.662    0.097
   .Y6               -0.110    0.059   -1.852    0.064
    eta1              0.000                           
    eta2              0.000                           

Variances:
                   Estimate  Std.Err  z-value  P(>|z|)
   .Y1                0.328    0.017   19.053    0.000
   .Y2                0.271    0.022   12.417    0.000
   .Y3                0.351    0.037    9.537    0.000
   .Y4                0.296    0.016   18.809    0.000
   .Y5                0.297    0.023   12.952    0.000
   .Y6                0.340    0.035    9.711    0.000
    eta1              1.025    0.059   17.262    0.000
    eta2              0.822    0.049   16.788    0.000

By default, lavaan uses maximum likelihood estimation.
Estimates of the loadings \(\lambda_{ik}\) are given under “Latent
Variables”, the estimated covariance among the common factors is given
under “Covariances”, estimates of the intercepts \(\beta_i\) of the measurement models, as
well as the means of the common factors (set to 0, not estimated), are
given under “Intercepts”, and estimates of the residual variances of the
responses (variances \(\theta_{ii}\) of
\(\epsilon_{ij}\)) and of the common
factors (variances \(\psi_{kk}\) of
\(\eta_{jk}\)) are given under
“Variances.”

blavaan borrows its syntax from lavaan and
is a wrapper for Bayesian estimation using Stan. The Stan object created
by blavaan can be viewed using blavInspect().

By default, the bcfa function in blavaan
uses an anchoring item for each factor for identification, so the factor
loading of the first item that loads on a factor is fixed at 1.
blavaan also includes intercepts for all observed variables
by default, which differs from the default in lavaan
(we specified meanstructure = TRUE in the cfa
function call above so that the two parameterizations match).

# blavaan
blav_cfa_fit <- bcfa(lavaan_cfa, data = dat_cfa, mcmcfile = TRUE)
Computing posterior predictives...
summary(blav_cfa_fit)
blavaan (0.3-15) results of 1000 samples after 500 adapt/burnin iterations

  Number of observations                          1000

  Number of missing patterns                         1

  Statistic                                 MargLogLik         PPP
  Value                                      -8007.416       0.744

Latent Variables:
                   Estimate   Post.SD pi.lower pi.upper     Rhat    Prior       
  eta1 =~                                                                      
    Y1                1.000                                  NA                
    Y2                1.449    0.031     1.39    1.512    0.999    normal(0,10)
    Y3                1.967    0.042    1.887     2.05    1.000    normal(0,10)
  eta2 =~                                                                      
    Y4                1.000                                  NA                
    Y5                1.516    0.036    1.446    1.587    1.000    normal(0,10)
    Y6                1.973    0.044    1.888    2.062    1.000    normal(0,10)

Covariances:
                   Estimate   Post.SD pi.lower pi.upper     Rhat    Prior       
  eta1 ~~                                                                      
    eta2              0.522    0.037    0.453    0.598    1.000     lkj_corr(1)

Intercepts:
                   Estimate   Post.SD pi.lower pi.upper     Rhat    Prior       
   .Y1               -0.015    0.038    -0.09     0.06    1.000    normal(0,32)
   .Y2               -0.009    0.051   -0.108    0.088    1.000    normal(0,32)
   .Y3               -0.002    0.067   -0.137     0.13    1.000    normal(0,32)
   .Y4               -0.086    0.034   -0.154   -0.022    1.000    normal(0,32)
   .Y5               -0.077    0.047    -0.17    0.014    1.001    normal(0,32)
   .Y6               -0.109    0.061    -0.23    0.009    1.001    normal(0,32)
    eta1              0.000                                  NA                
    eta2              0.000                                  NA                

Variances:
                   Estimate   Post.SD pi.lower pi.upper     Rhat    Prior       
   .Y1                0.330    0.017    0.297    0.365    1.000 gamma(1,.5)[sd]
   .Y2                0.272    0.022     0.23    0.318    1.000 gamma(1,.5)[sd]
   .Y3                0.351    0.039    0.279    0.431    1.000 gamma(1,.5)[sd]
   .Y4                0.298    0.016    0.269    0.333    1.000 gamma(1,.5)[sd]
   .Y5                0.299    0.023    0.255    0.345    1.000 gamma(1,.5)[sd]
   .Y6                0.342    0.036    0.272    0.412    1.000 gamma(1,.5)[sd]
    eta1              1.027    0.059    0.919    1.151    1.000 gamma(1,.5)[sd]
    eta2              0.823    0.049    0.729    0.921    0.999 gamma(1,.5)[sd]
fitmeasures(blav_cfa_fit)
      npar       logl        ppp        bic        dic      p_dic       waic 
    19.000  -7909.981      0.744  15951.190  15858.125     19.081  15858.403 
    p_waic    se_waic      looic      p_loo     se_loo margloglik 
    19.272    111.599  15858.456     19.299    111.600  -8007.416 

We can see that the output from blavaan resembles that
from lavaan, although it gives posterior means and
standard deviations for the model parameters.

We present the maximum likelihood estimates from lavaan
and the Bayesian estimates from blavaan next to each other for
comparison; since by default blavaan uses non-informative
priors and the data size is large, we see that the two sets of estimates
are very similar:

bind_cols(parameterEstimates(lav_cfa_fit)[, 1:4], parameterEstimates(blav_cfa_fit)[, 4]) %>% rename(ML = est, Bayes = ...5) %>% knitr::kable()
New names:
• `` -> `...5`
eta1 =~ Y1 1.0000000 1.0000000
eta1 =~ Y2 1.4479658 1.4494496
eta1 =~ Y3 1.9641409 1.9666819
eta2 =~ Y4 1.0000000 1.0000000
eta2 =~ Y5 1.5126562 1.5157573
eta2 =~ Y6 1.9690845 1.9727406
Y1 ~~ Y1 0.3276724 0.3296146
Y2 ~~ Y2 0.2709558 0.2724962
Y3 ~~ Y3 0.3507402 0.3512957
Y4 ~~ Y4 0.2962317 0.2984597
Y5 ~~ Y5 0.2972492 0.2991214
Y6 ~~ Y6 0.3398565 0.3415060
eta1 ~~ eta1 1.0248042 1.0265839
eta2 ~~ eta2 0.8222773 0.8226690
eta1 ~~ eta2 0.5225991 0.5223931
Y1 ~1 -0.0159813 -0.0152329
Y2 ~1 -0.0095513 -0.0086645
Y3 ~1 -0.0036208 -0.0020447
Y4 ~1 -0.0863222 -0.0856052
Y5 ~1 -0.0775607 -0.0767313
Y6 ~1 -0.1099804 -0.1088010
eta1 ~1 0.0000000 0.0000000
eta2 ~1 0.0000000 0.0000000

To check the settings of the underlying Stan program (i.e., the
number of chains, the number of warm-up iterations, and the number of
post-warmup draws):

blavInspect(blav_cfa_fit, "mcobj")
Inference for Stan model: stanmarg.
3 chains, each with iter=1500; warmup=500; thin=1; 
post-warmup draws per chain=1000, total post-warmup draws=3000.

                 mean se_mean   sd     2.5%      25%      50%      75%    97.5%
ly_sign[1]       1.45    0.00 0.03     1.39     1.43     1.45     1.47     1.51
ly_sign[2]       1.97    0.00 0.04     1.89     1.94     1.97     2.00     2.05
ly_sign[3]       1.52    0.00 0.04     1.45     1.49     1.52     1.54     1.59
ly_sign[4]       1.97    0.00 0.04     1.89     1.94     1.97     2.00     2.06
Theta_var[1]     0.33    0.00 0.02     0.30     0.32     0.33     0.34     0.36
Theta_var[2]     0.27    0.00 0.02     0.23     0.26     0.27     0.29     0.32
Theta_var[3]     0.35    0.00 0.04     0.28     0.32     0.35     0.38     0.43
Theta_var[4]     0.30    0.00 0.02     0.27     0.29     0.30     0.31     0.33
Theta_var[5]     0.30    0.00 0.02     0.26     0.28     0.30     0.31     0.35
Theta_var[6]     0.34    0.00 0.04     0.27     0.32     0.34     0.37     0.41
Psi_cov[1]       0.52    0.00 0.04     0.45     0.50     0.52     0.55     0.60
Psi_var[1]       1.03    0.00 0.06     0.92     0.99     1.02     1.06     1.15
Psi_var[2]       0.82    0.00 0.05     0.73     0.79     0.82     0.86     0.92
Nu_free[1]      -0.02    0.00 0.04    -0.09    -0.04    -0.02     0.01     0.06
Nu_free[2]      -0.01    0.00 0.05    -0.11    -0.04    -0.01     0.03     0.09
Nu_free[3]       0.00    0.00 0.07    -0.14    -0.05     0.00     0.04     0.13
Nu_free[4]      -0.09    0.00 0.03    -0.15    -0.11    -0.09    -0.06    -0.02
Nu_free[5]      -0.08    0.00 0.05    -0.17    -0.11    -0.08    -0.04     0.01
Nu_free[6]      -0.11    0.00 0.06    -0.23    -0.15    -0.11    -0.07     0.01
lp__         -7971.62    0.09 3.19 -7978.79 -7973.49 -7971.28 -7969.34 -7966.40
             n_eff Rhat
ly_sign[1]    3120    1
ly_sign[2]    2922    1
ly_sign[3]    3137    1
ly_sign[4]    3137    1
Theta_var[1]  4755    1
Theta_var[2]  3581    1
Theta_var[3]  3796    1
Theta_var[4]  4847    1
Theta_var[5]  4220    1
Theta_var[6]  4103    1
Psi_cov[1]    3418    1
Psi_var[1]    3003    1
Psi_var[2]    3284    1
Nu_free[1]    1591    1
Nu_free[2]    1514    1
Nu_free[3]    1529    1
Nu_free[4]    1769    1
Nu_free[5]    1597    1
Nu_free[6]    1564    1
lp__          1219    1

Samples were drawn using NUTS(diag_e) at Thu Apr 20 18:43:35 2023.
For each parameter, n_eff is a crude measure of effective sample size,
and Rhat is the potential scale reduction factor on split chains (at 
convergence, Rhat=1).

We see that there were 3 chains with 1500 iterations each, the first
500 of which were warmup, giving a total of 3000 draws. In the Stan
output above, se_mean is the Monte Carlo error due to the
fact that we use posterior draws to empirically estimate posterior
expectations. These Monte Carlo errors are given by \(\frac{sd}{\sqrt{n_{eff}}}\), where sd
here refers to the posterior standard deviation, which can be viewed as a
Bayesian counterpart of frequentist standard errors. Because there is
autocorrelation among our Monte Carlo draws within chains, the
“effective” sample size, n_eff, used to estimate the Monte Carlo
errors is smaller than the number of post-warmup draws. The effective
sample size can be viewed as the number of independent draws that would
give the same amount of information in terms of the Monte Carlo errors.
The Monte Carlo errors go to zero as the number of
draws goes to infinity. Rhat quantifies how well the
different chains mix with each other (overlap) post warmup, with a value
close to 1 (conventionally less than 1.1) indicating sufficient mixing. The
purpose of running multiple chains with different starting values for
the parameters is to assess whether the chains have reached the
stationary distribution, i.e., the desired correct posterior
distribution, by checking that the chains converge to the same
distribution and hence mix with each other.
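The relation \(se\_mean = sd/\sqrt{n_{eff}}\) is easy to verify on a toy set of draws. For independent draws, \(n_{eff}\) is simply the number of draws; the names below are illustrative, not blavaan objects:

```r
set.seed(123)
draws <- rnorm(3000)              # toy "posterior draws"; iid, so n_eff = n
sd_post <- sd(draws)              # posterior standard deviation
n_eff <- length(draws)            # effective sample size (equals n for iid draws)
se_mean <- sd_post / sqrt(n_eff)  # Monte Carlo error of the posterior mean
se_mean
```

For autocorrelated MCMC draws, n_eff would be smaller than 3000 and se_mean correspondingly larger.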

One benefit of running a Bayesian analysis is that we can incorporate
prior information if we have it. blavaan allows users to
specify priors for all the model parameters via the argument
dp, and uses non-informative priors (with flat
densities/large variances) if this argument is left blank. When
non-informative priors are used for all parameters, the Bayesian
estimates will be similar to their maximum likelihood counterparts.

To check the default priors,

(default_prior <- dpriors())
               nu             alpha            lambda              beta 
   "normal(0,32)"    "normal(0,10)"    "normal(0,10)"    "normal(0,10)" 
            theta               psi               rho             ibpsi 
"gamma(1,.5)[sd]" "gamma(1,.5)[sd]"       "beta(1,1)" "wishart(3,iden)" 
              tau             delta 
"normal(0,10^.5)" "gamma(1,.5)[sd]" 

To change a prior, we can simply substitute the desired one: for
example, if we want to change the prior for beta to
normal(0, 1), we can form a new prior vector:

(new_prior <- dpriors(beta = "normal(0, 1)"))
               nu             alpha            lambda              beta 
   "normal(0,32)"    "normal(0,10)"    "normal(0,10)"    "normal(0, 1)" 
            theta               psi               rho             ibpsi 
"gamma(1,.5)[sd]" "gamma(1,.5)[sd]"       "beta(1,1)" "wishart(3,iden)" 
              tau             delta 
"normal(0,10^.5)" "gamma(1,.5)[sd]" 
# new cfa with updated prior for beta
# bcfa(lavaan_cfa, data = dat_cfa, dp = new_prior)

For further information on how to change the priors for individual
parameters, see
https://faculty.missouri.edu/~merklee/blavaan/prior.html.

As a popular method for modeling growth, in this section we discuss
another special case of SEM: latent growth-curve models. This
model is useful for describing change over time within individuals
and variability among individual trajectories. A
latent growth-curve model for six occasions can be written as:

\[
\left[\begin{array}{l}
y_{1j} \\
y_{2j} \\
y_{3j} \\
y_{4j} \\
y_{5j} \\
y_{6j}
\end{array}\right]=\left[\begin{array}{ll}
1 & 0 \\
1 & 1 \\
1 & 2 \\
1 & 3 \\
1 & 4 \\
1 & 5
\end{array}\right]\left[\begin{array}{l}
\eta_{1j} \\
\eta_{2j}
\end{array}\right]+\left[\begin{array}{l}
\epsilon_{1j} \\
\epsilon_{2j} \\
\epsilon_{3j} \\
\epsilon_{4j} \\
\epsilon_{5j} \\
\epsilon_{6j}
\end{array}\right]=\left[\begin{array}{l}
\eta_{1j}+0\,\eta_{2j}+\epsilon_{1j} \\
\eta_{1j}+1\,\eta_{2j}+\epsilon_{2j} \\
\eta_{1j}+2\,\eta_{2j}+\epsilon_{3j} \\
\eta_{1j}+3\,\eta_{2j}+\epsilon_{4j} \\
\eta_{1j}+4\,\eta_{2j}+\epsilon_{5j} \\
\eta_{1j}+5\,\eta_{2j}+\epsilon_{6j}
\end{array}\right]
\]

\[
\boldsymbol{\epsilon}_{j} \sim N_{p}(\mathbf{0}, \mathbf{\Theta})
\]

where

  • \(y_{ij}\) is the response at
    occasion \(i\) for individual \(j\).

  • \(\eta_{1j}\) is an
    individual-specific intercept and \(\eta_{2j}\) is an individual-specific slope
    of time. These intercepts and slopes are latent variables that follow a
    bivariate normal distribution with free means and an unstructured
    covariance matrix.

  • \(\epsilon_{ij}\) is an
    occasion-specific error. The errors \(\epsilon_{1j}\) to \(\epsilon_{6j}\) have zero means and a
    diagonal \(6 \times 6\) covariance
    matrix:

\[
\mathbf{\Theta}=\mathrm{Cov}\begin{pmatrix}
\epsilon_{1j} \\
\epsilon_{2j} \\
\epsilon_{3j} \\
\epsilon_{4j} \\
\epsilon_{5j} \\
\epsilon_{6j}
\end{pmatrix}=
\left[\begin{array}{cccccc}
\theta_{11} & 0 & 0 & 0 & 0 & 0 \\
0 & \theta_{22} & 0 & 0 & 0 & 0 \\
0 & 0 & \theta_{33} & 0 & 0 & 0 \\
0 & 0 & 0 & \theta_{44} & 0 & 0 \\
0 & 0 & 0 & 0 & \theta_{55} & 0 \\
0 & 0 & 0 & 0 & 0 & \theta_{66}
\end{array}\right]
\]
The pre-fixed loadings (0, 1, 2, 3, 4, 5) for the
individual-specific slope represent the times associated with the
occasions, and this kind of model works only if the time variable takes
on the same set of values for all individuals. Here the occasions are
additionally assumed to be equally spaced.

We simulate data for 500 individuals based on the model above as
follows:

N <- 500
psi <- matrix(c(1, 0.5,
                0.5, 1), nrow = 2)
# loading matrix
Lambda <- matrix(c(1, 1, 1, 1, 1, 1, 0:5), nrow = 6)

# error covariance
Theta <- diag(0.3, nrow = 6)

# exogenous latent variables
eta <- mvrnorm(N, mu = c(2, 1), Sigma = psi)

# error term
epsilon <- mvrnorm(N, mu = rep(0, ncol(Theta)), Sigma = Theta)

dat <- tcrossprod(eta, Lambda) + epsilon
dat <- dat %>% as.matrix() %>% as.data.frame() %>% setNames(paste0("Y", 1:6))
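Before fitting, we can read off the implied mean trajectory from the simulation settings: with latent means \(E(\eta_{1j}) = 2\) and \(E(\eta_{2j}) = 1\), the expected response at occasion \(i\) is \(2 + 1 \cdot (i - 1)\), a straight line from 2 to 7. A small check (not part of the original code):

```r
Lambda <- matrix(c(1, 1, 1, 1, 1, 1, 0:5), nrow = 6)  # growth loading matrix
alpha <- c(2, 1)                                      # latent means used in the simulation
mu_implied <- as.vector(Lambda %*% alpha)             # implied occasion means
mu_implied
```

The column means of the simulated responses should cluster around these values, and the fitted latent means (ri and rc below) should recover 2 and 1.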

To estimate the parameters of our model (visualized in the path
diagram below) using lavaan, we define two latent variables
(i.e., ri, the random intercept, and rc, the
random slope/coefficient). The syntax 1*Y1 indicates that we
are fixing the factor loading for Y1 to 1. We also
constrain the residual variances (labeled c)
of all the observed variables to be equal via
Y1 ~~ c*Y1, which means Y1 has a residual
variance c to be estimated (and the same holds for the
other response variables).

lavaan_lgm <- 'ri =~ 1*Y1 + 1*Y2 + 1*Y3 + 1*Y4 + 1*Y5 + 1*Y6
               rc =~ 0*Y1 + 1*Y2 + 2*Y3 + 3*Y4 + 4*Y5 + 5*Y6
               Y1 ~~ c*Y1
               Y2 ~~ c*Y2
               Y3 ~~ c*Y3
               Y4 ~~ c*Y4
               Y5 ~~ c*Y5
               Y6 ~~ c*Y6
               ri ~~ rc
               '

We can make a path diagram like this:

semPaths(semPlotModel_lavaanModel(lavaan_lgm))

To display Greek letters and subscripts, we can use the following
syntax:

FIT2 <-
  semPlotModel_lavaanModel(
    lavaan_lgm,
    auto.var = TRUE,
    auto.fix.first = TRUE,
    auto.cov.lv.x = TRUE
  )
semPaths(
  FIT2,
  what = "paths",
  whatLabels = "par",
  nodeLabels = c(
    expression(paste(Y[1])),
    expression(paste(Y[2])),
    expression(paste(Y[3])),
    expression(paste(Y[4])),
    expression(paste(Y[5])),
    expression(paste(Y[6])),
    expression(paste(eta[1])),
    expression(paste(eta[2]))
  ),
  edge.label.cex = 0.6,
  edgeLabels = c(
    expression(paste(1)),
    expression(paste(1)),
    expression(paste(1)),
    expression(paste(1)),
    expression(paste(1)),
    expression(paste(1)),
    expression(paste(0)),
    expression(paste(1)),
    expression(paste(2)),
    expression(paste(3)),
    expression(paste(4)),
    expression(paste(5)),
    expression(paste(epsilon[1])),
    expression(paste(epsilon[2])),
    expression(paste(epsilon[3])),
    expression(paste(epsilon[4])),
    expression(paste(epsilon[5])),
    expression(paste(epsilon[6])),
    expression(paste("Covariance")),
    expression(paste(psi[1])),
    expression(paste(psi[2]))
  )
)

Following the above model specification, we fit a latent growth-curve
model using the growth function in lavaan. By
default, growth estimates the latent means, as required,
rather than the intercepts for the observed variables:

# lavaan
lav_lgm_fit <- growth(lavaan_lgm, data = dat)
summary(lav_lgm_fit)
lavaan 0.6.15 ended normally after 26 iterations

  Estimator                                         ML
  Optimization method                           NLMINB
  Number of model parameters                        11
  Number of equality constraints                     5

  Number of observations                           500

Model Test User Model:
                                                      
  Test statistic                                30.656
  Degrees of freedom                                21
  P-value (Chi-square)                           0.080

Parameter Estimates:

  Standard errors                             Standard
  Information                                 Expected
  Information saturated (h1) model          Structured

Latent Variables:
                   Estimate  Std.Err  z-value  P(>|z|)
  ri =~                                               
    Y1                1.000                           
    Y2                1.000                           
    Y3                1.000                           
    Y4                1.000                           
    Y5                1.000                           
    Y6                1.000                           
  rc =~                                               
    Y1                0.000                           
    Y2                1.000                           
    Y3                2.000                           
    Y4                3.000                           
    Y5                4.000                           
    Y6                5.000                           

Covariances:
                   Estimate  Std.Err  z-value  P(>|z|)
  ri ~~                                               
    rc                0.484    0.051    9.506    0.000

Intercepts:
                   Estimate  Std.Err  z-value  P(>|z|)
   .Y1                0.000                           
   .Y2                0.000                           
   .Y3                0.000                           
   .Y4                0.000                           
   .Y5                0.000                           
   .Y6                0.000                           
    ri                2.038    0.048   42.629    0.000
    rc                1.058    0.044   24.113    0.000

Variances:
                   Estimate  Std.Err  z-value  P(>|z|)
   .Y1         (c)    0.300    0.010   31.623    0.000
   .Y2         (c)    0.300    0.010   31.623    0.000
   .Y3         (c)    0.300    0.010   31.623    0.000
   .Y4         (c)    0.300    0.010   31.623    0.000
   .Y5         (c)    0.300    0.010   31.623    0.000
   .Y6         (c)    0.300    0.010   31.623    0.000
    ri                0.986    0.072   13.602    0.000
    rc                0.945    0.061   15.529    0.000

In this model, we are primarily interested in the means and variances of
the random intercept and slope. We can use the bgrowth
function in the blavaan package to fit the same model:

# blavaan
blav_lgm_fit <- bgrowth(lavaan_lgm, data = dat)
Computing posterior predictives...
summary(blav_lgm_fit)
blavaan (0.3-15) results of 1000 samples after 500 adapt/burnin iterations

  Number of observations                           500

  Number of missing patterns                         1

  Statistic                                 MargLogLik         PPP
  Value                                      -4216.911       0.116

Latent Variables:
                   Estimate  Submit.SD pi.decrease pi.higher     Rhat    Prior       
  ri =~                                                                        
    Y1                1.000                                  NA                
    Y2                1.000                                  NA                
    Y3                1.000                                  NA                
    Y4                1.000                                  NA                
    Y5                1.000                                  NA                
    Y6                1.000                                  NA                
  rc =~                                                                        
    Y1                0.000                                  NA                
    Y2                1.000                                  NA                
    Y3                2.000                                  NA                
    Y4                3.000                                  NA                
    Y5                4.000                                  NA                
    Y6                5.000                                  NA                

Covariances:
                   Estimate  Post.SD pi.lower pi.upper     Rhat    Prior       
  ri ~~                                                                        
    rc                0.485    0.051    0.393     0.59    1.000     lkj_corr(1)

Intercepts:
                   Estimate  Post.SD pi.lower pi.upper     Rhat    Prior       
   .Y1                0.000                                  NA                
   .Y2                0.000                                  NA                
   .Y3                0.000                                  NA                
   .Y4                0.000                                  NA                
   .Y5                0.000                                  NA                
   .Y6                0.000                                  NA                
    ri                2.038    0.048    1.946     2.13    1.000    normal(0,10)
    rc                1.058    0.044     0.97    1.146    0.999    normal(0,10)

Variances:
                   Estimate  Post.SD pi.lower pi.upper     Rhat    Prior       
   .Y1         (c)    0.301    0.010    0.283     0.32    0.999 gamma(1,.5)[sd]
   .Y2         (c)    0.301    0.010    0.283     0.32    0.999                
   .Y3         (c)    0.301    0.010    0.283     0.32    0.999                
   .Y4         (c)    0.301    0.010    0.283     0.32    0.999                
   .Y5         (c)    0.301    0.010    0.283     0.32    0.999                
   .Y6         (c)    0.301    0.010    0.283     0.32    0.999                
    ri                0.992    0.072    0.857    1.143    1.001 gamma(1,.5)[sd]
    rc                0.952    0.061    0.839    1.079    1.001 gamma(1,.5)[sd]

For latent growth curve models, we can also view the responses at the
different time points as a univariate outcome (long format), instead of
as multivariate (wide format). In this multilevel perspective,
responses at the different time points (level 1) are viewed as nested
within each individual (level 2). The growth curve model can then be
specified as a two-level linear mixed model (or hierarchical linear
model, HLM) and estimated using the lmer function in the
lme4 package. To do this, we first reshape our data from
wide format to long format:

# wide to long
dat_long <-
  dat %>% add_column(id = 1:500) %>% gather(key = "time", value = "y", -id) %>% mutate(time = dplyr::recode(
    time,
    Y1 = 0,
    Y2 = 1,
    Y3 = 2,
    Y4 = 3,
    Y5 = 4,
    Y6 = 5
  ))
dat_long %>% head %>% kable()
 id  time          y
  1     0  2.9577313
  2     0  3.2700705
  3     0  2.6845096
  4     0  1.4890722
  5     0  0.5877364
  6     0  2.6755304

For the random part of the model, (1 + time)|id is used
to indicate that the model includes a random intercept and a random
slope of time for clusters (individuals in the current example) whose
identifiers are in the variable id. For a detailed tutorial on Bayesian
multilevel regression and multilevel regression in general, see
https://mc-stan.org/users/documentation/case-studies/tutorial_rstanarm.html.
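The same formula syntax carries over directly to Bayesian multilevel modeling packages. As a sketch (not run in this tutorial; rstanarm is an assumed extra dependency here), the Bayesian analogue of the frequentist lmer fit below would be:

```r
library(rstanarm)

# Bayesian two-level growth model: random intercept and random
# slope of time for each individual (sketch, not run here)
bayes_lgm_hlm <- stan_lmer(
  y ~ 1 + time + (1 + time | id),
  data = dat_long,
  seed = 1234  # for reproducibility
)
print(bayes_lgm_hlm, digits = 2)
```

With rstanarm's default weakly informative priors, the posterior medians should be close to the lmer estimates shown below.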

# multilevel model framework
lav_lgm_fit_hlm <- lme4::lmer(y ~ 1 + time + (1 + time|id), data = dat_long, REML = FALSE)
summary(lav_lgm_fit_hlm)
Linear mixed model fit by maximum likelihood  ['lmerMod']
Formula: y ~ 1 + time + (1 + time | id)
   Data: dat_long

     AIC      BIC   logLik deviance df.resid 
  8398.1   8434.1  -4193.0   8386.1     2994 

Scaled residuals: 
    Min      1Q  Median      3Q     Max 
-3.3512 -0.5844  0.0021  0.5514  2.9637 

Random effects:
 Groups   Name        Variance Std.Dev. Corr
 id       (Intercept) 0.9856   0.9928       
          time        0.9452   0.9722   0.50
 Residual             0.3005   0.5481       
Number of obs: 3000, groups:  id, 500

Fixed effects:
            Estimate Std. Error t value
(Intercept)  2.03815    0.04781   42.63
time         1.05787    0.04387   24.11

Correlation of Fixed Effects:
     (Intr)
time 0.420 

We present the frequentist SEM estimates from lavaan,
the frequentist HLM estimates from lme4, and the Bayesian
estimates from blavaan. As for the CFA, the Bayesian and
frequentist estimates from the different functions are similar:

hlm_beta <- lav_lgm_fit_hlm@beta
hlm_var_RE <-
  lme4::VarCorr(lav_lgm_fit_hlm) %>% as.data.frame(., comp = c("Variance"), order = "lower.tri")
hlm_estimates <-
  c(rep(NA, 12),
    rep(hlm_var_RE[4, 4], 6),
    hlm_var_RE[c(2, 1, 3), 4],
    rep(NA, 6),
    hlm_beta)
bind_cols(
  parameterEstimates(lav_lgm_fit)[, 1:5],
  hlm_estimates,
  parameterEstimates(blav_lgm_fit)[, 5]
) %>% rename(ML_SEM = est,
             ML_HLM = ...6,
             Bayes = ...7) %>% knitr::kable()
New names:
• `` -> `...6`
• `` -> `...7`
lhs op rhs label ML_SEM ML_HLM Bayes
ri =~ Y1 1.0000000 NA 1.0000000
ri =~ Y2 1.0000000 NA 1.0000000
ri =~ Y3 1.0000000 NA 1.0000000
ri =~ Y4 1.0000000 NA 1.0000000
ri =~ Y5 1.0000000 NA 1.0000000
ri =~ Y6 1.0000000 NA 1.0000000
rc =~ Y1 0.0000000 NA 0.0000000
rc =~ Y2 1.0000000 NA 1.0000000
rc =~ Y3 2.0000000 NA 2.0000000
rc =~ Y4 3.0000000 NA 3.0000000
rc =~ Y5 4.0000000 NA 4.0000000
rc =~ Y6 5.0000000 NA 5.0000000
Y1 ~~ Y1 c 0.3004538 0.3004539 0.3008866
Y2 ~~ Y2 c 0.3004538 0.3004539 0.3008866
Y3 ~~ Y3 c 0.3004538 0.3004539 0.3008866
Y4 ~~ Y4 c 0.3004538 0.3004539 0.3008866
Y5 ~~ Y5 c 0.3004538 0.3004539 0.3008866
Y6 ~~ Y6 c 0.3004538 0.3004539 0.3008866
ri ~~ rc 0.4838138 0.4838131 0.4853817
ri ~~ ri 0.9855904 0.9855833 0.9921259
rc ~~ rc 0.9451540 0.9451574 0.9524373
Y1 ~1 0.0000000 NA 0.0000000
Y2 ~1 0.0000000 NA 0.0000000
Y3 ~1 0.0000000 NA 0.0000000
Y4 ~1 0.0000000 NA 0.0000000
Y5 ~1 0.0000000 NA 0.0000000
Y6 ~1 0.0000000 NA 0.0000000
ri ~1 2.0381506 2.0381506 2.0380207
rc ~1 1.0578727 1.0578727 1.0576254

SEM is useful for testing hypothesized covariance structures among
variables based on substantive theories, by modeling the relations among
multiple latent (unobserved) variables or constructs that are measured
by manifest (observed) variables or indicators, and by evaluating the
corresponding model fit. It can be viewed as a combination of
regression analysis and factor analysis. SEM consists of two parts: a
so-called measurement model describes the relations between observed
indicators and latent variables, and a so-called structural model
specifies a set of simultaneous linear relations among latent variables
and possibly observed variables. SEM involves two main types of
relations: 1) correlational relations between latent variables,
represented by two-headed arrows in the path diagram; and 2)
cause-and-effect type relations, represented by single-headed arrows.
For example, a cause-and-effect type SEM can be written by starting
with the measurement model for the exogenous latent variables:

\[
\left[\begin{array}{l}
x_{1 j} \\
x_{2 j} \\
x_{3 j} \\
z_{4 j} \\
z_{5 j} \\
z_{6 j}
\end{array}\right]=\left[\begin{array}{c}
\beta_{1} \\
\beta_{2} \\
\beta_{3} \\
\beta_{4} \\
\beta_{5} \\
\beta_{6}
\end{array}\right]+\left[\begin{array}{cc}
1 & 0 \\
\lambda_{21} & 0 \\
\lambda_{31} & 0 \\
0 & 1 \\
0 & \lambda_{52} \\
0 & \lambda_{62}
\end{array}\right]\left[\begin{array}{l}
\eta_{1 j} \\
\eta_{2 j}
\end{array}\right]+\left[\begin{array}{c}
\epsilon_{1 j} \\
\epsilon_{2 j} \\
\epsilon_{3 j} \\
\epsilon_{4 j} \\
\epsilon_{5 j} \\
\epsilon_{6 j}
\end{array}\right]
\]

where

  • \(x_{ij}\) represents an indicator of \(\eta_{1j}\) and \(z_{ij}\) represents an indicator of \(\eta_{2j}\).

  • \(\eta_{1j}\) and \(\eta_{2j}\) represent exogenous latent variables in the model with zero means and an unstructured covariance matrix.

  • \(\epsilon_{ij}\) represents an error of measurement for \(x_{ij}\) or \(z_{ij}\).

  • \(\beta_i\) represents an intercept for \(x_{ij}\) or \(z_{ij}\).

  • \(\lambda_{ik}\) represents a factor loading in the measurement model, which gives the magnitude of the expected change in the observed variable for a one-unit change in the latent variable.

We assume that the errors of measurement \(\epsilon_{ij}\) have an expected value of
zero, that they are uncorrelated with all latent variables, and that
they are uncorrelated with each other for all pairs of items.
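These assumptions imply the standard factor-analytic covariance structure for the exogenous indicators (a standard result, included here for reference):

\[
\operatorname{Cov}\left(\boldsymbol{x}_{j}\right)=\boldsymbol{\Lambda} \boldsymbol{\Psi} \boldsymbol{\Lambda}^{\top}+\boldsymbol{\Theta},
\]

where \(\boldsymbol{x}_{j}=(x_{1j}, x_{2j}, x_{3j}, z_{4j}, z_{5j}, z_{6j})^{\top}\), \(\boldsymbol{\Lambda}\) is the \(6 \times 2\) loading matrix above, \(\boldsymbol{\Psi}\) is the covariance matrix of \((\eta_{1j}, \eta_{2j})\), and \(\boldsymbol{\Theta}\) is the (diagonal) covariance matrix of the measurement errors.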

Next, we have a measurement model for the latent endogenous variable:
\[
\left[\begin{array}{l}
y_{1 j} \\
y_{2 j} \\
y_{3 j}
\end{array}\right]=\left[\begin{array}{c}
\gamma_{1} \\
\gamma_{2} \\
\gamma_{3}
\end{array}\right]+\left[\begin{array}{c}
1 \\
\lambda_{8} \\
\lambda_{9}
\end{array}\right]\left[\begin{array}{l}
\xi_{1 j}
\end{array}\right]+\left[\begin{array}{c}
\delta_{1 j} \\
\delta_{2 j} \\
\delta_{3 j}
\end{array}\right]
\]

where

  • \(y_{ij}\) represents an indicator of \(\xi_{1j}\).

  • \(\xi_{1j}\) represents a latent endogenous variable with zero mean.

  • \(\delta_{ij}\) represents measurement error for \(y_{ij}\).

  • \(\gamma_i\) represents an intercept for \(y_{ij}\).

  • \(\lambda_{i}\) represents a factor loading in the measurement model.

We assume that the measurement errors have zero means and are
uncorrelated with the exogenous and endogenous latent variables. We also
assume that \(\delta_{ij}\) is homoscedastic and non-autocorrelated.

Finally, the structural relation among the latent variables is
specified as:
\[
\left[\begin{array}{l}
\xi_{1 j}
\end{array}\right]=\left[\begin{array}{cc}
\gamma_{11} & \gamma_{12}
\end{array}\right]\left[\begin{array}{l}
\eta_{1 j} \\
\eta_{2 j}
\end{array}\right]+\zeta_{1 j},
\]
which represents a linear relation. We simulate data from this
model as follows:

N <- 500
lat_cov <- matrix(c(1, 0, 
                    0, 0.8), nrow = 2)  
# loading matrix
Lambda <- bdiag(c(1, 1.5, 2), c(1, 1.5, 2), c(1, 1.5, 2))


# error covariance
Theta <- diag(0.3, nrow = 9)

# exogenous latent variables
eta <- mvrnorm(N, mu = c(0, 0), Sigma = lat_cov)

# disturbance for latent outcome
zeta <- rnorm(N)

# structural model
xi <- tcrossprod(eta, t(c(1, 2))) + zeta

# error term
epsilon <- mvrnorm(N, mu = rep(0, ncol(Theta)), Sigma = Theta)
cbind(eta, xi) %>% dim
[1] 500   3
dat <- tcrossprod(cbind(eta, xi), Lambda) + epsilon
dat  <-  dat %>% as.matrix() %>% as.data.frame() %>% setNames(c("X1", "X2", "X3", "Z1", "Z2", "Z3", "Y1", "Y2", "Y3"))
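As a quick sanity check on this simulation (using only the values hard-coded above: path coefficients 1 and 2, latent variances 1 and 0.8, and a unit-variance disturbance \(\zeta\)), the structural equation implies

\[
\operatorname{Var}(\xi_{j})=1^{2} \times 1+2^{2} \times 0.8+1=5.2,
\]

with a disturbance variance of 1, so the residual variance of \(\xi\) should be estimated near 1 in the fitted models below.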

To fit the SEM, the measurement part is specified as before,
with =~ connecting the hypothesized latent variables and
the corresponding observed indicators. What is new here is the
“structural relations” among latent variables, indicated by
~, although here “structural relation” is loosely defined
and does not necessarily mean causal.

lavaan_sem <- 'eta1 =~ X1 + X2 + X3
               eta2 =~ Z1 + Z2 + Z3
               xi =~ Y1 + Y2 + Y3
               xi ~ eta1 + eta2'


# lavaan
lav_sem_fit <- cfa(lavaan_sem, data = dat, meanstructure = TRUE)
summary(lav_sem_fit)
lavaan 0.6.15 ended normally after 64 iterations

  Estimator                                         ML
  Optimization method                           NLMINB
  Number of model parameters                        30

  Number of observations                           500

Model Test User Model:
                                                      
  Test statistic                                22.985
  Degrees of freedom                                24
  P-value (Chi-square)                           0.521

Parameter Estimates:

  Standard errors                             Standard
  Information                                 Expected
  Information saturated (h1) model          Structured

Latent Variables:
                   Estimate  Std.Err  z-value  P(>|z|)
  eta1 =~                                             
    X1                1.000                           
    X2                1.547    0.045   34.389    0.000
    X3                2.087    0.059   35.315    0.000
  eta2 =~                                             
    Z1                1.000                           
    Z2                1.511    0.051   29.545    0.000
    Z3                2.037    0.064   31.647    0.000
  xi =~                                               
    Y1                1.000                           
    Y2                1.539    0.022   70.267    0.000
    Y3                2.010    0.026   76.904    0.000

Regressions:
                   Estimate  Std.Err  z-value  P(>|z|)
  xi ~                                                
    eta1              1.023    0.056   18.201    0.000
    eta2              2.010    0.081   24.871    0.000

Covariances:
                   Estimate  Std.Err  z-value  P(>|z|)
  eta1 ~~                                             
    eta2             -0.024    0.040   -0.591    0.554

Intercepts:
                   Estimate  Std.Err  z-value  P(>|z|)
   .X1               -0.080    0.051   -1.587    0.113
   .X2               -0.084    0.072   -1.165    0.244
   .X3               -0.100    0.097   -1.034    0.301
   .Z1                0.029    0.045    0.650    0.515
   .Z2                0.001    0.063    0.018    0.986
   .Z3                0.057    0.082    0.692    0.489
   .Y1               -0.015    0.102   -0.143    0.886
   .Y2               -0.023    0.155   -0.151    0.880
   .Y3               -0.025    0.200   -0.123    0.902
    eta1              0.000                           
    eta2              0.000                           
   .xi                0.000                           

Variances:
                   Estimate  Std.Err  z-value  P(>|z|)
   .X1                0.287    0.021   13.626    0.000
   .X2                0.251    0.029    8.763    0.000
   .X3                0.338    0.048    7.017    0.000
   .Z1                0.268    0.019   13.736    0.000
   .Z2                0.292    0.026   11.063    0.000
   .Z3                0.280    0.037    7.485    0.000
   .Y1                0.323    0.024   13.366    0.000
   .Y2                0.386    0.038   10.137    0.000
   .Y3                0.312    0.053    5.892    0.000
    eta1              0.992    0.080   12.471    0.000
    eta2              0.739    0.062   11.894    0.000
   .xi                0.946    0.078   12.056    0.000
# blavaan; mcmcfile = T saves the blavaan-generated Stan code
blav_sem_fit <- bcfa(lavaan_sem, data = dat, mcmcfile = T)
Computing posterior predictives...
summary(blav_sem_fit)
blavaan (0.3-15) results of 1000 samples after 500 adapt/burnin iterations

  Number of observations                           500

  Number of missing patterns                         1

  Statistic                                 MargLogLik         PPP
  Value                                      -6227.716       0.541

Latent Variables:
                   Estimate  Post.SD pi.lower pi.upper     Rhat    Prior       
  eta1 =~                                                                      
    X1                1.000                                  NA                
    X2                1.552    0.046    1.467    1.645    1.000    normal(0,10)
    X3                2.094    0.060    1.979    2.215    1.000    normal(0,10)
  eta2 =~                                                                      
    Z1                1.000                                  NA                
    Z2                1.518    0.053    1.417    1.625    1.000    normal(0,10)
    Z3                2.047    0.067    1.915     2.18    1.001    normal(0,10)
  xi =~                                                                        
    Y1                1.000                                  NA                
    Y2                1.540    0.023    1.496    1.585    1.001    normal(0,10)
    Y3                2.010    0.026    1.959    2.064    1.000    normal(0,10)

Regressions:
                   Estimate  Post.SD pi.lower pi.upper     Rhat    Prior       
  xi ~                                                                         
    eta1              1.025    0.058    0.916    1.148    1.001    normal(0,10)
    eta2              2.019    0.081    1.859    2.175    1.001    normal(0,10)

Covariances:
                   Estimate  Post.SD pi.lower pi.upper     Rhat    Prior       
  eta1 ~~                                                                      
    eta2             -0.023    0.041   -0.104    0.056    0.999       beta(1,1)

Intercepts:
                   Estimate  Post.SD pi.lower pi.upper     Rhat    Prior       
   .X1               -0.079    0.051   -0.178    0.025    1.000    normal(0,32)
   .X2               -0.081    0.074   -0.225    0.064    1.000    normal(0,32)
   .X3               -0.096    0.099   -0.285    0.099    1.001    normal(0,32)
   .Z1                0.028    0.046   -0.062    0.118    1.000    normal(0,32)
   .Z2                0.000    0.063   -0.121    0.125    1.000    normal(0,32)
   .Z3                0.055    0.082   -0.106    0.214    1.001    normal(0,32)
   .Y1               -0.013    0.104   -0.216    0.198    1.000    normal(0,32)
   .Y2               -0.022    0.158   -0.326     0.29    1.001    normal(0,32)
   .Y3               -0.025    0.204   -0.416    0.382    1.001    normal(0,32)
    eta1              0.000                                  NA                
    eta2              0.000                                  NA                
   .xi                0.000                                  NA                

Variances:
                   Estimate  Post.SD pi.lower pi.upper     Rhat    Prior       
   .X1                0.291    0.022    0.251    0.336    1.000 gamma(1,.5)[sd]
   .X2                0.254    0.028    0.202     0.31    1.001 gamma(1,.5)[sd]
   .X3                0.340    0.048    0.253     0.44    1.000 gamma(1,.5)[sd]
   .Z1                0.271    0.020    0.233    0.314    1.000 gamma(1,.5)[sd]
   .Z2                0.295    0.027    0.246    0.351    0.999 gamma(1,.5)[sd]
   .Z3                0.283    0.037    0.214    0.362    0.999 gamma(1,.5)[sd]
   .Y1                0.328    0.025    0.281    0.382    0.999 gamma(1,.5)[sd]
   .Y2                0.392    0.038    0.321     0.47    0.999 gamma(1,.5)[sd]
   .Y3                0.314    0.052    0.214    0.415    1.000 gamma(1,.5)[sd]
    eta1              0.998    0.084    0.844    1.177    1.000 gamma(1,.5)[sd]
    eta2              0.741    0.063    0.626     0.87    1.000 gamma(1,.5)[sd]
   .xi                0.960    0.079    0.812    1.123    0.999 gamma(1,.5)[sd]

The output now has a new section describing the path coefficients
among the latent variables, which can be interpreted as in linear
regression.
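Since the latent variables are on arbitrary scales, standardized path coefficients can also aid interpretation. A sketch using lavaan's standardizedSolution on the frequentist fit above (dplyr is assumed to be loaded, as elsewhere in this tutorial):

```r
# standardized estimates: latent variables rescaled to unit variance;
# keep only the structural regression paths (op == "~")
standardizedSolution(lav_sem_fit) %>%
  dplyr::filter(op == "~")
```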

FIT3 <-
  semPlotModel_lavaanModel(lavaan_sem, auto.var = TRUE, auto.fix.first = TRUE)
semPaths(
  FIT3,
  what = "paths",
  whatLabels = "par",
  nodeLabels = c(
    expression(paste(x[1])),
    expression(paste(x[2])),
    expression(paste(x[3])),
    expression(paste(z[1])),
    expression(paste(z[2])),
    expression(paste(z[3])),
    expression(paste(y[1])),
    expression(paste(y[2])),
    expression(paste(y[3])),
    expression(paste(eta[1])),
    expression(paste(eta[2])),
    expression(paste(xi))
  ),
  edge.label.cex = 0.6,
  edgeLabels = c(
    expression(paste(1)),
    expression(paste(lambda[21])),
    expression(paste(lambda[31])),
    expression(paste(1)),
    expression(paste(lambda[52])),
    expression(paste(lambda[62])),
    expression(paste(1)),
    expression(paste(lambda[7])),
    expression(paste(lambda[8])),
    expression(paste(gamma[1])),
    expression(paste(gamma[2])),
    expression(paste(epsilon[1])),
    expression(paste(epsilon[2])),
    expression(paste(epsilon[3])),
    expression(paste(epsilon[4])),
    expression(paste(epsilon[5])),
    expression(paste(epsilon[6])),
    expression(paste(delta[1])),
    expression(paste(delta[2])),
    expression(paste(delta[3])),
    expression(paste(psi[1])),
    expression(paste(psi[2])),
    expression(paste(psi[3]))
  )
)

We again compare the estimates from lavaan and
blavaan:

bind_cols(parameterEstimates(lav_sem_fit)[, 1:4],
          parameterEstimates(blav_sem_fit)[, 4]) %>% rename(ML_SEM = est, Bayes = ...5) %>% knitr::kable()
New names:
• `` -> `...5`
lhs op rhs ML_SEM Bayes
eta1 =~ X1 1.0000000 1.0000000
eta1 =~ X2 1.5468897 1.5518487
eta1 =~ X3 2.0867574 2.0940599
eta2 =~ Z1 1.0000000 1.0000000
eta2 =~ Z2 1.5111137 1.5179009
eta2 =~ Z3 2.0369639 2.0465995
xi =~ Y1 1.0000000 1.0000000
xi =~ Y2 1.5394007 1.5395212
xi =~ Y3 2.0096801 2.0102847
xi ~ eta1 1.0229308 1.0254598
xi ~ eta2 2.0102597 2.0188056
X1 ~~ X1 0.2870573 0.2908129
X2 ~~ X2 0.2508078 0.2538065
X3 ~~ X3 0.3376840 0.3404139
Z1 ~~ Z1 0.2678232 0.2712257
Z2 ~~ Z2 0.2922366 0.2952527
Z3 ~~ Z3 0.2798114 0.2833929
Y1 ~~ Y1 0.3233531 0.3278058
Y2 ~~ Y2 0.3862745 0.3917939
Y3 ~~ Y3 0.3124156 0.3137321
eta1 ~~ eta1 0.9924733 0.9980795
eta2 ~~ eta2 0.7392622 0.7408280
xi ~~ xi 0.9457461 0.9603592
eta1 ~~ eta2 -0.0236847 -0.0234200
X1 ~1 -0.0802587 -0.0786916
X2 ~1 -0.0844009 -0.0808092
X3 ~1 -0.0998296 -0.0958554
Z1 ~1 0.0291862 0.0283963
Z2 ~1 0.0011052 0.0000806
Z3 ~1 0.0565958 0.0549565
Y1 ~1 -0.0145914 -0.0132324
Y2 ~1 -0.0233656 -0.0220552
Y3 ~1 -0.0246824 -0.0246455
eta1 ~1 0.0000000 0.0000000
eta2 ~1 0.0000000 0.0000000
xi ~1 0.0000000 0.0000000

When using SEMs to model multivariate linear relations in education,
sociology, and psychology, assessing model fit is usually considered
a major step before drawing sensible conclusions from the
results.

Statistical fit indices provide a way to quantify how well our model
fits the data, and there are many choices available in
blavaan, ranging from indices specifically proposed in the SEM
context to more general, information-theoretic ones.

In frequentist SEM, many common fit indices are available in
lavaan:

fitmeasures(lav_cfa_fit)
                 npar                  fmin                 chisq 
               19.000                 0.001                 1.553 
                   df                pvalue        baseline.chisq 
                8.000                 0.992              6005.574 
          baseline.df       baseline.pvalue                   cfi 
               15.000                 0.000                 1.000 
                  tli                  nnfi                   rfi 
                1.002                 1.002                 1.000 
                  nfi                  pnfi                   ifi 
                1.000                 0.533                 1.001 
                  rni                  logl     unrestricted.logl 
                1.001             -7909.932             -7909.155 
                  aic                   bic                ntotal 
            15857.863             15951.111              1000.000 
                 bic2                 rmsea        rmsea.ci.lower 
            15890.766                 0.000                 0.000 
       rmsea.ci.upper        rmsea.ci.level          rmsea.pvalue 
                0.000                 0.900                 1.000 
       rmsea.close.h0 rmsea.notclose.pvalue     rmsea.notclose.h0 
                0.050                 0.000                 0.080 
                  rmr            rmr_nomean                  srmr 
                0.005                 0.005                 0.002 
         srmr_bentler   srmr_bentler_nomean                  crmr 
                0.002                 0.002                 0.002 
          crmr_nomean            srmr_mplus     srmr_mplus_nomean 
                0.003                 0.002                 0.002 
                cn_05                 cn_01                   gfi 
             9989.555             12941.502                 0.999 
                 agfi                  pgfi                   mfi 
                0.998                 0.296                 1.003 
                 ecvi 
                0.040 

In Bayesian latent variable modeling, there are generally two
versions of fit indices, depending on which version of the likelihood
function the software uses. blavaan uses the marginal
likelihood (marginal over the latent variables), so fit indices such as
DIC and WAIC reflect the model's predictive validity for a future
individual's response, whereas the conditional likelihood-based fit indices
concern future responses from the current individuals in our sample (see
Merkle, Furr, and Rabe-Hesketh (2019) for
more detailed discussion). In almost all applications of SEM, the goal
is to make inferences about an underlying population of individuals, so
the conditional likelihood-based indices are not appropriate.
blavaan uses the marginal version of the likelihood
function by default, and we illustrate the difference in implementation
below using the raw Stan syntax.

To assess how well a model fits the data with blavaan,
we can check the fit indices in a similar manner, taking the fitted
CFA model as an example:

# NULL model needed for relative indices such as CFI, TLI, and NFI
cfa_null <-
  c(paste0("Y", 1:6, " ~~ Y", 1:6), paste0("Y", 1:6, " ~ 1"))
blav_cfa_null_fit <- bcfa(cfa_null, data = dat_cfa, mcmcfile = T)
Computing posterior predictives...
## The default method mimics fit indices derived from ML estimation
fitinx <-
  blavFitIndices(blav_cfa_fit, baseline.model = blav_cfa_null_fit)
fitinx
Posterior mean (EAP) of devm-based fit indices:

      BRMSEA    BGammaHat adjBGammaHat          BMc         BCFI         BTLI 
       0.003        1.000        0.999        1.000        1.000        1.002 
        BNFI 
       1.000 
summary(fitinx)

Posterior summary statistics and highest posterior density (HPD) 90% credible intervals for devm-based fit indices:

               EAP Median   MAP    SD lower upper
BRMSEA       0.003  0.000 0.000 0.009 0.000 0.018
BGammaHat    1.000  1.000 1.000 0.001 0.999 1.000
adjBGammaHat 0.999  1.000 1.000 0.003 0.997 1.000
BMc          1.000  1.000 1.000 0.001 0.999 1.000
BCFI         1.000  1.000 1.000 0.000 1.000 1.000
BTLI         1.002  1.002 1.003 0.002 0.999 1.005
BNFI         1.000  1.000 1.000 0.001 0.998 1.001

Notice that the fit indices available in both lavaan and
blavaan are generally similar, just as the parameter estimates
were similar in the examples above, because the posterior means are
simply used in place of the maximum likelihood estimates of the same
parameters, yielding similar indices.

BRMSEA was proposed by Hoofs et al.
(2018),
and Garnier-Villarreal and
Jorgensen (2020)
proposed Bayesian counterparts of many
widely-used indices such as CFI, NFI, and TLI, which are available when
specifying fit.measures = "all".
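For example, a sketch of requesting the full set of Bayesian fit indices from the fitted objects above:

```r
# all available devm-based indices, including the Bayesian
# CFI/TLI/NFI counterparts (sketch; reuses the fits above)
fitinx_all <- blavFitIndices(blav_cfa_fit,
                             baseline.model = blav_cfa_null_fit,
                             fit.measures = "all")
summary(fitinx_all)
```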

Information-based fit indices based on the marginal likelihood are also
available:

fitmeasures(blav_cfa_fit)
      npar       logl        ppp        bic        dic      p_dic       waic 
    19.000  -7909.981      0.744  15951.190  15858.125     19.081  15858.403 
    p_waic    se_waic      looic      p_loo     se_loo margloglik 
    19.272    111.599  15858.456     19.299    111.600  -8007.416 

In this section, we analyze an example dataset using
blavaan. The dataset is publicly available through the
lavaan.survey package. It contains Belgian schoolchildren's
responses to math efficacy and math self-concept questions, as well as
measures of their math ability from the Programme for International
Student Assessment (PISA) study conducted in 2003. Ferla, Valcke, and Cai (2009) investigated the
association between several math-related constructs and math
achievement. In this example, we will construct a simplified model and
investigate the relations among (negative) math efficacy, (negative)
math self-concept, and math achievement.

  • math achievement is ‘measured’ by four plausible values (PVs).
    PVs are random draws from the empirical Bayes posterior distribution of
    each individual’s math achievement, given their item responses on a math
    test and background variables. The posterior distribution is based on an
    item response model with a latent regression.

  • (negative) math efficacy is measured by eight self-reported
    items, e.g., “I have always believed that Mathematics is one of my best
    subjects”, rated 1 (strongly agree) – 4 (strongly disagree).

  • (negative) math self-concept is measured by four self-reported
    items on perceived math ability for a given task, e.g., “Feel confident
    doing task:”, rated 1 (very) – 4 (not at all).

library(lavaan.survey)
data(pisa.be.2003)
# Simplified version of the Ferla et al. (2009) model.
model.pisa <- "
math =~ PV1MATH1 + PV1MATH2 + PV1MATH3 + PV1MATH4
neg.efficacy =~ ST31Q01 + ST31Q02 + ST31Q03 + ST31Q04 +
ST31Q05 + ST31Q06 + ST31Q07 + ST31Q08
neg.selfconcept =~ ST32Q04 + ST32Q06 + ST32Q07 + ST32Q09
math ~ neg.selfconcept + neg.efficacy
"

semPaths(semPlotModel_lavaanModel(model.pisa))

# Fit the model using lavaan
fit <- sem(model.pisa, data = pisa.be.2003)
summary(fit)
lavaan 0.6.15 ended normally after 87 iterations

  Estimator                                         ML
  Optimization method                           NLMINB
  Number of model parameters                        35

                                                  Used       Total
  Number of observations                          7890        8796

Model Test User Model:
                                                      
  Test statistic                              5581.954
  Degrees of freedom                               101
  P-value (Chi-square)                           0.000

Parameter Estimates:

  Standard errors                             Standard
  Information                                 Expected
  Information saturated (h1) model          Structured

Latent Variables:
                     Estimate  Std.Err  z-value  P(>|z|)
  math =~                                               
    PV1MATH1            1.000                           
    PV1MATH2            1.110    0.005  204.520    0.000
    PV1MATH3            0.994    0.006  174.365    0.000
    PV1MATH4            0.964    0.006  170.700    0.000
  neg.efficacy =~                                       
    ST31Q01             1.000                           
    ST31Q02             1.238    0.032   38.580    0.000
    ST31Q03             1.425    0.035   40.178    0.000
    ST31Q04             1.174    0.032   37.176    0.000
    ST31Q05             1.379    0.035   39.729    0.000
    ST31Q06             1.390    0.035   39.397    0.000
    ST31Q07             1.396    0.036   38.635    0.000
    ST31Q08             1.058    0.031   33.716    0.000
  neg.selfconcept =~                                    
    ST32Q04             1.000                           
    ST32Q06             1.217    0.018   68.174    0.000
    ST32Q07             1.248    0.019   64.541    0.000
    ST32Q09             1.024    0.017   61.616    0.000

Regressions:
                   Estimate  Std.Err  z-value  P(>|z|)
  math ~                                              
    neg.selfconcpt    0.001    0.004    0.140    0.889
    neg.efficacy     -0.238    0.008  -30.632    0.000

Covariances:
                   Estimate  Std.Err  z-value  P(>|z|)
  neg.efficacy ~~                                     
    neg.selfconcpt    0.112    0.004   25.691    0.000

Variances:
                   Estimate  Std.Err  z-value  P(>|z|)
   .PV1MATH1          0.006    0.000   57.948    0.000
   .PV1MATH2          0.001    0.000   16.184    0.000
   .PV1MATH3          0.003    0.000   51.940    0.000
   .PV1MATH4          0.004    0.000   53.499    0.000
   .ST31Q01           0.462    0.008   58.312    0.000
   .ST31Q02           0.427    0.008   55.328    0.000
   .ST31Q03           0.442    0.008   53.216    0.000
   .ST31Q04           0.466    0.008   56.656    0.000
   .ST31Q05           0.445    0.008   53.895    0.000
   .ST31Q06           0.476    0.009   54.350    0.000
   .ST31Q07           0.539    0.010   55.268    0.000
   .ST31Q08           0.578    0.010   58.796    0.000
   .ST32Q04           0.341    0.006   52.790    0.000
   .ST32Q06           0.200    0.005   37.223    0.000
   .ST32Q07           0.335    0.007   46.470    0.000
   .ST32Q09           0.291    0.006   50.354    0.000
   .math              0.028    0.001   51.228    0.000
    neg.efficacy      0.172    0.008   22.910    0.000
    neg.selfconcpt    0.362    0.010   34.841    0.000
fitmeasures(fit)
                 npar                  fmin                 chisq 
               35.000                 0.354              5581.954 
                   df                pvalue        baseline.chisq 
              101.000                 0.000             89388.597 
          baseline.df       baseline.pvalue                   cfi 
              120.000                 0.000                 0.939 
                  tli                  nnfi                   rfi 
                0.927                 0.927                 0.926 
                  nfi                  pnfi                   ifi 
                0.938                 0.789                 0.939 
                  rni                  logl     unrestricted.logl 
                0.939            -73960.921            -71169.944 
                  aic                   bic                ntotal 
           147991.842            148235.909              7890.000 
                  bic2                 rmsea        rmsea.ci.lower 
            148124.686                 0.083                 0.081 
        rmsea.ci.upper        rmsea.ci.level          rmsea.pvalue 
                 0.085                 0.900                 0.000 
        rmsea.close.h0 rmsea.notclose.pvalue     rmsea.notclose.h0 
                0.050                 0.996                 0.080 
                  rmr            rmr_nomean                  srmr 
                0.035                 0.035                 0.056 
         srmr_bentler   srmr_bentler_nomean                  crmr 
                0.056                 0.056                 0.060 
          crmr_nomean            srmr_mplus     srmr_mplus_nomean 
                0.060                 0.056                 0.056 
                cn_05                 cn_01                   gfi 
              178.333               194.606                 0.914 
                 agfi                  pgfi                   mfi 
                0.885                 0.679                 0.707 
                 ecvi 
                0.716 
# Fit the model using blavaan
bfit <- bsem(model.pisa, data = pisa.be.2003)
blavaan NOTE: Posterior predictives with missing data are currently very slow.
    Consider setting test="none".

Computing posterior predictives...
summary(bfit)
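If the posterior predictive computation is too slow (as the note warns for data with many missing patterns), it can be skipped via the `test` argument; a sketch, reusing the `model.pisa` and `pisa.be.2003` objects defined for the lavaan fit:

```r
# Skip the posterior predictive test to speed up estimation
# (at the cost of not obtaining the PPP statistic)
bfit <- bsem(model.pisa, data = pisa.be.2003, test = "none")
```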
blavaan (0.3-15) results of 1000 samples after 500 adapt/burnin iterations

  Number of observations                          8796

  Number of missing patterns                       119

  Statistic                                 MargLogLik         PPP
  Value                                     -79503.055       0.000

Latent Variables:
                     Estimate  Post.SD pi.lower pi.upper     Rhat    Prior       
  math =~                                                                        
    PV1MATH1            1.000                                  NA                
    PV1MATH2            1.118    0.005    1.108    1.128    1.000    normal(0,10)
    PV1MATH3            0.983    0.005    0.973    0.993    1.000    normal(0,10)
    PV1MATH4            1.003    0.005    0.993    1.014    1.000    normal(0,10)
  neg.efficacy =~                                                                
    ST31Q01             1.000                                  NA                
    ST31Q02             1.224    0.030    1.168    1.287    1.000    normal(0,10)
    ST31Q03             1.394    0.034    1.331    1.461    1.000    normal(0,10)
    ST31Q04             1.159    0.030    1.104    1.219    1.000    normal(0,10)
    ST31Q05             1.388    0.035    1.322    1.462    1.000    normal(0,10)
    ST31Q06             1.366    0.034    1.303    1.435    1.000    normal(0,10)
    ST31Q07             1.395    0.037    1.327    1.469    1.001    normal(0,10)
    ST31Q08             1.038    0.029    0.982    1.096    1.000    normal(0,10)
  neg.selfconcept =~                                                             
    ST32Q04             1.000                                  NA                
    ST32Q06             1.215    0.018    1.182     1.25    1.002    normal(0,10)
    ST32Q07             1.248    0.019    1.211    1.288    1.000    normal(0,10)
    ST32Q09             1.020    0.017    0.987    1.053    1.000    normal(0,10)

Regressions:
                   Estimate  Post.SD pi.lower pi.upper     Rhat    Prior       
  math ~                                                                       
    neg.selfconcpt    0.005    0.004   -0.004    0.013    0.999    normal(0,10)
    neg.efficacy     -0.251    0.008   -0.267   -0.236    0.999    normal(0,10)

Covariances:
                   Estimate  Post.SD pi.lower pi.upper     Rhat    Prior       
  neg.efficacy ~~                                                              
    neg.selfconcpt    0.113    0.004    0.104    0.121    1.000       beta(1,1)

Intercepts:
                   Estimate  Post.SD pi.lower pi.upper     Rhat    Prior       
   .PV1MATH1          1.068    0.002    1.063    1.072    1.001    normal(0,32)
   .PV1MATH2          1.079    0.002    1.074    1.084    1.001    normal(0,32)
   .PV1MATH3          1.058    0.002    1.054    1.062    1.001    normal(0,32)
   .PV1MATH4          1.068    0.002    1.063    1.072    1.000    normal(0,32)
   .ST31Q01           1.888    0.009     1.87    1.905    1.001    normal(0,32)
   .ST31Q02           1.888    0.009    1.871    1.905    1.000    normal(0,32)
   .ST31Q03           2.145    0.010    2.127    2.164    1.001    normal(0,32)
   .ST31Q04           2.041    0.009    2.023    2.058    1.000    normal(0,32)
   .ST31Q05           1.726    0.010    1.706    1.745    1.000    normal(0,32)
   .ST31Q06           2.169    0.010     2.15    2.188    1.000    normal(0,32)
   .ST31Q07           2.159    0.010    2.139    2.178    1.000    normal(0,32)
   .ST31Q08           2.398    0.009    2.378    2.416    1.001    normal(0,32)
   .ST32Q04           2.362    0.009    2.345     2.38    0.999    normal(0,32)
   .ST32Q06           2.526    0.009    2.508    2.543    0.999    normal(0,32)
   .ST32Q07           2.908    0.010    2.888    2.928    0.999    normal(0,32)
   .ST32Q09           2.900    0.009    2.883    2.917    0.999    normal(0,32)
   .math              0.000                                  NA                
    neg.efficacy      0.000                                  NA                
    neg.selfconcpt    0.000                                  NA                

Variances:
                   Estimate  Post.SD pi.lower pi.upper     Rhat    Prior       
   .PV1MATH1          0.007    0.000    0.006    0.007    1.000 gamma(1,.5)[sd]
   .PV1MATH2          0.001    0.000        0    0.001    1.001 gamma(1,.5)[sd]
   .PV1MATH3          0.004    0.000    0.004    0.004    1.000 gamma(1,.5)[sd]
   .PV1MATH4          0.004    0.000    0.004    0.005    1.001 gamma(1,.5)[sd]
   .ST31Q01           0.472    0.008    0.457    0.487    0.999 gamma(1,.5)[sd]
   .ST31Q02           0.431    0.008    0.416    0.446    1.000 gamma(1,.5)[sd]
   .ST31Q03           0.445    0.008     0.43    0.461    1.000 gamma(1,.5)[sd]
   .ST31Q04           0.469    0.008    0.454    0.485    1.000 gamma(1,.5)[sd]
   .ST31Q05           0.452    0.009    0.435    0.469    1.000 gamma(1,.5)[sd]
   .ST31Q06           0.479    0.008    0.462    0.496    1.000 gamma(1,.5)[sd]
   .ST31Q07           0.538    0.010    0.518    0.558    0.999 gamma(1,.5)[sd]
   .ST31Q08           0.581    0.010    0.563      0.6    1.000 gamma(1,.5)[sd]
   .ST32Q04           0.346    0.006    0.333    0.359    0.999 gamma(1,.5)[sd]
   .ST32Q06           0.202    0.005    0.191    0.213    1.000 gamma(1,.5)[sd]
   .ST32Q07           0.332    0.007    0.318    0.347    1.000 gamma(1,.5)[sd]
   .ST32Q09           0.294    0.006    0.283    0.305    1.001 gamma(1,.5)[sd]
   .math              0.030    0.001    0.029    0.031    1.000 gamma(1,.5)[sd]
    neg.efficacy      0.179    0.008    0.164    0.193    1.000 gamma(1,.5)[sd]
    neg.selfconcpt    0.360    0.010     0.34    0.381    1.002 gamma(1,.5)[sd]
fitmeasures(bfit)
      npar       logl        ppp        bic        dic      p_dic       waic 
    51.000 -79141.585      0.000 158746.349 158384.787     50.808 158388.655 
    p_waic    se_waic      looic      p_loo     se_loo margloglik 
    54.617    801.011 158388.792     54.685    801.013 -79503.055 
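Beyond the posterior predictive p-value (PPP) reported above, Garnier-Villarreal and Jorgensen (2020) propose Bayesian analogues of common approximate fit indices. These are not part of the output shown here, but a sketch of how they can be obtained with blavaan's `blavFitIndices()` function (incremental indices such as BCFI additionally require supplying a fitted baseline model):

```r
# Bayesian approximate fit indices (e.g., BRMSEA, BGammaHat, BMc)
bfi <- blavFitIndices(bfit)
summary(bfi)  # posterior summaries (means and credible intervals) of each index
```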

As we can see, the estimates from lavaan and blavaan are similar. In particular, the latent construct "math achievement" is estimated to be negatively predicted by (negative) self-efficacy; that is, math achievement and self-efficacy are estimated to be positively related, after accounting for (negative) self-concept.

Ferla, Johan, Martin Valcke, and Yonghong Cai. 2009. "Academic
Self-Efficacy and Academic Self-Concept: Reconsidering Structural
Relationships." Learning and Individual Differences 19 (4): 499–505.

Garnier-Villarreal, Mauricio, and Terrence D. Jorgensen. 2020.
"Adapting Fit Indices for Bayesian Structural Equation Modeling:
Comparison to Maximum Likelihood." Psychological Methods 25 (1): 46.

Hoofs, Huub, Rens van de Schoot, Nicole W. H. Jansen, and IJmert Kant.
2018. "Evaluating Model Fit in Bayesian Confirmatory Factor
Analysis with Large Samples: Simulation Study Introducing the
BRMSEA." Educational and Psychological Measurement 78 (4): 537–68.

Merkle, Edgar C., Ellen Fitzsimmons, James Uanhoro, and Ben Goodrich.
2020. "Efficient Bayesian Structural Equation Modeling in
Stan." Journal of Statistical Software.

Merkle, Edgar C., Daniel Furr, and Sophia Rabe-Hesketh. 2019.
"Bayesian Comparison of Latent Variable Models: Conditional Versus
Marginal Likelihoods." Psychometrika 84 (3): 802–29.

Merkle, Edgar C., and Yves Rosseel. 2015. "blavaan: Bayesian
Structural Equation Models via Parameter Expansion." Journal
of Statistical Software.
