Scaling predictors in lme4 glmer doesn't resolve eigenvalue warnings; neither does alternative optimization

Problem description

I am analysing data (included below) using lme4's glmer function in R. The model I am building consists of a Poisson-distributed response variable (obs), one random factor (area), one continuous offset (duration), five continuous fixed effects (can_perc, can_n, time, temp, cloud_cover) and one binomial fixed effect factor (burnt). Before fitting the model I checked for collinearity and removed any collinear variables.

The initial model is:

q1 = glmer(obs ~ can_perc + can_n  + time * temp + 
           cloud_cover + factor(burnt) + (1|area) + offset(dat$duration), 
           data=dat, family=poisson, na.action = na.fail)

(Note: I need to specify the na.action as 'na.fail' as I want to dredge() the model later and this is required for that.)
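For context, a minimal sketch of how that later dredging step would look, assuming the MuMIn package (dredge() refuses to run on a global model fitted without na.action = na.fail):

library(MuMIn)
model_set <- dredge(q1)        # fit and rank all fixed-effect subsets of the global model
subset(model_set, delta < 2)   # candidate models within 2 AICc units of the best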

Running the model gives the following warning:

"Hessian是数字上的奇数:参数不是唯一确定的"

"Hessian is numerically singular: parameters are not uniquely determined"

In similar variations of the model, I have also received the warning:

"In checkConv(attr(opt, "derivs"), opt$par, ctrl = control$checkConv, : Model is nearly unidentifiable: large eigenvalue ratio - Rescale variables?"

From my limited understanding of the advice here https://rdrr.io/cran/lme4/man/troubleshooting.html and elsewhere, both of these warnings reflect a similar issue, of the Hessian (inverse curvature matrix) having a large eigenvalue, indicating that (within numerical tolerances) the surface is completely flat in some direction. Based on the advice in the warnings and link, I rescaled all of the continuous predictor variables using scale(). I also scaled the offset variable (I tried both with and without scaling this one). The model with scaled predictor variables is here:

q2 = glmer(obs ~ s.can_perc + s.can_n  + s.time * s.temp + 
           s.cloud_cover + factor(burnt) + (1|area) +
           offset(dat$s.duration), 
           data=dat, family=poisson, na.action = na.fail)
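For reference, a minimal sketch of how the s.* predictors can be created with scale(), assuming the column names used in the formulas above:

# Centre and scale the continuous predictors; scale() returns a one-column
# matrix, so coerce back to a plain numeric vector (column names assumed)
cont_vars <- c("can_perc", "can_n", "time", "temp", "cloud_cover", "duration")
for (v in cont_vars) {
  dat[[paste0("s.", v)]] <- as.numeric(scale(dat[[v]]))
}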

However I have not yet escaped the eigenvalues! The scaled model gives two warnings:

无法评估比例梯度"
模型无法收敛:退化具有1个负特征值的Hessian"

"unable to evaluate scaled gradient"
"Model failed to converge: degenerate Hessian with 1 negative eigenvalues"

I have searched a lot online and could not find another case of, or a solution to, eigenvalue problems that persist once the predictors have been scaled, other than checking that the model hasn't been misspecified.

Based on the following pages/documentation: https://cran.r-project.org/web/packages/lme4/lme4.pdf

https://stats.stackexchange.com/questions/164457/r-glmer-warnings-model-fails-to-converge-model-is-nearly-unidentified

https://rdrr.io/cran/lme4/man/isSingular.html

https://stats.stackexchange.com/questions/242109/

and others,

I have:

  1. checked the model specifications and data for mistakes (none that I can see - have I missed something?)

  2. checked for singularity with is_singular(x, tol = 1e-05) (somehow this function call evolved from isSingular() to its current form?): all models give FALSE.

  3. checked the convergence measure with converge_ok(q2, tolerance = 0.001): all models give FALSE unless I substantially increase the tolerance; however, they do vary considerably in their convergence measure.

  4. tried different optimizers/model estimation methods as follows:

  • a) glmerControl(optimizer = "bobyqa") and glmerControl(optimizer = "Nelder_Mead")
  • b) glmerControl(optimizer = 'optimx', optCtrl = list(method = 'nlminb'))
  • c) the bobyqa, Nelder_Mead, optimx.nlminb, optimx.L-BFGS-B, nloptwrap.NLOPT_LN_NELDERMEAD, nloptwrap.NLOPT_LN_BOBYQA and nmkbw optimizers, using the all_fit() function from optimx.

Here is the code:

# singularity and convergence for first two models:
is_singular(s1, tol = 1e-05) # FALSE (a good thing?)
converge_ok(s1, tol = 1e-05) # FALSE (a bad thing?) 0.0259109730912352

is_singular(s2, tol = 1e-05) # FALSE (a good thing?)
converge_ok(s2, tol = 1e-05) # FALSE (a bad thing?) 0.0023434329028163
# I looked at singularity and converge measures for the others below, but omitted for brevity.

# Alternate optimisations for q1:
q1.bobyqa = glmer(obs ~ can_perc + can_n + time * temp + cloud_cover + factor(burnt) +
                  (1|area) + offset(dat$duration),
                  data = dat, family = poisson, na.action = na.fail,
                  glmerControl(optimizer = "bobyqa", optCtrl = list(maxfun = 2e5)))
# Warning 1: unable to evaluate scaled gradient
# Warning 2: Model failed to converge: degenerate  Hessian with 1 negative eigenvalues

q1.neldermead = glmer(obs ~ can_perc + can_n + time * temp + cloud_cover + factor(burnt) +
                      (1|area) + offset(dat$duration),
                      data = dat, family = poisson, na.action = na.fail,
                      glmerControl(optimizer = "Nelder_Mead", optCtrl = list(maxfun = 2e5)))
# Warning: unable to evaluate scaled gradient Hessian is numerically singular: parameters are not uniquely determined

q1.nlminb = glmer(obs ~ can_perc + can_n + time * temp + cloud_cover + factor(burnt) +
                  (1|area) + offset(dat$duration),
                  data = dat, family = poisson, na.action = na.fail,
                  glmerControl(optimizer = 'optimx', optCtrl = list(method = 'nlminb')))
# Warning: Parameters or bounds appear to have different scalings. This can cause poor performance in optimization. 
# It is important for derivative free methods like BOBYQA, UOBYQA, NEWUOA.
# convergence code 9999 from optimx
# Error in pwrssUpdate(pp, resp, tol = tolPwrss, GQmat = GQmat, compDev = compDev, :
#   (maxstephalfit) PIRLS step-halvings failed to reduce deviance in pwrssUpdate

all_fit(q1)

# Alternate optimisations for q2:
q2.bobyqa = glmer(obs ~ s.can_perc + s.can_n + s.time * s.temp + s.cloud_cover + factor(burnt) +
                  (1|area) + offset(dat$s.duration),
                  data = dat, family = poisson, na.action = na.fail,
                  glmerControl(optimizer = "bobyqa", optCtrl = list(maxfun = 2e5)))
# Warning 1: unable to evaluate scaled gradient
# Warning 2: Model failed to converge: degenerate  Hessian with 1 negative eigenvalues

q2.neldermead = glmer(obs ~ s.can_perc + s.can_n + s.time * s.temp + s.cloud_cover + factor(burnt) +
                      (1|area) + offset(dat$s.duration),
                      data = dat, family = poisson, na.action = na.fail,
                      glmerControl(optimizer = "Nelder_Mead", optCtrl = list(maxfun = 2e5)))
# Warning: unable to evaluate scaled gradient Hessian is numerically singular: parameters are not uniquely determined

q2.nlminb = glmer(obs ~ s.can_perc + s.can_n + s.time * s.temp + s.cloud_cover + factor(burnt) +
                  (1|area) + offset(dat$s.duration),
                  data = dat, family = poisson, na.action = na.fail,
                  control = glmerControl(optimizer = 'optimx', optCtrl = list(method = 'nlminb')))
# Warning: Model is nearly unidentifiable: large eigenvalue ratio - Rescale variables?

all_fit(q2)

Output of the above code for the unscaled model (q1):

is_singular(s1, tol = 1e-05) # FALSE (a good thing?)
[1] FALSE
converge_ok(s1, tol = 1e-05) # FALSE (a bad thing?) 0.0259109730912352
0.0259109730912352 
             FALSE 
is_singular(s2, tol = 1e-05) # FALSE (a good thing?)
[1] FALSE
alternate optimisations for original model:
q1.bobyqa = glmer(obs ~ can_perc + can_n  + time * temp + cloud_cover + factor(burnt) + (1|area) + offset(dat$duration), data=dat, family=poisson, na.action = na.fail, glmerControl(optimizer = "bobyqa", optCtrl = list(maxfun = 2e5)))
    unable to evaluate scaled gradientModel failed to converge: degenerate      Hessian with 1 negative eigenvalues
    q1.neldermead = glmer(obs ~ can_perc + can_n  + time * temp + cloud_cover + factor(burnt) + (1|area) + offset(dat$duration), data=dat, family=poisson, na.action = na.fail, glmerControl(optimizer ="Nelder_Mead", optCtrl = list(maxfun = 2e5)))
    unable to evaluate scaled gradient Hessian is numerically singular: parameters are not uniquely determined

all_fit(q1)
bobyqa. : unable to evaluate scaled gradientModel failed to converge:     degenerate  Hessian with 1 negative eigenvalues[OK]
Nelder_Mead. : unable to evaluate scaled gradient Hessian is numerically singular: parameters are not uniquely determined[OK]
optimx.nlminb : Parameters or bounds appear to have different scalings.
This can cause poor performance in optimization. 
It is important for derivative free methods like BOBYQA, UOBYQA,     NEWUOA.convergence code 9999 from optimxParameters or bounds appear to have different scalings.
This can cause poor performance in optimization. 
It is important for derivative free methods like BOBYQA, UOBYQA,     NEWUOA.convergence code 9999 from optimx[ERROR]
optimx.L-BFGS-B : Parameters or bounds appear to have different scalings.
This can cause poor performance in optimization. 
It is important for derivative free methods like BOBYQA, UOBYQA, NEWUOA.convergence code 9999 from optimxParameters or bounds appear to have different scalings.
This can cause poor performance in optimization. 
It is important for derivative free methods like BOBYQA, UOBYQA,     NEWUOA.convergence code 9999 from optimx[ERROR]
nloptwrap.NLOPT_LN_NELDERMEAD : [ERROR]
nloptwrap.NLOPT_LN_BOBYQA : [ERROR]
nmkbw. : [ERROR]

$`bobyqa.`
    Generalized linear mixed model fit by maximum likelihood (Laplace Approximation) ['glmerMod']
     Family: poisson  ( log )
    Formula: obs ~ can_perc + can_n + time * temp + cloud_cover + factor(burnt) +  (1 | area) + offset(dat$duration)
       Data: dat
      AIC       BIC    logLik  deviance  df.resid 
     311.0473  330.3356 -146.5237  293.0473        54 
    Random effects:
     Groups Name        Std.Dev.
     area   (Intercept) 1.992   
    Number of obs: 63, groups:  area, 8
    Fixed Effects:
(Intercept)  can_perc  can_n   time     temp  
-67.4998    -1.3180    0.0239    4.8025    1.7793  
cloud_cover  factor(burnt)unburnt             time:temp  
             -0.3813               18.5676               -0.1748  
convergence code 0; 2 optimizer warnings; 0 lme4 warnings 

$Nelder_Mead.
Generalized linear mixed model fit by maximum likelihood (Laplace Approximation) ['glmerMod']
     Family: poisson  ( log )
    Formula: obs ~ can_perc + can_n + time * temp + cloud_cover + factor(burnt) +      (1 | area) + offset(dat$duration)
       Data: dat
      AIC       BIC    logLik  deviance  df.resid 
     311.0473  330.3356 -146.5237  293.0473        54 
    Random effects:
     Groups Name        Std.Dev.
     area   (Intercept) 1.992   
    Number of obs: 63, groups:  area, 8
    Fixed Effects:
         (Intercept)              can_perc                 can_n                  time                  temp  
           -67.48057              -1.31791               0.02389               4.80463               1.78012  
         cloud_cover  factor(burnt)unburnt             time:temp  
            -0.38118              18.52637              -0.17483  
convergence code 0; 2 optimizer warnings; 0 lme4 warnings 

$optimx.nlminb
<std::runtime_error in pwrssUpdate(pp, resp, tol = tolPwrss, GQmat = GQmat,    compDev = compDev,     grpFac = fac, maxit = maxit, verbose = verbose): (maxstephalfit) PIRLS step-halvings failed to reduce deviance in pwrssUpdate>

$`optimx.L-BFGS-B`
<std::runtime_error in pwrssUpdate(pp, resp, tol = tolPwrss, GQmat = GQmat, compDev = compDev,     grpFac = fac, maxit = maxit, verbose = verbose): (maxstephalfit) PIRLS step-halvings failed to reduce deviance in pwrssUpdate>

$nloptwrap.NLOPT_LN_NELDERMEAD
<simpleError in pwrssUpdate(pp, resp, tol = tolPwrss, GQmat = GQmat, compDev = compDev,     grpFac = fac, maxit = maxit, verbose = verbose): Downdated VtV is not positive definite>

$nloptwrap.NLOPT_LN_BOBYQA
<simpleError in pwrssUpdate(pp, resp, tol = tolPwrss, GQmat = GQmat, compDev = compDev,     grpFac = fac, maxit = maxit, verbose = verbose): Downdated VtV is not positive definite>

$nmkbw.
<std::runtime_error in pwrssUpdate(pp, resp, tol = tolPwrss, GQmat = GQmat, compDev = compDev,     grpFac = fac, maxit = maxit, verbose = verbose): (maxstephalfit) PIRLS step-halvings failed to reduce deviance in pwrssUpdate>

Output of the above code for the scaled model (q2):

alternate optimisations for q2:
q2.bobyqa = glmer(obs ~ s.can_perc + s.can_n  + s.time * s.temp + s.cloud_cover + factor(burnt) + (1|area) + offset(dat$s.duration), data=dat, family=poisson, na.action = na.fail, glmerControl(optimizer = "bobyqa", optCtrl = list(maxfun = 2e5)))
Model is nearly unidentifiable: large eigenvalue ratio - Rescale variables?
q2.neldermead = glmer(obs ~ s.can_perc + s.can_n  + s.time * s.temp + s.cloud_cover + factor(burnt) + (1|area) + offset(dat$s.duration), data=dat, family=poisson, na.action = na.fail, glmerControl(optimizer ="Nelder_Mead", optCtrl = list(maxfun = 2e5)))
unable to evaluate scaled gradientModel failed to converge: degenerate  Hessian with 1 negative eigenvalues

all_fit(q2)
bobyqa. : Model is nearly unidentifiable: large eigenvalue ratio
 - Rescale variables?[OK]
Nelder_Mead. : unable to evaluate scaled gradientModel failed to converge: degenerate  Hessian with 1 negative eigenvalues[OK]
optimx.nlminb : Model is nearly unidentifiable: large eigenvalue ratio
 - Rescale variables?[OK]
optimx.L-BFGS-B : unable to evaluate scaled gradientModel failed to converge: degenerate  Hessian with 1 negative eigenvalues[OK]
nloptwrap.NLOPT_LN_NELDERMEAD : [ERROR]
nloptwrap.NLOPT_LN_BOBYQA : [ERROR]
nmkbw. : [ERROR]
$`bobyqa.`
Generalized linear mixed model fit by maximum likelihood (Laplace Approximation) ['glmerMod']
 Family: poisson  ( log )
Formula: n_shreiberi ~ s.can_perc + s.can_n + s.time * s.temp + s.cloud_cover +  
    factor(burnt) + (1 | area) + offset(dat$s.duration)
   Data: dat
      AIC       BIC    logLik  deviance  df.resid 
 316.8412  336.1294 -149.4206  298.8412        54 
Random effects:
 Groups Name        Std.Dev.
 area   (Intercept) 2.523   
Number of obs: 63, groups:  area, 8
Fixed Effects:
(Intercept)    s.can_perc    s.can_n    s.time    s.temp  
-18.19816    -0.22152    0.45839    0.05239    -0.24983  
       s.cloud_cover  factor(burnt)unburnt         s.time:s.temp  
            -0.19691              17.92390              -0.13948  
convergence code 0; 1 optimizer warnings; 0 lme4 warnings 

$Nelder_Mead.
Generalized linear mixed model fit by maximum likelihood (Laplace Approximation) ['glmerMod']
 Family: poisson  ( log )
Formula: n_shreiberi ~ s.can_perc + s.can_n + s.time * s.temp + s.cloud_cover +  
    factor(burnt) + (1 | area) + offset(dat$s.duration)
   Data: dat
      AIC       BIC    logLik  deviance  df.resid 
 316.8408  336.1290 -149.4204  298.8408        54 
Random effects:
 Groups Name        Std.Dev.
 area   (Intercept) 2.524   
Number of obs: 63, groups:  area, 8
Fixed Effects:
         (Intercept)            s.can_perc               s.can_n                s.time                s.temp  
           -19.29632              -0.22153               0.45840               0.05241              -0.24990  
       s.cloud_cover  factor(burnt)unburnt         s.time:s.temp  
            -0.19692              19.02091              -0.13949  
convergence code 0; 2 optimizer warnings; 0 lme4 warnings 

$optimx.nlminb
Generalized linear mixed model fit by maximum likelihood (Laplace Approximation) ['glmerMod']
 Family: poisson  ( log )
Formula: n_shreiberi ~ s.can_perc + s.can_n + s.time * s.temp + s.cloud_cover +  
    factor(burnt) + (1 | area) + offset(dat$s.duration)
   Data: dat
      AIC       BIC    logLik  deviance  df.resid 
 316.8412  336.1294 -149.4206  298.8412        54 
Random effects:
 Groups Name        Std.Dev.
 area   (Intercept) 2.523   
Number of obs: 63, groups:  area, 8
Fixed Effects:
         (Intercept)            s.can_perc               s.can_n                s.time                s.temp  
           -18.23626              -0.22152               0.45839               0.05239              -0.24983  
       s.cloud_cover  factor(burnt)unburnt         s.time:s.temp  
            -0.19691              17.96199              -0.13948  
convergence code 0; 1 optimizer warnings; 0 lme4 warnings 

$`optimx.L-BFGS-B`
Generalized linear mixed model fit by maximum likelihood (Laplace Approximation) ['glmerMod']
 Family: poisson  ( log )
Formula: n_shreiberi ~ s.can_perc + s.can_n + s.time * s.temp + s.cloud_cover +  
    factor(burnt) + (1 | area) + offset(dat$s.duration)
   Data: dat
      AIC       BIC    logLik  deviance  df.resid 
 316.8412  336.1294 -149.4206  298.8412        54 
Random effects:
 Groups Name        Std.Dev.
 area   (Intercept) 2.524   
Number of obs: 63, groups:  area, 8
Fixed Effects:
         (Intercept)            s.can_perc               s.can_n                s.time                s.temp  
           -18.23581              -0.22155               0.45841               0.05242              -0.24997  
       s.cloud_cover  factor(burnt)unburnt         s.time:s.temp  
            -0.19694              17.96246              -0.13943  
convergence code 0; 2 optimizer warnings; 0 lme4 warnings 

$nloptwrap.NLOPT_LN_NELDERMEAD
<simpleError in pwrssUpdate(pp, resp, tol = tolPwrss, GQmat = GQmat, compDev = compDev,     grpFac = fac, maxit = maxit, verbose = verbose): Downdated VtV is not positive definite>

$nloptwrap.NLOPT_LN_BOBYQA
<simpleError in pwrssUpdate(pp, resp, tol = tolPwrss, GQmat = GQmat, compDev = compDev,     grpFac = fac, maxit = maxit, verbose = verbose): Downdated VtV is not positive definite>

$nmkbw.
<simpleError in pwrssUpdate(pp, resp, tol = tolPwrss, GQmat = GQmat, compDev = compDev,     grpFac = fac, maxit = maxit, verbose = verbose): Downdated VtV is not positive definite>

Data:

The dataset is available at the following link: https://www.dropbox.com/s/ud50uatztjq4bh9/20181217%20Surveys%20simplified%20data%20for%20stackX.xlsx?dl=0
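A minimal sketch of loading the linked file, assuming the readxl package and that the variables sit in the first sheet:

library(readxl)
dat <- read_excel("20181217 Surveys simplified data for stackX.xlsx")  # first sheet by default
str(dat)                                                               # check variable types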

It looks to me that none of these alternative methods of optimisation have succeeded either; in fact, some of them seem to have raised other warnings/errors which would take me down another rabbit hole.

Can anyone advise how I could progress with fitting these models? It is not my intent for these to be the final models, but rather to dredge them and then select the optimal/top models from the alternative subset models.

Answer

tl;dr This looks like a case of complete separation; you have no positive outcomes at all in your "burned" condition. You don't necessarily need to worry about this - the AIC comparisons should still be reasonably robust - but you might want to understand what's going on before you proceed. This problem (and its remedies) is discussed in a relevant section of the GLMM FAQ (and there are a variety of relevant questions/answers on CrossValidated).
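One quick check on the data itself (a sketch, using the column names from the question) would be:

# Tabulate zero vs. non-zero counts by burn condition; complete separation shows
# up as no non-zero observations in one of the rows
with(dat, table(burnt, obs > 0))
with(dat, tapply(obs, burnt, sum))   # total counts per condition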

How do I know? Here are the coefficients:

  (Intercept)       s.can_perc               s.can_n                s.time                s.temp  
   -19.29632          -0.22153               0.45840               0.05241              -0.24990  
       s.cloud_cover  factor(burnt)unburnt         s.time:s.temp  
            -0.19692              19.02091              -0.13949  

Any time you have coefficients in a (binomial or Poisson) GLM that are larger (in absolute value) than 8-10, you have to worry (unless you are looking at the coefficient of a numerical covariate that's measured in very large units, e.g. if you're looking at the amount of carbon in your backyard in units of gigatonnes). This means a one-unit change in the predictor variable causes a (say) 10-unit change in the log-odds (for a binomial/logit-link model), e.g. from a probability of 0.006 (plogis(-5)) to 0.994 (plogis(5)). In your case, the intercept is -19.29, so at zero values of all of the predictors in the burned condition you get a probability of 4.2e-9. The other huge coefficient is for unburnt (19.02), so at zero values of all of the predictors in the unburned (unburnt?) condition you get plogis(-19.29+19.02) = 0.43.
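Spelling out the arithmetic in that paragraph (the plogis() calls simply reproduce the probabilities quoted above):

plogis(-5)              # ~ 0.006
plogis(5)               # ~ 0.994
plogis(-19.29)          # ~ 4.2e-09: "burnt" level with all predictors at zero
plogis(-19.29 + 19.02)  # ~ 0.43:   "unburnt" level with all predictors at zero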

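If the separation itself needs to be tamed rather than just tolerated for AIC comparisons, one remedy discussed in the GLMM FAQ is to put a weak prior on the fixed effects. A hedged sketch, assuming the blme package's bglmer() wrapper and its fixef.prior argument (the prior width here is illustrative, not a recommendation):

library(blme)
# bglmer() is a drop-in wrapper around glmer(); diag(9, 8) gives each of the
# 8 fixed-effect coefficients (assumed from the scaled formula) an N(0, sd = 3) prior
q2.blme <- bglmer(obs ~ s.can_perc + s.can_n + s.time * s.temp + s.cloud_cover +
                    factor(burnt) + (1|area) + offset(dat$s.duration),
                  data = dat, family = poisson,
                  fixef.prior = normal(cov = diag(9, 8)))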