R: GLMNET odd behavior when the model is rerun
Problem Description
I am trying to use the LASSO for variable selection and attempted an implementation in R using the glmnet package. This is the code I have written so far:
set.seed(1)
library(glmnet)

# Response: the row of ret.ff.zoo whose index matches the first date in beta.df
return = matrix(ret.ff.zoo[which(index(ret.ff.zoo) == beta.df$date[1]),])

# Predictors: the factor columns of beta.df for that same date, one column per factor
data = matrix(unlist(beta.df[which(beta.df$date == beta.df$date[1]),][,-1]), ncol = num.factors)
dimnames(data)[[2]] <- names(beta.df)[-1]

# Cross-validated LASSO fit and the coefficients at the selected lambda
model <- cv.glmnet(data, return, standardize = TRUE)
coef(model)
This is what I obtain the first time I run it:
> coef(model)
15 x 1 sparse Matrix of class "dgCMatrix"
1
(Intercept) 0.009159452
VAL .
EQ .
EFF .
SIZE 0.018479078
MOM .
FSCR .
MSCR .
SY .
URP .
UMP .
UNIF .
OIL .
DEI .
PROD .
BUT, this is what I obtain when I run the SAME code once more:
> coef(model)
15 x 1 sparse Matrix of class "dgCMatrix"
1
(Intercept) 0.008031915
VAL .
EQ .
EFF .
SIZE 0.021250778
MOM .
FSCR .
MSCR .
SY .
URP .
UMP .
UNIF .
OIL .
DEI .
PROD .
I am not sure why the model behaves this way. How would I be able to choose a final model if the coefficients change at every run? Does it use a different tuning parameter $\lambda$ at every run? I thought that cv.glmnet used model$lambda.1se by default?!
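As a side note, the penalty selected by cross-validation can be inspected directly on the cv.glmnet object; a minimal sketch, assuming model is the fit produced by the code above:

# Both selected lambdas are stored on the cv.glmnet object
model$lambda.min   # lambda with the smallest cross-validated error
model$lambda.1se   # largest lambda within one standard error of that minimum

# coef() on a cv.glmnet object uses s = "lambda.1se" by default,
# but the choice can also be made explicit:
coef(model, s = "lambda.1se")
coef(model, s = "lambda.min")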
I have just started learning about this package, and would appreciate any help I can get!

Thanks!
Recommended Answer
The model fit isn't deterministic: cv.glmnet assigns observations to cross-validation folds at random, so the selected lambda, and hence the reported coefficients, can differ between runs. Run set.seed(1) immediately before the model fit to produce deterministic results.
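A minimal sketch of two ways to make the fit reproducible, reusing the data and return objects from the question; the folds vector below is just an illustrative way to fix the fold assignment explicitly (cv.glmnet uses nfolds = 10 by default):

# Option 1: reset the seed immediately before each cv.glmnet call,
# so the random fold assignment is identical on every run
set.seed(1)
model <- cv.glmnet(data, return, standardize = TRUE)
coef(model)

# Option 2: fix the cross-validation folds explicitly via foldid,
# which makes the fit reproducible regardless of the seed state
set.seed(1)
folds <- sample(rep(1:10, length.out = nrow(data)))
model <- cv.glmnet(data, return, standardize = TRUE, foldid = folds)
coef(model)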