Decision Tree Sklearn - Depth of Tree and Accuracy
Question
I am applying a decision tree to a data set using sklearn.
In sklearn there is a parameter to select the depth of the tree - dtree = DecisionTreeClassifier(max_depth=10).
My question is how the max_depth parameter helps the model. How does a high/low max_depth help in predicting the test data more accurately?
max_depth is what the name suggests: the maximum depth that you allow the tree to grow to. The deeper you allow it to grow, the more complex your model becomes.
For training error, it is easy to see what will happen. If you increase max_depth, the training error will always go down (or at least not go up).
For testing error, it is less obvious. If you set max_depth too high, the decision tree might simply overfit the training data without capturing the useful patterns we want; this will cause the testing error to increase. But if you set it too low, that is not good either: you might be giving the decision tree too little flexibility to capture the patterns and interactions in the training data. This will also cause the testing error to increase.
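A minimal sketch of this trade-off, using a synthetic noisy dataset (make_classification and the specific depths here are illustrative, not from the question):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic data with some label noise (flip_y), so a deep tree can overfit.
X, y = make_classification(n_samples=2000, n_features=20, flip_y=0.1,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for depth in (1, 3, 5, 10, 20):
    tree = DecisionTreeClassifier(max_depth=depth, random_state=0)
    tree.fit(X_train, y_train)
    print(depth,
          round(tree.score(X_train, y_train), 3),   # training accuracy
          round(tree.score(X_test, y_test), 3))     # test accuracy
```

Typically the training accuracy climbs toward 1.0 as the depth grows, while the test accuracy peaks at some intermediate depth and then degrades as the tree starts memorizing the noise.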
There is a sweet spot between the extremes of too high and too low. Usually, the modeller would treat max_depth as a hyperparameter and use some sort of grid/random search with cross-validation to find a good value for max_depth.
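Such a search can be sketched with GridSearchCV (the dataset and the candidate depths below are illustrative assumptions):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, flip_y=0.1,
                           random_state=0)

search = GridSearchCV(
    DecisionTreeClassifier(random_state=0),
    param_grid={"max_depth": [2, 4, 6, 8, 10, None]},  # None = grow until pure
    cv=5,                  # 5-fold cross-validation
    scoring="accuracy",
)
search.fit(X, y)

print(search.best_params_)            # depth with the best mean CV accuracy
print(round(search.best_score_, 3))
```

After fitting, `search.best_estimator_` is a tree refit on the whole dataset with the winning depth, ready to use for prediction.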