Print the decision path of a specific sample in a random forest classifier


Question

For a specific sample, how do you print the decision path of the random forest itself, rather than the paths of the individual trees within it?

import numpy as np
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=1000,
                           n_features=6,
                           n_informative=3,
                           n_classes=2,
                           random_state=0,
                           shuffle=False)

# Creating a DataFrame
df = pd.DataFrame({'Feature 1': X[:, 0],
                   'Feature 2': X[:, 1],
                   'Feature 3': X[:, 2],
                   'Feature 4': X[:, 3],
                   'Feature 5': X[:, 4],
                   'Feature 6': X[:, 5],
                   'Class': y})


y_train = df['Class']
X_train = df.drop('Class',axis = 1)

rf = RandomForestClassifier(n_estimators=10,
                            random_state=0)

rf.fit(X_train, y_train) 

decision_path for random forests was introduced in v0.18 (http://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html).

However, it outputs a sparse matrix, and I'm not sure how to make sense of it. Can anyone advise how best to print the decision path of this specific sample and then visualize it?

# Extracting the decision path for instance i = 12
i_data = X_train.iloc[12].values.reshape(1,-1)
d_path = rf.decision_path(i_data)

print(d_path)

Output:

(<1x1432 sparse matrix of type '<class 'numpy.int64'>' with 96 stored elements in Compressed Sparse Row format>, array([   0,  133,  282,  415,  588,  761,  910, 1041, 1182, 1309, 1432], dtype=int32))
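
For reference, this tuple can be unpacked directly: per the scikit-learn documentation, decision_path on a forest returns the node-indicator CSR matrix together with an n_nodes_ptr array whose consecutive entries delimit the columns belonging to each tree. A minimal sketch (using the variables from the code above):

indicator, n_nodes_ptr = d_path
for t in range(len(n_nodes_ptr) - 1):
    start, end = n_nodes_ptr[t], n_nodes_ptr[t + 1]
    # Nonzero columns in [start, end) are the nodes this sample visits
    # in tree t, numbered from that tree's root.
    visited = indicator[0, start:end].nonzero()[1]
    print("Tree %d: visited nodes %s" % (t, visited))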

Answer

I found this code in the scikit-learn documentation and modified it to fit your problem.

Since a RandomForestClassifier is a collection of DecisionTreeClassifier instances, we can iterate over the individual trees and retrieve the decision path of the sample in each one. Hope it helps:

import numpy as np

from sklearn.model_selection import train_test_split
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=1000,
                           n_features=6,
                           n_informative=3,
                           n_classes=2,
                           random_state=0,
                           shuffle=False)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

estimator = RandomForestClassifier(n_estimators=10,
                               random_state=0)
estimator.fit(X_train, y_train)

# The decision estimator has an attribute called tree_  which stores the entire
# tree structure and allows access to low level attributes. The binary tree
# tree_ is represented as a number of parallel arrays. The i-th element of each
# array holds information about the node `i`. Node 0 is the tree's root. NOTE:
# Some of the arrays only apply to either leaves or split nodes, resp. In this
# case the values of nodes of the other type are arbitrary!
#
# Among those arrays, we have:
#   - left_child, id of the left child of the node
#   - right_child, id of the right child of the node
#   - feature, feature used for splitting the node
#   - threshold, threshold value at the node
#

# Using those arrays, we can parse the tree structure:

#n_nodes = estimator.tree_.node_count
n_nodes_ = [t.tree_.node_count for t in estimator.estimators_]
children_left_ = [t.tree_.children_left for t in estimator.estimators_]
children_right_ = [t.tree_.children_right for t in estimator.estimators_]
feature_ = [t.tree_.feature for t in estimator.estimators_]
threshold_ = [t.tree_.threshold for t in estimator.estimators_]
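
# Quick sanity check (a sketch, not part of the original answer): print the
# node count and root split of the first tree using the arrays built above.
print("Tree 0 has %d nodes; its root tests feature %d against threshold %.3f"
      % (n_nodes_[0], feature_[0][0], threshold_[0][0]))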


def explore_tree(estimator, n_nodes, children_left, children_right, feature, threshold,
                 suffix='', print_tree=False, sample_id=0, feature_names=None):

    if not feature_names:
        feature_names = feature


    assert len(feature_names) == X.shape[1], "The feature names do not match the number of features."
    # The tree structure can be traversed to compute various properties such
    # as the depth of each node and whether or not it is a leaf.
    node_depth = np.zeros(shape=n_nodes, dtype=np.int64)
    is_leaves = np.zeros(shape=n_nodes, dtype=bool)

    stack = [(0, -1)]  # seed is the root node id and its parent depth
    while len(stack) > 0:
        node_id, parent_depth = stack.pop()
        node_depth[node_id] = parent_depth + 1

        # If we have a test node
        if (children_left[node_id] != children_right[node_id]):
            stack.append((children_left[node_id], parent_depth + 1))
            stack.append((children_right[node_id], parent_depth + 1))
        else:
            is_leaves[node_id] = True

    print("The binary tree structure has %s nodes"
          % n_nodes)
    if print_tree:
        print("Tree structure: \n")
        for i in range(n_nodes):
            if is_leaves[i]:
                print("%snode=%s leaf node." % (node_depth[i] * "\t", i))
            else:
                print("%snode=%s test node: go to node %s if X[:, %s] <= %s else to "
                      "node %s."
                      % (node_depth[i] * "\t",
                         i,
                         children_left[i],
                         feature[i],
                         threshold[i],
                         children_right[i],
                         ))
            print("\n")
        print()

    # First let's retrieve the decision path of each sample. The decision_path
    # method allows to retrieve the node indicator functions. A non zero element of
    # indicator matrix at the position (i, j) indicates that the sample i goes
    # through the node j.

    node_indicator = estimator.decision_path(X_test)

    # Similarly, we can also have the leaves ids reached by each sample.

    leave_id = estimator.apply(X_test)

    # Now, it's possible to get the tests that were used to predict a sample or
    # a group of samples. First, let's make it for the sample.

    #sample_id = 0
    node_index = node_indicator.indices[node_indicator.indptr[sample_id]:
                                        node_indicator.indptr[sample_id + 1]]

    print(X_test[sample_id,:])

    print('Rules used to predict sample %s: ' % sample_id)
    for node_id in node_index:
        # tabulation = " "*node_depth[node_id] #-> makes tabulation of each level of the tree
        tabulation = ""
        if leave_id[sample_id] == node_id:
            print("%s==> Predicted leaf index \n"%(tabulation))
            #continue

        if (X_test[sample_id, feature[node_id]] <= threshold[node_id]):
            threshold_sign = "<="
        else:
            threshold_sign = ">"

        print("%sdecision id node %s : (X_test[%s, '%s'] (= %s) %s %s)"
              % (tabulation,
                 node_id,
                 sample_id,
                 feature_names[feature[node_id]],
                 X_test[sample_id, feature[node_id]],
                 threshold_sign,
                 threshold[node_id]))
    print("%sPrediction for sample %d: %s"%(tabulation,
                                          sample_id,
                                          estimator.predict(X_test)[sample_id]))

    # For a group of samples, we have the following common node.
    sample_ids = [sample_id, 1]
    common_nodes = (node_indicator.toarray()[sample_ids].sum(axis=0) ==
                    len(sample_ids))

    common_node_id = np.arange(n_nodes)[common_nodes]

    print("\nThe following samples %s share the node %s in the tree"
          % (sample_ids, common_node_id))
    print("It is %s %% of all nodes." % (100 * len(common_node_id) / n_nodes,))

    for sample_id_ in sample_ids:
        print("Prediction for sample %d: %s"%(sample_id_,
                                          estimator.predict(X_test)[sample_id_]))

And to print the paths through the different trees of the random forest, you can simply iterate over the estimators like this:

for i, e in enumerate(estimator.estimators_):
    print("Tree %d\n" % i)
    explore_tree(e, n_nodes_[i], children_left_[i],
                 children_right_[i], feature_[i], threshold_[i],
                 suffix=i, sample_id=1,
                 feature_names=["Feature_%d" % f for f in range(X.shape[1])])
    print('\n' * 2)

This is the output for the first two trees in the RandomForestClassifier for sample_id = 1:

Tree 1

The binary tree structure has 115 nodes
[ 2.36609963  1.32658511 -0.08002818  0.88295736  2.24224824 -0.71469736]
Rules used to predict sample 1: 
decision id node 0 : (X_test[1, 'Feature_3'] (= 0.8829573603562209) > 0.7038955688476562)
decision id node 86 : (X_test[1, 'Feature_2'] (= -0.08002817952064323) > -1.4465678930282593)
decision id node 92 : (X_test[1, 'Feature_0'] (= 2.366099632530947) > 0.7020512223243713)
decision id node 102 : (X_test[1, 'Feature_5'] (= -0.7146973587899221) > -1.2842652797698975)
decision id node 106 : (X_test[1, 'Feature_2'] (= -0.08002817952064323) > -0.4031955599784851)
decision id node 110 : (X_test[1, 'Feature_0'] (= 2.366099632530947) > 0.717217206954956)
decision id node 112 : (X_test[1, 'Feature_4'] (= 2.2422482391211678) <= 3.0181679725646973)
==> Predicted leaf index
decision id node 113 : (X_test[1, 'Feature_4'] (= 2.2422482391211678) > -2.0)
Prediction for sample 1: 1.0

The following samples [1, 1] share the node [  0  86  92 102 106 110 112 113] in the tree
It is 6.956521739130435 % of all nodes.
Prediction for sample 1: 1.0
Prediction for sample 1: 1.0



Tree 2

The binary tree structure has 135 nodes
[ 2.36609963  1.32658511 -0.08002818  0.88295736  2.24224824 -0.71469736]
Rules used to predict sample 1: 
decision id node 0 : (X_test[1, 'Feature_3'] (= 0.8829573603562209) > 0.5484486818313599)
decision id node 88 : (X_test[1, 'Feature_2'] (= -0.08002817952064323) > -0.7239605188369751)
decision id node 102 : (X_test[1, 'Feature_5'] (= -0.7146973587899221) > -1.6143207550048828)
decision id node 110 : (X_test[1, 'Feature_0'] (= 2.366099632530947) > 2.3399271965026855)
decision id node 130 : (X_test[1, 'Feature_5'] (= -0.7146973587899221) <= -0.5680553913116455)
decision id node 131 : (X_test[1, 'Feature_0'] (= 2.366099632530947) <= 2.4545814990997314)
==> Predicted leaf index
decision id node 132 : (X_test[1, 'Feature_4'] (= 2.2422482391211678) > -2.0)
Prediction for sample 1: 0.0

The following samples [1, 1] share the node [  0  88 102 110 130 131 132] in the tree
It is 5.185185185185185 % of all nodes.
Prediction for sample 1: 0.0
Prediction for sample 1: 0.0
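
Note that the trees can disagree (above, tree 1 predicts 1.0 for sample 1 while tree 2 predicts 0.0); the forest resolves this by averaging the per-tree class probabilities. A quick way to see the combined result:

print(estimator.predict_proba(X_test[1:2]))  # class probabilities averaged over the 10 trees
print(estimator.predict(X_test[1:2]))        # final forest prediction for sample 1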

