Having all my predictions inclined to one side for binary classification


Problem Description

I was training a model that contains 8 features that allow us to predict the probability of a room being sold.

  • Region: The region the room belongs to (an integer, taking a value between 1 and 10)
  • Date: The date of stay (an integer between 1 and 365; here we consider only one-day requests)
  • Weekday: Day of week (an integer between 1 and 7)
  • Apartment: Whether the room is a whole apartment (1) or just a room (0)
  • #beds: The number of beds in the room (an integer between 1 and 4)
  • Review: Average review of the seller (a continuous variable between 1 and 5)
  • Pic Quality: Quality of the picture of the room (a continuous variable between 0 and 1)
  • Price: The historic posted price of the room (a continuous variable)
  • Accept: Whether this post gets accepted (someone took it, 1) or not (0) in the end

Column Accept is the "y". Hence, this is a binary classification problem.

  1. I have applied OneHotEncoder to the categorical features.
  2. I have normalized the data.
  3. I have tuned the following RandomForest parameters (a sketch of such a sweep is shown below):

  • max_depth: peaks at 16
  • n_estimators: peaks at 300
  • min_samples_leaf: peaks at 2
  • max_features: has no effect on the AUC

The AUC peaked at 0.7889. What else can I do to increase it?
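For reference, a sweep over these parameters can be written with scikit-learn's GridSearchCV; the grid values below are only illustrative, and X_train / y_train are the ones built in the code further down:

      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import GridSearchCV

      # Illustrative grid around the values mentioned above
      param_grid = {
          'max_depth': [8, 16, 32],             # peaked at 16
          'n_estimators': [100, 300, 500],      # peaked at 300
          'min_samples_leaf': [1, 2, 4],        # peaked at 2
          'max_features': ['sqrt', 0.5, None],  # no visible effect on the AUC
      }
      search = GridSearchCV(RandomForestClassifier(random_state=0, n_jobs=-1),
                            param_grid, scoring='roc_auc', cv=5)
      search.fit(X_train, y_train)
      print(search.best_params_, search.best_score_)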

Here is my code:

      import pandas as pd
      import numpy as np
      from sklearn.model_selection import cross_val_score
      from sklearn.preprocessing import OneHotEncoder
      from sklearn.pipeline import make_pipeline
      from sklearn.compose import make_column_transformer
      from sklearn.model_selection import train_test_split
      df_train = pd.read_csv('case2_training.csv')
      
      # Exclude ID since it is not a feature
      X, y = df_train.iloc[:, 1:-1], df_train.iloc[:, -1]
      y = y.astype(np.float32)
      
      # Split the data
      X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.05,shuffle=False)
      
      ohe = OneHotEncoder(sparse = False)
      column_trans = make_column_transformer(
      (OneHotEncoder(),['Region','Weekday','Apartment']),remainder='passthrough')
      X_train = column_trans.fit_transform(X_train)
      X_test = column_trans.fit_transform(X_test)
      
      # Normalization
      from sklearn.preprocessing import MaxAbsScaler
      mabsc = MaxAbsScaler()
      
      X_train = mabsc.fit_transform(X_train)
      X_test = mabsc.transform(X_test)
      
      X_train = X_train.astype(np.float32)
      X_test = X_test.astype(np.float32)
      
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.metrics import roc_auc_score
      
      # max_features is left at its default here; sweeping it had no effect on the AUC
      RF = RandomForestClassifier(min_samples_leaf=2, random_state=0, n_estimators=300,
                                  max_depth=16, n_jobs=-1, oob_score=True)
      print("CV AUC:", cross_val_score(RF, X_train, y_train, cv=5, scoring='roc_auc').mean())
      RF.fit(X_train, y_train)
      yhat = RF.predict_proba(X_test)
      
      print("AUC:",roc_auc_score(y_test, yhat[:,-1]))
      
      # Run the prediction on the given test set.
      testset = pd.read_csv('case2_testing.csv')
      testset = testset.iloc[:, 1:] # exclude the 'ID' column
      testset = column_trans.fit_transform(testset)
      testset = mabsc.transform(testset)
      
      
      yhat_2 = RF.predict_proba(testset)
      final_prediction = yhat_2[:,-1]
      

However, all the probabilities from `final_prediction` are below 0.45; basically, the model believes that all the samples are 0. Can anyone help?

Answer

You are using column_trans.fit_transform on the test set, which completely overwrites the features that were fitted during training. Basically, the data is now in a format your trained model doesn't understand.

Once fitted on the training set, simply use column_trans.transform afterwards.
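A minimal sketch of the corrected flow, reusing the variable names from the question (the same rule applies to the validation split and to the final test file):

      # Fit the encoders/scaler on the training data only
      column_trans = make_column_transformer(
          (OneHotEncoder(), ['Region', 'Weekday', 'Apartment']), remainder='passthrough')
      X_train = column_trans.fit_transform(X_train)

      mabsc = MaxAbsScaler()
      X_train = mabsc.fit_transform(X_train)

      # Everything else is only transformed with the already-fitted objects
      X_test = column_trans.transform(X_test)
      X_test = mabsc.transform(X_test)

      testset = column_trans.transform(testset)
      testset = mabsc.transform(testset)

Wrapping the transformers and the classifier in a single make_pipeline(column_trans, mabsc, RF) makes this mistake harder to make, since a pipeline fits the transformers only when fit is called and only transforms them during predict/predict_proba.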
