pd.get_dummies() slow on large levels


Problem description

I'm unsure if this is already the fastest possible method, or if I'm doing this inefficiently.

I want to one-hot encode a particular categorical column which has 27k+ possible levels. The column takes different values in the 2 datasets, so I combined the levels first before calling get_dummies():

import pandas as pd

def hot_encode_column_in_both_datasets(column_name, df, df2, sparse=True):
    # Use the union of the levels seen in either dataframe so that both
    # encoded frames end up with the same set of dummy columns.
    col1b = set(df2[column_name].unique())
    col1a = set(df[column_name].unique())
    combined_cats = list(col1a.union(col1b))
    df[column_name] = df[column_name].astype('category', categories=combined_cats)
    df2[column_name] = df2[column_name].astype('category', categories=combined_cats)

    df = pd.get_dummies(df, columns=[column_name], sparse=sparse)
    df2 = pd.get_dummies(df2, columns=[column_name], sparse=sparse)
    # get_dummies(columns=...) already drops the source column, so these
    # deletes are only a safety net.
    try:
        del df[column_name]
        del df2[column_name]
    except KeyError:
        pass
    return df, df2
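
A quick compatibility note (mine, not the asker's): the `astype('category', categories=...)` signature used above was deprecated in pandas 0.21 and later removed. On a recent pandas, the same intent is expressed with `CategoricalDtype`, roughly as follows:

from pandas.api.types import CategoricalDtype

# Same effect as the astype calls above, written for modern pandas:
# both frames get an identical category set, so get_dummies produces
# the same dummy columns for each.
cat_type = CategoricalDtype(categories=combined_cats)
df[column_name] = df[column_name].astype(cat_type)
df2[column_name] = df2[column_name].astype(cat_type)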

However, it's been running for more than 2 hours and it's still stuck on the encoding.

Could I be doing something wrong here? Or is it just the nature of running it on large datasets?

Df has 6.8M rows and 27 columns, and Df2 has 19,990 rows and 27 columns, before one-hot encoding the column I wanted.
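
For a rough sense of scale (my arithmetic, not from the question): a dense one-hot encoding of the large frame would be 6.8e6 rows × 27,000 dummy columns ≈ 1.8e11 cells, i.e. around 180 GB even at one byte per cell, so a sparse representation is not optional here.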

Advice appreciated, thank you! :)

Answer

I reviewed the get_dummies source code briefly, and I think it may not be taking full advantage of the sparsity for your use case. The following approach may be faster, but I did not attempt to scale it all the way up to the 19M records you have:

import numpy as np
import pandas as pd
import scipy.sparse as ssp

np.random.seed(1)
N = 10000

dfa = pd.DataFrame.from_dict({
    'col1': np.random.randint(0, 27000, N)
    , 'col2b': np.random.choice([1, 2, 3], N)
    , 'target': np.random.choice([1, 2, 3], N)
    })

# construct an array of the unique values of the column to be encoded
vals = np.array(dfa.col1.unique())
# extract an array of values to be encoded from the dataframe
col1 = dfa.col1.values
# construct a sparse matrix of the appropriate size and an appropriate,
# memory-efficient dtype
spmtx = ssp.dok_matrix((N, len(vals)), dtype=np.uint8)
# do the encoding. NB: This is only vectorized in one of the two dimensions.
# Finding a way to vectorize the second dimension may yield a large speed up
for idx, val in enumerate(vals):
    spmtx[np.argwhere(col1 == val), idx] = 1

# Construct a SparseDataFrame from the sparse matrix and apply the index
# from the original dataframe and column names.
dfnew = pd.SparseDataFrame(spmtx, index=dfa.index,
                           columns=['col1_' + str(el) for el in vals])
dfnew.fillna(0, inplace=True)
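
One caveat from a later pandas vantage point (my note, not the answer's): pd.SparseDataFrame was removed in pandas 1.0. A minimal sketch of the modern equivalent wraps the scipy matrix with the sparse accessor; no fillna(0) step is needed because the fill value is already 0:

# Sketch assuming pandas >= 1.0, where pd.SparseDataFrame no longer exists.
dfnew = pd.DataFrame.sparse.from_spmatrix(
    spmtx.tocsr(),
    index=dfa.index,
    columns=['col1_' + str(el) for el in vals],
)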

Update

Borrowing insights from other answers here and here, I was able to vectorize the solution in both dimensions. In my limited testing, I noted that constructing the SparseDataFrame seems to increase the execution time several fold. So, if you don't need to return a DataFrame-like object, you can save a lot of time. This solution also handles the case where you need to encode 2+ DataFrames into 2-d arrays with equal numbers of columns.

import numpy as np
import pandas as pd
import scipy.sparse as ssp

np.random.seed(1)
N1 = 10000
N2 = 100000

dfa = pd.DataFrame.from_dict({
    'col1': np.random.randint(0, 27000, N1)
    , 'col2a': np.random.choice([1, 2, 3], N1)
    , 'target': np.random.choice([1, 2, 3], N1)
    })

dfb = pd.DataFrame.from_dict({
    'col1': np.random.randint(0, 27000, N2)
    , 'col2b': np.random.choice(['foo', 'bar', 'baz'], N2)
    , 'target': np.random.choice([1, 2, 3], N2)
    })

# construct an array of the unique values of the column to be encoded
# taking the union of the values from both dataframes.
valsa = set(dfa.col1.unique())
valsb = set(dfb.col1.unique())
vals = np.array(list(valsa.union(valsb)), dtype=np.uint16)


def sparse_ohe(df, col, vals):
    """One-hot encoder using a sparse ndarray."""
    colaray = df[col].values
    # construct a sparse matrix of the appropriate size and an appropriate,
    # memory-efficient dtype
    spmtx = ssp.dok_matrix((df.shape[0], vals.shape[0]), dtype=np.uint8)
    # do the encoding. NB: broadcasting the comparison materializes a dense
    # (n_rows, n_vals) boolean mask, so this is vectorized in both
    # dimensions but memory-hungry for very wide vals arrays.
    spmtx[np.where(colaray.reshape(-1, 1) == vals.reshape(1, -1))] = 1

    # Construct a SparseDataFrame from the sparse matrix
    dfnew = pd.SparseDataFrame(spmtx, dtype=np.uint8, index=df.index,
                               columns=[col + '_' + str(el) for el in vals])
    dfnew.fillna(0, inplace=True)
    return dfnew

dfanew = sparse_ohe(dfa, 'col1', vals)
dfbnew = sparse_ohe(dfb, 'col1', vals)
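
Two small additions of mine, not from the original answer. First, a sanity check that the two encoded frames really do share one column layout, which is what lets you fit a model on one and score the other. Second, following the answer's own remark that constructing the SparseDataFrame dominates the runtime, a matrix-only variant of `sparse_ohe` (a sketch under the same assumptions) that returns a scipy CSR matrix directly:

# Both frames were encoded against the same vals array, so their
# columns line up exactly.
assert list(dfanew.columns) == list(dfbnew.columns)

def sparse_ohe_matrix(df, col, vals):
    """Matrix-only variant of sparse_ohe: skip the SparseDataFrame."""
    colaray = df[col].values
    spmtx = ssp.dok_matrix((df.shape[0], vals.shape[0]), dtype=np.uint8)
    spmtx[np.where(colaray.reshape(-1, 1) == vals.reshape(1, -1))] = 1
    return spmtx.tocsr()

Xa = sparse_ohe_matrix(dfa, 'col1', vals)  # scipy.sparse.csr_matrix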
