Speed up to_sql() when writing a Pandas DataFrame to an Oracle database using SQLAlchemy and cx_Oracle

Problem Description

Using the pandas DataFrame to_sql method, I can write a small number of rows to a table in an Oracle database pretty easily:

from sqlalchemy import create_engine
import cx_Oracle

dsn_tns = ("(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=<host>)(PORT=1521))"
           "(CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=<servicename>)))")
pwd = input('Please type in password:')
engine = create_engine('oracle+cx_oracle://myusername:' + pwd + '@%s' % dsn_tns)
df.to_sql('test_table', engine.connect(), if_exists='replace')

But with any regular-sized dataframe (mine has 60k rows, which is not that big), the code becomes unusable because it never finished in the time I was willing to wait (definitely more than 10 minutes). I googled and searched quite a few times, and the closest solution was the answer given by ansonw to this question: https://stackoverflow.com/questions/31997859/bulk-insert-a-pandas-dataframe-using-sqlalchemy. But that one was about MySQL, not Oracle. As Ziggy Eunicien pointed out, it did not work for Oracle. Any ideas?

EDIT

Here's a sample of the rows in the dataframe:

id          name            premium     created_date    init_p  term_number uprate  value   score   group   action_reason
160442353   LDP: Review     1295.619617 2014-01-20  1130.75     1           7       -42 236.328243  6       pass
164623435   TRU: Referral   453.224880  2014-05-20  0.00        11          NaN     -55 38.783290   1       suppress

And here are the dtypes of the df:

id               int64
name             object
premium          float64
created_date     object
init_p           float64
term_number      float64
uprate           float64
value            float64
score            float64
group            int64
action_reason    object

Recommended Answer

By default, Pandas + SQLAlchemy saves all object (string) columns as CLOB in an Oracle DB, which makes insertion extremely slow.

Here are some tests:

import pandas as pd
import cx_Oracle
from sqlalchemy import types, create_engine

#######################################################
### DB connection strings config
#######################################################
tns = """
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = my-db-scan)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = my_service_name)
    )
  )
"""

usr = "test"
pwd = "my_oracle_password"

engine = create_engine('oracle+cx_oracle://%s:%s@%s' % (usr, pwd, tns))

# sample DF [shape: (2000, 11)]
# I took your 2-row DF and replicated it: df = pd.concat([df] * 10**3, ignore_index=True)
df = pd.read_csv('/path/to/file.csv')

DF info:

In [61]: df.shape
Out[61]: (2000, 11)

In [62]: df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 2000 entries, 0 to 1999
Data columns (total 11 columns):
id               2000 non-null int64
name             2000 non-null object
premium          2000 non-null float64
created_date     2000 non-null datetime64[ns]
init_p           2000 non-null float64
term_number      2000 non-null int64
uprate           1000 non-null float64
value            2000 non-null int64
score            2000 non-null float64
group            2000 non-null int64
action_reason    2000 non-null object
dtypes: datetime64[ns](1), float64(4), int64(4), object(2)
memory usage: 172.0+ KB
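
Note that in the original sample created_date comes in as an object column; in my test DF it is parsed to datetime64[ns], which is what lets SQLAlchemy map it to Oracle's DATE type instead of a string (i.e. CLOB) column. A minimal sketch of that conversion, assuming the YYYY-MM-DD format shown in the sample:

# parse the string dates so the column maps to Oracle DATE rather than CLOB
df['created_date'] = pd.to_datetime(df['created_date'], format='%Y-%m-%d')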

Let's check how long it will take to store it in the Oracle DB:

In [57]: df.shape
Out[57]: (2000, 11)

In [58]: %timeit -n 1 -r 1 df.to_sql('test_table', engine, index=False, if_exists='replace')
1 loop, best of 1: 16 s per loop

In Oracle DB (pay attention to the CLOBs):

AAA> desc test.test_table
 Name                            Null?    Type
 ------------------------------- -------- ------------------
 ID                                       NUMBER(19)
 NAME                                     CLOB        #  !!!
 PREMIUM                                  FLOAT(126)
 CREATED_DATE                             DATE
 INIT_P                                   FLOAT(126)
 TERM_NUMBER                              NUMBER(19)
 UPRATE                                   FLOAT(126)
 VALUE                                    NUMBER(19)
 SCORE                                    FLOAT(126)
 group                                    NUMBER(19)
 ACTION_REASON                            CLOB        #  !!!

Now let's instruct pandas to save all object columns as VARCHAR data types:

In [59]: dtyp = {c:types.VARCHAR(df[c].str.len().max())
    ...:         for c in df.columns[df.dtypes == 'object'].tolist()}
    ...:

In [60]: %timeit -n 1 -r 1 df.to_sql('test_table', engine, index=False, if_exists='replace', dtype=dtyp)
1 loop, best of 1: 335 ms per loop

This time it was approx. 48 times faster.

Check in Oracle DB:

 AAA> desc test.test_table
 Name                          Null?    Type
 ----------------------------- -------- ---------------------
 ID                                     NUMBER(19)
 NAME                                   VARCHAR2(13 CHAR)        #  !!!
 PREMIUM                                FLOAT(126)
 CREATED_DATE                           DATE
 INIT_P                                 FLOAT(126)
 TERM_NUMBER                            NUMBER(19)
 UPRATE                                 FLOAT(126)
 VALUE                                  NUMBER(19)
 SCORE                                  FLOAT(126)
 group                                  NUMBER(19)
 ACTION_REASON                          VARCHAR2(8 CHAR)        #  !!!

Let's test it with a 200,000 row DF:

In [69]: df.shape
Out[69]: (200000, 11)

In [70]: %timeit -n 1 -r 1 df.to_sql('test_table', engine, index=False, if_exists='replace', dtype=dtyp, chunksize=10**4)
1 loop, best of 1: 4.68 s per loop

It took ~5 seconds for the 200K row DF in my test (not the fastest) environment; chunksize=10**4 writes the rows in batches of 10,000 instead of sending the whole frame at once.

Conclusion: use the following trick to explicitly specify the dtype for all DF columns of object dtype when saving DataFrames to an Oracle DB. Otherwise they will be saved as the CLOB data type, which requires special treatment and makes the insertion very slow:

dtyp = {c:types.VARCHAR(df[c].str.len().max())
        for c in df.columns[df.dtypes == 'object'].tolist()}

df.to_sql(..., dtype=dtyp)
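
One caveat: df[c].str.len().max() returns NaN when an object column is entirely NaN, and VARCHAR() needs an integer width. A slightly more defensive sketch of the same trick, using a hypothetical varchar_dtypes() helper with a fallback width of 1:

import pandas as pd
from sqlalchemy import types

def varchar_dtypes(df, fallback=1):
    # Longest string per object column; NaN entries are skipped by max(),
    # but an all-NaN column yields NaN, hence the fallback width.
    dtyp = {}
    for c in df.columns[df.dtypes == 'object']:
        max_len = df[c].str.len().max()
        dtyp[c] = types.VARCHAR(int(max_len) if pd.notna(max_len) else fallback)
    return dtyp

df.to_sql('test_table', engine, index=False, if_exists='replace',
          dtype=varchar_dtypes(df), chunksize=10**4)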
