to_sql pyodbc count field incorrect or syntax error

Question

I am downloading JSON data from an API website and using sqlalchemy, pyodbc, and pandas' to_sql function to insert that data into an MSSQL server.

I can download up to 10000 rows, but I have to limit the chunksize to 10, otherwise I get the following error:

DBAPIError: (pyodbc.Error) ('07002', '[07002] [Microsoft][SQL Server Native Client 11.0]COUNT field incorrect or syntax error (0) (SQLExecDirectW)') [SQL: 'INSERT INTO [TEMP_producing_entity_details]

There are around 500 million rows to download, and it's just crawling at this speed. Any advice on a workaround?

Thanks

Answer

UPDATE:

pandas 0.23.1 has reverted the problematic changes introduced in 0.23.0. However, the best solution for raw performance remains the CSV -> bcp approach as described below.

UPDATE:

pandas 0.24.0 apparently re-introduced the issue (ref: here)

(Original answer)

Prior to pandas version 0.23.0, to_sql would generate a separate INSERT for each row in the DataFrame:

exec sp_prepexec @p1 output,N'@P1 int,@P2 nvarchar(6)',
    N'INSERT INTO df_to_sql_test (id, txt) VALUES (@P1, @P2)',
    0,N'row000'
exec sp_prepexec @p1 output,N'@P1 int,@P2 nvarchar(6)',
    N'INSERT INTO df_to_sql_test (id, txt) VALUES (@P1, @P2)',
    1,N'row001'
exec sp_prepexec @p1 output,N'@P1 int,@P2 nvarchar(6)',
    N'INSERT INTO df_to_sql_test (id, txt) VALUES (@P1, @P2)',
    2,N'row002'

Presumably to improve performance, pandas 0.23.0 now generates a table value constructor to insert multiple rows per call:

exec sp_prepexec @p1 output,N'@P1 int,@P2 nvarchar(6),@P3 int,@P4 nvarchar(6),@P5 int,@P6 nvarchar(6)',
    N'INSERT INTO df_to_sql_test (id, txt) VALUES (@P1, @P2), (@P3, @P4), (@P5, @P6)',
    0,N'row000',1,N'row001',2,N'row002'

The problem is that SQL Server stored procedures (including system stored procedures like sp_prepexec) are limited to 2100 parameters, so if the DataFrame has 100 columns then to_sql can only insert about 20 rows at a time.

We can work around this by calculating an appropriate chunksize:

# df is an existing DataFrame
#
# limit based on the 2100-parameter cap for sp_prepexec
tsql_chunksize = 2097 // len(df.columns)
# also cap at 1000 (the row limit for a table value constructor)
tsql_chunksize = min(tsql_chunksize, 1000)
#
df.to_sql('tablename', engine, if_exists='replace', index=False, chunksize=tsql_chunksize)
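
For a DataFrame with 100 columns, for example, this yields tsql_chunksize = 2097 // 100 = 20, so each INSERT carries 20 rows × 100 columns = 2000 parameters, safely under the 2100 limit.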

However, the fastest approach is still likely to be:

  • dump the DataFrame to a CSV file (or similar), and then

  • have Python call the SQL Server bcp utility to upload that file into the table (a sketch follows).
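
A minimal sketch of that CSV -> bcp approach, assuming the target table already exists with matching columns, a trusted (Windows authentication) connection, and data with no embedded commas or newlines, since bcp's character mode does not parse quoted CSV; the table, server, and file names here are illustrative, not from the original post:

import subprocess
import pandas as pd

def bcp_load(df: pd.DataFrame, qualified_table: str, server: str) -> None:
    csv_path = 'staging.csv'
    # write data only; bcp expects no header row in character mode
    df.to_csv(csv_path, index=False, header=False)
    subprocess.run(
        ['bcp', qualified_table, 'in', csv_path,
         '-S', server,
         '-T',            # trusted connection; use -U/-P for SQL auth
         '-c',            # character-mode data file (no quoted fields)
         '-t', ',',       # field terminator matching to_csv's default
         '-b', '10000'],  # commit in batches of 10000 rows
        check=True,
    )

# e.g.: bcp_load(df, 'mydb.dbo.producing_entity_details', 'myserver')

Because bcp streams the file through SQL Server's bulk-load interface, it sidesteps the per-parameter overhead of sp_prepexec entirely, which is what makes it so much faster for hundreds of millions of rows.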
