pyodbc calls sp_unprepare after sp_prepexec. Does that affect performance of parameterized queries?

Question

Continuing from SqlAlchemy + pymssql. Will a raw parameterized query use the same execution plan?

I switched from pymssql to pyodbc, trying to get parameterized queries sent to SQL Server. pyodbc with the Microsoft driver does the trick, but something seems strange to me:

declare @p1 int
set @p1=6
exec sp_prepexec @p1 output,N'@P1 nvarchar(6),@P2 bigint,@P3 bigint,@P4 bigint',N'
                SELECT *
                FROM CC_sold
                WHERE id_contract =@P1
                    AND id_tip_cont=@P2
                    AND CC_sold.anul =@P3
                    AND CC_sold.luna =@P4
                ORDER BY CC_sold.anul, CC_sold.luna DESC
            ',N'176914',6,2016,9
select @p1
exec sp_unprepare 6
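
For reference, here is a minimal sketch of the Python side that would produce an sp_prepexec call like the one in the trace above; the connection string is a placeholder, and the table and column names are taken from the trace.

import pyodbc

# Sketch only: the DSN/credentials below are placeholders.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;"
    "DATABASE=mydb;UID=myuser;PWD=mypassword"
)
crsr = conn.cursor()
# Each ? placeholder becomes one of the @P1..@P4 parameters seen in the trace.
crsr.execute(
    """
    SELECT *
    FROM CC_sold
    WHERE id_contract = ?
        AND id_tip_cont = ?
        AND CC_sold.anul = ?
        AND CC_sold.luna = ?
    ORDER BY CC_sold.anul, CC_sold.luna DESC
    """,
    ('176914', 6, 2016, 9),
)
rows = crsr.fetchall()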

My worries are related to this statement:

exec sp_unprepare 6

If I understand correctly, this discards the query execution plan and I will not get any optimization from query parameterization. Is that the case?

Answer

I did a bit of testing and you need not be concerned. pyodbc only sends a single sp_unprepare (for the last sp_prepexec executed) when the cursor object is closed. That is, it does not sp_unprepare every sp_prepexec that it sends.
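
To see this for yourself, here is a rough sketch (assuming an open pyodbc connection conn to the database holding CC_sold, and a server-side trace such as SQL Server Profiler or an Extended Events session running): execute the statement several times, then close the cursor; a single sp_unprepare should appear in the trace only at the point where the cursor is closed.

# Sketch: `conn` is assumed to be an open pyodbc connection to the
# database containing CC_sold (table and columns taken from the question).
sql = """
    SELECT * FROM CC_sold
    WHERE id_contract = ? AND id_tip_cont = ?
        AND CC_sold.anul = ? AND CC_sold.luna = ?
    ORDER BY CC_sold.anul, CC_sold.luna DESC
"""
crsr = conn.cursor()
for luna in (7, 8, 9):                      # same SQL, varying one parameter
    crsr.execute(sql, ('176914', 6, 2016, luna))
    rows = crsr.fetchall()
# Up to this point no sp_unprepare has been sent; per the observation
# above, a single sp_unprepare shows up only when the cursor is closed.
crsr.close()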

Furthermore, actual timing tests reveal the difference in performance between pyodbc (which apparently can take advantage of cached execution plans) and pymssql (which apparently cannot). With the following code ...

import time

# conn is assumed to be an existing pyodbc or pymssql connection;
# the same test is run once with each driver.
crsr = conn.cursor()
crsr.execute("DBCC FREEPROCCACHE")  # clear the plan cache so each run starts cold
if 'pyodbc' in repr(conn):
    sql = "SELECT COUNT(*) AS n FROM table1 WHERE cola=? AND colb=? AND colc=? AND cold=?"
else:
    sql = "SELECT COUNT(*) AS n FROM table1 WHERE cola=%s AND colb=%s AND colc=%s AND cold=%s"
t0 = time.time()
limit = 10
for a in range(limit):
    for b in range(limit):
        for c in range(limit):
            for d in range(limit):
                params = (a, b, c, d)
                crsr.execute(sql, params)
                n = crsr.fetchone()[0]
print(time.time() - t0)
crsr.close()
conn.close()

... for limit = 10 the four (4) nested for loops run a total of 10,000 queries. On my Windows test machine with a local SQL Server instance, pymssql takes about 130 seconds (just over 2 minutes) to execute, while pyodbc consistently runs the same code in under 5 seconds.
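
Not part of the original answer, but as a way to confirm that plan reuse is actually happening, you could query the plan cache after the pyodbc run. This is a sketch: it assumes the login has VIEW SERVER STATE permission and that it runs on the same connection before the conn.close() above; table1 is the test table from the timing code.

# Sketch: inspect the cached "Prepared" plan for the test query and see
# how many times it was reused (usecounts). Requires VIEW SERVER STATE.
check_sql = """
    SELECT cp.usecounts, cp.objtype, st.text
    FROM sys.dm_exec_cached_plans AS cp
    CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
    WHERE cp.objtype = N'Prepared'
      AND st.text LIKE N'%table1%';
"""
check = conn.cursor()
for usecounts, objtype, text in check.execute(check_sql).fetchall():
    # A high usecounts for the pyodbc run indicates the prepared plan
    # was reused across the repeated executions.
    print(usecounts, objtype, text)
check.close()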
