Insertion efficiency of a large amount of data with SQL


Problem Description

I have a program that I use to read a CSV file and insert the data into a database. I am having trouble with it because it needs to be able to insert big batches of data (up to 10,000 rows) at a time. At first I had it looping through and inserting each record one at a time. That is slow because it calls the insert function 10,000 times. Next I tried to group the records so it inserted 50 rows at a time. I figured this way it would have to connect to the database less often, but it is still too slow. What is an efficient way to insert the many rows of a CSV file into a database? Also, I have to edit some of the data (such as adding a 1 to a username if two are the same) before it goes into the database.
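For reference, a minimal sketch of the batched multi-row INSERT approach the question describes, assuming a mysqli connection and a hypothetical users(name, email) table; the credentials and file name are placeholders:

<?php
// Batched multi-row INSERT: group 50 CSV rows into one statement
// instead of issuing one INSERT per row.
$db = new mysqli('localhost', 'user', 'password', 'mydb');

$batch = [];
$fh = fopen('input.csv', 'r');
while (($fields = fgetcsv($fh)) !== false) {
    // Escape each value and build one (...) tuple per CSV row.
    $batch[] = "('" . $db->real_escape_string($fields[0]) . "', '"
                    . $db->real_escape_string($fields[1]) . "')";
    if (count($batch) === 50) {
        // One statement carries 50 rows instead of 50 round trips.
        $db->query('INSERT INTO users (name, email) VALUES ' . implode(', ', $batch));
        $batch = [];
    }
}
if ($batch) {
    // Flush any remaining rows that did not fill a full batch.
    $db->query('INSERT INTO users (name, email) VALUES ' . implode(', ', $batch));
}
fclose($fh);

As the accepted answer below shows, even this batching was too slow at 10,000 rows, which is what motivated switching to a bulk load.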

Recommended Answer

So I ended up using fputcsv to write the data I changed into a new CSV file, then I used the LOAD DATA INFILE command to load the new CSV file's data into the table. This changed it from timing out at 120 seconds for 1,000 entries to taking about 10 seconds for 10,000 entries. Thank you to everyone who replied.
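A minimal sketch of this approach, assuming the same hypothetical users(name, email) table; the file paths and credentials are placeholders, and LOAD DATA LOCAL also requires local_infile to be enabled on the MySQL server:

<?php
// Rewrite the edited rows with fputcsv, then bulk-load them in one pass.
$db = mysqli_init();
$db->options(MYSQLI_OPT_LOCAL_INFILE, true); // allow LOAD DATA LOCAL from this client
$db->real_connect('localhost', 'user', 'password', 'mydb');

$in  = fopen('input.csv', 'r');
$out = fopen('/tmp/cleaned.csv', 'w');
while (($fields = fgetcsv($in)) !== false) {
    // Apply whatever edits are needed before loading, e.g. adjusting a
    // duplicate username as the question describes (placeholder shown).
    $fields[0] = trim($fields[0]);
    fputcsv($out, $fields);
}
fclose($in);
fclose($out);

// One bulk load replaces thousands of individual INSERT statements.
$db->query("LOAD DATA LOCAL INFILE '/tmp/cleaned.csv'
            INTO TABLE users
            FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '\"'
            LINES TERMINATED BY '\\n'
            (name, email)");

The speedup comes from moving the per-row work out of the database round trip: the CSV is cleaned once in PHP, and MySQL ingests the whole file in a single bulk operation.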

