Accessing a MySQL connection pool from Python multiprocessing
Question
I'm trying to set up a MySQL connection pool and have my worker processes access the already established pool instead of setting up a new connection each time.
I'm confused about whether I should pass the database cursor to each process, or if there's some other way to do this. Shouldn't MySQL Connector/Python do the pooling automatically? When I check my log files, many, many connections are opened and closed ... one for each process.
My code is as follows:
```python
import os
import sys
import multiprocessing
from time import sleep

import mysql.connector
from mysql.connector.cursor import MySQLCursorDict

import config  # local module holding dbconfig

PATH = "/tmp"

class DB(object):
    def __init__(self):
        connected = False
        while not connected:
            try:
                cnxpool = mysql.connector.pooling.MySQLConnectionPool(
                    pool_name="pool1", **config.dbconfig)
                self.__cnx = cnxpool.get_connection()
                connected = True
            except mysql.connector.errors.PoolError:
                print("Sleeping.. (Pool Error)")
                sleep(5)
            except mysql.connector.errors.DatabaseError:
                print("Sleeping.. (Database Error)")
                sleep(5)
        self.__cur = self.__cnx.cursor(cursor_class=MySQLCursorDict)

    def execute(self, query):
        return self.__cur.execute(query)

    def isValidFile(self, name):
        return True

    def readfile(self, fname):
        d = DB()
        d.execute("""INSERT INTO users (first_name) VALUES ('michael')""")

def main():
    queue = multiprocessing.Queue()
    pool = multiprocessing.Pool(None, init, [queue])
    for dirpath, dirnames, filenames in os.walk(PATH):
        full_path_fnames = map(lambda fn: os.path.join(dirpath, fn),
                               filenames)
        full_path_fnames = filter(is_valid_file, full_path_fnames)
        pool.map(readFile, full_path_fnames)

if __name__ == '__main__':
    sys.exit(main())
```
Answer
First, you're creating a different connection pool for each instance of your DB class. The pools having the same name doesn't make them the same pool.
From the documentation:
It is not an error for multiple pools to have the same name. An application that must distinguish pools by their pool_name property should create each pool with a distinct name.
Besides that, sharing a database connection (or connection pool) between different processes would be a bad idea (and I highly doubt it would even work correctly), so each process using its own connections is actually what you should aim for.
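The per-process pattern this implies can be demonstrated without MySQL at all. In this sketch, FakeConnection is a hypothetical stand-in for a real connection; the assertion confirms that each worker process built and used its own instance rather than sharing one across processes:

```python
import os
from multiprocessing import Pool

# Hypothetical stand-in for a MySQL connection, for illustration only.
class FakeConnection:
    def __init__(self):
        self.pid = os.getpid()  # records which process created it

conn = None  # per-process global, populated by the initializer

def init():
    global conn
    conn = FakeConnection()  # runs once in each worker process

def work(_):
    # Every task running in this process reuses the same per-process connection.
    return (os.getpid(), conn.pid)

if __name__ == '__main__':
    with Pool(2, initializer=init) as p:
        results = p.map(work, range(8))
    # The connection's creator PID always matches the worker's PID:
    # it was created inside that process, never shared between processes.
    assert all(wpid == cpid for wpid, cpid in results)
    print("each process used its own connection")
```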
You could just initialize the pool in your init initializer as a global variable and use that instead. Very simple example:
```python
from multiprocessing import Pool
from mysql.connector.pooling import MySQLConnectionPool
from mysql.connector import connect
import os

pool = None

def init():
    global pool
    print("PID %d: initializing pool..." % os.getpid())
    pool = MySQLConnectionPool(...)

def do_work(q):
    con = pool.get_connection()
    print("PID %d: using connection %s" % (os.getpid(), con))
    c = con.cursor()
    c.execute(q)
    res = c.fetchall()
    con.close()
    return res

def main():
    p = Pool(initializer=init)
    for res in p.map(do_work, ['select * from test'] * 8):
        print(res)
    p.close()
    p.join()

if __name__ == '__main__':
    main()
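The MySQLConnectionPool(...) arguments are elided above. As an illustration only, with hypothetical host and credentials (the keyword names are MySQL Connector/Python's standard pooling and connection arguments), a filled-in call might look like:

```python
pool = MySQLConnectionPool(
    pool_name="pool1",   # each process builds its own pool, so reusing the name is fine
    pool_size=5,         # hypothetical: max connections this pool hands out
    host="localhost",    # hypothetical server and credentials below
    user="app",
    password="secret",
    database="test",
)
```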
Or just use a simple connection instead of a connection pool, as only one connection will be active in each process at a time anyway. The number of concurrently used connections is implicitly limited by the size of the multiprocessing.Pool.
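That implicit limit can be checked with a small stand-alone sketch (no MySQL involved). A shared counter tracks how many workers hold a simulated "connection" at once; the peak never exceeds the Pool size:

```python
import time
from multiprocessing import Pool, Value

active = None  # shared counters, passed to each worker via the initializer
peak = None

def init(a, p):
    global active, peak
    active, peak = a, p

def work(_):
    # Simulate holding one connection for the duration of the task.
    with active.get_lock():
        active.value += 1
        if active.value > peak.value:
            peak.value = active.value
    time.sleep(0.05)
    with active.get_lock():
        active.value -= 1

if __name__ == '__main__':
    active_ctr = Value('i', 0)
    peak_ctr = Value('i', 0)
    with Pool(3, initializer=init, initargs=(active_ctr, peak_ctr)) as pool:
        pool.map(work, range(12))
    print("peak concurrent 'connections':", peak_ctr.value)
    assert 1 <= peak_ctr.value <= 3  # bounded by the Pool size of 3
```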