psycopg2 error: DatabaseError: error with no message from the libpq


Problem description


I have an application that parses and loads data from csv files into a Postgres 9.3 database. In serial execution insert statements/cursor executions work without an issue.


I added celery in the mix to add parallel parsing and inserting of the data files. Parsing works fine. However, I go to run insert statements and I get:

[2015-05-13 11:30:16,464:  ERROR/Worker-1] ingest_task.work_it: Exception
    Traceback (most recent call last):
    File "ingest_tasks.py", line 86, in work_it
        rowcount = ingest_data.load_data(con=con, statements=statements)
    File "ingest_data.py", line 134, in load_data
        ingest_curs.execute(statement)
    DatabaseError: error with no message from the libpq
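
For context, here is a hedged sketch of the kind of setup that commonly produces this error: a psycopg2 connection opened once in the parent process and then reused inside forked Celery workers. The module layout, broker URL, and credentials below are illustrative assumptions, not the asker's actual code.

# Hypothetical reconstruction of the failing pattern (names are illustrative).
import psycopg2
from celery import Celery

app = Celery("ingest", broker="redis://localhost:6379/0")  # broker URL is an assumption

# Connection opened at import time, i.e. in the parent process.
con = psycopg2.connect(dbname="ingest", user="ingest", host="localhost")

@app.task
def work_it(statements):
    # Each forked worker now shares the parent's libpq socket, which is what
    # typically surfaces as "error with no message from the libpq".
    with con.cursor() as curs:
        for statement in statements:
            curs.execute(statement)
    con.commit()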


Recommended answer

I encountered a similar problem when multiprocessing engine.execute(). I finally solved it by adding engine.dispose() as the first line of the function that the subprocess is supposed to enter, as suggested in the official documentation (http://docs.sqlalchemy.org/en/rel_0_9/core/connections.html#engine-disposal):



When a program uses multiprocessing or fork(), and an Engine object is copied to the child process, Engine.dispose() should be called so that the engine creates brand new database connections local to that fork. Database connections generally do not travel across process boundaries.
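
A minimal sketch of that fix, assuming a SQLAlchemy engine and a plain multiprocessing worker; the engine URL, table, and statements below are illustrative, not taken from the question.

# Call engine.dispose() as the first thing inside the function the subprocess
# enters, so each fork opens brand new connections on first use.
from multiprocessing import Pool

from sqlalchemy import create_engine, text

engine = create_engine("postgresql+psycopg2://ingest:secret@localhost/ingest")

def load_data(statements):
    # Drop any connections copied from the parent process.
    engine.dispose()
    with engine.begin() as con:
        for statement in statements:
            con.execute(text(statement))

if __name__ == "__main__":
    with Pool(processes=4) as pool:
        pool.map(load_data, [["INSERT INTO t VALUES (1)"], ["INSERT INTO t VALUES (2)"]])

The same idea applies to a Celery task: dispose of (or recreate) the engine or connection at the top of the task body, rather than reusing one inherited from the parent process.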
