Query writing performance on neo4j with py2neo

Problem Description

I'm currently struggling to find a performant way to run multiple queries with py2neo. My problem is that I have a big list of write queries in Python that need to be written to neo4j.

I have tried multiple ways to solve this. The best-working approach for me so far has been the following:

from py2neo import Graph
queries = ["create (n) return id(n)", "create (n) return id(n)", ...]  # list of write queries
graph = Graph()
t = graph.begin(autocommit=False)
for idx, q in enumerate(queries):
    t.run(q)
    if idx % 100 == 0:  # commit in batches of 100 queries
        t.commit()
        t = graph.begin(autocommit=False)
t.commit()

It still takes too long to write the queries. I also tried the batch-run procedures from APOC, without success; the query never finished. I also tried the same writing approach with autocommit. Is there a better way to do this? Are there any tricks, such as dropping indexes first and re-adding them after inserting the data?

Additional information:

I'm using Neo4j 3.4, Py2neo v4 and Python 3.7.

Recommended Answer

You may want to read up on Michael Hunger's tips and tricks for fast batched updates.

The key trick is using UNWIND to transform list elements into rows, with subsequent operations then performed per row.

There are supporting functions that can easily create lists for you, like range().

As an example, if you wanted to create 10k nodes, add a name property, and then return the node name and its graph id, you could do something like this:

UNWIND range(1, 10000) as index
CREATE (n:Node {name:'Node ' + index})
RETURN n.name as name, id(n) as id
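
Since the question is about py2neo, it may help to see that a statement like the one above can be sent in a single call rather than one query per node. The following is a minimal sketch of that, assuming a local Neo4j instance reachable with the default Graph() connection settings:

from py2neo import Graph

graph = Graph()  # assumption: local Neo4j instance with default credentials

# One round trip creates all 10k nodes; iterate the cursor to read names and ids
cursor = graph.run(
    "UNWIND range(1, 10000) AS index "
    "CREATE (n:Node {name:'Node ' + index}) "
    "RETURN n.name AS name, id(n) AS id"
)
for record in cursor:
    print(record["name"], record["id"])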

Likewise, if you have a good amount of data to import, you can create a list of parameter maps, pass it to the query, and then UNWIND the list to operate on each entry at once, similar to how we process CSV files with LOAD CSV.
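
A minimal sketch of that parameter-map pattern with py2neo might look like the following; the node label, property names, and batch size here are illustrative assumptions, not part of the original answer:

from py2neo import Graph

graph = Graph()  # assumption: local Neo4j instance with default credentials

# Hypothetical data: one parameter map per node to create
rows = [{"name": "Node %d" % i, "value": i} for i in range(1, 10001)]

# Send the whole list as a single parameter and UNWIND it server-side,
# so one query call creates many nodes instead of one call per node.
query = """
UNWIND $rows AS row
CREATE (n:Node {name: row.name, value: row.value})
RETURN count(n) AS created
"""

batch_size = 10000  # split very large lists into chunks to keep transactions small
for i in range(0, len(rows), batch_size):
    graph.run(query, rows=rows[i:i + batch_size]).evaluate()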
