Neo4j super node issue - fanning out pattern


Problem Description



I'm new to the graph database scene, looking into Neo4j and learning Cypher. We're trying to model a graph database. It's a fairly simple one: we have users and we have movies; users can VIEW movies, RATE movies, and create playlists, and playlists can HAVE movies.

The question is regarding the Super Node performance issue. And I will quote something from a very good book I am currently reading - Learning Neo4j by Rik Van Bruggen, so here it is:

A very interesting problem then occurs in datasets where some parts of the graph are all connected to the same node. This node, also referred to as a dense node or a supernode, becomes a real problem for graph traversals because the graph database management system will have to evaluate all of the connected relationships to that node in order to determine what the next step will be in the graph traversal.

The solution to this problem proposed in the book is to have a Meta node with 100 connections to it, and have the 101st connection link to a new Meta node that is linked to the previous Meta node.
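The fan-out chain the book describes can be sketched with plain Python objects rather than a live Neo4j instance (`MetaNode`, `DenseNode`, and `FAN_OUT_LIMIT` are illustrative names for this sketch, not Neo4j APIs):

```python
# Sketch of the meta-node fan-out pattern: each meta node holds at most
# FAN_OUT_LIMIT relationships, and the overflow connection starts a new
# meta node linked back to the previous one.

FAN_OUT_LIMIT = 100  # max relationships per meta node, per the book

class MetaNode:
    def __init__(self, previous=None):
        self.connections = []     # e.g. users connected to the dense node
        self.previous = previous  # link to the earlier meta node in the chain

class DenseNode:
    """A would-be supernode fronted by a chain of meta nodes."""
    def __init__(self):
        self.head = MetaNode()

    def add_connection(self, other):
        # When the current meta node is full, the next connection goes
        # onto a fresh meta node that links to the previous one.
        if len(self.head.connections) >= FAN_OUT_LIMIT:
            self.head = MetaNode(previous=self.head)
        self.head.connections.append(other)

    def all_connections(self):
        # A traversal now walks the meta-node chain instead of scanning
        # one enormous relationship list on a single node.
        meta = self.head
        while meta is not None:
            yield from meta.connections
            meta = meta.previous

gaga = DenseNode()
for user_id in range(250):
    gaga.add_connection(user_id)

# 250 connections spread across 3 meta nodes (100 + 100 + 50)
chain_length = 0
meta = gaga.head
while meta is not None:
    chain_length += 1
    meta = meta.previous
```

Note that the traversal cost has not disappeared; it has been restructured into a chain walk, which only helps if your queries can stop early or target a specific partition.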

I have seen a blog post from the official Neo4j blog saying that they will address this problem in the future (the blog post is from January 2013) - http://neo4j.com/blog/2013-whats-coming-next-in-neo4j/

More exactly they say:

Another project we have planned around "bigger data" is to add some specific optimizations to handle traversals across densely-connected nodes, having very large numbers (millions) of relationships. (This problem is sometimes referred to as the "supernodes" problem.)

What are your opinions on this issue? Should we go with the Meta node fanning-out pattern or go with the basic relationship that every tutorial seems to be using? Any other suggestions?

Solution

It's a good question. This isn't really an answer, but why shouldn't we be able to discuss this here? Technically I think I'm supposed to flag your question as "primarily opinion based" since you're explicitly soliciting opinions, but I think it's worth the discussion.

The boring but honest answer is that it always depends on your query patterns. Without knowing what kinds of queries you're going to issue against this data structure, there's really no way to know the "best" approach.

Supernodes are problems in other areas as well. Graph databases sometimes are very difficult to scale in some ways, because the data in them is hard to partition. If this were a relational database, we could partition vertically or horizontally. In a graph DB when you have supernodes, everything is "close" to everything else. (An Alaskan farmer likes Lady Gaga, so does a New York banker). Moreso than just graph traversal speed, supernodes are a big problem for all sorts of scalability.

Rik's suggestion boils down to encouraging you to create "sub-clusters" or "partitions" of the super-node. For certain query patterns, this might be a good idea, and I'm not knocking the idea, but I think hidden in here is the notion of a clustering strategy. How many meta nodes do you assign? How many max links per meta-node? How did you go about assigning this user to this meta node (and not some other)? Depending on your queries, those questions are going to be very hard to answer, hard to implement correctly, or both.
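For the assignment question specifically, one possible deterministic strategy is to hash the user's id, so the same user always lands on the same meta node. This is purely a sketch; `NUM_META_NODES` and the hashing scheme are invented for illustration, and whether a fixed hash partition suits you depends entirely on your query patterns:

```python
# Hypothetical assignment policy: stable hash of the user id modulo the
# number of meta nodes. hashlib is used instead of Python's built-in
# hash(), which is salted per process and not stable across runs.
import hashlib

NUM_META_NODES = 10  # "How many meta nodes do you assign?" -- a guess

def meta_node_for(user_id: str) -> int:
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    return int(digest, 16) % NUM_META_NODES

# Same user always maps to the same meta node
a = meta_node_for("alice")
b = meta_node_for("alice")
```

The trade-off is rigidity: a hash answers "which meta node?" cheaply, but resizing `NUM_META_NODES` later reshuffles every assignment.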

A different (but conceptually very similar) approach is to clone Lady Gaga about a thousand times, and duplicate her data and keep it in sync between nodes, then assert a bunch of "same as" relationships between the clones. This isn't that different than the "meta" approach, but it has the advantage that it copies Lady Gaga's data to the clone, and the "Meta" node isn't just a dumb placeholder for navigation. Most of the same problems apply though.
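A rough sketch of the clone idea, again with plain Python stand-ins rather than real Neo4j nodes (`Clone`, the "same as" chain, and the round-robin distribution are all assumptions made for illustration):

```python
# Sketch of the clone approach: N copies of the dense node's properties,
# incoming relationships spread across the copies, and the copies linked
# to each other via "same as" references.

class Clone:
    def __init__(self, properties):
        self.properties = dict(properties)  # duplicated data per clone
        self.fans = []
        self.same_as = []                   # links to sibling clones

def make_clones(properties, n):
    clones = [Clone(properties) for _ in range(n)]
    for i, c in enumerate(clones):
        # Chain the clones in a ring so any clone can reach its siblings
        c.same_as.append(clones[(i + 1) % n])
    return clones

def add_fan(clones, fan_id):
    # Round-robin distribution keeps the clones roughly balanced
    clones[fan_id % len(clones)].fans.append(fan_id)

def update_property(clones, key, value):
    # The cost of the pattern: every write must be fanned out to all
    # copies to keep them in sync.
    for c in clones:
        c.properties[key] = value

gaga_clones = make_clones({"name": "Lady Gaga"}, n=4)
for fan in range(10):
    add_fan(gaga_clones, fan)
update_property(gaga_clones, "genre", "pop")
```

The write amplification in `update_property` is exactly the sync burden mentioned above: a thousand clones means a thousand updates per property change.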

Here's a different suggestion though: you have a large-scale many-to-many mapping problem here. It's possible that if this is a really huge problem for you, you'd be better off breaking this out into a single relational table with two columns (from_id, to_id), each referencing a neo4j node ID. You then might have a hybrid system that's mostly graph (but with some exceptions). Lots of tradeoffs here; of course you couldn't traverse that rel in cypher at all, but it would scale and partition much better, and querying for a particular rel would probably be much faster.
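As a sketch of that hybrid, here is the two-column relationship table in SQLite (the `likes` table, its columns, and the node ids are hypothetical; in a real hybrid the ids would come from Neo4j):

```python
# Sketch of the hybrid suggestion: keep the graph in Neo4j, but move one
# massive many-to-many edge set into a plain relational table of
# (from_id, to_id) pairs, each column holding a Neo4j node id.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE likes (
        from_id INTEGER NOT NULL,  -- Neo4j id of the user node
        to_id   INTEGER NOT NULL,  -- Neo4j id of the artist node
        PRIMARY KEY (from_id, to_id)
    )
""")
# Index the reverse direction so "who likes artist X?" is also fast
conn.execute("CREATE INDEX likes_by_artist ON likes (to_id)")

GAGA_NODE_ID = 42  # hypothetical Neo4j node id for the dense node
conn.executemany(
    "INSERT INTO likes (from_id, to_id) VALUES (?, ?)",
    [(user_id, GAGA_NODE_ID) for user_id in range(1000)],
)

# Querying one particular rel is a cheap indexed lookup, not a scan of
# a dense node's relationship chain.
exists = conn.execute(
    "SELECT 1 FROM likes WHERE from_id = ? AND to_id = ?",
    (7, GAGA_NODE_ID),
).fetchone() is not None

fan_count = conn.execute(
    "SELECT COUNT(*) FROM likes WHERE to_id = ?", (GAGA_NODE_ID,)
).fetchone()[0]
```

The price, as noted above, is that this edge set is invisible to Cypher: any query that needs to traverse through it has to stitch the two stores together in application code.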

One general observation here: whether we're talking about relational, graph, documents, K/V databases, or whatever -- when the databases get really big, and the performance requirements get really intense, it's almost inevitable that people end up with some kind of a hybrid solution with more than one kind of DBMS. This is because of the inescapable reality that all databases are good at some things, and not good at others. So if you need a system that's good at most everything, you're going to have to use more than one kind of database. :)

There is probably quite a bit neo4j can do to optimize in these cases, but it would seem to me that the system would need some kind of hints on access patterns in order to do a really good job at that. Of the 2,000,000 relations present, how do the endpoints best cluster? Are older relationships more important than newer ones, or vice versa?
