Performance of Firebase with large data sets


Problem description



I'm testing Firebase for a project that may have a reasonably large number of keys, potentially millions.

I've tested loading a few tens of thousands of records using Node, and the load performance appears good. However, the "FORGE" web UI becomes unusably slow and renders every single record if I expand my root node.

Is Firebase not designed for this volume of data, or am I doing something wrong?

Solution

It's simply the limitations of the Forge UI. It's still fairly rudimentary.

The real-time functions in Firebase are not only suited for, but designed for large data sets. The fact that records stream in real-time is perfect for this.

Performance is, as with any large data app, only as good as your implementation. So here are a few gotchas to keep in mind with large data sets.

DENORMALIZE, DENORMALIZE, DENORMALIZE

If a data set will be iterated, and its records can be counted in the thousands, store it in its own path.

This is bad for iterating large data sets:

/users/uid
/users/uid/profile
/users/uid/chat_messages
/users/uid/groups
/users/uid/audit_record

This is good for iterating large data sets:

/user_profiles/uid
/user_chat_messages/uid
/user_groups/uid
/user_audit_records/uid
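The denormalized layout above can be sketched as plain JSON, modeled here as an in-memory object (the uids and field values are invented for illustration):

```javascript
// A minimal sketch of the denormalized layout, as a plain object.
// Each top-level path holds one kind of record, keyed by uid.
const db = {
  user_profiles: {
    uid1: { name: "Ann" },
    uid2: { name: "Bob" }
  },
  user_chat_messages: {
    // potentially thousands of messages per user live here,
    // out of the way of profile iteration
    uid1: { m1: "hi", m2: "hello" }
  },
  user_groups: {
    uid1: { admins: true }
  }
};

// Iterating every profile now touches only the small profile records;
// the large chat_messages lists under a sibling path are never loaded.
const names = Object.values(db.user_profiles).map(p => p.name);
```

The point of the split is that a query against /user_profiles never drags along message or audit data for the same users.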

Avoid 'value' on large data sets

Use child_added, since value must load the entire record set to the client.
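A rough sketch of the difference, using toy in-memory stand-ins rather than the real Firebase SDK (the function names below are illustrative, not SDK calls):

```javascript
const records = { k1: "a", k2: "b", k3: "c" };

// 'value' semantics: the callback receives the ENTIRE record set at
// once, so the whole set must be shipped to the client first.
function onValue(data, cb) {
  cb(data);
}

// 'child_added' semantics: the callback fires once per child, so the
// client can start processing as records stream in.
function onChildAdded(data, cb) {
  for (const [key, val] of Object.entries(data)) cb(key, val);
}

let valueKeys = 0;
onValue(records, snapshot => { valueKeys = Object.keys(snapshot).length; });

const streamed = [];
onChildAdded(records, key => streamed.push(key));
```

With millions of keys, the one-shot payload of value is exactly what you want to avoid; child_added lets the set arrive one record at a time.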

Watch for hidden value operations on children

When you call child_added, you are essentially calling value on every child record. So if those children contain large lists, all of that data has to be loaded before the event can be delivered. Hence the DENORMALIZE section above.
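To make the hidden cost concrete, compare the payload a per-child event would carry under the two layouts. This is a toy measurement on invented data, not actual SDK behavior:

```javascript
// Nested layout: a per-child event on /users carries the whole subtree,
// chat_messages included (the data here is invented for illustration).
const nestedChild = {
  profile: { name: "Ann" },
  chat_messages: { m1: "hello", m2: "how are you", m3: "see you later" }
};

// Denormalized layout: the same event carries only the profile; the
// message list lives under its own path and is fetched on demand.
const flatChild = { name: "Ann" };

// Rough proxy for bytes shipped per child event
const nestedBytes = JSON.stringify(nestedChild).length;
const flatBytes = JSON.stringify(flatChild).length;
```

The gap grows with each nested list, which is why keeping children shallow matters as much as splitting the top-level paths.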
