Firebase data structure - is the Firefeed structure relevant?


Problem description


Firefeed is a very nice example of what can be achieved with Firebase - a fully client-side Twitter clone. There is this page: https://firefeed.io/about.html where the logic behind the adopted data structure is explained. It helps a lot in understanding Firebase security rules.

Towards the end of the demo, there is this snippet of code:

  var userid = info.id; // info is from the login() call earlier.
  var sparkRef = firebase.child("sparks").push();
  var sparkRefId = sparkRef.name();

  // Add spark to global list.
  sparkRef.set(spark);

  // Add spark ID to user's list of posted sparks.
  var currentUser = firebase.child("users").child(userid);
  currentUser.child("sparks").child(sparkRefId).set(true);

  // Add spark ID to the feed of everyone following this user.
  currentUser.child("followers").once("value", function(list) {
    list.forEach(function(follower) {
      var childRef = firebase.child("users").child(follower.name());
      childRef.child("feed").child(sparkRefId).set(true);
    });
  });

It shows how the writes are done in order to keep the reads simple - as stated:

When we need to display the feed for a particular user, we only need to look in a single place
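
For illustration, the read side would then look roughly like this - a minimal sketch assuming the same legacy Firebase JS API and firebase root reference as the snippet above, with userid again coming from the earlier login() call:

  // Sketch only: display a user's feed by reading the single
  // /users/$userid/feed location, then fetching each spark by ID.
  var feedRef = firebase.child("users").child(userid).child("feed");
  feedRef.once("value", function(feed) {
    feed.forEach(function(entry) {
      var sparkId = entry.name();   // feed entries are keyed by spark ID
      firebase.child("sparks").child(sparkId).once("value", function(spark) {
        console.log(spark.val());   // render the spark's content here
      });
    });
  });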

So I do understand that. But if we take a look at Twitter, we can see that some accounts have several million followers (the most followed is Katy Perry, with over 61 million!). What would happen with this structure and this approach? Whenever Katy posted a new tweet, it would take 61 million write operations. Wouldn't this simply kill the app? And even more, isn't it consuming a lot of unnecessary space?

Solution

With denormalized data, the only way to connect data is to write to every location it's read from. So yeah, to publish a tweet to 61 million followers would require 61 million writes.

You wouldn't do this in the browser. The server would listen for child_added events for new tweets, and then a cluster of workers would split up the load, paginating through a subset of followers at a time. You could potentially prioritize online users to get the writes first.
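
As a rough sketch of what that server-side fan-out could look like (not from the Firefeed code; it assumes a Firebase 2.x-era SDK with orderByKey()/startAt()/limitToFirst() queries, that each spark stores its author's user id, and a hypothetical PAGE_SIZE):

  // Hypothetical worker: listen for new sparks, then fan the spark ID out
  // to followers' feeds one page at a time instead of all at once.
  var PAGE_SIZE = 1000;  // assumption: page size tuned per worker

  firebase.child("sparks").on("child_added", function(sparkSnap) {
    // assumes each spark stores its author's user id under "author"
    fanOutPage(sparkSnap.val().author, sparkSnap.key(), null);
  });

  function fanOutPage(authorId, sparkId, startKey) {
    var followers = firebase.child("users").child(authorId).child("followers")
                            .orderByKey()
                            .limitToFirst(PAGE_SIZE);
    if (startKey) followers = followers.startAt(startKey);

    followers.once("value", function(page) {
      var lastKey = null;
      page.forEach(function(follower) {
        lastKey = follower.key();
        // set(true) is idempotent, so re-writing the boundary key of the
        // previous page (startAt is inclusive) is harmless.
        firebase.child("users").child(lastKey)
                .child("feed").child(sparkId).set(true);
      });
      if (page.numChildren() === PAGE_SIZE) {
        fanOutPage(authorId, sparkId, lastKey);  // continue with next page
      }
    });
  }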

With normalized data, you write the tweet once, but pay for the join on reads. If you cache the tweets in feeds to avoid hitting the database for each request, you're back to 61 million writes to Redis for every Katy Perry tweet. To push the tweet in real time, you need to write the tweet to a socket for every online follower anyway.
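
For contrast, a minimal sketch of the normalized read path, assuming a hypothetical /users/$uid/following index alongside the /users/$uid/sparks list from the snippet in the question - the tweet is written once, but every feed load performs the join:

  // Sketch only: build the feed at read time by walking followed users'
  // spark lists and fetching each spark - many reads per page view.
  function loadFeed(userid, renderSpark) {
    firebase.child("users").child(userid).child("following")
            .once("value", function(following) {
      following.forEach(function(followed) {
        firebase.child("users").child(followed.name()).child("sparks")
                .once("value", function(sparkIds) {
          sparkIds.forEach(function(entry) {
            firebase.child("sparks").child(entry.name())
                    .once("value", function(spark) {
              renderSpark(spark.val());  // caller merges/sorts client-side
            });
          });
        });
      });
    });
  }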

