Can a React-Redux app really scale as well as, say, Backbone? Even with reselect. On mobile


Problem description


In Redux, every change to the store triggers a notify on all connected components. This makes things very simple for the developer, but what if you have an application with N connected components, and N is very large?

Every change to the store, even if unrelated to the component, still runs a shouldComponentUpdate with a simple === test on the reselected paths of the store. That's fast, right? Sure, maybe once. But N times, for every change? This fundamental change in design makes me question the true scalability of Redux.

As a further optimization, one can batch all notify calls using _.debounce. Even so, having N === tests for every store change and handling other logic, for example view logic, seems like a means to an end.
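For concreteness, the kind of batching I mean looks roughly like this (illustrative only; createBatchedSubscribe and the listener bookkeeping are made up for this example, components would subscribe through the returned function instead of store.subscribe directly):

    import { debounce } from 'lodash';

    function createBatchedSubscribe(store) {
      const listeners = [];
      // debounce with wait=0 coalesces a synchronous burst of dispatches into one flush
      const notifyAll = debounce(() => listeners.forEach(listener => listener()), 0);
      store.subscribe(notifyAll);

      return function subscribe(listener) {
        listeners.push(listener);
        return function unsubscribe() {
          const i = listeners.indexOf(listener);
          if (i !== -1) listeners.splice(i, 1);
        };
      };
    }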

I'm working on a health & fitness social mobile-web hybrid application with millions of users and am transitioning from Backbone to Redux. In this application, a user is presented with a swipeable interface that allows them to navigate between different stacks of views, similar to Snapchat, except each stack has infinite depth. In the most popular type of view, an endless scroller efficiently handles the loading, rendering, attaching, and detaching of feed items, like a post. For an engaged user, it is not uncommon to scroll through hundreds or thousands of posts, then enter a user's feed, then another user's feed, etc. Even with heavy optimization, the number of connected components can get very large.

Now on the other hand, Backbone's design allows every view to listen precisely to the models that affect it, reducing N to a constant.

Am I missing something, or is Redux fundamentally flawed for a large app?

Solution

This is not a problem inherent to Redux IMHO.

By the way, instead of trying to render 100k components at the same time, you should try to fake it with a lib like react-infinite or something similar, and only render the items of your list that are visible (or close to visible). Even if you succeed in rendering and updating a 100k-item list, it is still not performant and it takes a lot of memory. Here is some advice from LinkedIn: https://engineering.linkedin.com/linkedin-ipad-5-techniques-smooth-infinite-scrolling-html5
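For illustration, here is a hand-rolled windowing sketch (this is not react-infinite's actual API; a fixed row height and an { id, text } item shape are assumed):

    import React from 'react';

    const ROW_HEIGHT = 60; // assumed fixed row height, in px

    class WindowedList extends React.Component {
      constructor(props) {
        super(props);
        this.state = { scrollTop: 0 };
        this.handleScroll = e => this.setState({ scrollTop: e.target.scrollTop });
      }

      render() {
        const { items, height } = this.props;
        const start = Math.floor(this.state.scrollTop / ROW_HEIGHT);
        const count = Math.ceil(height / ROW_HEIGHT) + 1;
        const visible = items.slice(start, start + count);

        return (
          <div style={{ height, overflowY: 'auto' }} onScroll={this.handleScroll}>
            {/* The spacer keeps the scrollbar proportional to the full list */}
            <div style={{ height: items.length * ROW_HEIGHT, position: 'relative' }}>
              {visible.map((item, i) => (
                <div
                  key={item.id}
                  style={{ position: 'absolute', top: (start + i) * ROW_HEIGHT, height: ROW_HEIGHT }}
                >
                  {item.text}
                </div>
              ))}
            </div>
          </div>
        );
      }
    }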

This answer assumes that you still try to render 100k updatable items in your DOM, and that you don't want 100k listeners (store.subscribe()) to be called on every single change.


2 schools

When developing a UI app in a functional way, you basically have 2 choices:

Always render from the very top

It works well but involves more boilerplate. It's not exactly the suggested Redux way but it is achievable, with some drawbacks. Notice that even if you manage to have a single Redux connection, you still have to call a lot of shouldComponentUpdate in many places. If you have an infinite stack of views (like a recursion), you will have to render all the intermediate views as virtual DOM as well, and shouldComponentUpdate will be called on many of them. So this is not really more efficient even if you have a single connect.
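A minimal sketch of this single-connection-at-the-top style, assuming a hypothetical { feed: { posts: [...] } } state shape; note that every intermediate component still needs its own shouldComponentUpdate:

    import React from 'react';
    import { connect } from 'react-redux';

    class Post extends React.Component {
      shouldComponentUpdate(nextProps) {
        // Identity check: skip re-rendering unchanged posts
        return nextProps.post !== this.props.post;
      }
      render() {
        return <div>{this.props.post.text}</div>;
      }
    }

    class Feed extends React.Component {
      shouldComponentUpdate(nextProps) {
        return nextProps.posts !== this.props.posts;
      }
      render() {
        return (
          <div>
            {this.props.posts.map(post => <Post key={post.id} post={post} />)}
          </div>
        );
      }
    }

    // The only connected component: every store change reaches it, and the
    // short-circuiting has to happen in the components below.
    const App = connect(state => ({ posts: state.feed.posts }))(Feed);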

If you don't plan to use the React lifecycle methods but only pure render functions, then you should probably consider other options that focus on exactly that job, like deku (which can be used with Redux).

In my own experience, doing so with React is not performant enough on older mobile devices (like my Nexus 4), particularly if you link text inputs to your atom state.

Connecting data to child components

This is what react-redux suggests by using connect. So when the state changes and it only concerns a deeper child, you only re-render that child and you don't have to re-render the top-level components every time, like the context providers (redux/intl/custom...) or the main app layout. You also avoid calling shouldComponentUpdate on other children because it's already baked into the listener. Calling a lot of very fast listeners is probably faster than re-rendering intermediate React components every time, and it also helps reduce a lot of props-passing boilerplate, so for me it makes sense when used with React.
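For example, connecting each item individually could look like this (a sketch assuming a hypothetical normalized state.itemsById shape):

    import React from 'react';
    import { connect } from 'react-redux';

    const Item = ({ item }) => <div>{item.text}</div>;

    // mapStateToProps receives ownProps, so each connected item only re-renders
    // when its own slice of the state changes (react-redux shallow-compares the
    // resulting props).
    const mapStateToProps = (state, ownProps) => ({
      item: state.itemsById[ownProps.itemId],
    });

    const ConnectedItem = connect(mapStateToProps)(Item);

    // Usage inside the list: <ConnectedItem itemId={id} />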

Also notice that identity comparisons are very fast and you can easily do a lot of them on every change. Remember Angular's dirty checking: some people did manage to build real apps with that! And identity comparison is much faster.


Understanding your problem

I'm not sure I understand your problem perfectly, but as far as I can tell you have views with something like 100k items in them, and you wonder whether you should connect all of those 100k items, because calling 100k listeners on every single change seems costly.

This problem seems inherent to the nature of doing functional programming with the UI: the list was updated, so you have to re-render the list, but unfortunately it is a very long list and that seems inefficient... With Backbone you could hack something to only render the child. Even if you render that child with React, you would trigger the rendering in an imperative way instead of just declaring "when the list changes, re-render it".


Solving your problem

Obviously connecting the 100k list items seems convenient but is not performant, because it means calling 100k react-redux listeners, even if they are fast.

Now if you connect the big list of 100k items instead of each item individually, you only call a single react-redux listener, and then have to render that list in an efficient way.


Naive solution

Iterating over the 100k items to render them, leading to 99,999 items whose shouldComponentUpdate returns false and a single one re-rendering:

    list.map(item => this.renderItem(item))
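
For illustration, renderItem would typically delegate to an item component that bails out with an identity check (hypothetical sketch):

    import React from 'react';

    class Item extends React.Component {
      shouldComponentUpdate(nextProps) {
        // Returns false for the 99,999 unchanged items
        return nextProps.item !== this.props.item;
      }
      render() {
        return <div>{this.props.item.text}</div>;
      }
    }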
    


Performant solution 1: custom connect + store enhancer

The connect method of react-redux is just a Higher-Order Component (HOC) that injects the data into the wrapped component (see https://medium.com/@dan_abramov/mixins-are-dead-long-live-higher-order-components-94a0d2f9e750). To do so, it registers a store.subscribe(...) listener for every connected component.

If you want to connect 100k items of a single list, it is a critical path of your app that is worth optimizing. Instead of using the default connect, you could build your own.

1. Store enhancer

Expose an additional method store.subscribeItem(itemId, listener).

Wrap dispatch so that whenever an action related to an item is dispatched, you call the registered listener(s) of that item.

A good source of inspiration for this implementation can be redux-batched-subscribe.
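A minimal sketch of such an enhancer, assuming (purely as a convention for this example) that item-related actions carry a meta.itemId field:

    function subscribeItemEnhancer(createStore) {
      return (reducer, preloadedState) => {
        const store = createStore(reducer, preloadedState);
        const itemListeners = {}; // itemId -> array of listeners

        function subscribeItem(itemId, listener) {
          (itemListeners[itemId] = itemListeners[itemId] || []).push(listener);
          return function unsubscribe() {
            itemListeners[itemId] = itemListeners[itemId].filter(l => l !== listener);
          };
        }

        function dispatch(action) {
          const result = store.dispatch(action);
          // Only notify the listeners registered for the item this action touches
          const itemId = action.meta && action.meta.itemId;
          if (itemId && itemListeners[itemId]) {
            itemListeners[itemId].forEach(listener => listener());
          }
          return result;
        }

        return Object.assign({}, store, { dispatch, subscribeItem });
      };
    }

    // Usage: const store = createStore(reducer, initialState, subscribeItemEnhancer);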

2. Custom connect

Create a Higher-Order Component with an API like:

    Item = connectItem(Item)
    

The HOC can expect an itemId property. It can use the enhanced Redux store from the React context and then register its own listener: store.subscribeItem(itemId, callback). The source code of the original connect can serve as a base for inspiration.
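A rough sketch of such a connectItem HOC, assuming the enhanced store is exposed on the legacy React context as store (as the old react-redux Provider does) and the same hypothetical state.itemsById shape:

    import React from 'react';
    import PropTypes from 'prop-types';

    function connectItem(WrappedComponent) {
      class ConnectedItem extends React.Component {
        componentDidMount() {
          const { store } = this.context;
          // Only this item's listener fires, instead of one listener per store change
          this.unsubscribe = store.subscribeItem(this.props.itemId, () => this.forceUpdate());
        }
        componentWillUnmount() {
          this.unsubscribe();
        }
        render() {
          const { store } = this.context;
          const item = store.getState().itemsById[this.props.itemId];
          return <WrappedComponent {...this.props} item={item} />;
        }
      }
      ConnectedItem.contextTypes = { store: PropTypes.object.isRequired };
      return ConnectedItem;
    }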

3. The HOC will only trigger a re-rendering if the item changes

Related answer: http://stackoverflow.com/a/34991164/82609

Related react-redux issue: https://github.com/rackt/react-redux/issues/269


Performant solution 2: vector tries

A more performant approach would consider using a persistent data structure like a vector trie:

If you represent your 100k-item list as a trie, each intermediate node makes it possible to short-circuit the rendering sooner, which avoids a lot of shouldComponentUpdate calls in the children.
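A hand-rolled, two-level approximation of the idea (not the ImmutableJS-internals trick used in the fiddle below): keep the list as an array of fixed-size chunks and, on update, recreate only the chunk containing the changed item, so every other chunk short-circuits with a single identity check:

    import React from 'react';

    const CHUNK_SIZE = 1000;

    // Split a flat array into chunks once, e.g. when the list is loaded
    function chunkList(items) {
      const chunks = [];
      for (let i = 0; i < items.length; i += CHUNK_SIZE) {
        chunks.push(items.slice(i, i + CHUNK_SIZE));
      }
      return chunks;
    }

    // Replace the item at `index`, recreating only its chunk and the outer array
    function updateItem(chunks, index, value) {
      const c = Math.floor(index / CHUNK_SIZE);
      const next = chunks.slice();
      next[c] = chunks[c].slice();
      next[c][index % CHUNK_SIZE] = value;
      return next;
    }

    const Item = ({ item }) => <div>{item.text}</div>; // hypothetical item shape

    class Chunk extends React.Component {
      shouldComponentUpdate(nextProps) {
        // One identity check skips rendering this chunk's 1000 children
        return nextProps.items !== this.props.items;
      }
      render() {
        return <div>{this.props.items.map(item => <Item key={item.id} item={item} />)}</div>;
      }
    }

    class BigList extends React.Component {
      render() {
        // 100k items => only ~100 Chunk components are asked whether to update
        return <div>{this.props.chunks.map((items, i) => <Chunk key={i} items={items} />)}</div>;
      }
    }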

This technique can be used with ImmutableJS, and you can find some experiments I did with it here: React performance: rendering big list with PureRenderMixin (http://stackoverflow.com/questions/30976722/react-performance-rendering-big-list-with-purerendermixin). It has drawbacks, however: libs like ImmutableJS do not yet expose a public/stable API to do that (issue), and my solution pollutes the DOM with some useless intermediate <span> nodes (issue).

Here is a JsFiddle that demonstrates how an ImmutableJS list of 100k items can be rendered efficiently. The initial rendering is quite long (but I guess you don't initialize your app with 100k items!), but afterwards you can notice that each update only leads to a small number of shouldComponentUpdate calls. In my example I only update the first item every second, and you can see that even though the list has 100k items, it only requires something like 110 calls to shouldComponentUpdate, which is much more acceptable! :)

Edit: it seems ImmutableJS is not so good at preserving its immutable structure on some operations, like inserting/deleting items at a random index. Here is a JsFiddle that demonstrates the performance you can expect depending on the operation on the list. Surprisingly, if you want to append many items at the end of a large list, calling list.push(value) many times seems to preserve much more of the tree structure than calling list.concat(values).
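For reference, the two appending strategies mentioned above look like this (Immutable.js List API):

    import { List } from 'immutable';

    const base = List([1, 2, 3]);
    const extra = [4, 5, 6];

    // Appending one by one with push (reported above to preserve more of the tree structure)
    const viaPush = extra.reduce((list, value) => list.push(value), base);

    // Appending everything at once with concat
    const viaConcat = base.concat(extra);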

By the way, it is documented that the List is efficient when modifying the edges. I don't think this poor performance when adding/removing at a given index is related to my technique, but rather to the underlying ImmutableJS List implementation.

"Lists implement Deque, with efficient addition and removal from both the end (push, pop) and beginning (unshift, shift)."

