Firebase Android offline performance


Problem description


When storing approximately 5000 sub nodes under a single node, initialising Firebase becomes very slow when making use of the offline capabilities. It takes ~30 seconds before the first query is executed. Once initialised, executing subsequent queries (e.g. listing the first 25 sub nodes) takes less than a second.

I'm making use of the following properties to enable the offline capabilities:

    Firebase.getDefaultConfig().setPersistenceEnabled(true);
    firebase.keepSynced(true);

My structure looks like this:

<root>
 |-my-app-name
   |-<uid>
     |-node
       |-sub node 1
       |-...
       |-sub node 5000

Keep synced is set on the <uid> node. The sub nodes are presented in a RecyclerView. Preferably I would list all of them (instead of 25 per page), but I understand that this is not possible, since no Cursor-like mechanism (as Android provides for SQLite) is available for working with Firebase.
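For what it's worth, the legacy Firebase Android SDK does support key-based paging through `orderByKey().startAt(lastKey).limitToFirst(n)`, even without a `Cursor`. Below is a plain-Java sketch of that windowing logic over sorted keys, with no Firebase dependency; `pageAfter` is a hypothetical helper, and note that Firebase's real `startAt()` is inclusive, so production code typically requests one extra item and drops the duplicate, whereas this sketch uses an exclusive boundary for clarity.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.SortedMap;
import java.util.TreeMap;

public class KeyPaging {

    /**
     * Returns up to pageSize values whose keys come strictly after lastKey
     * (or from the start when lastKey is null), mirroring the semantics of
     * orderByKey().startAt(...).limitToFirst(n) with an exclusive boundary.
     */
    public static <V> List<V> pageAfter(TreeMap<String, V> nodes, String lastKey, int pageSize) {
        SortedMap<String, V> tail = (lastKey == null)
                ? nodes
                : nodes.tailMap(lastKey, false); // exclusive of the last seen key
        List<V> page = new ArrayList<>();
        for (V value : tail.values()) {
            if (page.size() == pageSize) {
                break;
            }
            page.add(value);
        }
        return page;
    }
}
```

Each call passes the last key of the previous page, so only one page of children is materialised at a time.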

Is this by design, and should I revise my data structure? Or can I reduce the initialisation time in another way?

I provided some logging below. As you can see, a lot of garbage collection is going on. Does Firebase evaluate the whole database when initializing?

Thanks! Niels

04-01 15:59:12.029 2222-2245/abcdef I/art: Background sticky concurrent mark sweep GC freed 43005(1717KB) AllocSpace objects, 0(0B) LOS objects, 4% free, 31MB/32MB, paused 5.674ms total 57.402ms
04-01 15:59:13.415 2222-2240/abcdef W/art: Suspending all threads took: 6.600ms
04-01 15:59:13.424 2222-2245/abcdef W/art: Suspending all threads took: 9.339ms
04-01 15:59:13.433 2222-2245/abcdef I/art: Background sticky concurrent mark sweep GC freed 7097(281KB) AllocSpace objects, 0(0B) LOS objects, 0% free, 32MB/32MB, paused 11.175ms total 27.105ms
04-01 15:59:13.821 2222-2245/abcdef I/art: Background partial concurrent mark sweep GC freed 101674(5MB) AllocSpace objects, 18(530KB) LOS objects, 35% free, 28MB/44MB, paused 3.400ms total 152.664ms
04-01 15:59:15.107 2222-2245/abcdef I/art: Background sticky concurrent mark sweep GC freed 394024(15MB) AllocSpace objects, 0(0B) LOS objects, 20% free, 30MB/38MB, paused 1.865ms total 152.182ms
04-01 15:59:15.817 2222-2245/abcdef I/art: Background sticky concurrent mark sweep GC freed 218328(8MB) AllocSpace objects, 0(0B) LOS objects, 19% free, 31MB/38MB, paused 1.711ms total 112.325ms
04-01 15:59:16.451 2222-2240/abcdef W/art: Suspending all threads took: 27.786ms
04-01 15:59:16.465 2222-2245/abcdef I/art: Background sticky concurrent mark sweep GC freed 190591(7MB) AllocSpace objects, 0(0B) LOS objects, 18% free, 31MB/38MB, paused 1.832ms total 107.416ms
04-01 15:59:16.472 2222-2245/abcdef W/art: Suspending all threads took: 6.823ms
04-01 15:59:17.084 2222-2245/abcdef I/art: Background sticky concurrent mark sweep GC freed 178714(6MB) AllocSpace objects, 0(0B) LOS objects, 15% free, 32MB/38MB, paused 1.717ms total 105.529ms
04-01 15:59:17.629 2222-2245/abcdef I/art: Background sticky concurrent mark sweep GC freed 163584(6MB) AllocSpace objects, 0(0B) LOS objects, 14% free, 33MB/38MB, paused 1.743ms total 110.764ms
04-01 15:59:18.941 2222-2240/abcdef W/art: Suspending all threads took: 5.078ms
04-01 15:59:19.691 2222-2245/abcdef I/art: Background sticky concurrent mark sweep GC freed 95627(3MB) AllocSpace objects, 0(0B) LOS objects, 8% free, 35MB/38MB, paused 7.190ms total 86.171ms
04-01 15:59:19.961 2222-2240/abcdef W/art: Suspending all threads took: 18.208ms
04-01 15:59:20.965 2222-2245/abcdef W/art: Suspending all threads took: 5.254ms
04-01 15:59:20.990 2222-2245/abcdef I/art: Background sticky concurrent mark sweep GC freed 55899(2MB) AllocSpace objects, 0(0B) LOS objects, 5% free, 36MB/38MB, paused 6.799ms total 66.923ms
04-01 15:59:22.495 2222-2240/abcdef W/art: Suspending all threads took: 45.180ms
04-01 15:59:22.509 2222-2245/abcdef W/art: Suspending all threads took: 14.254ms
04-01 15:59:22.562 2222-2245/abcdef I/art: Background partial concurrent mark sweep GC freed 198174(6MB) AllocSpace objects, 3(487KB) LOS objects, 32% free, 33MB/49MB, paused 16.949ms total 215.369ms
04-01 15:59:23.811 2222-2245/abcdef I/art: Background sticky concurrent mark sweep GC freed 392437(15MB) AllocSpace objects, 0(0B) LOS objects, 18% free, 35MB/43MB, paused 1.936ms total 168.222ms
04-01 15:59:24.480 2222-2240/abcdef W/art: Suspending all threads took: 22.464ms
04-01 15:59:24.497 2222-2245/abcdef I/art: Background sticky concurrent mark sweep GC freed 227043(8MB) AllocSpace objects, 0(0B) LOS objects, 18% free, 35MB/43MB, paused 1.723ms total 117.855ms
04-01 15:59:25.173 2222-2245/abcdef I/art: Background sticky concurrent mark sweep GC freed 203910(7MB) AllocSpace objects, 0(0B) LOS objects, 16% free, 36MB/43MB, paused 1.694ms total 112.618ms
04-01 15:59:25.181 2222-2245/abcdef W/art: Suspending all threads took: 7.301ms
04-01 15:59:25.784 2222-2245/abcdef I/art: Background sticky concurrent mark sweep GC freed 185627(7MB) AllocSpace objects, 0(0B) LOS objects, 14% free, 37MB/43MB, paused 1.719ms total 115.362ms
04-01 15:59:26.345 2222-2245/abcdef I/art: Background sticky concurrent mark sweep GC freed 167066(6MB) AllocSpace objects, 0(0B) LOS objects, 13% free, 37MB/43MB, paused 1.651ms total 106.055ms
04-01 15:59:26.865 2222-2245/abcdef I/art: Background sticky concurrent mark sweep GC freed 154535(6MB) AllocSpace objects, 0(0B) LOS objects, 11% free, 38MB/43MB, paused 1.644ms total 104.888ms
04-01 15:59:28.357 2222-2245/abcdef I/art: Background sticky concurrent mark sweep GC freed 151375(5MB) AllocSpace objects, 33(671KB) LOS objects, 9% free, 39MB/43MB, paused 2.740ms total 104.176ms
04-01 15:59:29.006 2222-2240/abcdef W/art: Suspending all threads took: 19.232ms
04-01 15:59:29.060 2222-2245/abcdef I/art: Background sticky concurrent mark sweep GC freed 133554(5MB) AllocSpace objects, 29(580KB) LOS objects, 10% free, 39MB/43MB, paused 1.563ms total 100.220ms
04-01 15:59:30.173 2222-2245/abcdef I/art: Background sticky concurrent mark sweep GC freed 131062(4MB) AllocSpace objects, 31(637KB) LOS objects, 9% free, 39MB/43MB, paused 1.653ms total 102.705ms
04-01 15:59:31.245 2222-2245/abcdef I/art: Background sticky concurrent mark sweep GC freed 122085(4MB) AllocSpace objects, 26(522KB) LOS objects, 8% free, 39MB/43MB, paused 2.380ms total 100.776ms
04-01 15:59:32.024 2222-2240/abcdef W/art: Suspending all threads took: 20.662ms

PS: This is a cross post: https://groups.google.com/forum/#!topic/firebase-talk/migEAwv26ns

Solution

So sharding the data so that one root node contains at most 200 sub nodes seems to be the answer for now. I'm setting .keepSynced(true) on the shards, and this results in much better performance.
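The answer doesn't show how children are assigned to shards. One simple scheme, sketched below under assumed naming (the `shard-<i>` path segments are hypothetical, only the 200-children-per-shard limit comes from the answer), assigns the i-th child to shard i / 200:

```java
public class Sharding {

    // Maximum number of children per shard, per the answer above.
    static final int SHARD_SIZE = 200;

    /** Returns the database path of the shard that the childIndex-th item belongs to. */
    public static String shardPath(String uid, int childIndex) {
        int shard = childIndex / SHARD_SIZE;
        return "my-app-name/" + uid + "/node/shard-" + shard;
    }
}
```

With this layout, .keepSynced(true) can be enabled per shard, so only the shards actually displayed need to be kept in the local cache.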

In order to present the sharded list in a single recycler view, I created a class FirebaseArrays which is a collection of FirebaseArray that aggregates multiple arrays into a single observable collection.
https://gist.github.com/ndefeijter/2191f8a43ce903c5d9ea69f02c7ee7e9
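The gist is not reproduced here, but the core of such an aggregation is translating a global position into a (backing array, local index) pair. A dependency-free sketch of that idea (hypothetical code, not the gist's actual implementation, with plain lists standing in for FirebaseArray instances):

```java
import java.util.ArrayList;
import java.util.List;

/** Aggregates several backing lists into one logical list, as FirebaseArrays does for FirebaseArray instances. */
public class ListAggregator<T> {

    private final List<List<T>> backing = new ArrayList<>();

    /** Appends another backing list (analogous to adding a shard's FirebaseArray). */
    public void addBacking(List<T> list) {
        backing.add(list);
    }

    /** Total number of elements across all backing lists. */
    public int size() {
        int total = 0;
        for (List<T> list : backing) {
            total += list.size();
        }
        return total;
    }

    /** Translates a global index into the element of the backing list that holds it. */
    public T get(int index) {
        for (List<T> list : backing) {
            if (index < list.size()) {
                return list.get(index);
            }
            index -= list.size();
        }
        throw new IndexOutOfBoundsException("index " + index + " out of range");
    }
}
```

The real class additionally forwards each array's change events, offsetting their positions by the sizes of the preceding arrays, so the adapter observes one continuous collection.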

I also adapted the FirebaseRecyclerAdapter to use a FirebaseArrays as underlying data structure instead of a single FirebaseArray. The interface is extended using some methods to add additional Firebase paths (i.e. shards).
https://gist.github.com/ndefeijter/d98eb5f643b5faf5476b8a611de912c1

These paths are added upon a 'load more' event (e.g. in the case of endless scrolling).

private void loadMore() {
    final View view = getView();
    if (null != view) {
        final RecyclerView recyclerView = (RecyclerView) view.findViewById(R.id.recycler_view);
        final FirebaseRecyclerAdapter2<Visit, VisitViewHolder> adapter =
                (FirebaseRecyclerAdapter2<Visit, VisitViewHolder>) recyclerView.getAdapter();
        // Feed the query for the next shard to the adapter.
        adapter.addQuery(nextQuery());
    }
}
