'Point-along-path' d3 visualization performance issue


Problem Description


I have gone through the 'point-along-path' d3 visualization code here: https://bl.ocks.org/mbostock/1705868. I have noticed that while the point is moving along its path it consumes 7 to 11% of CPU usage.

In my current scenario, I have around 100 paths, and on each path I have to move points (circles) from source to destination. So it consumes more than 90% of the CPU as more points are moving at the same time.

Here is what I have tried:

    function translateAlong(path) {
        var l = path.getTotalLength();
        return function(d, i, a) {
            return function(t) {
                var p = path.getPointAtLength(t * l);
                return "translate(" + p.x + "," + p.y + ")";
            };
        };
    }

    // On each path (100 paths), we are moving circles from source to destination.
    var movingCircle = mapNode.append('circle')
        .attr('r', 2)
        .attr('fill', 'white');

    movingCircle.transition()
        .duration(10000)
        .ease("linear")
        .attrTween("transform", translateAlong(path.node()))
        .each("end", function() {
            this.remove();
        });

So what would be a better way to reduce the CPU usage? Thanks.

Solution

There are a few approaches to this, which vary greatly in potential efficacy.

Ultimately, you are conducting expensive operations every animation frame to calculate each point's new location and to re-render it. So, every effort should be made to reduce the cost of those operations.

If frame rate is dropping below 60, it probably means we're nearing CPU capacity. I've used frame rate below to help indicate CPU capacity as it is more easily measured than CPU usage (and probably less invasive).

I had all sorts of charts and theory for this approach, but once typed it seemed like it should be intuitive and I didn't want to dwell on it.

Essentially the goal is to maximize how many transitions I can show at 60 frames per second - this way I can scale back the number of transitions and gain CPU capacity.


Ok, let's get some transitions running with more than 100 nodes along more than 100 paths at 60 frames per second.

D3v4

First, d3v4 likely offers some benefits here. v4 synchronizes transitions, which appears to have slightly improved times. d3.transition is very effective and low cost in any event, so this isn't the most useful change - but upgrading isn't a bad idea.

There are also minor browser specific gains to be had by using different shaped nodes, positioning by transform or by cx,cy etc. I didn't implement any of those because the gains are relatively trivial.

Canvas

Second, SVG just can't move fast enough. Manipulating the DOM takes time, and additional elements slow down operations and take up more memory. I realize canvas can be less convenient from a coding perspective, but canvas is faster than SVG for this sort of task. Use detached circle elements to represent each node (the same as with the paths), and transition these.

Save more time by drawing two canvases: one to draw once and to hold the paths (if needed) and another to be redrawn each frame showing the points. Save further time by setting the datum of each circle to the length of the path it is on: no need to call path.getTotalLength() each time.
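The length-caching idea can be sketched in isolation. Since getTotalLength() is a DOM API, the example below uses a hypothetical stub object in place of a real <path> node; in a browser you would pass path.node() and call the real method:

```javascript
// Sketch of caching path lengths up front instead of querying them per frame.
// `makeFakePath` is a hypothetical stand-in for a real SVG <path> node.
function makeFakePath(length) {
  var calls = 0;
  return {
    getTotalLength: function () { calls++; return length; },
    callCount: function () { return calls; }
  };
}

// Build per-node data once, caching each path's total length as the datum.
function buildNodeData(paths) {
  return paths.map(function (p) {
    return { path: p, length: p.getTotalLength(), t: 0 };
  });
}

// The per-frame update touches only the cached length.
function tick(nodes, dt, duration) {
  nodes.forEach(function (n) {
    n.t = Math.min(1, n.t + dt / duration);
    n.distance = n.t * n.length; // feed this to getPointAtLength in real code
  });
}

var paths = [makeFakePath(100), makeFakePath(250)];
var nodes = buildNodeData(paths);
for (var frame = 0; frame < 64; frame++) tick(nodes, 1, 64);
console.log(paths[0].callCount()); // 1 - queried once, not once per frame
console.log(nodes[0].t);           // 1 - transition complete
```

The point is simply that getTotalLength() runs once per path at setup time, never inside the animation loop.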

Maybe something like this

Canvas Simplified Lines

Third, we still have detached nodes holding SVG paths so we can use path.getPointAtLength() - and this is actually pretty effective. A major factor slowing this down, though, is the use of curved lines. If you can, draw straight lines (multiple segments are fine) - the difference is substantial.

As a further bonus, use context.fillRect() instead of context.arc()
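For straight multi-segment lines you can go a step further and skip the DOM entirely: a point along a polyline is plain linear interpolation. A minimal sketch (pointAlongPolyline is a hypothetical helper of my own, not a d3 API):

```javascript
// Locate the point a fraction t (0..1) along a polyline of [x, y] vertices.
// This replaces path.getPointAtLength(t * totalLength) for straight segments.
function pointAlongPolyline(points, t) {
  // Compute segment lengths and the total length.
  var lengths = [], total = 0;
  for (var i = 1; i < points.length; i++) {
    var dx = points[i][0] - points[i - 1][0];
    var dy = points[i][1] - points[i - 1][1];
    var len = Math.sqrt(dx * dx + dy * dy);
    lengths.push(len);
    total += len;
  }
  var target = t * total;
  // Walk the segments until we reach the one containing the target distance.
  for (var j = 0; j < lengths.length; j++) {
    if (target <= lengths[j] || j === lengths.length - 1) {
      var f = lengths[j] === 0 ? 0 : target / lengths[j];
      return [
        points[j][0] + f * (points[j + 1][0] - points[j][0]),
        points[j][1] + f * (points[j + 1][1] - points[j][1])
      ];
    }
    target -= lengths[j];
  }
}

// Halfway along an L-shaped path of two 10-unit segments:
console.log(pointAlongPolyline([[0, 0], [10, 0], [10, 10]], 0.5)); // [10, 0]
```

In real code you would compute the segment lengths once per path and cache them with the node's datum, rather than on every call.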

Pure JS and Canvas

Lastly, D3 and the detached nodes for each path (kept so we can use path.getTotalLength()) can start to get in the way. If need be, leave them behind using typed arrays, context.imageData, and your own formula for positioning nodes on paths. Here's a quick bare-bones example (100,000 nodes; 500,000 nodes; 1,000,000 nodes). Chrome handles this best, given possible browser limitations. Since the paths now essentially color the entire canvas a solid color, I don't show them, but the nodes still follow them. These can transition 700,000 nodes at 10 frames per second on my slow system. Compare those 7 million transition positioning calculations and renderings per second against the roughly 7 thousand per second I got with d3v3 and SVG - three orders of magnitude difference:
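The typed-array idea, stripped of the canvas drawing, looks roughly like this (the field layout and names are my own illustration, and the trivial horizontal path stands in for a real path formula):

```javascript
// One Float32Array holds all node state: [x0, y0, t0, x1, y1, t1, ...].
// Avoiding per-node objects keeps memory flat and iteration cache-friendly.
var STRIDE = 3, N = 1000;
var nodes = new Float32Array(N * STRIDE);

// Initialize: stagger each node's progress t along its path.
for (var i = 0; i < N; i++) {
  nodes[i * STRIDE + 2] = i / N; // t
}

// Per-frame update: advance t, derive x/y from your own path formula.
// Here a toy horizontal path of width 500 stands in for real paths.
function step(nodes, dt) {
  for (var i = 0; i < nodes.length; i += STRIDE) {
    var t = (nodes[i + 2] + dt) % 1;  // wrap around at the end of the path
    nodes[i + 2] = t;
    nodes[i] = t * 500;               // x along the path
    nodes[i + 1] = 100;               // y (fixed for this toy path)
    // In a real script you would draw here, e.g. with
    // context.fillRect(x, y, 2, 2) or by writing into an ImageData buffer.
  }
}

step(nodes, 0.01);
console.log(nodes[0]); // 5 - x of node 0 after one step
```

No objects are allocated per frame, so the garbage collector stays quiet during the animation.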

canvas A is with curved lines (cardinal) and circle markers (link above), canvas B is with straight (multi-segment) lines and square markers.

As you might imagine, a machine and script that can render 1000 transitioning nodes at 60 frames per second will have a fair bit of extra capacity if only rendering 100 nodes.

If the transition position and rendering calculations are the primary activity and CPU usage is at 100%, then half the nodes should free up roughly half the CPU capacity. In the slowest canvas example above, my machine logged 200 nodes transitioning along cardinal curves at 60 frames per second (it then started to drop off, indicating that CPU capacity was limiting frame rate and consequently usage should be near 100%), with 100 nodes we have a pleasant ~50% CPU usage:

Horizontal centerline is 50% CPU usage, transition repeated 6 times

But the key savings are to be found in dropping complex cardinal curves - if possible, use straight lines. The other key savings come from customizing your scripts to be purpose-built.

Compare the above with straight lines (multi segment still) and square nodes:

Again, horizontal centerline is 50% CPU usage, transition repeated 6 times

The above is 1000 transitioning nodes on 1000 three-segment paths - more than an order of magnitude better than with curved lines and circular markers.

Other Options

These can be combined with methods above.

Don't animate every point each tick

If you can't position all nodes each transition tick before the next animation frame, you'll be using close to all of your CPU capacity. One option is to not position every node on every tick - you don't have to. This is a more complex solution, but if you position one third of the circles each tick, each circle is still positioned 20 times per second (pretty smooth) while the amount of calculation per frame is one third of what it would be otherwise. For canvas you still have to render every node - but you can skip calculating the position of two thirds of them. For SVG this is a bit easier, as you could modify d3-transition to include an every() method that sets how many ticks pass before transition values are re-calculated (so that one third are transitioned each tick).
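The one-third-per-tick bookkeeping can be sketched on its own (the names are mine; a real version would plug into d3-timer or a modified d3-transition):

```javascript
// Update only every third node per tick; each node is still repositioned
// 20 times per second if the timer runs at 60 ticks per second.
var GROUPS = 3;

function staggeredTick(nodes, tickCount, update) {
  var group = tickCount % GROUPS;
  var updated = 0;
  for (var i = 0; i < nodes.length; i++) {
    if (i % GROUPS === group) {
      update(nodes[i]);
      updated++;
    }
    // For canvas you would still *render* every node here -
    // only the position recalculation is skipped for two thirds.
  }
  return updated;
}

var nodes = [];
for (var i = 0; i < 9; i++) nodes.push({ moves: 0 });

// Run 6 ticks: each tick repositions 3 of the 9 nodes,
// and every node ends up repositioned exactly twice.
for (var tick = 0; tick < 6; tick++) {
  staggeredTick(nodes, tick, function (n) { n.moves++; });
}
console.log(nodes[0].moves); // 2
```

The per-frame cost of position math drops to a third, at the price of each individual node animating at 20 updates per second instead of 60.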

Caching

Depending on circumstance, caching is also not a bad idea - but the front-ending of all calculations (or loading of data) may lead to unnecessary delays in the commencement of animation - or slowness on first run. This approach did lead to positive outcomes for me, but is discussed in another answer so I won't go into it here.
