d3 — Progressively draw a large dataset


Question



I'm using d3.js to plot the contents of an 80,000-row .tsv onto a chart.

The problem I'm having is that, with so much data, the page becomes unresponsive for approximately 5 seconds while the entire dataset is churned through at once.

Is there an easy way to process the data progressively, spreading the work over a longer period of time? Ideally the page would remain responsive, and the data would be plotted as it became available, instead of in one big hit at the end.

Solution

I think you'll have to chunk your data and display it in groups using setInterval or setTimeout. This gives the browser's UI thread a chance to jump in between chunks and keep the page responsive.

The basic approach is:

1) chunk the dataset (a sketch of a chunking helper follows below)
2) render each chunk separately
3) keep track of each rendered group
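The snippet below calls a chunkArray helper that the answer never defines. A minimal sketch, assuming it simply slices the array into fixed-size pieces (only the name and call signature come from the answer):

// Assumed implementation of the chunkArray helper used below.
// Splits `data` into consecutive slices of at most `size` elements.
function chunkArray(data, size) {
    var chunks = [];
    for (var i = 0; i < data.length; i += size) {
        chunks.push(data.slice(i, i + size));
    }
    return chunks;
}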

Here's an example:

var dataPool = chunkArray(data, 100);  // split the data into 100-element chunks
var poolPosition = 0;                  // index of the next chunk to draw

function updateVisualization() {
    // Draw one chunk into its own fresh group; the data join
    // only ever has to process 100 elements at a time.
    var group = canvas.append("g").selectAll("circle")
        .data(dataPool[poolPosition])
        .enter()
        .append("circle");
        /* ... presentation stuff .... */

    poolPosition++;
    if (poolPosition >= dataPool.length) {
        clearInterval(iterator);       // stop once every chunk is drawn
    }
}

var iterator = setInterval(updateVisualization, 100);

You can see a demo fiddle of this -- done before I had coffee -- here:

http://jsfiddle.net/thudfactor/R42uQ/

Note that I'm making a new group, with its own data join, for each array chunk. If you keep adding to the same data join over time (.data(oldData.concat(nextChunk))), the entire dataset still gets processed and compared even if you're only using the enter() selection, so it doesn't take long for things to start crawling.
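For contrast, here is a sketch of the growing-join pattern that note warns against (my illustration, not code from the answer; it reuses the canvas, dataPool, and poolPosition names from above):

// Anti-pattern (illustration only): a single selection whose bound
// data grows on every tick. D3 re-joins the ENTIRE accumulated array
// each call, so every tick is slower than the last.
var accumulated = [];
function updateSlowly() {
    accumulated = accumulated.concat(dataPool[poolPosition]);
    canvas.selectAll("circle")
        .data(accumulated)    // re-processes everything drawn so far
        .enter()
        .append("circle");
    poolPosition++;
}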
