How can I load a very large dictionary with JavaScript without freezing the DOM?


Problem Description

I wrote a spell-checking script that utilizes a long list of words. I formatted this large list of words as a JavaScript object, and I load the list as a script. So when it's loaded, that very big object is parsed.

dictionary.js

var dictionary = {
    "apple": "apple",
    "banana": "banana",
    "orange": "orange"
};

Formatted this way for instant word validity checking:

if (dictionary["apple"]){
    //word is valid
}
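One caveat with a bare truthiness check: a plain object also exposes keys inherited from `Object.prototype`, so strings like "constructor" or "toString" would wrongly pass as valid words. A safer lookup might use `hasOwnProperty` (a sketch; the `isWord` helper name is illustrative, not from the question):

```javascript
var dictionary = {
    "apple": "apple",
    "banana": "banana",
    "orange": "orange"
};

// A bare `if (dictionary[word])` also matches keys inherited from
// Object.prototype ("constructor", "toString", ...), which would make
// those strings validate as words. hasOwnProperty avoids that.
function isWord(word) {
    return Object.prototype.hasOwnProperty.call(dictionary, word);
}
```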

The problem is that this giant object being parsed causes a significant DOM freeze.

How can I load my data structure in a way so that I can parse it piece by piece? How can I store/load my data structure so that the DOM can handle it without freezing?

Recommended Answer

Write your JS file in the form

var words = JSON.parse('{"aardvark": "aardvark", ...}');

JSON parse will be several orders of magnitude faster than the JS parser.
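The word list can be converted into this JSON.parse-wrapped form as a build step. A minimal sketch under that assumption (the `buildDictionarySource` helper is illustrative, not part of the answer):

```javascript
// Sketch of a build step: turn a word list into the
// JSON.parse-wrapped dictionary.js source recommended above.
function buildDictionarySource(words) {
    var entries = {};
    for (var i = 0; i < words.length; i++) {
        entries[words[i]] = words[i];
    }
    // Stringify twice: once to produce the JSON text, and once more
    // to escape that text as a JavaScript string literal.
    var json = JSON.stringify(entries);
    return "var dictionary = JSON.parse(" + JSON.stringify(json) + ");";
}
```

Writing the returned string to dictionary.js yields a file in exactly the form shown above.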

The actual lookup will be about 0.01ms by my measurement.

There are several aspects to consider when thinking about performance in this situation, including download bandwidth, parsing, preprocessing or building if needed, memory, and retrieval. In this case, all other performance issues are overwhelmed by the JS parsing time, which could be up to 10 seconds for the 120K entry hash. If downloading from a server, the 2.5MB size could be an issue, but this will zip down to 20% or so of the original size. In terms of retrieval performance, JS hashes are already optimized for fast retrieval; the actual retrieval time might be less than 0.01ms, especially for subsequent accesses to the same key. In terms of memory, there seem to be few ways to optimize this, but most browsers could hold an object this size without breaking a sweat.
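If even the single JSON.parse stall is too long, the "piece by piece" loading the asker mentions could be approximated by splitting the dictionary into several smaller JSON shards and parsing one per event-loop turn. This is a sketch of one assumed approach, not something taken from the answer:

```javascript
// Parse the dictionary in chunks, yielding to the event loop between
// shards so the browser can repaint and handle input instead of freezing.
function loadDictionaryInChunks(jsonShards, done) {
    var dictionary = {};
    var i = 0;
    function next() {
        if (i >= jsonShards.length) {
            done(dictionary);
            return;
        }
        Object.assign(dictionary, JSON.parse(jsonShards[i++]));
        setTimeout(next, 0); // give the browser a turn between shards
    }
    next();
}
```

Each shard would be a separate JSON file (or a slice of one response); the trade-off is that the full dictionary becomes available slightly later.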

The Patricia trie approach is interesting, and addresses mainly the memory usage and download bandwidth issues, but they do not seem in this case to be the main problem areas.
