Building a web search engine


Question

I've always been interested in developing a web search engine. What's a good place to start? I've heard of Lucene, but I'm not a big Java guy. Any other good resources or open source projects?

I understand it's a huge under-taking, but that's part of the appeal. I'm not looking to create the next Google, just something I can use to search a sub-set of sites that I might be interested in.

Answer

There are several parts to a search engine. Broadly speaking, in a hopelessly general manner (folks, feel free to edit if you feel you can add better descriptions, links, etc):

  1. The crawler. This is the part that goes through the web, grabs the pages, and stores information about them into some central data store. In addition to the text itself, you will want things like the time you accessed it, etc. The crawler needs to be smart enough to know how often to hit certain domains, to obey the robots.txt convention, etc.
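To make this concrete, here is a minimal sketch of the politeness machinery in Python (standard library only). The `PoliteCrawler` class and its one-second default delay are illustrative choices, not part of any particular project; it shows the two behaviors mentioned above: per-domain rate limiting and checking robots.txt before fetching.

```python
import time
import urllib.robotparser
from urllib.parse import urlparse
from urllib.request import urlopen

class PoliteCrawler:
    """Sketch of a crawler that obeys robots.txt and rate-limits per domain."""

    def __init__(self, delay=1.0):
        self.delay = delay   # minimum seconds between hits to one domain
        self.last_hit = {}   # domain -> timestamp of the last request
        self.robots = {}     # domain -> cached RobotFileParser

    def allowed(self, url):
        """Check the domain's robots.txt (fetched once and cached)."""
        domain = urlparse(url).netloc
        if domain not in self.robots:
            rp = urllib.robotparser.RobotFileParser()
            rp.set_url(f"http://{domain}/robots.txt")
            try:
                rp.read()
            except OSError:
                pass  # robots.txt unreachable: parser defaults to allow
            self.robots[domain] = rp
        return self.robots[domain].can_fetch("*", url)

    def fetch(self, url):
        """Fetch one page, waiting if we hit this domain too recently."""
        if not self.allowed(url):
            return None
        domain = urlparse(url).netloc
        wait = self.delay - (time.time() - self.last_hit.get(domain, 0.0))
        if wait > 0:
            time.sleep(wait)  # be polite: don't hammer the same domain
        self.last_hit[domain] = time.time()
        with urlopen(url) as resp:
            # store the access time alongside the body, as noted above
            return {"url": url, "fetched_at": time.time(), "body": resp.read()}
```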

  2. The parser. This reads the data fetched by the crawler, parses it, saves whatever metadata it needs to, throws away junk, and possibly makes suggestions to the crawler on what to fetch next time around.
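A toy version of that parse step, using Python's built-in `html.parser` (names like `PageParser` are made up for illustration): it keeps the visible text, discards `<script>`/`<style>` junk, and collects outgoing links, which double as crawl suggestions for the next round.

```python
from html.parser import HTMLParser

class PageParser(HTMLParser):
    """Sketch of the parse step: extract visible text and outgoing links."""

    def __init__(self):
        super().__init__()
        self.text = []
        self.links = []
        self._skip = 0  # depth inside <script>/<style>, whose text is junk

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1
        elif tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)  # candidate URLs for the crawler

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1

    def handle_data(self, data):
        if not self._skip and data.strip():
            self.text.append(data.strip())

def parse_page(html):
    p = PageParser()
    p.feed(html)
    return {"text": " ".join(p.text), "links": p.links}
```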

  3. The indexer. Reads the stuff the parser parsed, and creates inverted indexes into the terms found on the webpages. It can be as smart as you want it to be -- apply NLP techniques to make indexes of concepts, cross-link things, throw in synonyms, etc.
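An inverted index at its dumbest is just a map from term to the set of documents containing it. This sketch (whitespace tokenization, no stemming or synonyms, all assumptions for brevity) shows the core idea plus an AND-query over it:

```python
from collections import defaultdict

def build_index(pages):
    """Inverted index sketch: term -> set of doc ids containing that term.
    Real indexers also store positions and frequencies for the ranker."""
    index = defaultdict(set)
    for doc_id, text in pages.items():
        for term in text.lower().split():  # naive tokenizer, no stemming
            index[term].add(doc_id)
    return index

def search(index, query):
    """AND semantics: return docs that contain every query term."""
    terms = query.lower().split()
    if not terms:
        return set()
    result = set(index.get(terms[0], set()))
    for term in terms[1:]:
        result &= index.get(term, set())
    return result
```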

  4. The ranking engine. Given a few thousand URLs matching "apple", how do you decide which result is the best? The index alone doesn't give you that information. You need to analyze the text, the linking structure, and whatever other pieces you want to look at, and create some scores. This may be done completely on the fly (that's really hard), or based on some pre-computed notions of "experts" (see PageRank, etc).
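Since PageRank is mentioned, here is a bare-bones power-iteration version of it (damping factor 0.85 and 50 iterations are conventional defaults, not tuned values). It takes the link graph the parser extracted and turns it into a pre-computed per-page score:

```python
def pagerank(links, damping=0.85, iters=50):
    """PageRank sketch via power iteration. `links` maps each page to the
    list of pages it links to. Dangling pages spread their rank uniformly."""
    pages = set(links) | {p for outs in links.values() for p in outs}
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1 - damping) / n for p in pages}  # random-jump baseline
        for p in pages:
            outs = links.get(p, [])
            if outs:
                share = damping * rank[p] / len(outs)
                for q in outs:
                    new[q] += share  # each outlink passes an equal share
            else:
                for q in pages:  # dangling page: no outlinks to follow
                    new[q] += damping * rank[p] / n
        rank = new
    return rank
```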

  5. The front end. Something needs to receive user queries, hit the central engine, and respond; this something needs to be smart about caching results, possibly mixing in results from other sources, etc. It has its own set of problems.
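The caching concern is easy to sketch. `FrontEnd` below is a made-up name for illustration; the point is that queries get normalized before being used as a cache key, so repeated or trivially-different queries never hit the (expensive) central engine twice:

```python
class FrontEnd:
    """Front-end sketch: cache results keyed on the normalized query.
    `backend` is any callable query -> results (index + ranker in practice)."""

    def __init__(self, backend):
        self.backend = backend
        self.cache = {}
        self.backend_calls = 0  # instrumentation, handy for testing

    def search(self, query):
        key = " ".join(query.lower().split())  # normalize case and spacing
        if key not in self.cache:
            self.backend_calls += 1
            self.cache[key] = self.backend(key)
        return self.cache[key]
```

A real front end would also bound the cache (LRU), expire entries as the index updates, and merge in results from other sources.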

My advice -- choose which of these interests you the most, download Lucene or Xapian or any other open source project out there, pull out the bit that does one of the above tasks, and try to replace it. Hopefully, with something better :-).

Some links that may prove useful:

  * "Agile web-crawler", a paper from Estonia (in English)
  * Sphinx Search engine, an indexing and search API. Designed for large DBs, but modular and open-ended.
  * "Information Retrieval", a textbook about IR from Manning et al. Good overview of how the indexes are built, various issues that come up, as well as some discussion of crawling, etc. Free online version (for now)!
