Unable to verify crawled data stored in HBase


Problem Description


I have crawled a website using Nutch with HBase as the storage back-end, following this tutorial: http://wiki.apache.org/nutch/Nutch2Tutorial.

The Nutch version is 2.2.1, HBase is 0.90.4, and Solr is 4.7.1.

Here are the steps I used:

./runtime/local/bin/nutch inject urls

./runtime/local/bin/nutch generate -topN 100 -adddays 30

./runtime/local/bin/nutch fetch -all

./runtime/local/bin/nutch fetch -all

./runtime/local/bin/nutch updatedb

./runtime/local/bin/nutch solrindex http://localhost:8983/solr/ -all
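
To check what these steps have produced from Nutch's own side, here is a sketch from memory: I believe Nutch 2.x ships a readdb tool that reads the webpage table and can print statistics, but verify the exact options by running the command without arguments.

./runtime/local/bin/nutch readdb -stats

If the reported record count is zero or far lower than expected, the problem is in the crawl itself rather than in Solr indexing.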

My url/seed.txt file contains: http://www.xyzshoppingsite.com/mobiles/

I have kept only the line below in the regex-urlfilter.txt file (all other regexes are commented out):

+^http://([a-z0-9]*\.)*xyzshoppingsite.com/mobile/*

At the end of the crawl, I can see a table "webpage" created in HBase, but I am unable to verify whether all of the data has been crawled completely. Searching in Solr shows nothing: 0 results.
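
As a quick sanity check on the Solr side, here is a sketch assuming the default Solr 4.x example core name collection1 (adjust the path to whatever core solrindex actually targets); it queries the index document count directly over HTTP:

curl "http://localhost:8983/solr/collection1/select?q=*:*&rows=0&wt=json"

If numFound in the response is 0, nothing reached the index during the solrindex step.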

My ultimate intention is to get the complete data from all of the mobile pages under the above URL.

Could you please let me know:

  • How can I verify that the crawled data is present in HBase?

  • The Solr log directory contains 0 files, so I am unable to make any headway. How can I resolve this?

  • The output of the HBase command scan 'webpage' shows only timestamps and other data such as

    'value=\x0A\x0APlease Wait ... Redirecting to <a href="/mobiles"><b>http://www.xyzshoppingsite.com/mobiles</b></a>Please Wait ... Redirecting to <a href="/mobiles"><b>http://www.xyzshoppingsite.com/mobiles</b></a>'

Why has the data been crawled like this, and not the actual contents of the page after the redirect?
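
One way to see what the crawler is actually being served is to fetch the seed page manually and inspect the response; this is a sketch assuming curl is available, with the real shopping-site URL substituted in:

curl -sL "http://www.xyzshoppingsite.com/mobiles/" | head -n 20

Since -L only follows HTTP redirects, still seeing the "Please Wait ... Redirecting" placeholder here would suggest the site redirects via an HTML meta refresh or JavaScript, which would explain why the placeholder, rather than the product listing, ends up stored.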

Please help. Thanks and regards!

Solution

Instead of executing all those steps, can you use the command below?

./bin/crawl url/seed.txt shoppingcrawl http://localhost:8080/solr 2
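
For reference (going from memory of the Nutch 2.x distribution; confirm against the usage message printed by ./bin/crawl with no arguments), the arguments are positional:

./bin/crawl <seedDir> <crawlID> <solrURL> <numberOfRounds>

so the example above seeds from url/seed.txt, names the crawl shoppingcrawl, indexes into the Solr instance at http://localhost:8080/solr, and runs two crawl rounds.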

If it executes successfully, a table named shoppingcrawl_webpage will be created in HBase.

We can check by executing the following command in the HBase shell:

hbase> list

Then we can scan the specific table, in this case:

hbase> scan 'shoppingcrawl_webpage'
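
To get a quick sense of how much was actually stored, here is a sketch using standard HBase shell commands (the exact column families and qualifiers depend on Nutch's gora-hbase-mapping.xml, so they are not spelled out here): count the rows and limit the scan.

hbase> count 'shoppingcrawl_webpage'
hbase> scan 'shoppingcrawl_webpage', {LIMIT => 2}

count reports the number of rows (one per crawled URL), and the limited scan prints the full cells for just a couple of rows without flooding the terminal.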
