How do I scrape only the &lt;body&gt; tag off of a website


Problem description


I'm working on a webcrawler. At the moment I scrape the whole content and then, using regular expressions, I remove <meta>, <script>, <style> and other tags to get the content of the body.


However, I'm trying to optimise the performance and I was wondering if there's a way I could scrape only the <body> of the page?

using System;
using System.IO;
using System.Net;
using System.Text.RegularExpressions;

namespace WebScrapper
{
    public static class KrioScraper
    {
        public static string scrapeIt(string siteToScrape)
        {
            string HTML = getHTML(siteToScrape);
            string text = stripCode(HTML);
            return text;
        }

        public static string getHTML(string siteToScrape)
        {
            string response = "";
            HttpWebRequest objRequest =
                (HttpWebRequest) WebRequest.Create(siteToScrape);
            objRequest.UserAgent = "Mozilla/4.0 (compatible; MSIE 6.0; " +
                "Windows NT 5.1; .NET CLR 1.0.3705)";
            // Dispose the response as well as the reader so the connection is released
            using (HttpWebResponse objResponse =
                (HttpWebResponse) objRequest.GetResponse())
            using (StreamReader sr =
                new StreamReader(objResponse.GetResponseStream()))
            {
                response = sr.ReadToEnd();
            }
            return response;
        }

        public static string stripCode(string the_html)
        {
            // Remove google analytics code and other JS
            the_html = Regex.Replace(the_html, "<script.*?</script>", "", 
                RegexOptions.Singleline | RegexOptions.IgnoreCase);
            // Remove inline stylesheets
            the_html = Regex.Replace(the_html, "<style.*?</style>", "", 
                RegexOptions.Singleline | RegexOptions.IgnoreCase);
            // Remove HTML tags
            the_html = Regex.Replace(the_html, "</?[a-z][a-z0-9]*[^<>]*>", "");
            // Remove HTML comments
            the_html = Regex.Replace(the_html, "<!--(.|\\s)*?-->", "");
            // Remove Doctype
            the_html = Regex.Replace(the_html, "<!(.|\\s)*?>", "");
            // Remove excessive whitespace
            the_html = Regex.Replace(the_html, "[\t\r\n]", " ");

            return the_html;
        }
    }
}


From Page_Load I call the scrapeIt() method, passing to it the string that I get from a textbox on the page.

Answer


I think that your best option is to use a lightweight HTML parser (something like Majestic 12, which based on my tests is roughly 50-100% faster than HTML Agility Pack) and only process the nodes which you're interested in (anything between <body> and </body>). Majestic 12 is a little harder to use than HTML Agility Pack, but if you're looking for performance then it will definitely help you!


This will get you the closest to what you're asking for, but you will still have to download the entire page. I don't think there is a way around that. What you save on is actually generating the DOM nodes for all the other content (aside from the body). You will have to parse them, but you can skip the entire content of a node that you're not interested in processing.
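To make the "skip everything outside the body" idea concrete with the regex/string approach from the question: the cheapest version is to cut the <body>…</body> substring out of the downloaded HTML before any cleanup passes run, so they operate on a much smaller string. This is a minimal sketch of my own (the BodySlicer/ExtractBody names are hypothetical, not part of any library):

```csharp
using System;

public static class BodySlicer
{
    // Hypothetical helper: returns only the markup between <body ...> and
    // </body>, falling back to the full input if the tags cannot be found.
    public static string ExtractBody(string html)
    {
        int open = html.IndexOf("<body", StringComparison.OrdinalIgnoreCase);
        if (open < 0) return html;
        int openEnd = html.IndexOf('>', open);          // end of the <body ...> tag
        if (openEnd < 0) return html;
        int close = html.IndexOf("</body>", openEnd, StringComparison.OrdinalIgnoreCase);
        if (close < 0) return html.Substring(openEnd + 1);
        return html.Substring(openEnd + 1, close - openEnd - 1);
    }
}
```

With something like this, stripCode(ExtractBody(HTML)) skips everything in <head> without changing the rest of the pipeline. It will misbehave on pages that mention </body> inside scripts or comments, which is exactly why the parser-based approach is more robust.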

Here is a good example of how to use the M12 parser.


I don't have a ready example of how to grab the body, but I do have one of how to grab only the links, and with a little modification it will get there. Here is the rough version:

GrabBody(ParserTools.OpenM12Parser(_response.BodyBytes));


You need to open the M12 parser (the example project that comes with M12 has comments that detail exactly how all of these options affect performance, AND THEY DO!):

public static HTMLparser OpenM12Parser(byte[] buffer)
{
    HTMLparser parser = new HTMLparser();
    parser.SetChunkHashMode(false);
    parser.bKeepRawHTML = false;
    parser.bDecodeEntities = true;
    parser.bDecodeMiniEntities = true;

    if (!parser.bDecodeEntities && parser.bDecodeMiniEntities)
        parser.InitMiniEntities();

    parser.bAutoExtractBetweenTagsOnly = true;
    parser.bAutoKeepScripts = true;
    parser.bAutoMarkClosedTagsWithParamsAsOpen = true;
    parser.CleanUp();
    parser.Init(buffer);
    return parser;
}

Parse the body:

public void GrabBody(HTMLparser parser)
{

    // parser will return us tokens called HTMLchunk -- warning DO NOT destroy it until end of parsing
    // because HTMLparser re-uses this object
    HTMLchunk chunk = null;

    // we parse until returned oChunk is null indicating we reached end of parsing
    while ((chunk = parser.ParseNext()) != null)
    {
        switch (chunk.oType)
        {
            // matched open tag, ie <a href="">
            case HTMLchunkType.OpenTag:
                if (chunk.sTag == "body")
                {
                    // Start generating the DOM node (as shown in the previous example link)
                }
                break;

            // matched close tag, ie </a>
            case HTMLchunkType.CloseTag:
                break;

            // matched normal text
            case HTMLchunkType.Text:
                break;

            // matched HTML comment, that's stuff between <!-- and -->
            case HTMLchunkType.Comment:
                break;
        }
    }
}
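To show what the "// Start generating the DOM node" placeholder could do when all you want is the text inside <body>, here is a self-contained sketch of the same state machine. The ChunkType/Chunk types below are minimal stand-ins of my own for M12's HTMLchunkType/HTMLchunk (the real library uses fields like oType and sTag, as above), so this compiles and runs without the library:

```csharp
using System;
using System.Collections.Generic;
using System.Text;

// Minimal stand-ins for M12's HTMLchunkType / HTMLchunk, so the
// body-collecting state machine can be shown without the library.
enum ChunkType { OpenTag, CloseTag, Text, Comment }

class Chunk
{
    public ChunkType Type;
    public string Tag;   // lowercase tag name, set for OpenTag/CloseTag
    public string Html;  // raw content, set for Text chunks
}

static class BodyCollector
{
    // Accumulates only the Text chunks that occur between the <body>
    // open tag and the </body> close tag, ignoring everything else.
    public static string Collect(IEnumerable<Chunk> chunks)
    {
        bool insideBody = false;
        var sb = new StringBuilder();
        foreach (var chunk in chunks)
        {
            switch (chunk.Type)
            {
                case ChunkType.OpenTag:
                    if (chunk.Tag == "body") insideBody = true;
                    break;
                case ChunkType.CloseTag:
                    if (chunk.Tag == "body") insideBody = false;
                    break;
                case ChunkType.Text:
                    if (insideBody) sb.Append(chunk.Html);
                    break;
            }
        }
        return sb.ToString();
    }
}
```

The same flag-and-accumulate logic drops straight into the GrabBody loop above: set the flag on the "body" OpenTag case, clear it on the CloseTag case, and append chunk text only while the flag is set.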


Generating the DOM nodes is tricky, but the Majestic12ToXml class (from http://stackoverflow.com/questions/100358/looking-for-c-html-parser/624410#624410) will help you do that. Like I said, this is by no means equivalent to the 3-liner you saw with HTML Agility Pack, but once you get the tools down you will be able to get exactly what you need for a fraction of the performance cost and probably just as many lines of code.

