Extract Google Search Results


Question

I would like to periodically check what sub-domains are being listed by Google.

To obtain a list of sub-domains, I type 'site:example.com' in the Google search box - this lists all the sub-domain results (over 20 pages for our domain).

What is the best way to extract only the URLs of the addresses returned by the 'site:example.com' search?

I was thinking of writing a little Python script that will do the above search and regex the URLs from the search results (repeat on all result pages). Is this a good start? Could there be a better methodology?

Cheers.

Recommended Answer

Regex is a bad idea for parsing HTML. It's cryptic to read and relies on well-formed HTML.

Try BeautifulSoup for Python. Here's an example script that returns URLs from the first 10 pages of a site:domain.com Google query.

import sys # Used to add the BeautifulSoup folder to the import path
import urllib2 # Used to read the html document

if __name__ == "__main__":
    ### Import Beautiful Soup
    ### Here, the BeautifulSoup folder is at the same level as this Python script,
    ### so I need to tell Python where to look.
    sys.path.append("./BeautifulSoup")
    from BeautifulSoup import BeautifulSoup

    ### Create opener with Google-friendly user agent
    opener = urllib2.build_opener()
    opener.addheaders = [('User-agent', 'Mozilla/5.0')]

    ### Open page & generate soup
    ### the "start" variable will be used to iterate through 10 pages.
    for start in range(0,10):
        url = "http://www.google.com/search?q=site:stackoverflow.com&start=" + str(start*10)
        page = opener.open(url)
        soup = BeautifulSoup(page)

        ### Parse and find
        ### Looks like Google puts result URLs in <cite> tags.
        ### So for each cite tag on each page (10), print its contents (url)
        for cite in soup.findAll('cite'):
            print cite.text

Output:

stackoverflow.com/
stackoverflow.com/questions
stackoverflow.com/unanswered
stackoverflow.com/users
meta.stackoverflow.com/
blog.stackoverflow.com/
chat.meta.stackoverflow.com/
...
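
The script above targets Python 2 (urllib2 and the old BeautifulSoup 3 import). For reference, here is a rough Python 3 sketch of the same idea using the current bs4 package; it assumes Google still wraps result URLs in <cite> tags and doesn't block the request, both of which may change:

import urllib.request # Python 3 replacement for urllib2
from bs4 import BeautifulSoup # pip install beautifulsoup4

if __name__ == "__main__":
    ### Same loop: fetch 10 result pages with a browser-like user agent
    for start in range(0, 10):
        url = "https://www.google.com/search?q=site:stackoverflow.com&start=" + str(start * 10)
        req = urllib.request.Request(url, headers={"User-agent": "Mozilla/5.0"})
        page = urllib.request.urlopen(req)
        soup = BeautifulSoup(page, "html.parser")

        ### As before, print the contents of every <cite> tag
        for cite in soup.find_all("cite"):
            print(cite.get_text())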

Of course, you could append each result to a list so you can parse it for subdomains. I just got into Python and scraping a few days ago, but this should get you started.
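
For instance, a minimal sketch of that follow-up step (Python 3; the helper name extract_subdomains is just for illustration, and it assumes the scraped strings look like the output above, a bare host optionally followed by a path):

from urllib.parse import urlparse

def extract_subdomains(results):
    ### The scraped text has no scheme, so prepend one before parsing
    hosts = set()
    for result in results:
        hosts.add(urlparse("http://" + result).netloc)
    return sorted(hosts)

print(extract_subdomains(["stackoverflow.com/questions",
                          "meta.stackoverflow.com/",
                          "blog.stackoverflow.com/"]))
### prints: ['blog.stackoverflow.com', 'meta.stackoverflow.com', 'stackoverflow.com']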

