How to properly use mechanize to scrape AJAX sites

Question

So I am fairly new to web scraping. There is a site with a table on it whose values are controlled by JavaScript. Those values determine the addresses of the next pages the JavaScript tells my browser to request. These new pages come back as JSON responses, which the script uses to update the table in my browser.

So I wanted to build a class with a mechanize method that takes in a URL and spits out the body of the response: HTML the first time through, then JSON for the remaining iterations.

I have something that works but I want to know if I am doing it right or if there is a better way.

import mechanize
import cookielib
import gzip


class urlMaintain2:
    def __init__(self):
        self.first_append = 0
        self.response = ''

    def pageResponse(self, url):
        br = mechanize.Browser()

        # Cookie jar: once attached with set_cookiejar(), mechanize
        # sends and stores cookies automatically on every request.
        cj = cookielib.LWPCookieJar()
        br.set_cookiejar(cj)

        # Browser options
        br.set_handle_equiv(True)
        br.set_handle_gzip(False)
        br.set_handle_redirect(True)
        br.set_handle_referer(True)
        br.set_handle_robots(False)
        br.set_handle_refresh(mechanize._http.HTTPRefreshProcessor(), max_time=1)

        br.addheaders = [('User-agent', 'Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.2.16) Gecko/20110319 Firefox/3.6.16'),
                         ('Accept-Encoding', 'gzip')]
        if self.first_append == 1:
            # After the first (HTML) request, mimic the XHR calls the
            # page's JavaScript makes. addheaders expects (name, value)
            # tuples; the User-agent is already set above, so there is
            # no need to append it a second time. The original call
            # cj.add_cookie_header(br) was dropped: that method takes a
            # request object, not a browser, and the cookie jar attached
            # with set_cookiejar() already adds cookies to each request.
            br.addheaders.append(('Accept', 'application/json, text/javascript, */*'))
            br.addheaders.append(('Content-Type', 'application/x-www-form-urlencoded; charset=UTF-8'))
            br.addheaders.append(('X-Requested-With', 'XMLHttpRequest'))
            br.addheaders.append(('If-Modified-Since', 'Thu, 1 Jan 1970 00:00:00 GMT'))

        response = br.open(url)
        headers = response.info()

        # gzip handling is disabled above, so decompress manually when
        # the server honours the Accept-Encoding header. Using .get()
        # avoids a KeyError when the header is absent.
        if headers.get('Content-Encoding') == 'gzip':
            gz = gzip.GzipFile(fileobj=response, mode='rb')
            html = gz.read()
            gz.close()
            headers['Content-type'] = 'text/html; charset=utf-8'
            response.set_data(html)
        br.close()
        return response
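
For context, here is a minimal usage sketch of the class above; the URLs and the parsing step are placeholders invented for illustration, not taken from the original question:

maintainer = urlMaintain2()

# First request: the plain HTML page that seeds the table.
html = maintainer.pageResponse('http://example.com/table').read()
# ... parse html here to find the JSON endpoint the page's JS would call ...

# Switch to XHR-style headers for every later request.
maintainer.first_append = 1
json_body = maintainer.pageResponse('http://example.com/data.json').read()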

self.first_append becomes positive after the data has been extracted from the main page's HTML, so the br.addheaders.append calls don't run the first time through, since the first body response contains no JSON; all the other body responses are JSON. Is this the correct way to do this? Is there a more efficient way? Are there other languages/libraries that do this better?
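
One way to make that intent clearer, shown here as a sketch rather than a drop-in replacement, is to pass the expected response type as an argument instead of mutating self.first_append between calls; the expect_json parameter and the _make_browser helper are invented names for illustration:

def pageResponse(self, url, expect_json=False):
    # Hypothetical helper wrapping the browser setup shown above.
    br = self._make_browser()
    if expect_json:
        # XHR-style headers, applied only to the JSON requests.
        br.addheaders.append(('Accept', 'application/json, text/javascript, */*'))
        br.addheaders.append(('X-Requested-With', 'XMLHttpRequest'))
    return br.open(url)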

After a long period of running I get this error message:

File "C:\Users\Donkey\My Documents\Aptana Studio Workspace\UrlMaintain2\src\UrlMaintain2.py", line 55, in pageResponse response = br.open(url) 
File "C:\Python27\lib\mechanize_mechanize.py", line 203, in open return self._mech_open(url, data, timeout=timeout) 
File "C:\Python27\lib\mechanize_mechanize.py", line 230, in _mech_open response = UserAgentBase.open(self, request, data) 
File "C:\Python27\lib\mechanize_opener.py", line 193, in open response = urlopen(self, req, data) 
File "C:\Python27\lib\mechanize_urllib2_fork.py", line 344, in _open '_open', req) File "C:\Python27\lib\mechanize_urllib2_fork.py", line 332, in _call_chain result = func(*args) 
File "C:\Python27\lib\mechanize_urllib2_fork.py", line 1142, in http_open return self.do_open(httplib.HTTPConnection, req) 
File "C:\Python27\lib\mechanize_urllib2_fork.py", line 1118, in do_open raise URLError(err) urllib2.URLError: 

It kind of has me lost; I'm not sure why it is being generated, and it takes tons of iterations before I see it.
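
A URLError that only shows up after many iterations is often a transient network failure or the server dropping or throttling connections, though the truncated traceback doesn't confirm the cause. A common mitigation, sketched below with illustrative retry counts that are not from the original question, is to wrap br.open() in a retry loop:

import time
import urllib2

def open_with_retries(br, url, retries=3, delay=5):
    # Retry transient network failures; re-raise on the final attempt.
    for attempt in range(retries):
        try:
            return br.open(url)
        except urllib2.URLError:
            if attempt == retries - 1:
                raise
            time.sleep(delay)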

Answer

From the mechanize FAQ: "mechanize does not provide any support for JavaScript". It then elaborates on your options (the choices aren't great).

If you've got something working then great, but using Selenium WebDriver is a much better solution for scraping AJAX sites than mechanize.
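
As a rough illustration of that suggestion, here is a minimal Selenium sketch; the URL and table id are placeholders, and the explicit wait is one reasonable strategy among several:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Firefox()
driver.get('http://example.com/table-page')  # placeholder URL

# Let the page's JavaScript run, then wait for the table to appear.
table = WebDriverWait(driver, 10).until(
    EC.presence_of_element_located((By.ID, 'results-table'))  # placeholder id
)
for row in table.find_elements_by_tag_name('tr'):
    print row.text

driver.quit()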
