How do I scrape pages with dynamically generated URLs using Python?

Question

I am trying to scrape http://www.dailyfinance.com/quote/NYSE/international-business-machines/IBM/financial-ratios, but the traditional URL-string-building technique doesn't work because the full company name is inserted in the path, and the exact "full-company-name" isn't known in advance. Only the company symbol, "IBM", is known.

Essentially, the way I scrape is by looping through an array of company symbols and building the URL string before sending it to urllib2.urlopen(url). But in this case, that can't be done.
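
For context, this is roughly what my usual approach looks like; the site and path template here are only placeholders to illustrate the kind of URL where the symbol alone is enough:

import urllib2

symbols = ["IBM", "CSCO", "AAPL"]
for symbol in symbols:
    # Placeholder template: this only works when the path needs nothing but the symbol.
    url = "http://www.example.com/quote/%s/financial-ratios" % symbol
    html = urllib2.urlopen(url).read()
    # ... parse html here ...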

For example, the URL string for CSCO is

http://www.dailyfinance.com/quote/NASDAQ/cisco-systems-inc/CSCO/financial-ratios

and another example URL string, this time for AAPL, is:

http://www.dailyfinance.com/quote/NASDAQ/apple/AAPL/financial-ratios

So in order to get the URL, I had to search for the symbol in the input box on the main page:

http://www.dailyfinance.com/

I've noticed that when I type "CSCO" into the search input (while on http://www.dailyfinance.com/quote/NASDAQ/apple/AAPL/financial-ratios) and watch the Firefox web developer network tab, the GET request is sent to

http://j.foolcdn.com/tmf/predictivesearch?callback=_predictiveSearch_csco&term=csco&domain=dailyfinance.com

and that the Referer actually gives the path that I want to capture:

Host: j.foolcdn.com
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.9; rv:28.0) Gecko/20100101 Firefox/28.0
Accept: */*
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Referer: http://www.dailyfinance.com/quote/NASDAQ/cisco-systems-inc/CSCO/financial-ratios?source=itxwebtxt0000007
Connection: keep-alive

Sorry for the long explanation. So the question is: how do I extract the URL in the Referer header? If that is not possible, how should I approach this problem? Is there another way?
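
One thing I considered was calling that predictivesearch endpoint directly from Python and peeling off the JSONP wrapper, something like the sketch below, but I don't know what the payload actually contains or whether the path I need can be pulled out of it:

import re
import requests as rq

def predictive_search(symbol):
    # Mirror the request the search box makes; the callback/term parameters
    # are copied from the network tab capture above.
    url = ("http://j.foolcdn.com/tmf/predictivesearch"
           "?callback=_predictiveSearch_%s&term=%s&domain=dailyfinance.com"
           % (symbol.lower(), symbol.lower()))
    r = rq.get(url)
    # Guess: strip the _predictiveSearch_xxx( ... ) wrapper and keep whatever is inside.
    match = re.search(r"^[^(]*\((.*)\)\s*;?\s*$", r.text, re.S)
    return match.group(1) if match else r.text

print predictive_search("CSCO")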

Thank you very much for your help.

Answer

I like this question. And because of that, I'll give a very thorough answer. For this, I'll use my favorite Requests library along with BeautifulSoup4. Porting it over to Mechanize, if you really want to use that, is up to you. Requests will save you tons of headaches, though.

First off, you're probably looking for a POST request. However, POST requests are often not needed if a search function brings you right away to the page you're looking for. So let's inspect it, shall we?

When I land on the base URL, http://www.dailyfinance.com/, I can do a simple check via Firebug or Chrome's inspect tool that when I put in CSCO or AAPL on the search bar and enable the "jump", there's a 301 Moved Permanently status code. What does this mean?

In simple terms, I was transferred somewhere. The URL for this GET request is the following:

http://www.dailyfinance.com/quote/jump?exchange-input=&ticker-input=CSCO

Now, we test if it works with AAPL by using a simple URL manipulation.

import requests as rq

apl_tick = "AAPL"
url = "http://www.dailyfinance.com/quote/jump?exchange-input=&ticker-input="
# requests follows the 301 from the jump endpoint automatically,
# so r.url ends up being the final, full-company-name URL.
r = rq.get(url + apl_tick)
print r.url

The above gives the following result:

http://www.dailyfinance.com/quote/nasdaq/apple/aapl
[Finished in 2.3s]
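
As a small aside, if you want to see that redirect explicitly rather than take my word for it, requests keeps the intermediate responses in r.history; a quick check (the output will of course depend on the site):

import requests as rq

r = rq.get("http://www.dailyfinance.com/quote/jump?exchange-input=&ticker-input=AAPL")
# Each hop of the redirect chain, then the final response.
for hop in r.history:
    print hop.status_code, hop.url
print r.status_code, r.url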

See how the URL of the response changed? Let's take the URL manipulation one step further and look for the /financial-ratios page by appending the following to the code above:

new_url = r.url + "/financial-ratios"
p = rq.get(new_url)
print p.url

When run, this gives the following result:

http://www.dailyfinance.com/quote/nasdaq/apple/aapl
http://www.dailyfinance.com/quote/nasdaq/apple/aapl/financial-ratios
[Finished in 6.0s]
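
At this point you could fold the two requests above into a small helper; the function name is my own, but the logic is exactly what we just did by hand:

import requests as rq

JUMP_URL = "http://www.dailyfinance.com/quote/jump?exchange-input=&ticker-input="

def financial_ratios_url(ticker):
    # Let requests follow the 301 from the jump endpoint, then tack on the page name.
    r = rq.get(JUMP_URL + ticker)
    return r.url + "/financial-ratios"

print financial_ratios_url("CSCO")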

Now we're on the right track. I will now try to parse the data using BeautifulSoup. My complete code is as follows:

from bs4 import BeautifulSoup as bsoup
import requests as rq

apl_tick = "AAPL"
url = "http://www.dailyfinance.com/quote/jump?exchange-input=&ticker-input="
r = rq.get(url + apl_tick)
new_url = r.url + "/financial-ratios"
p = rq.get(new_url)

soup = bsoup(p.content)
div = soup.find("div", id="clear").table
rows = div.find_all("tr")
for row in rows:
    print row

I then try running this code, only to encounter an error with the following traceback:

  File "C:\Users\nanashi\Desktop\test.py", line 13, in <module>
    div = soup.find("div", id="clear").table
AttributeError: 'NoneType' object has no attribute 'table'

Of note is the 'NoneType' object... line. This means our target div does not exist! Egads, but why does the table show up just fine when I view the page in a browser?!

There can only be one explanation: the table is loaded dynamically! Rats. Let's see if we can find another source for the table. I study the page and see that there are scrollbars at the bottom. This might mean that the table was loaded inside a frame or was loaded straight from another source entirely and placed into a div in the page.
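
Before reaching for the network tab, one quick programmatic check is to look for iframes or external script sources in the page we already fetched (soup here is the BeautifulSoup object from the code above); this is only a diagnostic sketch, and in the end I just watch the requests instead:

# Is the table pulled in via an iframe, or fetched by an external script?
for frame in soup.find_all("iframe"):
    print "iframe:", frame.get("src")
for script in soup.find_all("script", src=True):
    print "script:", script["src"]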

I refresh the page and watch the GET requests again. Bingo, I found something that seems a bit promising:

A third-party source URL, and look, it's easily manipulable using the ticker symbol! Let's try loading it into a new tab. Here's what we get:

WOW! We now have the exact source of our data. The last hurdle, though, is whether it will work when we try to pull the CSCO data using this string (remember, we went CSCO -> AAPL and are now going back to CSCO, just so you're not confused). Let's clean up the string and ditch www.dailyfinance.com here completely. Our new URL is as follows:

http://www.motleyfool.idmanagedsolutions.com/stocks/financial_ratios.idms?SYMBOL_US=AAPL

Let's try using that in our final scraper!

from bs4 import BeautifulSoup as bsoup
import requests as rq

csco_tick = "CSCO"
url = "http://www.motleyfool.idmanagedsolutions.com/stocks/financial_ratios.idms?SYMBOL_US="
new_url = url + csco_tick

r = rq.get(new_url)
soup = bsoup(r.content)

# The ratios table sits inside the div with id="clear" on this page.
table = soup.find("div", id="clear").table
rows = table.find_all("tr")
for row in rows:
    print row.get_text()

And our raw results for CSCO's financial ratios data are as follows:

Company
Industry


Valuation Ratios


P/E Ratio (TTM)
15.40
14.80


P/E High - Last 5 Yrs 
24.00
28.90


P/E Low - Last 5 Yrs
8.40
12.10


Beta
1.37
1.50


Price to Sales (TTM)
2.51
2.59


Price to Book (MRQ)
2.14
2.17


Price to Tangible Book (MRQ)
4.25
3.83


Price to Cash Flow (TTM)
11.40
11.60


Price to Free Cash Flow (TTM)
28.20
60.20


Dividends


Dividend Yield (%)
3.30
2.50


Dividend Yield - 5 Yr Avg (%)
N.A.
1.20


Dividend 5 Yr Growth Rate (%)
N.A.
144.07


Payout Ratio (TTM)
45.00
32.00


Sales (MRQ) vs Qtr 1 Yr Ago (%)
-7.80
-3.70


Sales (TTM) vs TTM 1 Yr Ago (%)
5.50
5.60


Growth Rates (%)


Sales - 5 Yr Growth Rate (%)
5.51
5.12


EPS (MRQ) vs Qtr 1 Yr Ago (%)
-54.50
-51.90


EPS (TTM) vs TTM 1 Yr Ago (%)
-54.50
-51.90


EPS - 5 Yr Growth Rate (%)
8.91
9.04


Capital Spending - 5 Yr Growth Rate (%)
20.30
20.94


Financial Strength


Quick Ratio (MRQ)
2.40
2.70


Current Ratio (MRQ)
2.60
2.90


LT Debt to Equity (MRQ)
0.22
0.20


Total Debt to Equity (MRQ)
0.31
0.25


Interest Coverage (TTM)
18.90
19.10


Profitability Ratios (%)


Gross Margin (TTM)
63.20
62.50


Gross Margin - 5 Yr Avg
66.30
64.00


EBITD Margin (TTM)
26.20
25.00


EBITD - 5 Yr Avg
28.82
0.00


Pre-Tax Margin (TTM)
21.10
20.00


Pre-Tax Margin - 5 Yr Avg
21.60
18.80


Management Effectiveness (%)


Net Profit Margin (TTM)
17.10
17.65


Net Profit Margin - 5 Yr Avg
17.90
15.40


Return on Assets (TTM)
8.30
8.90


Return on Assets - 5 Yr Avg
8.90
8.00


Return on Investment (TTM)
11.90
12.30


Return on Investment - 5 Yr Avg
12.50
10.90


Efficiency


Revenue/Employee (TTM)
637,890.00
556,027.00


Net Income/Employee (TTM)
108,902.00
98,118.00


Receivable Turnover (TTM)
5.70
5.80


Inventory Turnover (TTM)
11.30
9.70


Asset Turnover (TTM)
0.50
0.50

[Finished in 2.0s]

Cleaning up the data is up to you.
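
If you want a starting point for that, here is a rough sketch that continues from the scraper above and assumes each data row carries three cells (label, company value, industry value); check a row's actual markup before relying on it:

# Tidy the rows into (label, company, industry) tuples, skipping headers and blanks.
data = []
for row in rows:
    cells = [c.get_text(strip=True) for c in row.find_all("td")]
    if len(cells) == 3 and cells[0]:
        data.append(tuple(cells))

for label, company, industry in data:
    print "%s | %s | %s" % (label, company, industry)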

One good lesson to learn from this scrape is that not all the data is contained in one page alone. It's pretty nice to see it coming from another static site. If it were produced via JavaScript or AJAX calls or the like, we would likely have some difficulty with our approach.

Hopefully you learned something from this. Let us know if this helps and good luck.
