Wait for a page to load before getting data with requests.get in Python 3


Problem description


I have a page whose source I need to fetch for use with BS4, but the middle of the page takes about a second (maybe less) to load its content, and requests.get captures the page source before that section has loaded. How can I wait a second before getting the data?

r = requests.get(URL + self.search, headers=USER_AGENT, timeout=5)
soup = BeautifulSoup(r.content, 'html.parser')
a = soup.find_all('section', 'wrapper')

The page:

<section class="wrapper" id="resultado_busca">

Recommended answer


It doesn't look like a problem of waiting; it looks like the element is created by JavaScript, and requests can't handle elements that are generated dynamically by JavaScript. A suggestion is to use selenium together with PhantomJS to get the page source; then you can use BeautifulSoup for your parsing. The code shown below will do exactly that:

from bs4 import BeautifulSoup
from selenium import webdriver

url = "http://legendas.tv/busca/walking%20dead%20s03e02"
browser = webdriver.PhantomJS()
browser.get(url)            # PhantomJS executes the page's JavaScript
html = browser.page_source  # source after the dynamic content has loaded
browser.quit()
soup = BeautifulSoup(html, 'lxml')
a = soup.find('section', 'wrapper')
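To see why adding a delay to requests would not help: the raw HTML that requests receives already contains the wrapper section, but empty, because the content is injected by JavaScript after the page loads in a browser. A minimal sketch of that situation, using a made-up static snippet in place of the real server response:

```python
from bs4 import BeautifulSoup

# Hypothetical raw HTML as requests.get would receive it: the wrapper
# section is present but empty, since JavaScript fills it in later.
raw_html = '<section class="wrapper" id="resultado_busca"></section>'

soup = BeautifulSoup(raw_html, 'html.parser')
section = soup.find('section', 'wrapper')

print(section['id'])       # resultado_busca
print(section.get_text())  # empty: no amount of waiting changes this
```

As a side note, the PhantomJS project has since been discontinued and recent Selenium releases no longer ship a PhantomJS driver; webdriver.Chrome or webdriver.Firefox with a headless option is the usual replacement today.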


Also, there's no need to use .find_all if you are only looking for one element.
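The difference is easy to see on a small static snippet (the second section here is invented for illustration): .find returns only the first matching element (or None), while .find_all returns a list of every match.

```python
from bs4 import BeautifulSoup

html = """
<section class="wrapper" id="resultado_busca">first</section>
<section class="wrapper" id="outro">second</section>
"""
soup = BeautifulSoup(html, 'html.parser')

# .find returns only the first matching element...
first = soup.find('section', 'wrapper')
# ...while .find_all returns a list of all of them.
every = soup.find_all('section', 'wrapper')

print(first['id'])  # resultado_busca
print(len(every))   # 2
```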
