Wait for a page to load before getting data with requests.get in Python 3


Question


I have a page whose source I need to get for use with BS4, but the middle of the page takes about a second (maybe less) to load its content, and requests.get captures the page source before that section loads. How can I wait for the content before getting the data?

r = requests.get(URL + self.search, headers=USER_AGENT, timeout=5)
soup = BeautifulSoup(r.content, 'html.parser')
a = soup.find_all('section', 'wrapper')

The page:

<section class="wrapper" id="resultado_busca">

Answer


This doesn't look like a waiting problem: the element is being created by JavaScript, and requests can't handle elements that JavaScript generates dynamically. A suggestion is to use selenium together with PhantomJS to get the page source; you can then use BeautifulSoup for your parsing. The code shown below does exactly that:

from bs4 import BeautifulSoup
from selenium import webdriver

url = "http://legendas.tv/busca/walking%20dead%20s03e02"
browser = webdriver.PhantomJS()  # requires the PhantomJS executable on PATH
browser.get(url)                 # PhantomJS runs the page's JavaScript
html = browser.page_source       # source after the dynamic content has rendered
soup = BeautifulSoup(html, 'lxml')
a = soup.find('section', 'wrapper')


Also, there's no need to use .findAll if you are looking for only one element: .find returns the first match (or None), while .find_all returns a list of every match.
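To illustrate the difference, here is a minimal, self-contained sketch; the inline HTML is a made-up stand-in for the rendered page, not the site's real markup:

```python
from bs4 import BeautifulSoup

# Hypothetical stand-in for the page source after JavaScript has run.
html = """
<section class="wrapper" id="resultado_busca">
  <a href="/download/1">Result one</a>
</section>
<section class="wrapper" id="outro">
  <a href="/download/2">Result two</a>
</section>
"""

soup = BeautifulSoup(html, 'html.parser')

# find() returns the first matching Tag, or None if nothing matches.
# A string as the second positional argument matches the CSS class.
first = soup.find('section', 'wrapper')
print(first['id'])    # resultado_busca

# find_all() always returns a list of every match.
every = soup.find_all('section', 'wrapper')
print(len(every))     # 2
```

So for a single `<section class="wrapper">`, `find` gives you the tag directly instead of a one-element list you would then have to index into.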

