How do I control Selenium PDF and Excel files download behavior?


Question

I want to download all the tender documents from this url 'http://www.ha.org.hk/haho/ho/bssd/T18G014Pc.htm'

I'm using selenium to go through each tender link and download the files.

However, my scraper couldn't handle the Excel download behavior. Currently, it handles PDF files pretty well.

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import TimeoutException
import pandas as pd
from bs4 import BeautifulSoup
import os
from urllib.request import urlretrieve



driver = webdriver.Chrome(executable_path='chromedriver_win32/chromedriver.exe')
# open url in browser

driver.get('http://www.ha.org.hk/haho/ho/bssd/TN_236490_000952a.htm')

# get html file source
html = driver.page_source
soup = BeautifulSoup(html, "lxml")

# extract table
table_body = soup.find('tbody')

# extract all tender links
table_url = soup.find_all('a')
for url in table_url:
    if not url.has_attr('href'):  # skip anchors without a link target
        continue
    print("Opening url:", url['href'])
    print("Subject matter:", url.getText().strip())
    driver.get(url['href'])
    # get html file source
    html = driver.page_source
    soup = BeautifulSoup(html, "lxml")
    # look for url links which may contain downloadable documents
    doc_urls = soup.find_all('a')

    if doc_urls[0].has_attr('href'):  # some <a> tags don't have an href, so skip those
        driver.get(doc_urls[0]['href'])
        tender_document = driver.current_url
        print(doc_urls[0].getText().strip(), '.pdf', sep='')

    # loop through all urls
    for doc_url in doc_urls:
        if doc_url.has_attr('href'):  # some <a> tags don't have an href, so skip those
            # open the doc url
            driver.get(doc_url['href'])
            # get the tender pdf file path
            tender_document = driver.current_url
            # download file
            folder_location = 'C:\\Users\\user1\\Desktop\\tender_documents'
            filename = doc_url.getText().strip() + '.pdf'
            fullfilename = os.path.join(folder_location, filename)
            urlretrieve(tender_document, fullfilename)
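
The likely reason Excel links fail while PDFs work is that Chrome treats .xls/.xlsx responses as downloads rather than navigations, so driver.get() never lands on a page whose current_url can be passed to urlretrieve. If you want to stay with Selenium, the download behavior itself can be controlled through Chrome preferences so both file types are saved straight to disk. A minimal sketch, assuming standard Chrome preference keys (the folder path is the one from the question):

from selenium import webdriver

options = webdriver.ChromeOptions()
options.add_experimental_option("prefs", {
    # save files to this folder without prompting
    "download.default_directory": "C:\\Users\\user1\\Desktop\\tender_documents",
    "download.prompt_for_download": False,
    # fetch PDFs as downloads instead of opening Chrome's built-in viewer
    "plugins.always_open_pdf_externally": True,
})
driver = webdriver.Chrome(executable_path='chromedriver_win32/chromedriver.exe',
                          options=options)

With these prefs, navigating to a document link drops the file into the download folder whether it is a PDF or an Excel workbook.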

Answer

Try requests and beautifulsoup to download all documents:

import requests
from bs4 import BeautifulSoup
import re


base_url = "http://www.ha.org.hk"
tender = "T18G014Pc"

with requests.Session() as session:
    r = session.get(f"{base_url}/haho/ho/bssd/{tender}.htm")

    # get all documents links
    docs = BeautifulSoup(r.text, "html.parser").select("a[href]")
    for doc in docs:
        href = doc.attrs["href"]
        name = doc.text
        print(f"name: {name}, href: {href}")

        # open document page
        r = session.get(href)

        # the document page redirects via JavaScript: window.open('<path>', ...),
        # so pull the file path out of the page source with a lookaround regex
        file_path = re.search(r"(?<=window\.open\(')(.*)(?=',)", r.text).group(0)
        file_name = file_path.split("/")[-1]

        # get file and save
        r = session.get(f"{base_url}/{file_path}")
        with open(file_name, 'wb') as f:
            f.write(r.content)
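
One fragile spot in the snippet above is the regex step: if a document page does not contain the expected window.open('...') call, re.search returns None and .group(0) raises an AttributeError. A hedged guard, assuming the original href points straight at the file whenever there is no JavaScript redirect:

match = re.search(r"(?<=window\.open\(')(.*)(?=',)", r.text)
if match:
    # interstitial page: build the file URL from the extracted path
    file_url = f"{base_url}/{match.group(0).lstrip('/')}"
else:
    # no JS redirect found: fall back to the link itself
    file_url = href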
