Download the file whose stream-url in the embed tag is a chrome-extension URL, using Selenium WebDriver in Python


Problem Description


In my code I click the "View" button that contains the hidden document, which I need to download using Selenium WebDriver in Python. When I inspect the page, the embed tag has stream-url = chrome-extension://mhjfbmdgcfjbbpaeojofohoefgiehjai/85967fa5-7853-412e-bbe5-c96406308ec6. I cannot figure out how to download that document.

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.keys import Keys
import urllib.request
from bs4 import BeautifulSoup
import os
from selenium.webdriver.support.select import Select
import time
import pandas as pd

url = 'https://maharerait.mahaonline.gov.in'
chrome_path = r'C:/Users/User/AppData/Local/Programs/Python/Python36/Scripts/chromedriver.exe'

driver = webdriver.Chrome(executable_path=chrome_path)
driver.get(url)

WebDriverWait(driver, 20).until(EC.element_to_be_clickable(
    (By.XPATH, "//div[@class='search-pro-details']//a[contains(.,'Search Project Details')]"))).click()
Registered_Project_radio = WebDriverWait(driver, 10).until(
    EC.element_to_be_clickable((By.ID, "Promoter")))
driver.execute_script("arguments[0].click();", Registered_Project_radio)

Application = driver.find_element_by_id("CertiNo")
Application.send_keys("P50500000005")
Search = WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.ID, "btnSearch")))
driver.execute_script("arguments[0].click();", Search)

View = [item.get_attribute('href')
        for item in driver.find_elements_by_tag_name("a")
        if item.get_attribute('href') is not None]
View = View[0]

request = urllib.request.Request(View)
driver.get(View)
html = urllib.request.urlopen(request).read()
soup = BeautifulSoup(html, 'html.parser')
divPInfo = soup.find("div", {"id": "DivDocument"})
title = divPInfo.find("div", {'class': 'x_panel'}, recursive=False).find(
    "div", {'class': 'x_title'}).find("h2").text.strip()
print(title)

with open("uploads.csv", "a") as csv_file:
    csv_file.write(title + "\n")

table = pd.read_html(driver.page_source)[11]
print(table)
table.to_csv("uploads.csv", sep=',', index=False)

btn = WebDriverWait(driver, 20).until(EC.element_to_be_clickable(
    (By.XPATH, "//button[@class='btn btn-info btn-xs' and @id='btnShow_10']")))
driver.execute_script("arguments[0].click();", btn)
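(Side note, as a hedged sketch rather than a confirmed fix for this site: when the stream-url is a chrome-extension:// URL, the PDF is being rendered by Chrome's built-in viewer. Chrome can instead be told to save PDFs to disk via a preferences fragment; the download path below is an assumption and must be adjusted for your machine.)

```python
# Chrome preferences (config sketch) that make Chrome save PDFs to disk
# instead of opening them in the built-in viewer. Pass to the driver with:
#   options = webdriver.ChromeOptions()
#   options.add_experimental_option('prefs', prefs)
#   driver = webdriver.Chrome(executable_path=chrome_path, options=options)
prefs = {
    'download.default_directory': r'C:\Users\User\Downloads',  # assumption
    'download.prompt_for_download': False,
    'plugins.always_open_pdf_externally': True,  # download, don't render
}
```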

Recommended Answer


In Firefox, the page uses <object data="..."> to display the scanned PDF. The "Uploaded Documents" section has buttons to display the other scans.


This code uses those buttons to display each scan, gets the data from the <object> element, and saves it in files document-0.pdf, document-1.pdf, etc.
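The <object> element's data attribute is a base64 data: URI, so the decoding step works like this minimal sketch (the sample URI here is fabricated for illustration):

```python
import base64

# A made-up data: URI standing in for the object's "data" attribute.
data = 'data:application/pdf;base64,' + base64.b64encode(b'%PDF-1.4 demo').decode('ascii')

# Split off the "data:...;base64," header, then decode the payload.
payload = data.split(',')[1]
pdf_bytes = base64.b64decode(payload)

print(pdf_bytes[:4])  # b'%PDF'
```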


I use the same code shown in my answer to your previous question:
Save the pdf using the selenium webdriver in python

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.keys import Keys
import base64
import time

url = 'https://maharerait.mahaonline.gov.in'

#chrome_path = r'C:/Users/User/AppData/Local/Programs/Python/Python36/Scripts/chromedriver.exe'
#driver = webdriver.Chrome(executable_path=chrome_path)

driver = webdriver.Firefox()

driver.get(url)

WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.XPATH, "//div[@class='search-pro-details']//a[contains(.,'Search Project Details')]"))).click()
registered_project_radio = WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.ID, "Promoter")))
driver.execute_script("arguments[0].click();", registered_project_radio)

application = driver.find_element_by_id("CertiNo")
application.send_keys("P50500000005")

search = WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.ID, "btnSearch")))
driver.execute_script("arguments[0].click();", search)

time.sleep(5)

view = [item.get_attribute('href')
        for item in driver.find_elements_by_tag_name("a")
        if item.get_attribute('href') is not None]

# if the list is not empty then get the first element
if view:
    view = view[0]

#-----------------------------------------------------------------------------

# load page
driver.get(view)

# find buttons in section `Uploaded Documents`
buttons = driver.find_elements_by_xpath('//div[@id="DivDocument"]//button')

# work with all buttons
for i, button in enumerate(buttons):

    # click button
    button.click()

    # wait till page displays the scan
    print('wait for object:', i)
    WebDriverWait(driver, 10).until(EC.visibility_of_element_located((By.TAG_NAME, "object")))

    # get the base64-encoded data from the object element
    print('get data:', i)
    obj = driver.find_element_by_tag_name('object')
    data = obj.get_attribute('data')
    text = data.split(',')[1]
    pdf_bytes = base64.b64decode(text)

    # save scan in the next PDF
    print('save: document-{}.pdf'.format(i))
    with open('document-{}.pdf'.format(i), 'wb') as fp:
        fp.write(pdf_bytes)

    # close scan
    print('close document:', i)
    driver.find_element_by_xpath('//button[text()="Close"]').click()

# --- end ---

driver.close()
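Once the loop has run, a quick sanity check on the saved files (a sketch; %PDF is the standard magic number at the start of every PDF file) can confirm the decoding worked:

```python
import glob

def is_pdf(path):
    """Return True if the file starts with the %PDF magic number."""
    with open(path, 'rb') as fp:
        return fp.read(4) == b'%PDF'

# Report each saved document and whether it is a valid PDF.
for path in glob.glob('document-*.pdf'):
    print(path, is_pdf(path))
```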

