bs4 again from website and save to text file


Question


I am learning how to extract data from websites and have managed to get a lot of information. However, for my next website I am failing for some unknown reason: nothing is saved to the text file, nor do I get any printed output. Here is my code:

import json
import urllib.request
from bs4 import BeautifulSoup
import requests


url = 'https://www.jaffari.org/'
request = urllib.request.Request(url,headers={'User-Agent': 'Mozilla/5.0'})
response = urllib.request.urlopen(request)
html = response.read()
soup = BeautifulSoup(html.decode("utf-8"), "html.parser")

table = soup.find('div', attrs={"class":"textwidget"})
name = table.text.encode('utf-8').strip()

with open('/home/pi/test.txt', 'w') as outfile:
    json.dump(name, outfile)
print (name)

Can anyone help?

Answer


The prayer times are rendered by JavaScript, so you need a browser automation tool such as Selenium to load the page, and then use Beautiful Soup to extract the data.
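You can see why the original approach fails without touching a browser: against the static HTML the server returns (what urllib sees, before any JavaScript runs), the table selector matches nothing. A minimal illustration, using a hypothetical snippet of pre-render markup that I wrote for the example:

```python
from bs4 import BeautifulSoup

# Hypothetical static HTML, as the server might return it before
# JavaScript has injected the prayer-time table into the sidebar widget.
static_html = """
<html><body>
  <div class="sidebar-widget widget_text"><div></div></div>
</body></html>
"""

soup = BeautifulSoup(static_html, "html.parser")
rows = soup.select("div.sidebar-widget.widget_text>div>table tr")
print(len(rows))  # the selector finds no rows in the un-rendered page
```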


You need to download a compatible ChromeDriver from this link and pass the ChromeDriver path as I have done below.


Here is code to fetch the names and prayer times and save them to a text file.

from selenium.webdriver.chrome.options import Options
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
from bs4 import BeautifulSoup
import re

# Run Chrome in headless mode.
options = Options()
options.add_argument("--headless")
# Path to the ChromeDriver executable (raw string so the backslashes are literal).
driver = webdriver.Chrome(executable_path=r"D:\Software\chromedriver.exe", chrome_options=options)
driver.get('https://www.jaffari.org/')
# Wait until JavaScript has rendered the sidebar table.
WebDriverWait(driver, 20).until(EC.visibility_of_element_located((By.CSS_SELECTOR, 'div.sidebar-widget.widget_text>div>table')))
print("Data rendered successfully!!!")
# Get the page source, then close the driver.
html = driver.page_source
soup = BeautifulSoup(html, 'html.parser')
driver.close()

with open('testPrayers.txt', 'w') as outfile:
    for row in soup.select("div.sidebar-widget.widget_text>div>table tr"):
        name = row.select("td")[0].text.strip()
        # [AP]M matches AM or PM; \W allows the space before it.
        time = re.findall(r'(\d{1,2}:?\d{1,2}\W[AP]M$)', row.select("td")[1].text.strip())
        outfile.write(name + " " + time[0] + "\n")
        print(name + " " + time[0])

print('Done')
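The parsing step itself does not need a browser: once Selenium has handed over the rendered page source, it is plain Beautiful Soup plus a regex. A sketch of just that step, run against a hand-written table whose structure is assumed to mirror the live page:

```python
import re
from bs4 import BeautifulSoup

# Hypothetical rendered HTML, assumed to mirror the live sidebar table.
rendered_html = """
<div class="sidebar-widget widget_text"><div><table>
  <tr><td>Fajr</td><td>5:32 AM</td></tr>
  <tr><td>Maghrib</td><td>8:47 PM</td></tr>
</table></div></div>
"""

soup = BeautifulSoup(rendered_html, "html.parser")
lines = []
for row in soup.select("div.sidebar-widget.widget_text>div>table tr"):
    name = row.select("td")[0].text.strip()
    # [AP]M matches AM or PM; \W allows the space before it.
    time = re.findall(r'(\d{1,2}:?\d{1,2}\W[AP]M$)', row.select("td")[1].text.strip())
    lines.append(name + " " + time[0])

print(lines)
```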


Updated code that writes each prayer time to its own file, named after the prayer.

from selenium.webdriver.chrome.options import Options
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
from bs4 import BeautifulSoup
import re

# Run Chrome in headless mode.
options = Options()
options.add_argument("--headless")
# Path to the ChromeDriver executable (raw string so the backslashes are literal).
driver = webdriver.Chrome(executable_path=r"D:\Software\chromedriver.exe", chrome_options=options)
driver.get('https://www.jaffari.org/')
# Wait until JavaScript has rendered the sidebar table.
WebDriverWait(driver, 20).until(EC.visibility_of_element_located((By.CSS_SELECTOR, 'div.sidebar-widget.widget_text>div>table')))
print("Data rendered successfully!!!")
# Get the page source, then close the driver.
html = driver.page_source
soup = BeautifulSoup(html, 'html.parser')
driver.close()

for row in soup.select("div.sidebar-widget.widget_text>div>table tr"):
    name = row.select("td")[0].text.strip()
    time = re.findall(r'(\d{1,2}:?\d{1,2}\W[AP]M$)', row.select("td")[1].text.strip())
    print(name + " " + time[0])
    # One file per prayer; the with block closes it automatically.
    with open(name + '.txt', 'w') as outfile:
        outfile.write(time[0])

print('Done')
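One caveat with using the scraped name directly as a filename: if a name ever contains characters that are unsafe in file paths (slashes, colons), the open() call can fail. A defensive sketch; the safe_filename helper is my own addition, not part of the answer above:

```python
import re

def safe_filename(name):
    """Keep letters, digits, hyphens and underscores; collapse the rest to '_'."""
    return re.sub(r'[^\w\-]+', '_', name.strip())

print(safe_filename("Fajr"))
print(safe_filename("Sunrise / Ishraq"))
```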
