How to crawl for specific links inside a website?


Question

I have successfully crawled the headline links.

I would like to replace the Summary column with the main article text from each link (since the Title and Summary are the same anyway):

link = "https://www.vanglaini.org" + article.a['href']

(e.g. https://www.vanglaini.org/tualchhung/103834)
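As a side note, concatenating the base URL with `article.a['href']` only works while the hrefs stay root-relative; the standard library's `urljoin` handles both relative and already-absolute hrefs. A minimal sketch:

```python
from urllib.parse import urljoin

base = "https://www.vanglaini.org"

# A root-relative href, as on this site:
print(urljoin(base, "/tualchhung/103834"))
# -> https://www.vanglaini.org/tualchhung/103834

# An already-absolute href passes through unchanged:
print(urljoin(base, "https://www.vanglaini.org/tualchhung/103834"))
# -> https://www.vanglaini.org/tualchhung/103834
```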

Please help me modify the code.

Below is my code.

import pandas as pd
import requests
from bs4 import BeautifulSoup

source = requests.get('https://www.vanglaini.org/').text
soup = BeautifulSoup(source, 'lxml')

list_with_headlines = []
list_with_summaries = []
list_with_links = []

for article in soup.find_all('article'):
    if article.a is None:
        continue
    headline = article.a.text.strip()
    summary = article.p.text.strip()
    link = "https://www.vanglaini.org" + article.a['href']
    list_with_headlines.append(headline)
    list_with_summaries.append(summary)
    list_with_links.append(link)

news_csv = pd.DataFrame({
    'Headline': list_with_headlines,
    'Summary': list_with_summaries,
    'Link': list_with_links,
})

print(news_csv)
news_csv.to_csv('test.csv')

Answer

Just make another request inside the for loop and extract the tag's text from each article page.

import pandas as pd
import requests
from bs4 import BeautifulSoup

source = requests.get('https://www.vanglaini.org/').text
soup = BeautifulSoup(source, 'lxml')

list_with_headlines = []
list_with_summaries = []
list_with_links = []

for article in soup.find_all('article'):
    if article.a is None:
        continue
    headline = article.a.text.strip()
    link = "https://www.vanglaini.org" + article.a['href']
    list_with_headlines.append(headline)
    list_with_links.append(link)
    # Fetch the article page itself; use a new name so the front-page soup isn't shadowed.
    article_soup = BeautifulSoup(requests.get(link).text, 'lxml')
    list_with_summaries.append(article_soup.select_one(".pagesContent").text.strip())

news_csv = pd.DataFrame({
    'Headline': list_with_headlines,
    'Summary': list_with_summaries,
    'Link': list_with_links,
})

print(news_csv)
news_csv.to_csv('test.csv')

The CSV will look like this.
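To see why `select_one(".pagesContent")` pulls the article body, here is an offline sketch of the same selector logic against stand-in HTML (the markup and text below are invented for illustration; only the `.pagesContent` class comes from the answer above). It also guards against pages that lack the div, rather than crashing mid-crawl on `None`:

```python
from bs4 import BeautifulSoup

# Stand-in for a fetched article page; the real page's structure may differ.
html = """
<article><a href="/tualchhung/103834">Headline</a><p>Short summary</p></article>
<div class="pagesContent">Full article body goes here.</div>
"""
soup = BeautifulSoup(html, "html.parser")

# select_one returns the first match for the CSS selector, or None.
node = soup.select_one(".pagesContent")
body = node.text.strip() if node else ""
print(body)  # Full article body goes here.
```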

