Getting links of Youtube search result
Question
I am trying to get links of videos that appear in search result for a particular query on YouTube. I am using BeautifulSoup and requests library of Python and here is what I did:
from bs4 import BeautifulSoup as bs
import requests
import pandas as pd

base = "https://www.youtube.com/results?search_query="
query = "mickey+mouse"
r = requests.get(base + query)
page = r.text
soup = bs(page, 'html.parser')
vids = soup.findAll('a', attrs={'class': 'yt-uix-tile-link'})
videolist = []
for v in vids:
    tmp = 'https://www.youtube.com' + v['href']
    videolist.append(tmp)
pd.DataFrame(videolist).to_excel(<PATH>, header=False, index=False)
This looks for search results and saves the links for the first 20 videos (those that appear on a single page) to an Excel file. However, I wish to obtain, say, 400 or 500 links related to the same query. How can I do so? I know how to get all links from a particular channel, but how do I get links for a particular search query?
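The missing piece is pagination. A minimal sketch, assuming YouTube's legacy `&page=` query parameter (the same mechanism the answer below relies on): each results page held roughly 20 videos, so fetching about 25 pages would yield around 500 links. The helper name `paged_search_urls` is hypothetical, introduced here only for illustration:

```python
# Build the paginated search URLs to request one by one.
# "&page=" is the legacy YouTube pagination parameter (hypothetical sketch,
# not guaranteed to work on the current JavaScript-rendered site).
def paged_search_urls(query, pages):
    base = "https://www.youtube.com/results?search_query="
    return [base + query + "&page=" + str(p) for p in range(1, pages + 1)]

urls = paged_search_urls("mickey+mouse", 25)  # ~25 pages * ~20 results ≈ 500 links
```

Each URL in `urls` can then be fetched and parsed exactly as in the snippet above, appending every page's links to the same `videolist`.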
Answer
User dk1 (over on Code Review) created pretty much exactly what you're after; the only difference is that it exports to CSV rather than Excel:
#!/usr/bin/python
# http://docs.python-requests.org/en/latest/user/quickstart/
# http://www.crummy.com/software/BeautifulSoup/bs4/doc/
import csv
import re
import requests
import time
from bs4 import BeautifulSoup

# scrapes the title
def getTitle():
    d = soup.find_all("h1", "branded-page-header-title")
    for i in d:
        name = i.text.strip().replace('\n', ' ').replace(',', '')
        f.write(name + ',')
        print(f' {name}')

# scrapes the subscriber and view count
def getStats():
    b = soup.find_all("li", "about-stat ")  # trailing space is required.
    for i in b:
        value = i.b.text.strip().replace(',', '')
        name = i.b.next_sibling.strip().replace(',', '')
        f.write(value + ',')
        print(f' {name} = {value}')

# scrapes the description
def getDescription():
    c = soup.find_all("div", "about-description")
    for i in c:
        description = i.text.strip().replace('\n', ' ').replace(',', '')
        f.write(description + ',')
        # print(f' {description}')

# scrapes all the external links
def getLinks():
    a = soup.find_all("a", "about-channel-link ")  # trailing space is required.
    for i in a:
        url = i.get('href')
        f.write(url + ',')
        print(f' {url}')

# scrapes the related channels
def getRelated():
    s = soup.find_all("h3", "yt-lockup-title")
    for i in s:
        t = i.find_all(href=re.compile("user"))
        for i in t:
            url = 'https://www.youtube.com' + i.get('href')
            rCSV.write(url + '\n')
            print(f' {i.text}, {url}')

f = open("youtube-scrape-data.csv", "w+")
rCSV = open("related-channels.csv", "w+")
visited = []
base = "https://www.youtube.com/results?search_query="
q = ['search+query+here']
page = "&page="
count = 1
pagesToScrape = 20

for query in q:
    while count <= pagesToScrape:
        scrapeURL = base + str(query) + page + str(count)
        print(f'Scraping {scrapeURL}\n')
        r = requests.get(scrapeURL)
        soup = BeautifulSoup(r.text, "html.parser")
        users = soup.find_all("div", "yt-lockup-byline")
        for each in users:
            a = each.find_all(href=re.compile("user"))
            for i in a:
                url = 'https://www.youtube.com' + i.get('href') + '/about'
                if url in visited:
                    print(f' {url} has already been scraped\n')
                else:
                    r = requests.get(url)
                    soup = BeautifulSoup(r.text, "html.parser")
                    f.write(url + ',')
                    print(f' {url}')
                    getTitle()
                    getStats()
                    getDescription()
                    getLinks()
                    getRelated()
                    f.write('\n')
                    print('\n')
                    visited.append(url)
                    time.sleep(3)
        count += 1
        time.sleep(3)
        print('\n')
    count = 1
    print('\n')
f.close()
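To see what the `find_all(href=re.compile("user"))` filter in the loop above actually extracts, here is a self-contained check against a static snippet that mimics YouTube's old `yt-lockup-byline` markup (illustrative only, no network access needed):

```python
import re
from bs4 import BeautifulSoup

# Static snippet mimicking YouTube's old search-result markup (illustrative).
html = '<div class="yt-lockup-byline"><a href="/user/Disney">Disney</a></div>'
soup = BeautifulSoup(html, "html.parser")

links = []
for div in soup.find_all("div", "yt-lockup-byline"):
    # Keep only anchors whose href contains "user", as the scraper does.
    for a in div.find_all(href=re.compile("user")):
        links.append("https://www.youtube.com" + a.get("href") + "/about")

print(links)  # the same per-channel "/about" URL shape the scraper visits
```

Note that this relies on YouTube's old server-rendered HTML; current result pages are rendered with JavaScript, so these class names may no longer appear in the raw response.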