YouTube video_id from Firefox bookmark.html source code [almost there]


Problem description

bookmarks.html looks like this:

<DT><A HREF="http://www.youtube.com/watch?v=Gg81zi0pheg" ADD_DATE="1320876124" LAST_MODIFIED="1320878745" ICON_URI="http://s.ytimg.com/yt/favicon-vflZlzSbU.ico" ICON="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAYAAAAf8/9hAAABEElEQVQ4jaWSPU7DQBCF3wGo7DZp4qtEQqJJlSu4jLgBNS0dRDR0QARSumyPRJogkIJiWiToYhrSEPJR7Hptx/kDRhrNm93nb3ZXFsD4psPRwR4AzbjHxd0ru4a8kEgvbf1NePfzbQdJfro52UcS8fHQDyjWCuDz7QpJTOYLZk5nH0zmi/8Dzg/DEqgCmL1fW/PXNwADf4V7AMbuis24txrw1xBJAlHgMizoLdkI4CVBREEZWTKGK9bKqa3G3QDO2G7Z2ljqAYyxPmPgI4XHpw2A7ES+d/unZzlwM2BNnwEKb1QFGJNPabdg9GB1v2993W71BNOamNZEWpfXy5nW8/3U9UQBkqTyy677F6rrkvQDptjzJ/PSbekAAAAASUVORK5CYII=">http://www.youtube.com/watch?v=Gg81zi0pheg</A>
<DT><A HREF="http://www.youtube.com/watch?v=pP9VjGmmhfo" ADD_DATE="1320876156" LAST_MODIFIED="1320878756" ICON_URI="http://s.ytimg.com/yt/favicon-vflZlzSbU.ico" ICON="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAYAAAAf8/9hAAABEElEQVQ4jaWSPU7DQBCF3wGo7DZp4qtEQqJJlSu4jLgBNS0dRDR0QARSumyPRJogkIJiWiToYhrSEPJR7Hptx/kDRhrNm93nb3ZXFsD4psPRwR4AzbjHxd0ru4a8kEgvbf1NePfzbQdJfro52UcS8fHQDyjWCuDz7QpJTOYLZk5nH0zmi/8Dzg/DEqgCmL1fW/PXNwADf4V7AMbuis24txrw1xBJAlHgMizoLdkI4CVBREEZWTKGK9bKqa3G3QDO2G7Z2ljqAYyxPmPgI4XHpw2A7ES+d/unZzlwM2BNnwEKb1QFGJNPabdg9GB1v2993W71BNOamNZEWpfXy5nW8/3U9UQBkqTyy677F6rrkvQDptjzJ/PSbekAAAAASUVORK5CYII=">http://www.youtube.com/watch?v=pP9VjGmmhfo</A>
<DT><A HREF="http://www.youtube.com/watch?v=yTA1u6D1fyE" ADD_DATE="1320876163" LAST_MODIFIED="1320878762" ICON_URI="http://s.ytimg.com/yt/favicon-vflZlzSbU.ico" ICON="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAYAAAAf8/9hAAABEElEQVQ4jaWSPU7DQBCF3wGo7DZp4qtEQqJJlSu4jLgBNS0dRDR0QARSumyPRJogkIJiWiToYhrSEPJR7Hptx/kDRhrNm93nb3ZXFsD4psPRwR4AzbjHxd0ru4a8kEgvbf1NePfzbQdJfro52UcS8fHQDyjWCuDz7QpJTOYLZk5nH0zmi/8Dzg/DEqgCmL1fW/PXNwADf4V7AMbuis24txrw1xBJAlHgMizoLdkI4CVBREEZWTKGK9bKqa3G3QDO2G7Z2ljqAYyxPmPgI4XHpw2A7ES+d/unZzlwM2BNnwEKb1QFGJNPabdg9GB1v2993W71BNOamNZEWpfXy5nW8/3U9UQBkqTyy677F6rrkvQDptjzJ/PSbekAAAAASUVORK5CYII=">http://www.youtube.com/watch?v=yTA1u6D1fyE</A>
<DT><A HREF="http://www.youtube.com/watch?v=4v8HvQf4fgE" ADD_DATE="1320876186" LAST_MODIFIED="1320878767" ICON_URI="http://s.ytimg.com/yt/favicon-vflZlzSbU.ico" ICON="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAYAAAAf8/9hAAABEElEQVQ4jaWSPU7DQBCF3wGo7DZp4qtEQqJJlSu4jLgBNS0dRDR0QARSumyPRJogkIJiWiToYhrSEPJR7Hptx/kDRhrNm93nb3ZXFsD4psPRwR4AzbjHxd0ru4a8kEgvbf1NePfzbQdJfro52UcS8fHQDyjWCuDz7QpJTOYLZk5nH0zmi/8Dzg/DEqgCmL1fW/PXNwADf4V7AMbuis24txrw1xBJAlHgMizoLdkI4CVBREEZWTKGK9bKqa3G3QDO2G7Z2ljqAYyxPmPgI4XHpw2A7ES+d/unZzlwM2BNnwEKb1QFGJNPabdg9GB1v2993W71BNOamNZEWpfXy5nW8/3U9UQBkqTyy677F6rrkvQDptjzJ/PSbekAAAAASUVORK5CYII=">http://www.youtube.com/watch?v=4v8HvQf4fgE</A>
<DT><A HREF="http://www.youtube.com/watch?v=e9zG20wQQ1U" ADD_DATE="1320876195" LAST_MODIFIED="1320878773" ICON_URI="http://s.ytimg.com/yt/favicon-vflZlzSbU.ico" ICON="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAYAAAAf8/9hAAABEElEQVQ4jaWSPU7DQBCF3wGo7DZp4qtEQqJJlSu4jLgBNS0dRDR0QARSumyPRJogkIJiWiToYhrSEPJR7Hptx/kDRhrNm93nb3ZXFsD4psPRwR4AzbjHxd0ru4a8kEgvbf1NePfzbQdJfro52UcS8fHQDyjWCuDz7QpJTOYLZk5nH0zmi/8Dzg/DEqgCmL1fW/PXNwADf4V7AMbuis24txrw1xBJAlHgMizoLdkI4CVBREEZWTKGK9bKqa3G3QDO2G7Z2ljqAYyxPmPgI4XHpw2A7ES+d/unZzlwM2BNnwEKb1QFGJNPabdg9GB1v2993W71BNOamNZEWpfXy5nW8/3U9UQBkqTyy677F6rrkvQDptjzJ/PSbekAAAAASUVORK5CYII=">http://www.youtube.com/watch?v=e9zG20wQQ1U</A>
<DT><A HREF="http://www.youtube.com/watch?v=khL4s2bvn-8" ADD_DATE="1320876203" LAST_MODIFIED="1320878782" ICON_URI="http://s.ytimg.com/yt/favicon-vflZlzSbU.ico" ICON="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAYAAAAf8/9hAAABEElEQVQ4jaWSPU7DQBCF3wGo7DZp4qtEQqJJlSu4jLgBNS0dRDR0QARSumyPRJogkIJiWiToYhrSEPJR7Hptx/kDRhrNm93nb3ZXFsD4psPRwR4AzbjHxd0ru4a8kEgvbf1NePfzbQdJfro52UcS8fHQDyjWCuDz7QpJTOYLZk5nH0zmi/8Dzg/DEqgCmL1fW/PXNwADf4V7AMbuis24txrw1xBJAlHgMizoLdkI4CVBREEZWTKGK9bKqa3G3QDO2G7Z2ljqAYyxPmPgI4XHpw2A7ES+d/unZzlwM2BNnwEKb1QFGJNPabdg9GB1v2993W71BNOamNZEWpfXy5nW8/3U9UQBkqTyy677F6rrkvQDptjzJ/PSbekAAAAASUVORK5CYII=">http://www.youtube.com/watch?v=khL4s2bvn-8</A>
<DT><A HREF="http://www.youtube.com/watch?v=XTndQ7bYV0A" ADD_DATE="1320876271" LAST_MODIFIED="1320876271">Paramore - Walmart Soundcheck 6-For a pessimist(HQ)</A>
<DT><A HREF="http://www.youtube.com/watch?v=xTT2MqgWRRc" ADD_DATE="1320876284" LAST_MODIFIED="1320876284">Paramore - Walmart Soundcheck 5-Pressure(HQ)</A>
<DT><A HREF="http://www.youtube.com/watch?v=J2ZYQngwSUw" ADD_DATE="1320876291" LAST_MODIFIED="1320876291">Paramore - Wal-Mart Soundcheck Interview</A>
<DT><A HREF="http://www.youtube.com/watch?v=9RZwvg7unrU" ADD_DATE="1320878207" LAST_MODIFIED="1320878207">Paramore - 08 - Interview [ Wal-Mart Soundcheck ]</A>
<DT><A HREF="http://www.youtube.com/watch?v=vz3qOYWwm10" ADD_DATE="1320878295" LAST_MODIFIED="1320878295">Paramore - 04 - That&#39;s What You Get [ Wal-Mart Soundcheck ]</A>
<DT><A HREF="http://www.youtube.com/watch?v=yarv52QX_Yw" ADD_DATE="1320878301" LAST_MODIFIED="1320878301">Paramore - 05 - Pressure [ Wal-Mart Soundcheck ]</A>
<DT><A HREF="http://www.youtube.com/watch?v=LRREY1H3GCI" ADD_DATE="1320878317" LAST_MODIFIED="1320878317">Paramore - Walmart Promo</A>

It's a standard bookmarks export file from Firefox.

I feed it into bookmarks.py which looks like this:

#!/usr/bin/env python

import sys
import BeautifulSoup as bs
from BeautifulSoup import BeautifulSoup 

url_list = sys.argv[1]
urls = [tag['href'] for tag in 
BeautifulSoup(open(url_list)).findAll('a')] 

print urls

This returns a much cleaner list of URLs:

[u'http://www.youtube.com/watch?v=Gg81zi0pheg', u'http://www.youtube.com/watch?v=pP9VjGmmhfo', u'http://www.youtube.com/watch?v=yTA1u6D1fyE', u'http://www.youtube.com/watch?v=4v8HvQf4fgE', u'http://www.youtube.com/watch?v=e9zG20wQQ1U', u'http://www.youtube.com/watch?v=khL4s2bvn-8', u'http://www.youtube.com/watch?v=XTndQ7bYV0A', u'http://www.youtube.com/watch?v=xTT2MqgWRRc', u'http://www.youtube.com/watch?v=J2ZYQngwSUw', u'http://www.youtube.com/watch?v=9RZwvg7unrU', u'http://www.youtube.com/watch?v=vz3qOYWwm10', u'http://www.youtube.com/watch?v=yarv52QX_Yw', u'http://www.youtube.com/watch?v=LRREY1H3GCI']

My next step is to get each of the YouTube URLs into video_info.py:

#!/usr/bin/python

import urlparse
import sys
import gdata.youtube
import gdata.youtube.service
import re
import urlparse
import urllib2

youtube_url = sys.argv[1]
url_data = urlparse.urlparse(youtube_url)
query = urlparse.parse_qs(url_data.query)
youtube_id = query["v"][0]

print youtube_id

yt_service = gdata.youtube.service.YouTubeService()
yt_service.developer_key = 'AI39si4yOmI0GEhSTXH0nkiVDf6tQjCkqoys5BBYLKEr-PQxWJ0IlwnUJAcdxpocGLBBCapdYeMLIsB7KVC_OA8gYK0VKV726g'

entry = yt_service.GetYouTubeVideoEntry(video_id=youtube_id)

print 'Video title: %s' % entry.media.title.text
print 'Video view count: %s' % entry.statistics.view_count

When I pass in the URL "http://www.youtube.com/watch?v=aXrgwC1rsw4", the output looks like this:

aXrgwC1rsw4
Video title: OneRepublic   Good Life  Live Walmart Soundcheck
Video view count: 202

How do I feed the list of URLs from bookmarks.py into video_info.py?

*Extra points for output to CSV format, and extra extra points for checking for duplicates in bookmarks.html before passing the data to video_info.py.*

Thanks for all your help, guys. Because of Stack Overflow I've gotten this far.

David

So combined I now have:

#!/usr/bin/env python

import urlparse
import gdata.youtube
import gdata.youtube.service
import re
import urlparse
import urllib2
import sys
import BeautifulSoup as bs
from BeautifulSoup import BeautifulSoup 

yt_service = gdata.youtube.service.YouTubeService()
yt_service.developer_key = 'AI39si4yOmI0GEhSTXH0nkiVDf6tQjCkqoys5BBYLKEr-PQxWJ0IlwnUJAcdxpocGLBBCapdYeMLIsB7KVC_OA8gYK0VKV726g'

url_list = sys.argv[1]
urls = [tag['href'] for tag in 
BeautifulSoup(open(url_list)).findAll('a')] 

print urls

youtube_url = urls
url_data = urlparse.urlparse(youtube_url)
query = urlparse.parse_qs(url_data.query)
youtube_id = query["v"][0]

#list(set(my_list))

entry = yt_service.GetYouTubeVideoEntry(video_id=youtube_id)

myyoutubes = []
myyoutubes.append(", ".join([youtube_id, entry.media.title.text,entry.statistics.view_count]))
print "\n".join(myyoutubes)

How do I pass the list of URLs to the youtube_url variable? I think they need further cleaning and to be passed in one at a time.

I've got it down to this now:

#!/usr/bin/env python

import urlparse
import gdata.youtube
import gdata.youtube.service
import re
import urlparse
import urllib2
import sys
import BeautifulSoup as bs
from BeautifulSoup import BeautifulSoup 

yt_service = gdata.youtube.service.YouTubeService()
yt_service.developer_key = 'AI39si4yOmI0GEhSTXH0nkiVDf6tQjCkqoys5BBYLKEr-PQxWJ0IlwnUJAcdxpocGLBBCapdYeMLIsB7KVC_OA8gYK0VKV726g'

url_list = sys.argv[1]
urls = [tag['href'] for tag in 
    BeautifulSoup(open(url_list)).findAll('a')] 

for url in urls:
    youtube_url = url

url_data = urlparse.urlparse(youtube_url)
query = urlparse.parse_qs(url_data.query)
youtube_id = query["v"][0]

#list(set(my_list))

entry = yt_service.GetYouTubeVideoEntry(video_id=youtube_id)

myyoutubes = []
myyoutubes.append(", ".join([youtube_id, entry.media.title.text,entry.statistics.view_count]))
print "\n".join(myyoutubes)

I can pass bookmarks.html to combined.py but it only returns the first line.

How do I loop over each line for youtube_url?
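
For reference, the reason only one result comes back is that the for loop above only reassigns youtube_url; the urlparse call and the GetYouTubeVideoEntry lookup sit after the loop, so they run just once, on whichever URL the loop left last. A minimal sketch of one fix, reusing the urls and yt_service already defined in the script, moves the per-URL work inside the loop:

myyoutubes = []
for youtube_url in urls:
    url_data = urlparse.urlparse(youtube_url)
    query = urlparse.parse_qs(url_data.query)
    if "v" not in query:
        continue # not a watch?v= link, skip it
    youtube_id = query["v"][0]
    entry = yt_service.GetYouTubeVideoEntry(video_id=youtube_id)
    myyoutubes.append(", ".join([youtube_id, entry.media.title.text, entry.statistics.view_count]))
print "\n".join(myyoutubes)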

Recommended answer

You should provide a string to BeautifulSoup:

# parse bookmarks.html
with open(sys.argv[1]) as bookmark_file:
    soup = BeautifulSoup(bookmark_file.read())

# extract youtube video urls
video_url_regex = re.compile('http://www.youtube.com/watch')
urls = [link['href'] for link in soup('a', href=video_url_regex)]

Separate the very fast URL parsing from the much slower downloading of the stats:

# extract video ids from the urls
ids = [] # you could use `set()` and `ids.add()` to avoid duplicates
for video_url in urls:
    url = urlparse.urlparse(video_url)
    video_id = urlparse.parse_qs(url.query).get('v')
    if not video_id: continue # no video_id in the url
    ids.append(video_id[0])
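
If you also want the duplicate check the question asks for, the set() idea from the comment above can be expanded into an order-preserving variant. A sketch:

ids = []      # video ids, in bookmark order, without duplicates
seen = set()  # ids collected so far
for video_url in urls:
    url = urlparse.urlparse(video_url)
    video_id = urlparse.parse_qs(url.query).get('v')
    if not video_id:
        continue # no video_id in the url
    if video_id[0] in seen:
        continue # duplicate bookmark, already collected
    seen.add(video_id[0])
    ids.append(video_id[0])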

You don't need to authenticate for readonly requests:

# get some statistics for the videos
yt_service = YouTubeService()
yt_service.ssl = True #NOTE: it works for readonly requests
yt_service.debug = True # show requests

Save some statistics to a csv file provided on the command-line. Don't stop if some video causes an error:

writer = csv.writer(open(sys.argv[2], 'wb')) # save to cvs file
for video_id in ids:
    try:
        entry = yt_service.GetYouTubeVideoEntry(video_id=video_id)
    except Exception, e:
        print >>sys.stderr, "Failed to retrieve entry video_id=%s: %s" %(
            video_id, e)
    else:
        title = entry.media.title.text
        print "Title:", title
        view_count = entry.statistics.view_count
        print "View count:", view_count
        writer.writerow((video_id, title, view_count)) # write it

Here's the full script.
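
Assembled from the snippets above, the complete script might look like the following sketch (the imports are an assumption: YouTubeService is taken from gdata.youtube.service, as in the question's own scripts; the rest mirrors the pieces above):

#!/usr/bin/env python
# download-video-stats.py: sketch assembled from the answer's snippets
import csv
import re
import sys
import urlparse

from BeautifulSoup import BeautifulSoup
from gdata.youtube.service import YouTubeService

# parse bookmarks.html
with open(sys.argv[1]) as bookmark_file:
    soup = BeautifulSoup(bookmark_file.read())

# extract youtube video urls
video_url_regex = re.compile('http://www.youtube.com/watch')
urls = [link['href'] for link in soup('a', href=video_url_regex)]

# extract video ids from the urls
ids = [] # you could use `set()` and `ids.add()` to avoid duplicates
for video_url in urls:
    url = urlparse.urlparse(video_url)
    video_id = urlparse.parse_qs(url.query).get('v')
    if not video_id:
        continue # no video_id in the url
    ids.append(video_id[0])

# get some statistics for the videos
yt_service = YouTubeService()
yt_service.ssl = True    # works for readonly requests
yt_service.debug = True  # show requests

# save the statistics to the csv file given on the command line
writer = csv.writer(open(sys.argv[2], 'wb'))
for video_id in ids:
    try:
        entry = yt_service.GetYouTubeVideoEntry(video_id=video_id)
    except Exception, e:
        print >>sys.stderr, "Failed to retrieve entry video_id=%s: %s" % (
            video_id, e)
    else:
        title = entry.media.title.text
        print "Title:", title
        view_count = entry.statistics.view_count
        print "View count:", view_count
        writer.writerow((video_id, title, view_count)) # write it

A sample run: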

$ python download-video-stats.py neudorfer.html out.csv
send: u'GET https://gdata.youtube.com/feeds/api/videos/Gg81zi0pheg HTTP/1.1\r\nAcc
ept-Encoding: identity\r\nHost: gdata.youtube.com\r\nContent-Type: application/ato
m+xml\r\nUser-Agent: None GData-Python/2.0.15\r\n\r\n'                           
reply: 'HTTP/1.1 200 OK\r\n'
header: X-GData-User-Country: RU
header: Content-Type: application/atom+xml; charset=UTF-8
header: Expires: Thu, 10 Nov 2011 19:31:23 GMT
header: Date: Thu, 10 Nov 2011 19:31:23 GMT
header: Cache-Control: private, max-age=300, no-transform
header: Vary: *
header: GData-Version: 1.0
header: Last-Modified: Wed, 02 Nov 2011 08:58:11 GMT
header: Transfer-Encoding: chunked
header: X-Content-Type-Options: nosniff
header: X-Frame-Options: SAMEORIGIN
header: X-XSS-Protection: 1; mode=block
header: Server: GSE
Title: Paramore - Let The Flames Begin [Wal-Mart Soundcheck]
View count: 27807

out.csv:

Gg81zi0pheg,Paramore - Let The Flames Begin [Wal-Mart Soundcheck],27807
pP9VjGmmhfo,Paramore: Wal-Mart Soundcheck,1363078
yTA1u6D1fyE,Paramore-Walmart Soundcheck 7-CrushCrushCrush(HQ),843
4v8HvQf4fgE,Paramore-Walmart Soundcheck 4-That's What You Get(HQ),1429
e9zG20wQQ1U,Paramore-Walmart Soundcheck 8-Interview(HQ),1306
khL4s2bvn-8,Paramore-Walmart Soundcheck 3-Emergency(HQ),796
XTndQ7bYV0A,Paramore-Walmart Soundcheck 6-For a pessimist(HQ),599
xTT2MqgWRRc,Paramore-Walmart Soundcheck 5-Pressure(HQ),963
J2ZYQngwSUw,Paramore - Wal-Mart Soundcheck Interview,10261
9RZwvg7unrU,Paramore - 08 - Interview [Wal-Mart Soundcheck],1674
vz3qOYWwm10,Paramore - 04 - That's What You Get [Wal-Mart Soundcheck],1268
yarv52QX_Yw,Paramore - 05 - Pressure [Wal-Mart Soundcheck],1296
LRREY1H3GCI,Paramore - Walmart Promo,523
