How to scrape a website which requires login using python and beautifulsoup?
Problem Description
If I want to scrape a website that requires login with a password first, how can I start scraping it with Python using the beautifulsoup4 library? Below is what I do for websites that do not require login.

from bs4 import BeautifulSoup
import urllib2

url = urllib2.urlopen("http://www.python.org")
content = url.read()
soup = BeautifulSoup(content)

How should the code be changed to accommodate login? Assume that the website I want to scrape is a forum that requires login. An example is http://forum.arduino.cc/index.php
You can use mechanize:
import mechanize
from bs4 import BeautifulSoup
import urllib2
import cookielib

cj = cookielib.CookieJar()
br = mechanize.Browser()
br.set_cookiejar(cj)

br.open("https://id.arduino.cc/auth/login/")
br.select_form(nr=0)
br.form['username'] = 'username'
br.form['password'] = 'password.'
br.submit()

print br.response().read()
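Once logged in, the body returned by br.response().read() can be handed to BeautifulSoup exactly as in the question's snippet. A minimal sketch, using an inline HTML string in place of the real forum page (the "post" class here is an invented example, not forum.arduino.cc's actual markup):

```python
from bs4 import BeautifulSoup

# Stand-in for the HTML a logged-in response would return;
# the 'post' class is a made-up example for illustration only.
html = "<html><body><div class='post'>First post</div><div class='post'>Second</div></body></html>"

soup = BeautifulSoup(html, "html.parser")
# Extract the text of every div with class 'post'
posts = [div.get_text() for div in soup.find_all("div", class_="post")]
print(posts)
```

The same find_all / get_text calls work on the real response body once the session cookie is in place.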
Or urllib - Login to website using urllib2
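The urllib route mentioned above can be sketched roughly as follows. This is a sketch under assumptions, not the linked answer's exact code: the login URL and the form field names 'username' and 'password' are placeholders that must be read off the real login form's HTML, and the compatibility imports let the same code run under Python 2 or Python 3:

```python
try:  # Python 2
    import urllib2 as request
    import cookielib as cookiejar
    from urllib import urlencode
except ImportError:  # Python 3
    import urllib.request as request
    import http.cookiejar as cookiejar
    from urllib.parse import urlencode


def make_login_opener(login_url, username, password):
    """Build an opener that keeps the session cookie set by a login form.

    The field names 'username' and 'password' are assumptions; inspect the
    actual login page to find the names its form really uses.
    """
    cj = cookiejar.CookieJar()
    opener = request.build_opener(request.HTTPCookieProcessor(cj))
    data = urlencode({'username': username, 'password': password})
    req = request.Request(login_url, data.encode('utf-8'))
    return opener, req, cj


# Usage (performs a real network request, so it is not executed here):
# opener, req, cj = make_login_opener('https://id.arduino.cc/auth/login/', 'user', 'pass')
# opener.open(req)  # cookies from the reply are stored in cj
# page = opener.open('http://forum.arduino.cc/index.php').read()
```

Because the cookie jar is attached to the opener, every later opener.open() call sends the session cookie automatically, which is what keeps you logged in across requests.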