Scrape User Location from Twitter


Problem Description


I am trying to scrape the latitude and longitude of Twitter users from their user names. The user name list is a csv file with more than 50 names in one input file. Below are the two attempts I have made so far; neither of them works. Corrections to either program, or an entirely new approach, are welcome.

I have a list of User_names and I am trying to look up each user's profile and pull the geolocation from the profile or timeline. I could not find many samples anywhere on the Internet.

I am looking for a better approach to get the geolocations of users from Twitter. I could not even find a single example that shows harvesting a user's location by User_name or user_id. Is it even possible in the first place?

Input: The input files have more than 50k rows

AfsarTamannaah,6.80E+17,12/24/2015,#chennaifloods
DEEPU_S_GIRI,6.80E+17,12/24/2015,#chennaifloods
DEEPU_S_GIRI,6.80E+17,12/24/2015,#weneverletyoudownstr
ndtv,6.80E+17,12/24/2015,#chennaifloods
1andonlyharsha,6.79E+17,12/21/2015,#chennaifloods
Shashkya,6.79E+17,12/21/2015,#moneyonmobile
Shashkya,6.79E+17,12/21/2015,#chennaifloods
timesofindia,6.79E+17,12/20/2015,#chennaifloods
ANI_news,6.78E+17,12/20/2015,#chennaifloods
DrAnbumaniPMK,6.78E+17,12/19/2015,#chennaifloods
timesofindia,6.78E+17,12/18/2015,#chennaifloods
SRKCHENNAIFC,6.78E+17,12/18/2015,#dilwalefdfs
SRKCHENNAIFC,6.78E+17,12/18/2015,#chennaifloods
AmeriCares,6.77E+17,12/16/2015,#india
AmeriCares,6.77E+17,12/16/2015,#chennaifloods
ChennaiRainsH,6.77E+17,12/15/2015,#chennairainshelp
ChennaiRainsH,6.77E+17,12/15/2015,#chennaifloods
AkkiPritam,6.77E+17,12/15/2015,#chennaifloods

Code:

import tweepy
from tweepy import Stream
from tweepy.streaming import StreamListener
from tweepy import OAuthHandler
import pandas as pd
import json
import csv
import sys
import time

CONSUMER_KEY = 'XYZ'
CONSUMER_SECRET = 'XYZ'
ACCESS_KEY = 'XYZ'
ACCESS_SECRET = 'XYZ'

auth = OAuthHandler(CONSUMER_KEY,CONSUMER_SECRET)
api = tweepy.API(auth)
auth.set_access_token(ACCESS_KEY, ACCESS_SECRET)

data = pd.read_csv('user_keyword.csv')

df = ['user_name', 'user_id', 'date', 'keyword']

test = api.lookup_users(user_ids=['user_name'])

for user in test:
    print user.user_name
    print user.user_id
    print user.date
    print user.keyword
    print user.geolocation

Error:

Traceback (most recent call last):
  File "user_profile_location.py", line 24, in <module>
    test = api.lookup_users(user_ids=['user_name'])
  File "/usr/lib/python2.7/dist-packages/tweepy/api.py", line 150, in lookup_users
    return self._lookup_users(list_to_csv(user_ids), list_to_csv(screen_names))
  File "/usr/lib/python2.7/dist-packages/tweepy/binder.py", line 197, in _call
    return method.execute()
  File "/usr/lib/python2.7/dist-packages/tweepy/binder.py", line 173, in execute
    raise TweepError(error_msg, resp)
tweepy.error.TweepError: [{'message': 'No user matches for specified terms.', 'code': 17}]

I understand that not every user shares a geolocation, but it would be great to get it for those who keep their profiles publicly open.

User locations as place names and/or lat/lon coordinates are what I am looking for.

If this approach isn't correct, I am open to alternatives as well.
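
For reference, the TweepError above is raised because the literal string 'user_name' is passed to lookup_users as a user ID instead of the actual names read from the CSV. Below is a minimal sketch of one way the call might look, assuming a tweepy 3.x-era client where lookup_users accepts a screen_names list; the file and column names come from the question, everything else is illustrative.

#!/usr/bin/env python
# Minimal sketch (assumption: tweepy 3.x, which exposes lookup_users(screen_names=...)).
from __future__ import print_function

import pandas as pd
import tweepy
from tweepy import OAuthHandler

auth = OAuthHandler('XYZ', 'XYZ')        # consumer key / consumer secret placeholders
auth.set_access_token('XYZ', 'XYZ')      # access token / access token secret placeholders
api = tweepy.API(auth)

# The input file has no header row, so the column names are supplied explicitly.
data = pd.read_csv('user_keyword.csv', header=None,
                   names=['user_name', 'user_id', 'date', 'keyword'])

# users/lookup accepts at most 100 names per request, so only a first batch is shown here.
batch = data['user_name'].drop_duplicates().head(100).tolist()

for user in api.lookup_users(screen_names=batch):
    # 'location' is the free-text location from the profile page; it may be empty.
    print(user.screen_name, user.location)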

Update One: After some deeper searching I found this website, which provides a very close solution, but I am getting an error while trying to read the userName column from the input file.

It says that only 100 users' information can be grabbed per call; what is the best way to work around that limitation?

Code:

import sys
import string
import simplejson
from twython import Twython
import csv
import pandas as pd

#WE WILL USE THE VARIABLES DAY, MONTH, AND YEAR FOR OUR OUTPUT FILE NAME
import datetime
now = datetime.datetime.now()
day=int(now.day)
month=int(now.month)
year=int(now.year)


#FOR OAUTH AUTHENTICATION -- NEEDED TO ACCESS THE TWITTER API
t = Twython(app_key='ABC', 
    app_secret='ABC',
    oauth_token='ABC',
    oauth_token_secret='ABC')

#INPUT HAS NO HEADER NO INDEX
ids = pd.read_csv('user_keyword.csv', header=['userName', 'userID', 'Date', 'Keyword'], usecols=['userName'])

#ACCESS THE LOOKUP_USER METHOD OF THE TWITTER API -- GRAB INFO ON UP TO 100 IDS WITH EACH API CALL

users = t.lookup_user(user_id = ids)

#NAME OUR OUTPUT FILE - %i WILL BE REPLACED BY CURRENT MONTH, DAY, AND YEAR
outfn = "twitter_user_data_%i.%i.%i.csv" % (now.month, now.day, now.year)

#NAMES FOR HEADER ROW IN OUTPUT FILE
fields = "id, screen_name, name, created_at, url, followers_count, friends_count, statuses_count, \
    favourites_count, listed_count, \
    contributors_enabled, description, protected, location, lang, expanded_url".split()

#INITIALIZE OUTPUT FILE AND WRITE HEADER ROW   
outfp = open(outfn, "w")
outfp.write(string.join(fields, "\t") + "\n")  # header

#THE VARIABLE 'USERS' CONTAINS INFORMATION OF THE 32 TWITTER USER IDS LISTED ABOVE
#THIS BLOCK WILL LOOP OVER EACH OF THESE IDS, CREATE VARIABLES, AND OUTPUT TO FILE
for entry in users:
    #CREATE EMPTY DICTIONARY
    r = {}
    for f in fields:
        r[f] = ""
    #ASSIGN VALUE OF 'ID' FIELD IN JSON TO 'ID' FIELD IN OUR DICTIONARY
    r['id'] = entry['id']
    #SAME WITH 'SCREEN_NAME' HERE, AND FOR REST OF THE VARIABLES
    r['screen_name'] = entry['screen_name']
    r['name'] = entry['name']
    r['created_at'] = entry['created_at']
    r['url'] = entry['url']
    r['followers_count'] = entry['followers_count']
    r['friends_count'] = entry['friends_count']
    r['statuses_count'] = entry['statuses_count']
    r['favourites_count'] = entry['favourites_count']
    r['listed_count'] = entry['listed_count']
    r['contributors_enabled'] = entry['contributors_enabled']
    r['description'] = entry['description']
    r['protected'] = entry['protected']
    r['location'] = entry['location']
    r['lang'] = entry['lang']
    #NOT EVERY ID WILL HAVE A 'URL' KEY, SO CHECK FOR ITS EXISTENCE WITH IF CLAUSE
    if 'url' in entry['entities']:
        r['expanded_url'] = entry['entities']['url']['urls'][0]['expanded_url']
    else:
        r['expanded_url'] = ''
    print r
    #CREATE EMPTY LIST
    lst = []
    #ADD DATA FOR EACH VARIABLE
    for f in fields:
        lst.append(unicode(r[f]).replace("\/", "/"))
    #WRITE ROW WITH DATA IN LIST
    outfp.write(string.join(lst, "\t").encode("utf-8") + "\n")

outfp.close()    

Error:

File "user_profile_location.py", line 35, in <module>
    ids = pd.read_csv('user_keyword.csv', header=['userName', 'userID', 'Date', 'Keyword'], usecols=['userName'])
  File "/usr/local/lib/python2.7/dist-packages/pandas/io/parsers.py", line 562, in parser_f
    return _read(filepath_or_buffer, kwds)
  File "/usr/local/lib/python2.7/dist-packages/pandas/io/parsers.py", line 315, in _read
    parser = TextFileReader(filepath_or_buffer, **kwds)
  File "/usr/local/lib/python2.7/dist-packages/pandas/io/parsers.py", line 645, in __init__
    self._make_engine(self.engine)
  File "/usr/local/lib/python2.7/dist-packages/pandas/io/parsers.py", line 799, in _make_engine
    self._engine = CParserWrapper(self.f, **self.options)
  File "/usr/local/lib/python2.7/dist-packages/pandas/io/parsers.py", line 1202, in __init__
    ParserBase.__init__(self, kwds)
  File "/usr/local/lib/python2.7/dist-packages/pandas/io/parsers.py", line 918, in __init__
    raise ValueError("cannot specify usecols when "
ValueError: cannot specify usecols when specifying a multi-index header
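
Two separate issues are visible here. The pandas ValueError occurs because header= expects row numbers; passing a list of labels makes pandas treat it as a multi-index header, which cannot be combined with usecols. For a headerless file the labels belong in names= (with header=None). The 100-user ceiling is a property of the users/lookup endpoint itself, so larger inputs have to be processed in chunks of 100. A minimal sketch under those assumptions follows; the file name, column names, and Twython client come from the question, while the chunking loop is illustrative.

from __future__ import print_function

import pandas as pd
from twython import Twython

t = Twython(app_key='ABC',
            app_secret='ABC',
            oauth_token='ABC',
            oauth_token_secret='ABC')

# Headerless input: give the labels via names=, then keep only the userName column.
ids = pd.read_csv('user_keyword.csv', header=None,
                  names=['userName', 'userID', 'Date', 'Keyword'],
                  usecols=['userName'])

names = ids['userName'].drop_duplicates().tolist()

users = []
for i in range(0, len(names), 100):
    chunk = names[i:i + 100]
    # screen_name takes a comma-separated list of up to 100 names per request
    users.extend(t.lookup_user(screen_name=','.join(chunk)))

for entry in users:
    print(entry['screen_name'], entry.get('location', ''))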

Solution

Assuming that you just want the location that the user has put up on his/her profile page, you can simply use API.get_user from Tweepy. Below is the working code.

#!/usr/bin/env python
from __future__ import print_function

#Import the necessary methods from tweepy library
import tweepy
from tweepy import OAuthHandler


#user credentials to access Twitter API 
access_token = "your access token here"
access_token_secret = "your access token secret key here"
consumer_key = "your consumer key here"
consumer_secret = "your consumer secret key here"


def get_user_details(username):
    userobj = api.get_user(username)
    return userobj


if __name__ == '__main__':
    #authenticating the app (https://apps.twitter.com/)
    auth = tweepy.auth.OAuthHandler(consumer_key, consumer_secret)
    auth.set_access_token(access_token, access_token_secret)
    api = tweepy.API(auth)

    #for list of usernames, put them in iterable and call the function
    username = 'thinkgeek'
    userOBJ = get_user_details(username)
    print(userOBJ.location)

Note: This is a crude implementation. Write a proper sleeper function to obey Twitter API access limits.
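
A hedged sketch of what such a sleeper could look like, assuming a tweepy 3.x-era client (RateLimitError and the wait_on_rate_limit flag are that library's own names; the retry loop itself is illustrative):

import time
import tweepy

def get_user_details_with_backoff(api, username):
    # Retry the users/show call, sleeping out the 15-minute rate-limit window when needed.
    while True:
        try:
            return api.get_user(username)
        except tweepy.RateLimitError:
            time.sleep(15 * 60)

# Simpler alternative: let tweepy block until the rate-limit window resets.
# api = tweepy.API(auth, wait_on_rate_limit=True)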
