Not getting paused ads insights using Facebook Marketing API


Problem description

I wrote this script that returns a list of ads with their stats, but apparently I'm only getting insights for active ads and not paused ones. For the paused ones, I'm just getting the campaign name and its id!

I tried using filtering like below, but it's not working:


first = "https://graph.facebook.com/v3.2/act_105433210/campaigns?filtering=[{'field':'effective_status','operator':'IN','value':['PAUSED']}]&fields=created_time,name,effective_status,insights{spend,impressions,clicks}&access_token=%s"% token

Then I check using:

import json
import requests

result = requests.get(first)
content_dict = json.loads(result.content)
print(content_dict)

and this is a sample of the output I get:

{'data': [{'created_time': '2019-02-15T17:24:29+0100', 'name': '20122301-FB-BOOST-EVENT-CC SDSDSD', 'effective_status': 'PAUSED', 'id': '6118169436761'}, ...]}

There is only the name of the campaign and no insights! Has anyone managed to retrieve stats/insights for paused ads/campaigns?

Thanks!

Please check my other post about my python script: I can't fetch stats for all my facebook campaigns using Python and Facebook Marketing API

Recommended answer

After days of digging around, I finally came up with a script that I ran to extract 3 years of Facebook ads insights while avoiding the Facebook API rate limit.

First, we import the libraries we'll need:

from facebookads.api import FacebookAdsApi
from facebookads.adobjects.adsinsights import AdsInsights
from facebookads.adobjects.adaccount import AdAccount
from facebookads.adobjects.business import Business
import datetime
import csv
import re 
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from google.colab import files
import time

Please note that after extracting the insights, I'm saving them to Google Cloud Storage and then to BigQuery tables.
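The upload and load steps at the end of the script use bucket, bq_client and dataset, which are never defined in the snippets below; a minimal sketch of that setup could look like the following (the project, bucket and dataset names are placeholders):

from google.cloud import bigquery, storage

# Clients for the GCS upload and the BigQuery load further down.
storage_client = storage.Client(project='my-project')
bucket = storage_client.get_bucket('my-project')  # placeholder bucket name
bq_client = bigquery.Client(project='my-project')
dataset = bq_client.dataset('my_dataset')         # placeholder dataset name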

access_token = 'my-token'
ad_account_id = 'act_id'
app_secret = 'app_s****'
app_id = 'app_id****'
FacebookAdsApi.init(app_id,app_secret, access_token=access_token, api_version='v3.2')
account = AdAccount(ad_account_id)
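If you want a quick sanity check before a long run, you can read the account name back with remote_read (the read call in this older facebookads SDK); a small sketch, using the AdAccount class imported above:

# Fetch the account name to verify the token and act_ id work.
print(account.remote_read(fields=[AdAccount.Field.name]))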

Then, the following script calls the API and checks how close we are to the rate limit:

import logging
import requests as rq

#Function to find the string between two strings or characters
def find_between( s, first, last ):
    try:
        start = s.index( first ) + len( first )
        end = s.index( last, start )
        return s[start:end]
    except ValueError:
        return ""

#Function to check how close you are to the FB Rate Limit
def check_limit():
    check=rq.get('https://graph.facebook.com/v3.1/'+ad_account_id+'/insights?access_token='+access_token)
    usage=float(find_between(check.headers['x-ad-account-usage'],':','}'))
    return usage
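For reference, the x-ad-account-usage header holds a small JSON object, roughly of the shape '{"acc_id_util_pct":9.67}' (shape assumed here), and find_between simply pulls the number out from between the colon and the closing brace:

# Illustration of what check_limit() parses (header shape assumed):
sample_header = '{"acc_id_util_pct":9.67}'
print(find_between(sample_header, ':', '}'))  # -> '9.67'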

Now, this is the whole script that you can run to extract data for the last X days!

Y = 30  # number of days to extract (example value; set as needed)
for x in range(1, Y):

  date_0 = datetime.datetime.now() - datetime.timedelta(days=x )
  date_ = date_0.strftime('%Y-%m-%d')
  date_compact = date_.replace('-', '')
  filename = 'fb_%s.csv'%date_compact
  filelocation = "./"+ filename
  # Open or create a new file for this day's export
  try:
      csvfile = open(filelocation, 'w+', newline='')
  except IOError as err:
      print("Cannot open file:", err)
      continue


  # To keep track of rows added to file
  rows = 0

  try:
      # Create file writer
      filewriter = csv.writer(csvfile, delimiter=',')
      filewriter.writerow(['date','ad_name', 'adset_id', 'adset_name', 'campaign_id', 'campaign_name', 'clicks', 'impressions', 'spend'])
  except Exception as err:
      print(err)
  # Iterate through all accounts in the business account

  ads = account.get_insights(
      params={'time_range': {'since': date_, 'until': date_}, 'level': 'ad'},
      fields=[AdsInsights.Field.ad_name, AdsInsights.Field.adset_id,
              AdsInsights.Field.adset_name, AdsInsights.Field.campaign_id,
              AdsInsights.Field.campaign_name, AdsInsights.Field.clicks,
              AdsInsights.Field.impressions, AdsInsights.Field.spend])
  for ad in ads:

    # Set default values in case the insight info is empty
    date = date_
    adsetid = ""
    adname = ""
    adsetname = ""
    campaignid = ""
    campaignname = ""
    clicks = ""
    impressions = ""
    spend = ""

    # Set values from insight data
    if ('adset_id' in ad) :
        adsetid = ad[AdsInsights.Field.adset_id]
    if ('ad_name' in ad) :
        adname = ad[AdsInsights.Field.ad_name]
    if ('adset_name' in ad) :
        adsetname = ad[AdsInsights.Field.adset_name]
    if ('campaign_id' in ad) :
        campaignid = ad[AdsInsights.Field.campaign_id]
    if ('campaign_name' in ad) :
        campaignname = ad[AdsInsights.Field.campaign_name]
    if ('clicks' in ad) :
        clicks = ad[AdsInsights.Field.clicks]
    if ('impressions' in ad) :
        impressions = ad[AdsInsights.Field.impressions]
    if ('spend' in ad) :
        spend = ad[AdsInsights.Field.spend]

    # Write all ad info to the file, and increment the number of rows that will display
    filewriter.writerow([date_, adname, adsetid, adsetname, campaignid, campaignname, clicks, impressions, spend])
    rows += 1

  csvfile.close()

  # Print report
  print(str(rows) + " rows added to the file " + filename)
  print(check_limit(), '% of rate limit used')
  ## write to GCS and BQ
  blob = bucket.blob('fb_2/fb_%s.csv'%date_compact)
  blob.upload_from_filename(filelocation)
  load_job_config = bigquery.LoadJobConfig()
  table_name = '0_fb_ad_stats_%s' % date_compact
  load_job_config.write_disposition = 'WRITE_TRUNCATE'
  load_job_config.skip_leading_rows = 1

  # The source format defaults to CSV, so the line below is optional.
  load_job_config.source_format = bigquery.SourceFormat.CSV
  load_job_config.field_delimiter = ','
  load_job_config.autodetect = True
  uri = 'gs://my-project/fb_2/fb_%s.csv'%date_compact
  load_job = bq_client.load_table_from_uri(
    uri,
    dataset.table(table_name),
    job_config=load_job_config)  # API request
  print('Starting job {}'.format(load_job.job_id))
  load_job.result()  # Waits for table load to complete.
  print('Job finished.')

  if (check_limit() >= 75):
    print('75% of rate limit reached. Cooling down for 225 seconds.')
    logging.debug('75% of rate limit reached. Cooling down for 225 seconds.')
    time.sleep(225)

This worked perfectly, but note that if you're planning to extract 3 years of data, the script will take a long time to run!

I'd like to thank LucyTurtle and Ashish Baid for their scripts, which helped me during my work!

Please refer to this post if you need more details or if you need to extract one day's data for different ad accounts:

Facebook Marketing API - Python to get Insights - User request limit reached
