Efficiently write a Pandas dataframe to Google BigQuery

Question

I'm trying to upload a pandas.DataFrame to Google BigQuery using the pandas.DataFrame.to_gbq() function documented here. The problem is that to_gbq() takes 2.3 minutes, while uploading directly through the Google Cloud Storage GUI takes less than a minute. I'm planning to upload a bunch of dataframes (~32), each of a similar size, so I want to know which is the fastest alternative.

This is the script I'm using:

dataframe.to_gbq('my_dataset.my_table',
                 'my_project_id',
                 chunksize=None, # I've tried several chunksizes; it runs faster with one big chunk (at least for me)
                 if_exists='append',
                 verbose=False
                 )

dataframe.to_csv(str(month) + '_file.csv') # the file is 37.3 MB; this takes almost 2 seconds
# manually upload the file through the GCS GUI
print(dataframe.shape)
(363364, 21)

My question is: which is faster?

  1. Upload the Dataframe using the pandas.DataFrame.to_gbq() function
  2. Save the Dataframe as a CSV and then upload it as a file to BigQuery using the Python API
  3. Save the Dataframe as a CSV, upload the file to Google Cloud Storage using this procedure, and then read it into BigQuery (see the sketch after this list)
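
For reference, here is a minimal sketch of option 3 using the standard google-cloud-storage and google-cloud-bigquery clients; every bucket, dataset, table and file name below is a hypothetical placeholder, and it assumes the CSV was written with dataframe.to_csv(csv_file, index=False):

from google.cloud import bigquery, storage

# Hypothetical placeholders -- substitute your own names.
bucket_name = 'my-bucket'
dataset_id = 'my_dataset'
table_id = 'my_table'
csv_file = 'my_file.csv'

# Step 1: upload the local CSV to Google Cloud Storage.
storage_client = storage.Client()
blob = storage_client.bucket(bucket_name).blob(csv_file)
blob.upload_from_filename(csv_file)

# Step 2: load the GCS object into BigQuery with a load job, so the
# bytes are not pushed through the local machine a second time.
bigquery_client = bigquery.Client()
job_config = bigquery.LoadJobConfig()
job_config.source_format = 'CSV'
job_config.skip_leading_rows = 1  # skip the header row written by to_csv()
job_config.autodetect = True
table_ref = bigquery_client.dataset(dataset_id).table(table_id)
job = bigquery_client.load_table_from_uri(
    'gs://{}/{}'.format(bucket_name, csv_file), table_ref,
    job_config=job_config)
job.result()  # wait for the load job to finish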

Update:

Alternative 2, using pd.DataFrame.to_csv() and load_data_from_file(), seems to take longer than alternative 1 (17.9 seconds more on average over 3 runs):

from google.cloud import bigquery

def load_data_from_file(dataset_id, table_id, source_file_name):
    bigquery_client = bigquery.Client()
    dataset_ref = bigquery_client.dataset(dataset_id)
    table_ref = dataset_ref.table(table_id)

    with open(source_file_name, 'rb') as source_file:
        # This example uses CSV, but you can use other formats.
        # See https://cloud.google.com/bigquery/loading-data
        job_config = bigquery.LoadJobConfig()
        job_config.source_format = 'CSV'  # 'text/csv' is not a valid source format
        job_config.autodetect = True
        job = bigquery_client.load_table_from_file(
            source_file, table_ref, job_config=job_config)

    job.result()  # Waits for the job to complete

    print('Loaded {} rows into {}:{}.'.format(
        job.output_rows, dataset_id, table_id))
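
A hypothetical invocation, following the monthly file-naming used in the question ('my_dataset', 'my_table' and month are placeholders):

# placeholder month value, matching the str(month) + '_file.csv' pattern above
month = 1
load_data_from_file('my_dataset', 'my_table', str(month) + '_file.csv')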

Thanks!

Answer

I did the comparison for alternatives 1 and 3 in Datalab using the following code:

from datalab.context import Context
import datalab.storage as storage
import datalab.bigquery as bq
import pandas as pd
from pandas import DataFrame
import time

# Dataframe to write
my_data = [[1, 2, 3]]  # use lists rather than sets so column order is defined
for i in range(0, 100000):
    my_data.append([1, 2, 3])
not_so_simple_dataframe = pd.DataFrame(data=my_data, columns=['a', 'b', 'c'])

#Alternative 1
start = time.time()
not_so_simple_dataframe.to_gbq('TestDataSet.TestTable', 
                 Context.default().project_id,
                 chunksize=10000, 
                 if_exists='append',
                 verbose=False
                 )
end = time.time()
print("time alternative 1 " + str(end - start))

#Alternative 3
start = time.time()
sample_bucket_name = Context.default().project_id + '-datalab-example'
sample_bucket_path = 'gs://' + sample_bucket_name
sample_bucket_object = sample_bucket_path + '/Hello.txt'
bigquery_dataset_name = 'TestDataSet'
bigquery_table_name = 'TestTable'

# Define storage bucket
sample_bucket = storage.Bucket(sample_bucket_name)

# Create or overwrite the existing table if it exists
table_schema = bq.Schema.from_dataframe(not_so_simple_dataframe)
table = bq.Table(bigquery_dataset_name + '.' + bigquery_table_name).create(schema=table_schema, overwrite=True)

# Write the DataFrame to GCS (Google Cloud Storage)
%storage write --variable not_so_simple_dataframe --object $sample_bucket_object

# Write the DataFrame to a BigQuery table
table.insert_data(not_so_simple_dataframe)
end = time.time()
print("time alternative 3 " + str(end - start))

Here are the results for n = {10000, 100000, 1000000}:

n       alternative_1  alternative_3
10000   30.72s         8.14s
100000  162.43s        70.64s
1000000 1473.57s       688.59s

Judging from the results, alternative 3 is faster than alternative 1.
