Write a Pandas DataFrame to Google Cloud Storage or BigQuery

Question

Hello and thanks for your time and consideration. I am developing a Jupyter Notebook in the Google Cloud Platform / Datalab. I have created a Pandas DataFrame and would like to write this DataFrame to both Google Cloud Storage(GCS) and/or BigQuery. I have a bucket in GCS and have, via the following code, created the following objects:

import gcp
import gcp.storage as storage
project = gcp.Context.default().project_id    
bucket_name = 'steve-temp'           
bucket_path  = bucket_name   
bucket = storage.Bucket(bucket_path)
bucket.exists()  

I have tried various approaches based on the Google Datalab documentation, but they continue to fail. Thanks

Solution

Try the following working example:

from datalab.context import Context
import google.datalab.storage as storage
import google.datalab.bigquery as bq
import pandas as pd

# DataFrame to write (use lists rather than sets so column order and values are preserved)
simple_dataframe = pd.DataFrame(data=[[1, 2, 3], [4, 5, 6]], columns=['a', 'b', 'c'])

sample_bucket_name = Context.default().project_id + '-datalab-example'
sample_bucket_path = 'gs://' + sample_bucket_name
sample_bucket_object = sample_bucket_path + '/Hello.txt'
bigquery_dataset_name = 'TestDataSet'
bigquery_table_name = 'TestTable'

# Define storage bucket
sample_bucket = storage.Bucket(sample_bucket_name)

# Create storage bucket if it does not exist
if not sample_bucket.exists():
    sample_bucket.create()

# Define BigQuery dataset and table
dataset = bq.Dataset(bigquery_dataset_name)
table = bq.Table(bigquery_dataset_name + '.' + bigquery_table_name)

# Create BigQuery dataset
if not dataset.exists():
    dataset.create()

# Create or overwrite the existing table if it exists
table_schema = bq.Schema.from_data(simple_dataframe)
table.create(schema=table_schema, overwrite=True)

# Write the DataFrame to GCS (Google Cloud Storage)
%storage write --variable simple_dataframe --object $sample_bucket_object

# Write the DataFrame to a BigQuery table
table.insert(simple_dataframe)

I used this example, and the _table.py file from the datalab GitHub site, as a reference. You can find other datalab source code files at this link.
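
If you are working outside of Datalab (or prefer not to use the datalab libraries), a similar result can be achieved with the standalone google-cloud-storage and pandas-gbq packages. The following is only a minimal sketch under those assumptions, not part of the original answer: the bucket and object names are placeholders, and it assumes the two packages are installed and default credentials are configured.

import pandas as pd
from google.cloud import storage

simple_dataframe = pd.DataFrame(data=[[1, 2, 3], [4, 5, 6]], columns=['a', 'b', 'c'])

# Upload the DataFrame to GCS as a CSV object
# ('my-example-bucket' and 'Hello.csv' are placeholder names)
client = storage.Client()
bucket = client.bucket('my-example-bucket')
blob = bucket.blob('Hello.csv')
blob.upload_from_string(simple_dataframe.to_csv(index=False), content_type='text/csv')

# Write the DataFrame to a BigQuery table (requires the pandas-gbq package);
# the dataset.table name mirrors the one used in the answer above
simple_dataframe.to_gbq('TestDataSet.TestTable', project_id=client.project, if_exists='replace')

Here to_gbq infers the table schema from the DataFrame and, with if_exists='replace', overwrites any existing table, so the explicit schema and table-creation steps from the Datalab example are not needed.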
