Migrating from non-partitioned to Partitioned tables


Problem description


In June the BQ team announced support for date-partitioned tables. But the guide does not cover how to migrate existing non-partitioned tables to the new style.

I am looking for a way to update several, if not all, tables to the new style.

Also, besides DAY-type partitioning, what other options are available? Does the BQ UI support this? I wasn't able to create such a new partitioned table from the BQ Web UI.

Solution

from Pavan’s answer: Please note that this approach will charge you the scan cost of the source table for the query as many times as you query it.


from Pentium10's comments: So suppose I have several years of data; I need to prepare a different query for each day and run all of them. Suppose I have 1000 days in history – do I need to pay 1000 times the full query price of the source table?


As we can see, the main problem here is having a full scan for each and every day. The rest is less of a problem and can easily be scripted in any client of choice.
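To see why the full scan dominates, here is a rough cost comparison, assuming on-demand pricing by bytes scanned. All numbers are hypothetical – substitute your own table size and rates:

```python
# Rough cost comparison, assuming on-demand pricing by bytes scanned.
# All numbers are hypothetical -- substitute your own table size and rates.
table_gb = 100           # source table size in GB
days = 1000              # number of daily partitions to backfill
price_per_tb = 5.0       # illustrative USD per TB scanned

# Naive approach: one full table scan per day.
naive_cost = days * table_gb / 1024 * price_per_tb

# Pivot approach: one full scan to build the pivot table, then each
# daily query scans only its own column (~table_gb / days each).
pivot_cost = (table_gb + days * (table_gb / days)) / 1024 * price_per_tb

print(f"naive: ${naive_cost:.2f}, pivot: ${pivot_cost:.2f}")  # → naive: $488.28, pivot: $0.98
```

In this toy scenario the pivot approach scans the source roughly twice in total, instead of a thousand times.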

So, the task below is: how to partition a table while avoiding a full table scan for each and every day?

The step-by-step below shows the approach.

It is generic enough to extend/apply to any real use-case – meantime I am using bigquery-public-data.noaa_gsod.gsod2017, and I am limiting the "exercise" to just 10 days to keep it readable.

Step 1 – Create a pivot table
In this step we
a) compress each row’s content into a record/array
and
b) put them all into the respective "daily" column

#standardSQL
SELECT
  ARRAY_CONCAT_AGG(CASE WHEN d = 'day20170101' THEN r END) AS day20170101,
  ARRAY_CONCAT_AGG(CASE WHEN d = 'day20170102' THEN r END) AS day20170102,
  ARRAY_CONCAT_AGG(CASE WHEN d = 'day20170103' THEN r END) AS day20170103,
  ARRAY_CONCAT_AGG(CASE WHEN d = 'day20170104' THEN r END) AS day20170104,
  ARRAY_CONCAT_AGG(CASE WHEN d = 'day20170105' THEN r END) AS day20170105,
  ARRAY_CONCAT_AGG(CASE WHEN d = 'day20170106' THEN r END) AS day20170106,
  ARRAY_CONCAT_AGG(CASE WHEN d = 'day20170107' THEN r END) AS day20170107,
  ARRAY_CONCAT_AGG(CASE WHEN d = 'day20170108' THEN r END) AS day20170108,
  ARRAY_CONCAT_AGG(CASE WHEN d = 'day20170109' THEN r END) AS day20170109,
  ARRAY_CONCAT_AGG(CASE WHEN d = 'day20170110' THEN r END) AS day20170110
FROM (
  SELECT d, r, ROW_NUMBER() OVER(PARTITION BY d) AS line
  FROM (
    SELECT 
      stn, CONCAT('day', year, mo, da) AS d, ARRAY_AGG(t) AS r
    FROM `bigquery-public-data.noaa_gsod.gsod2017` AS t 
    GROUP BY stn, d
  ) 
)
GROUP BY line  

Run the above query in the Web UI with pivot_table (or whatever name is preferred) as the destination

As we can see, we get a table with 10 columns – one column per day – and the schema of each column is a copy of the schema of the original table.

Step 2 – Processing partitions one-by-one ONLY scanning respective column (no full table scan) – inserting into respective partition

#standardSQL
SELECT r.*
FROM pivot_table, UNNEST(day20170101) AS r

Run the above query from the Web UI with a destination table named mytable$20170101

You can run the same for the next day

#standardSQL
SELECT r.*
FROM pivot_table, UNNEST(day20170102) AS r

Now you should have the destination table as mytable$20170102, and so on

You should be able to automate/script this step with any client of your choice
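The per-day step above can be sketched in Python as pure query generation (a minimal sketch; daily_copy_jobs is a hypothetical helper, and the table names are placeholders – wire the output to the client of your choice):

```python
from datetime import date, timedelta

def daily_copy_jobs(start, end, pivot_table="pivot_table", dest="mytable"):
    """Yield (query, destination) pairs, one per day in [start, end]."""
    day = start
    while day <= end:
        suffix = day.strftime("%Y%m%d")
        query = (
            "#standardSQL\n"
            "SELECT r.*\n"
            f"FROM {pivot_table}, UNNEST(day{suffix}) AS r"
        )
        # Destination partition uses the $ decorator, e.g. mytable$20170101
        yield query, f"{dest}${suffix}"
        day += timedelta(days=1)

for query, destination in daily_copy_jobs(date(2017, 1, 1), date(2017, 1, 10)):
    print(destination)
    # Submit `query` with your client of choice, for example:
    #   bq query --use_legacy_sql=false --destination_table=<destination> '<query>'
```

Each generated query reads only its own column of the pivot table, so no run triggers a full scan of the source.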

There are many variations of how you can use the above approach – it is up to your creativity.

Note: BigQuery allows up to 10,000 columns per table, so 365 columns for the respective days of one year is definitely not a problem here :o) Unless there is a limitation on how far back you can go with new partitions – I heard (but haven't had a chance to check yet) it is now no more than 90 days back
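Hand-writing 365 such column expressions would be tedious, so for a full year the Step 1 column list can itself be generated. A sketch (pure string building; pivot_columns is a hypothetical helper – plug its output into the Step 1 query in place of the hand-written list):

```python
from datetime import date, timedelta

def pivot_columns(start, end):
    """Build the ARRAY_CONCAT_AGG(...) column list for the Step 1 pivot query."""
    cols = []
    day = start
    while day <= end:
        d = day.strftime("%Y%m%d")
        cols.append(
            f"  ARRAY_CONCAT_AGG(CASE WHEN d = 'day{d}' THEN r END) AS day{d}"
        )
        day += timedelta(days=1)
    return ",\n".join(cols)

# One ARRAY_CONCAT_AGG column per day of 2017:
print("#standardSQL\nSELECT\n" + pivot_columns(date(2017, 1, 1), date(2017, 12, 31)))
```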

Update

Please note: the version above has a little extra logic to pack all aggregated cells into as few final rows as possible.

ROW_NUMBER() OVER(PARTITION BY d) AS line
and then
GROUP BY line
along with
ARRAY_CONCAT_AGG(…)
does this
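The effect of this packing can be illustrated with a small Python simulation (toy data, not BigQuery itself): cells are numbered within each day, and cells that share a line number collapse into one output row, so the final row count equals the largest number of cells in any single day.

```python
from collections import defaultdict

# Toy cells: (day, payload) pairs; day20170101 has 3 cells, day20170102 has 2.
cells = [
    ("day20170101", "a"), ("day20170101", "b"), ("day20170101", "c"),
    ("day20170102", "x"), ("day20170102", "y"),
]

# ROW_NUMBER() OVER(PARTITION BY d): number the cells within each day.
counters = defaultdict(int)
numbered = []
for d, payload in cells:
    counters[d] += 1
    numbered.append((counters[d], d, payload))

# GROUP BY line: cells with the same line number land in one output row,
# each in its own day column (ARRAY_CONCAT_AGG picks the non-NULL cell).
rows = defaultdict(dict)
for line, d, payload in numbered:
    rows[line][d] = payload

# Final row count = max cells in any single day (3 here, not 5).
print(len(rows))  # → 3
```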

This works well when the row size in your original table is not that big, so the final combined row size will still be within the row size limit that BigQuery has (which I believe is 10 MB as of now)

If your source table already has a row size close to that limit, use the adjusted version below

In this version, grouping is removed so that each row has a value for only one column

#standardSQL
SELECT
    CASE WHEN d = 'day20170101' THEN r END AS day20170101,
    CASE WHEN d = 'day20170102' THEN r END AS day20170102,
    CASE WHEN d = 'day20170103' THEN r END AS day20170103,
    CASE WHEN d = 'day20170104' THEN r END AS day20170104,
    CASE WHEN d = 'day20170105' THEN r END AS day20170105,
    CASE WHEN d = 'day20170106' THEN r END AS day20170106,
    CASE WHEN d = 'day20170107' THEN r END AS day20170107,
    CASE WHEN d = 'day20170108' THEN r END AS day20170108,
    CASE WHEN d = 'day20170109' THEN r END AS day20170109,
    CASE WHEN d = 'day20170110' THEN r END AS day20170110
FROM (
    SELECT 
        stn, CONCAT('day', year, mo, da) AS d, ARRAY_AGG(t) AS r
    FROM `bigquery-public-data.noaa_gsod.gsod2017` AS t 
    GROUP BY stn, d
)
WHERE d BETWEEN 'day20170101' AND 'day20170110'

As you can see now, the pivot table (sparse_pivot_table) is sparse enough (same 21.5 MB, but now 114,089 rows vs. 11,584 rows in pivot_table), so it has an average row size of 190 B vs. 1.9 KB in the initial version – obviously about 10 times less, in line with the number of columns in the example.
So before using this approach, some math needs to be done to project/estimate what can be done, and how!
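A quick sanity check of that math can be scripted (all numbers hypothetical – measure your own average cell size first):

```python
# Will a packed pivot row fit within BigQuery's row size limit (~10 MB)?
# All numbers are hypothetical -- measure your own average cell size first.
ROW_LIMIT_BYTES = 10 * 1024 * 1024

avg_cell_bytes = 2_000    # one aggregated cell (verbose JSON-like representation)
num_day_columns = 365     # one column per day of data

# Worst case for the packed version: every day column populated in one row.
packed_row_bytes = avg_cell_bytes * num_day_columns
print(packed_row_bytes, packed_row_bytes < ROW_LIMIT_BYTES)  # → 730000 True
```

If the check fails for your numbers, fall back to the sparse (ungrouped) version, which keeps only one cell per row.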

Still: each cell in the pivot table is a sort of JSON representation of a whole row in the original table. As such, it holds not just the values, as rows in the original table did, but also a schema

As such it is quite verbose – thus the size of a cell can be multiple times bigger than the original size [which limits the usage of this approach ... unless you get even more creative :o) ... there are still plenty of areas to apply it :o)]
