ERROR: extra data after last expected column in Postgres


Question

I have a JSON file:

[xyz@innolx20122 ~]$ cat test_cgs.json
{"technology":"AAA","vendor":"XXX","name":"RBNI","temporal_unit":"hour","regional_unit":"cell","dataset_metadata":"{\"name\": \"RBNI\", \"intervals_epoch_seconds\": [[1609941600, 1609945200]], \"identifier_column_names\": [\"CELLID\", \"CELLNAME\", \"NETWORK\"], \"vendor\": \"XXX\", \"timestamp_column_name\": \"COLLECTTIME\", \"regional_unit\": \"cell\"}","rk":1}

I am trying to load it into a Postgres table:

 CREATE TABLE temp_test_table
( 
  technology       character varying(255),
  vendor           character varying(255),
  name             character varying(255),
  temporal_unit    character varying(255),
  regional_unit    character varying(255),
  dataset_metadata json,
  rk    character varying(255)
);

Here is my copy command:

db-state=> \copy temp_test_table(technology,vendor,name,temporal_unit,regional_unit,dataset_metadata,rk) FROM '/home/eksinvi/test_cgs.json' WITH CSV DELIMITER ',' quote E'\b' ESCAPE '\';
ERROR:  extra data after last expected column
CONTEXT:  COPY temp_test_table, line 1: "{"technology":"AAA","vendor":"XXX","name":"RBNI","temporal_unit":"hour","regional_unit":"cell","data..."

I even tried loading this file into a BigQuery table, but no luck:

bq load --autodetect --source_format=NEWLINE_DELIMITED_JSON --allow_quoted_newlines --allow_jagged_rows --ignore_unknown_values test-project:vikrant_test_dataset.cg_test_table "gs://test-bucket-01/test/test_cgs.json"

Either solution would work for me: I want to load this JSON into either a Postgres table or a BigQuery table.
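One reading of the error (my interpretation, not stated in the question): the file is JSON, so the commas inside each object split the line into far more than the seven expected CSV fields, because the `QUOTE E'\b'` setting effectively disables CSV quoting. A hedged workaround is to pre-process each JSON object into a properly quoted CSV row before running `\copy`. The helper name and column list below are assumptions matching the `temp_test_table` definition above, not code from the original post:

```python
import csv
import io
import json

# Hypothetical pre-processing step (not from the original post):
# convert each JSON object into one properly quoted CSV row matching
# the seven columns of temp_test_table, so that a plain
#   \copy temp_test_table FROM 'out.csv' WITH (FORMAT csv)
# can parse it.
COLUMNS = ["technology", "vendor", "name", "temporal_unit",
           "regional_unit", "dataset_metadata", "rk"]

def json_lines_to_csv(json_lines):
    out = io.StringIO()
    writer = csv.writer(out, quoting=csv.QUOTE_ALL)
    for line in json_lines:
        rec = json.loads(line)
        # Missing keys become empty strings (NULL '' on the COPY side).
        writer.writerow([rec.get(col, "") for col in COLUMNS])
    return out.getvalue()

sample = ('{"technology":"AAA","vendor":"XXX","name":"RBNI",'
          '"temporal_unit":"hour","regional_unit":"cell",'
          '"dataset_metadata":"{\\"name\\": \\"RBNI\\"}","rk":1}')
print(json_lines_to_csv([sample]))
```

Because the `dataset_metadata` value is itself a JSON string containing commas and quotes, `csv.QUOTE_ALL` keeps it intact as a single field that Postgres can then store in the `json` column.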

Answer

I had similar problems. In my case, it was related to NULL columns and the encoding of the file. I also had to specify a custom delimiter, because my columns sometimes contained the default delimiter, which made the copy fail.

\copy mytable FROM 'filePath.dat' (DELIMITER E'\t', FORMAT CSV, NULL '', ENCODING 'UTF8');

In my case, I was exporting data from SQL Server to a CSV file and importing it into Postgres. In SQL Server, we had Unicode characters that showed up as "blanks" but broke the copy command. I had to search the SQL table for those characters with regex queries and eliminate the invalid ones. It's an edge case, but it was part of the problem in my case.
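The character cleanup described above can be sketched as a pre-processing pass. This is a hypothetical helper (the function name and the exact character class are my assumptions, not from the answer): it strips non-printable control characters, which often render as "blanks" yet break COPY, while keeping tabs and newlines that the CSV layout needs:

```python
import re

# Hypothetical cleanup pass (names are mine, not from the answer):
# remove control characters, except tab (\x09), newline (\x0a), and
# carriage return (\x0d), before feeding the file to \copy.
_CONTROL_CHARS = re.compile(r"[\x00-\x08\x0b\x0c\x0e-\x1f\x7f]")

def strip_invalid(text: str) -> str:
    return _CONTROL_CHARS.sub("", text)

print(strip_invalid("RBNI\x00\x08 cell\x1f"))  # -> "RBNI cell"
```

Running every line of the export through a pass like this (or an equivalent regex replace on the SQL Server side) removes the invisible characters before they reach Postgres.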
