Postgres pg_dump cache lookup failed for index
Problem description
I am trying to create a backup of a Postgres database using the following command (omitting details):
$ pg_dump -h $host -p 5432 -U $user $db > db.sql
After some time, I get an error (formatted):
pg_dump: [archiver (db)] query failed: ERROR: cache lookup failed for index 1184605879
pg_dump: [archiver (db)] query was:
SELECT t.tableoid,
t.oid,
t.relname AS indexname,
pg_catalog.pg_get_indexdef(i.indexrelid) AS indexdef,
t.relnatts AS indnkeys,
i.indkey,
i.indisclustered,
false AS indisreplident,
t.relpages,
c.contype,
c.conname,
c.condeferrable,
c.condeferred,
c.tableoid AS contableoid,
c.oid AS conoid,
pg_catalog.pg_get_constraintdef(c.oid, false) AS condef,
(SELECT spcname FROM pg_catalog.pg_tablespace s WHERE s.oid = t.reltablespace) AS tablespace,
t.reloptions AS indreloptions
FROM pg_catalog.pg_index i
JOIN pg_catalog.pg_class t ON (t.oid = i.indexrelid)
LEFT JOIN pg_catalog.pg_constraint c ON (i.indrelid = c.conrelid AND i.indexrelid = c.conindid AND c.contype IN ('p','u','x'))
WHERE i.indrelid = '1184605870'::pg_catalog.oid AND i.indisvalid AND i.indisready
ORDER BY indexname
I then verified that the index OID from the error does not exist in pg_index. Is there some way to fix this, or some way to ignore the error and continue, so that I can keep exporting the non-pg_catalog data?
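One way to confirm the inconsistency is to query the catalogs directly for the OID from the error message; a minimal check, using the OID 1184605879 reported above:

```sql
-- Look the OID up in both catalogs; whichever query returns no row
-- shows which catalog the entry is missing from.
SELECT oid, relname FROM pg_catalog.pg_class WHERE oid = 1184605879;
SELECT indexrelid, indrelid FROM pg_catalog.pg_index WHERE indexrelid = 1184605879;
```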
Answer

I wrote a quick script that dumped each table individually to a SQL file and found that a table holding temporary login information was causing the cache lookup to fail. I simply excluded that table from the export and the dump completed fine.
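Once the offending table is identified, it can also be skipped in a single pg_dump run with the `-T`/`--exclude-table` flag rather than dumping table by table. A sketch, where the table name `temp_login_info` is a hypothetical placeholder:

```shell
# Hypothetical name(s) for the table(s) that broke the dump.
bad_tables=(temp_login_info)

# Turn each name into a -T (--exclude-table) flag for pg_dump.
exclude_flags=()
for t in "${bad_tables[@]}"; do
    exclude_flags+=(-T "$t")
done

# With real connection details this would be run as:
#   pg_dump -h "$host" -p 5432 -U "$user" "${exclude_flags[@]}" "$db" > db.sql
printf '%s ' pg_dump "${exclude_flags[@]}"
```

pg_dump accepts `-T` more than once, so adding further names to the array excludes additional tables.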
I'm not the best bash programmer so it can definitely be improved, but here is the script I used to dump each table individually to find which table was causing the error.
#!/bin/bash
host=$1
port=$2
user=$3
db=$4

# Get all table names that will be dumped individually
# (-t/-A make psql print bare tuples, one name per line).
query="SELECT tablename FROM pg_tables WHERE tableowner='$user';"
tables=( $(psql -h "$host" -p "$port" -U "$user" "$db" -t -A -c "$query") )

mkdir -p db_dump
for table in "${tables[@]}"; do
    pg_dump -h "$host" -p "$port" -U "$user" -t "$table" "$db" > "db_dump/$table.sql"
done