Odoo 9: migrate binary field from database to filestore
Problem description
In an Odoo 9 custom module, the attachment=True parameter was added to a Binary field only later, so records created after the change are stored in the filesystem (filestore). Older records were written before attachment=True was in use: no entry was created for them in the ir.attachment table, and nothing was saved to the filesystem. How can I migrate the binary field values of those old records into filestore storage? How do I create/insert rows in ir_attachment based on the old records' binary field values? Is there a script available for this?
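For reference, the situation arises when a Binary field originally stored in the model's own table is later switched to filestore storage. A minimal declaration sketch (the model and field names here are illustrative, not from the original module):

```python
from odoo import models, fields

class MyModel(models.Model):
    _name = 'model.name'

    # Originally declared without attachment=True, so values were stored
    # (base64-encoded) in a column of the model's own table:
    #   datas = fields.Binary('Data')
    # After adding attachment=True, *new* values go to the filestore via
    # ir.attachment, but pre-existing rows keep their data in the old column:
    datas = fields.Binary('Data', attachment=True)
```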
I'm sure you no longer need a solution, since you asked 18 months ago, but I have just hit the same issue (many gigabytes of binary data in the database) and this question came up on Google, so I thought I would share my solution.
When you set attachment=True, the binary column remains in the database, but the system looks in the filestore for the data instead. This left me unable to access the data through the Odoo API, so I needed to retrieve the binary data directly from the database, re-write it to the record through Odoo, and then finally drop the column and vacuum the table.
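One detail worth keeping in mind before running the script: Odoo stores Binary field values base64-encoded, both in the database column and over the XML-RPC API, so what you read from the old column and what you pass back to write() is base64 text, not raw bytes. A quick illustration with the standard library:

```python
import base64

raw = b"%PDF-1.4 example file content"

# What sits in the database column (and what write() expects) is base64 text:
encoded = base64.b64encode(raw)

# Decoding recovers the actual file bytes, e.g. to verify a migrated record:
assert base64.b64decode(encoded) == raw
```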
Here is my script, inspired by a similar solution for migrating attachments, but this one works for any field in any model and reads the binary data from the database directly rather than through the Odoo API. Afterwards, drop the column and vacuum the table in psql.
import xmlrpclib
import psycopg2
username = 'your_odoo_username'
pwd = 'your_odoo_password'
url = 'http://ip-address:8069'
dbname = 'database-name'
model = 'model.name'
field = 'field_name'
dbuser = 'postgres_user'
dbpwd = 'postgres_password'
dbhost = 'postgres_host'
conn = psycopg2.connect(database=dbname, user=dbuser, password=dbpwd, host=dbhost, port='5432')
cr = conn.cursor()
# Get the uid
sock_common = xmlrpclib.ServerProxy('%s/xmlrpc/common' % url)
uid = sock_common.login(dbname, username, pwd)
sock = xmlrpclib.ServerProxy('%s/xmlrpc/object' % url)
def migrate_attachment(res_id):
    # 1. Read the raw (base64) binary value straight from the database column
    cr.execute("SELECT %s FROM %s WHERE id = %s" % (field, model.replace('.', '_'), res_id))
    data = cr.fetchall()[0][0]
    # 2. Re-write it through the Odoo API so it lands in the filestore
    if data:
        sock.execute(dbname, uid, pwd, model, 'write', [res_id], {field: str(data)})
        return True
    else:
        return False
# SELECT attachments:
records = sock.execute(dbname, uid, pwd, model, 'search', [])
cnt = len(records)
print cnt
i = 0
for res_id in records:
    status = migrate_attachment(res_id)
    i += 1
    print 'Migrated ID %s (attachment %s of %s) [Contained data: %s]' % (res_id, i, cnt, status)
cr.close()
conn.close()
print "done ..."
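The final cleanup mentioned above (drop the old column, then vacuum to reclaim disk space) can be scripted as well. A minimal sketch using the same model/field configuration as the migration script; note that VACUUM cannot run inside a transaction block, so the connection must be in autocommit mode:

```python
def cleanup_statements(model, field):
    """SQL to drop the migrated binary column and reclaim its disk space."""
    table = model.replace('.', '_')  # Odoo maps model.name -> table model_name
    return [
        "ALTER TABLE %s DROP COLUMN %s" % (table, field),
        "VACUUM FULL %s" % table,  # VACUUM FULL rewrites the table compactly
    ]

print(cleanup_statements('model.name', 'field_name'))
# -> ['ALTER TABLE model_name DROP COLUMN field_name', 'VACUUM FULL model_name']

# Execute these with psycopg2 in autocommit mode, since PostgreSQL refuses
# to run VACUUM inside a transaction:
#
#   conn = psycopg2.connect(database=dbname, user=dbuser, password=dbpwd, host=dbhost)
#   conn.autocommit = True
#   cur = conn.cursor()
#   for stmt in cleanup_statements(model, field):
#       cur.execute(stmt)
```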