Avoiding Memcache "1000000 bytes in length" limit on values


Problem description

My model has different entities that I'd like to calculate once, like the employees of a company. To avoid making the same query again and again, the calculated list is saved in Memcache (duration=1day). The problem is that the app sometimes gives me an error that more bytes are being stored in Memcache than is permissible:

Values may not be more than 1000000 bytes in length; received 1071339 bytes

Is storing a list of objects something that you should be doing with Memcache? If so, what are best practices in avoiding the error above? I'm currently pulling 1000 objects. Do you limit values to < 200? Checking for an object's size in memory doesn't seem like too good an idea because they're probably being processed (serialized or something like that) before going into Memcache.
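
(On the last point: one way to check the size Memcache will actually receive is to serialize the value yourself first, the same way a Python client would. A minimal sketch, assuming pickle-based serialization; the helper name and limit constant below are illustrative, not part of any API:)

import pickle

MEMCACHE_VALUE_LIMIT = 1000000  # App Engine's per-value cap, in bytes

def fits_in_memcache(value):
  # Approximates the stored size by pickling the value the way a Python
  # client would; the exact wire size may differ slightly from this.
  return len(pickle.dumps(value, 2)) < MEMCACHE_VALUE_LIMIT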

Solution

David, you don't say which language you use, but in Python you can do the same thing Ibrahim suggests, using pickle. All you need to do is write two little helper functions that read and write a large object to memcache. Here's an (untested) sketch:

import pickle
from google.appengine.api import memcache

def store(key, value, chunksize=950000):
  # Serialize once, then split into chunks safely below the 1 MB cap.
  serialized = pickle.dumps(value, 2)
  values = {}
  for i in xrange(0, len(serialized), chunksize):
    values['%s.%s' % (key, i // chunksize)] = serialized[i : i + chunksize]
  return memcache.set_multi(values)

def retrieve(key):
  # Fetch up to 32 chunks in one call; get_multi returns only found keys.
  result = memcache.get_multi(['%s.%s' % (key, i) for i in xrange(32)])
  # Order chunks by numeric index; a plain string sort would put 'key.10'
  # before 'key.2' once there are ten or more chunks.
  chunks = [result[k] for k in sorted(result, key=lambda k: int(k.rsplit('.', 1)[-1]))]
  return pickle.loads(''.join(chunks))
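
For completeness, a minimal usage sketch of these helpers; the key name and sample data are illustrative, not from the original answer. Note that memcache.set_multi also accepts a time argument, so store could pass time=86400 to get the one-day expiry mentioned in the question.

# Illustrative usage: cache a list too large for a single memcache value.
employees = [{'id': i, 'name': 'employee-%d' % i} for i in xrange(1000)]
store('company:employees', employees)

# Later, possibly in another request:
cached = retrieve('company:employees')
assert cached == employees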
