Twitter - twemproxy - memcached - Retry not working as expected


Question

Simple setup:

  • 1 node running twemproxy (vcache:22122)
  • 2 nodes running memcached (vcache-1, vcache-2), both listening on 11211

I have the following twemproxy config:

default:
  auto_eject_hosts: true
  distribution: ketama
  hash: fnv1a_64
  listen: 0.0.0.0:22122
  server_failure_limit: 1
  server_retry_timeout: 600000 # 600sec, 10m
  timeout: 100
  servers:
    - vcache-1:11211:1
    - vcache-2:11211:1

The twemproxy node can resolve all hostnames. As part of testing I took down vcache-2. In theory for every attempt to interface with vcache:22122, twemproxy will contact a server from the pool to facilitate the attempt. However, if one of the cache nodes is down, then twemproxy is supposed to "auto eject" it from the pool, so subsequent requests will not fail.
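For context, the application only ever talks to the proxy endpoint; the memcached backends are known only to twemproxy. A minimal sketch of that client setup, assuming the php-memcached extension (the key, value, and expiry are illustrative, not taken from the original post):

// The app points only at the twemproxy endpoint; vcache-1 and vcache-2
// never appear in the client's server list.
$cache = new Memcached();
$cache->addServer('vcache', 22122);

// Any request issued here is forwarded by twemproxy to one of the
// backend memcached nodes chosen by the ketama distribution.
$cache->set('example-key', 'example-value', 300);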

It is up to the app layer to determine if a failed interface attempt with vcache:22122 was due to an infrastructure issue, and if so, try again. However, I am finding that on the retry the same failed server is being used, so instead of subsequent attempts being passed to a known good cache node (in this case vcache-1), they are still being passed to the cache node that should have been ejected (vcache-2).

Here's the php code snippet which attempts the retry:

....

// $this is a Memcached object with vcache:22122 in the server list
$retryCount = 0;

do {
    $status = $this->set($key, $value, $expiry);

    if (Memcached::RES_SUCCESS === $this->getResultCode()) {
        return true;
    }
} while (++$retryCount < 3);

return false;
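Purely as an illustration of the "infrastructure issue" check mentioned above (this helper is not from the original question; its name and the chosen set of result codes are assumptions), the result code returned by the php-memcached extension could be used to decide whether a retry is worthwhile at all:

// Hypothetical helper: treat only network/connection-level result codes
// as retryable infrastructure failures.
function isInfrastructureFailure(int $resultCode): bool
{
    return in_array($resultCode, [
        Memcached::RES_TIMEOUT,                          // request timed out
        Memcached::RES_HOST_LOOKUP_FAILURE,              // DNS lookup failed
        Memcached::RES_CONNECTION_SOCKET_CREATE_FAILURE, // could not connect
        Memcached::RES_WRITE_FAILURE,                    // failed writing to the socket
        Memcached::RES_UNKNOWN_READ_FAILURE,             // failed reading from the socket
    ], true);
}

With something like this, the do/while loop above could skip the retry entirely when the failure is not infrastructure-related.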

-- UPDATE --

Link to Issue opened on Github for more info: Issue #427

Answer

I can't see anything wrong with your configuration. As you know the important settings are in place:

default:
  auto_eject_hosts: true
  server_failure_limit: 1

The documentation suggests connection timeouts might be an issue.

Relying only on client-side timeouts has the adverse effect of the original request having timed out on the client-to-proxy connection, but still pending and outstanding on the proxy-to-server connection. This gets further exacerbated when the client retries the original request.

Is your PHP script closing the connection and retrying before twemproxy has failed its first attempt and removed the server from the pool? Perhaps setting the timeout value in twemproxy lower than the connection timeout used in PHP would solve the issue.
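As a sketch of that idea (the option values are illustrative assumptions, not from the original post): with timeout: 100 in the twemproxy config, the client-side timeouts in php-memcached could be kept comfortably above 100 ms, so the proxy times out the backend and registers the failure before the client gives up:

// Illustrative client-side timeouts, deliberately higher than twemproxy's
// 100 ms server timeout so the proxy fails (and ejects) the backend first.
$cache = new Memcached();
$cache->addServer('vcache', 22122);
$cache->setOption(Memcached::OPT_CONNECT_TIMEOUT, 500); // milliseconds
$cache->setOption(Memcached::OPT_POLL_TIMEOUT, 500);    // milliseconds
$cache->setOption(Memcached::OPT_RECV_TIMEOUT, 500000); // microseconds
$cache->setOption(Memcached::OPT_SEND_TIMEOUT, 500000); // microseconds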

From your discussion on GitHub, though, it sounds like support for health checks, and perhaps auto ejection, isn't stable in twemproxy. If you're building against old packages, you might be better off finding a package which has been stable for some time. Is mcrouter (with an interesting article) suitable?
