How does IBM MQ QM distribute messages over multiple consumers


Problem description

We have an IBM MQ v8 setup with one high-volume non-persistent queue and many consumers (50+) on that queue. The large number of consumers is needed to be able to process the volume of messages being published to the queue.

What we now notice is that the queue manager is not distributing the messages evenly over the X consumers. A few consumers get up to 300 messages per minute, while many others only get a few messages per minute (<10). Meanwhile, there are many messages on the queue and the queue depth is steadily increasing. CPU and memory on the consumer side are not a problem; utilization of both is below 50%.

Can someone explain how the IBM MQ queue manager distributes messages over multiple consumers? And is it possible to influence this on either the server or the consumer side, so that messages are distributed more evenly over the available consumers?

Added after Mark Taylor's reply

The challenge we face is that more than 10,000 messages are added to the queue per minute and we are not able to consume them fast enough. Our current setup is a simple consumer running in a Docker container, and we scale by running multiple containers. Running 12 consumers (Docker containers) does increase overall throughput; running 50+ consumers does not add any more. Each consumer is simple (a minimal sketch follows the list below):

1. Connect to the queue manager
2. Connect to the queue
3. While true:
   - Get a message from the queue
   - Process the message (commenting this out does not increase overall performance)
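
To make those steps concrete, here is a minimal single-threaded sketch of such a consumer written against the IBM MQ classes for Java (the question does not name a client language, so the language choice is an assumption); the queue manager, channel, host and queue names are placeholders.

import com.ibm.mq.MQEnvironment;
import com.ibm.mq.MQException;
import com.ibm.mq.MQGetMessageOptions;
import com.ibm.mq.MQMessage;
import com.ibm.mq.MQQueue;
import com.ibm.mq.MQQueueManager;
import com.ibm.mq.constants.CMQC;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

public class SimpleConsumer {
    public static void main(String[] args) throws MQException, IOException {
        // Placeholder connection details -- adjust to your environment
        MQEnvironment.hostname = "mqhost";
        MQEnvironment.port = 1414;
        MQEnvironment.channel = "DEV.APP.SVRCONN";

        // 1. Connect to the queue manager
        MQQueueManager qmgr = new MQQueueManager("QM1");

        // 2. Open the queue for shared input
        MQQueue queue = qmgr.accessQueue("APP.QUEUE",
                CMQC.MQOO_INPUT_SHARED | CMQC.MQOO_FAIL_IF_QUIESCING);

        MQGetMessageOptions gmo = new MQGetMessageOptions();
        gmo.options = CMQC.MQGMO_WAIT | CMQC.MQGMO_NO_SYNCPOINT | CMQC.MQGMO_FAIL_IF_QUIESCING;
        gmo.waitInterval = 5000; // wait up to 5 seconds for a message to arrive

        // 3. Get/process loop
        while (true) {
            MQMessage msg = new MQMessage();
            try {
                queue.get(msg, gmo);
                byte[] data = new byte[msg.getDataLength()];
                msg.readFully(data);
                process(new String(data, StandardCharsets.UTF_8)); // charset is an assumption
            } catch (MQException e) {
                if (e.reasonCode == CMQC.MQRC_NO_MSG_AVAILABLE) {
                    continue; // nothing arrived within the wait interval; poll again
                }
                throw e;
            }
        }
    }

    private static void process(String body) {
        // application-specific processing goes here
    }
}

The get uses a wait interval rather than busy polling, and retrieves outside syncpoint because the loop described above is not transactional; a transactional variant would use MQGMO_SYNCPOINT and commit after processing.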

How can we achieve more message-consumption performance? Would it, for example, help if within one container we connect to the queue manager once and then have multiple threads use that same queue manager connection to open the queue and get messages? Or should we even reuse the queue object across multiple threads?
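
For illustration only, here is a hypothetical sketch of the pattern the question describes: one MQQueueManager connection per container, shared by several threads that each open the queue and run their own get loop. This is not an endorsement of the pattern (see the answer below for why the effect is hard to predict); connection details and names are placeholders as before.

import com.ibm.mq.MQEnvironment;
import com.ibm.mq.MQException;
import com.ibm.mq.MQGetMessageOptions;
import com.ibm.mq.MQMessage;
import com.ibm.mq.MQQueue;
import com.ibm.mq.MQQueueManager;
import com.ibm.mq.constants.CMQC;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class SharedConnectionConsumer {
    public static void main(String[] args) throws MQException {
        MQEnvironment.hostname = "mqhost";          // placeholder
        MQEnvironment.port = 1414;                  // placeholder
        MQEnvironment.channel = "DEV.APP.SVRCONN";  // placeholder

        // One connection to the queue manager for the whole container
        final MQQueueManager qmgr = new MQQueueManager("QM1");

        int threads = 4; // consumer threads per container (arbitrary)
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        for (int i = 0; i < threads; i++) {
            pool.submit(() -> {
                try {
                    // Each thread opens the same queue on the shared connection
                    MQQueue queue = qmgr.accessQueue("APP.QUEUE",
                            CMQC.MQOO_INPUT_SHARED | CMQC.MQOO_FAIL_IF_QUIESCING);
                    MQGetMessageOptions gmo = new MQGetMessageOptions();
                    gmo.options = CMQC.MQGMO_WAIT | CMQC.MQGMO_NO_SYNCPOINT
                            | CMQC.MQGMO_FAIL_IF_QUIESCING;
                    gmo.waitInterval = 5000;
                    while (true) {
                        MQMessage msg = new MQMessage();
                        try {
                            queue.get(msg, gmo);
                            // process the message here
                        } catch (MQException e) {
                            if (e.reasonCode != CMQC.MQRC_NO_MSG_AVAILABLE) {
                                throw e;
                            }
                        }
                    }
                } catch (MQException e) {
                    // a real consumer would log and reconnect here
                    e.printStackTrace();
                }
            });
        }
    }
}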

Regards

Recommended answer

MQ's default behaviour is to give messages to the MOST RECENT getter. That generally improves performance, as that process is most likely to be "hot" (in the processor cache). So you should not expect equal distribution of messages. If you are seeing one application getting most messages, that implies it is regularly getting through processing one message before another is available for retrieval; it is rejoining the queue of waiters before the next message is available.

There are many aspects that affect overall performance, including transactionality, retrieval criteria, contention and so on, so it's not really possible to say what your problem is, or whether changing the default distribution algorithm (there is an undocumented tuning parameter that reverses the queue of waiters) would help. And having client connections, where the waiting is really being done by the "proxy" svrconn processes and threads, makes it more complicated.
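
Purely to illustrate two of those aspects (this fragment is not part of the answer above), here is a hypothetical example of where transactionality and retrieval criteria appear in an MQGET call with the IBM MQ classes for Java; the queue, queue manager and correlation id are placeholders passed in by the caller.

import com.ibm.mq.MQException;
import com.ibm.mq.MQGetMessageOptions;
import com.ibm.mq.MQMessage;
import com.ibm.mq.MQQueue;
import com.ibm.mq.MQQueueManager;
import com.ibm.mq.constants.CMQC;

public class TransactionalGetExample {
    // Gets one message that matches a given correlation id, under syncpoint.
    static void getOneUnderSyncpoint(MQQueueManager qmgr, MQQueue queue, byte[] correlId)
            throws MQException {
        MQGetMessageOptions gmo = new MQGetMessageOptions();
        // Transactionality: the message is only removed from the queue
        // when the unit of work is committed.
        gmo.options = CMQC.MQGMO_WAIT | CMQC.MQGMO_SYNCPOINT | CMQC.MQGMO_FAIL_IF_QUIESCING;
        gmo.waitInterval = 5000;

        // Retrieval criteria: only messages with this correlation id qualify,
        // which restricts which waiting getter a given message can be handed to.
        MQMessage msg = new MQMessage();
        msg.correlationId = correlId;
        gmo.matchOptions = CMQC.MQMO_MATCH_CORREL_ID;

        queue.get(msg, gmo);
        // ... process the message ...
        qmgr.commit(); // or qmgr.backout() if processing fails
    }
}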
