ThreadPoolExecutor parameter configuration


Question

I'm working with a client application which needs to request data from a Rest API. Many of these requests are independent, so they could be called asynchronously. I'm using ThreadPoolExecutor to do so, and I've seen it can be configured with several parameters:

  • corePoolSize
  • maxPoolSize
  • queueCapacity
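
In `java.util.concurrent` these three values map onto the `ThreadPoolExecutor` constructor, where `queueCapacity` is expressed as the bound of the work queue (Spring's `ThreadPoolTaskExecutor` exposes the same three names as bean properties). A minimal sketch:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ExecutorSetup {
    public static ThreadPoolExecutor build(int corePoolSize, int maxPoolSize, int queueCapacity) {
        // queueCapacity becomes the bound of the work queue; once the queue and
        // all maxPoolSize threads are full, execute() rejects new tasks
        // (RejectedExecutionException under the default policy).
        return new ThreadPoolExecutor(
                corePoolSize,
                maxPoolSize,
                60L, TimeUnit.SECONDS,              // keepAliveTime for threads above core size
                new ArrayBlockingQueue<>(queueCapacity));
    }

    public static void main(String[] args) {
        ThreadPoolExecutor pool = build(5, 20, 100);
        System.out.println(pool.getCorePoolSize() + " " + pool.getMaximumPoolSize());
        pool.shutdown();
    }
}
```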

I read this article and I understand the following:

  • corePoolSize is the threshold below which the executor adds a new thread instead of queueing the task
  • maxPoolSize is the threshold above which the executor queues requests
  • If the current number of threads is between corePoolSize and maxPoolSize, requests are queued.
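
Note that the stock JDK `ThreadPoolExecutor` actually queues before growing: it creates threads up to `corePoolSize`, then queues, and only grows toward `maxPoolSize` once the queue is full. A small demo (the tiny sizes are chosen only for illustration):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class GrowthOrderDemo {
    public static void main(String[] args) {
        CountDownLatch release = new CountDownLatch(1);
        Runnable blocker = () -> {
            try { release.await(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        };
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 2, 60, TimeUnit.SECONDS, new ArrayBlockingQueue<>(1));

        pool.execute(blocker);                  // fills the single core thread
        pool.execute(blocker);                  // core busy -> task is queued, no new thread
        System.out.println("threads=" + pool.getPoolSize() + " queued=" + pool.getQueue().size());
        pool.execute(blocker);                  // queue full -> grows toward maxPoolSize
        System.out.println("threads=" + pool.getPoolSize());
        try {
            pool.execute(blocker);              // queue and pool both full -> rejected
        } catch (RejectedExecutionException e) {
            System.out.println("rejected");
        }
        release.countDown();
        pool.shutdown();
    }
}
```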

But I have some questions:

  • I've been testing, and the higher the corePoolSize, the better the results I get. In production there are many clients making requests to this Rest API (possibly millions of requests per day). How high should corePoolSize be?
  • How should I find the "best" parameters? Only by testing?
  • What problems can a high/low value of each parameter cause?

Thanks in advance

Update

My current values are:

  • corePoolSize = 5
  • maxPoolSize = 20
  • queueCapacity = 100

Answer

  • corePoolSize: the number of threads to keep in the pool, even if they are idle, unless allowCoreThreadTimeOut is set
  • maximumPoolSize: the maximum number of threads allowed in the pool

    The corePoolSize is the number of threads you want to keep waiting forever, even if there is no one requesting them. The maximumPoolSize is the maximum of how many threads and therefore number of concurrent requests to your Rest API you will start.

    • How many requests per second do you get? (average / maximum per second)
    • How long does one request take?
    • What is the longest wait time acceptable to your users?

    corePoolSize >= requests per second * seconds per request

    maximumPoolSize >= max. requests per second * seconds per request

    queueCapacity <= maximumPoolSize * maxWaitTime / timePerRequest (You should monitor this so that you know when you will have to act.)
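
Plugging illustrative numbers into these formulas (the traffic figures below are assumptions, not measurements): say 100 requests/s on average, 300 requests/s at peak, 0.25 s per request, and a 2 s maximum acceptable wait:

```java
public class PoolSizing {
    // corePoolSize >= requests per second * seconds per request
    static int corePoolSize(double avgRequestsPerSecond, double secondsPerRequest) {
        return (int) Math.ceil(avgRequestsPerSecond * secondsPerRequest);
    }

    // maximumPoolSize >= max. requests per second * seconds per request
    static int maximumPoolSize(double peakRequestsPerSecond, double secondsPerRequest) {
        return (int) Math.ceil(peakRequestsPerSecond * secondsPerRequest);
    }

    // queueCapacity <= maximumPoolSize * maxWaitTime / timePerRequest
    static int queueCapacity(int maximumPoolSize, double maxWaitSeconds, double secondsPerRequest) {
        return (int) Math.floor(maximumPoolSize * maxWaitSeconds / secondsPerRequest);
    }

    public static void main(String[] args) {
        double avg = 100, peak = 300, perRequest = 0.25, maxWait = 2.0; // illustrative numbers
        int core  = corePoolSize(avg, perRequest);           // 100 * 0.25 = 25
        int max   = maximumPoolSize(peak, perRequest);       // 300 * 0.25 = 75
        int queue = queueCapacity(max, maxWait, perRequest); // 75 * 2 / 0.25 = 600
        System.out.println(core + " " + max + " " + queue);
    }
}
```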

    You have to keep in mind that the Rest API or your own application/server/bandwidth might impose some limits on the number of concurrent connections and that many concurrent requests might increase the time per request.

    I would rather keep the corePoolSize low, keepAliveTime quite high.
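
That combination can be expressed directly in the constructor; the specific numbers below are placeholders, not recommendations:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class LowCoreHighKeepAlive {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2,                                  // low corePoolSize: few threads held permanently
                20,                                 // headroom for bursts
                5, TimeUnit.MINUTES,                // high keepAliveTime: extra threads linger after a burst
                new ArrayBlockingQueue<>(100));
        // Optionally let even the core threads time out when the pool is completely idle:
        pool.allowCoreThreadTimeOut(true);
        System.out.println(pool.getKeepAliveTime(TimeUnit.SECONDS));
        pool.shutdown();
    }
}
```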

    You have to keep in mind that each thread adds quite some overhead just for parallel HTTP-requests, there should be a NIO variant that does this without lots of threads. Maybe you could try Apache MINA.
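
Apache MINA is one option; on JDK 11+ the built-in `java.net.http.HttpClient` already supports non-blocking requests via `sendAsync`, without dedicating a thread per request. A sketch (the URLs are placeholders for your real Rest API):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionException;

public class AsyncRestCalls {
    // Builds a GET request; the URL is a hypothetical endpoint.
    static HttpRequest buildRequest(String url) {
        return HttpRequest.newBuilder(URI.create(url)).GET().build();
    }

    public static void main(String[] args) {
        HttpClient client = HttpClient.newHttpClient();
        // Fire several independent requests concurrently; no thread per request.
        List<CompletableFuture<HttpResponse<String>>> futures = List.of(
                client.sendAsync(buildRequest("https://api.example.com/items/1"),
                        HttpResponse.BodyHandlers.ofString()),
                client.sendAsync(buildRequest("https://api.example.com/items/2"),
                        HttpResponse.BodyHandlers.ofString()));
        for (CompletableFuture<HttpResponse<String>> f : futures) {
            try {
                System.out.println(f.join().statusCode());
            } catch (CompletionException e) {
                System.out.println("request failed: " + e.getCause());
            }
        }
    }
}
```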
