Play Framework: What happens when requests exceed the available threads


Question

I have one thread in the thread-pool servicing a blocking request.

  def sync = Action {
    import Contexts.blockingPool
    Future {
      Thread.sleep(100)
    }
    Ok("Done")
  }

Contexts.blockingPool is configured as:

custom-pool {
  fork-join-executor {
    parallelism-min = 1
    parallelism-max = 1
  }
}

In theory, if the above action receives 100 simultaneous requests, the expected behaviour should be: 1 request should sleep(100) and the remaining 99 should be rejected (or queued until timeout?). However, I observed that extra worker threads are created to service the rest of the requests. I also observed that latency increases (requests are serviced more slowly) as the number of threads in the pool becomes smaller than the number of requests received.

What is the expected behaviour when more requests are received than the configured thread-pool size can handle?

Answer

Your test is not structured correctly to test your hypothesis. If you go over this section in the docs, you will see that Play has a few thread pools/execution contexts. The one that matters for your question is the default thread pool and how it relates to the HTTP requests served by your action.

As the docs describe, the default thread pool is where all application code runs by default. I.e. all action code, including all Futures that do not explicitly define their own execution context, will run in this execution context/thread pool. So, using your example:

def sync = Action {

  // *** import Contexts.blockingPool
  // *** Future { 
  // *** Thread.sleep(100)
  // ***} 

  Ok("Done")
}

All the code in your action not commented out by // *** will run in the default thread pool. I.e. when a request gets routed to your action:

  1. the Future with the Thread.sleep will be dispatched to your custom execution context
  2. then, without waiting for that Future to complete (because it's running in its own thread pool [Contexts.blockingPool] and therefore not blocking any threads on the default thread pool),
  3. your Ok("Done") statement is evaluated and the client receives the response
  4. approx. 100 milliseconds after the response has been sent, your Future completes
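The dispatch-then-respond behaviour in the steps above can be sketched with plain scala.concurrent, no Play required; the pool and the helper method here are illustrative stand-ins for Contexts.blockingPool and the action body, not Play's actual machinery:

```scala
import java.util.concurrent.Executors
import scala.concurrent.{ExecutionContext, Future}

object FireAndForget {
  // Measures how long the "action" body takes when it only *dispatches*
  // the sleeping Future to a 1-thread pool and does not await it.
  def actionLatencyMs(): Long = {
    val es = Executors.newFixedThreadPool(1)
    val blockingPool: ExecutionContext = ExecutionContext.fromExecutorService(es)
    val start = System.nanoTime()
    Future { Thread.sleep(100) }(blockingPool) // dispatched, never awaited
    val elapsed = (System.nanoTime() - start) / 1000000
    es.shutdown() // let the pending sleep finish, then release the thread
    elapsed
  }
}
```

Dispatching returns in well under the 100 ms sleep, which is why the client gets its response immediately (step 3) while the Future is still running.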

So, to explain your observation: when you send 100 simultaneous requests, Play will gladly accept them, route them to your controller action (executing on the default thread pool), dispatch to your Future, and then respond to the client.

The default size of the default pool is

play {
  akka {
    ...
    actor {
      default-dispatcher = {
        fork-join-executor {
          parallelism-factor = 1.0
          parallelism-max = 24
        }
      }
    }
  }
}

to use 1 thread per core, up to a max of 24. Given that your action does very little (excluding the Future), you will be able to handle on the order of 1000s of requests/sec without breaking a sweat. Your Future, however, will take much longer to work through the backlog, because you are blocking the only thread in your custom pool (blockingPool).
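That backlog effect can be demonstrated with a plain-Scala sketch (the pool sizes and timings here are illustrative, not Play's defaults): with one thread, blocking tasks drain strictly one after another.

```scala
import java.util.concurrent.Executors
import scala.concurrent.{Await, ExecutionContext, Future}
import scala.concurrent.duration._

object Backlog {
  // Time for a pool of `threads` workers to drain `tasks` blocking sleeps.
  def drainMs(threads: Int, tasks: Int, sleepMs: Long): Long = {
    val es = Executors.newFixedThreadPool(threads)
    implicit val pool: ExecutionContext = ExecutionContext.fromExecutorService(es)
    val start = System.nanoTime()
    val all = Future.sequence((1 to tasks).toList.map(_ => Future { Thread.sleep(sleepMs) }))
    Await.ready(all, 1.minute)
    es.shutdown()
    (System.nanoTime() - start) / 1000000
  }
}
```

With 1 thread, 4 sleeps of 50 ms take at least 200 ms in total; with 4 threads they run in parallel and finish in roughly one sleep's worth of time.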

If you use this slightly adjusted version of your action, the log output will confirm the above explanation:

object Threading {

  def sync = Action {
    val defaultThreadPool = Thread.currentThread().getName;

    import Contexts.blockingPool
    Future {
      val blockingPool = Thread.currentThread().getName;
      Logger.debug(s"""\t>>> Done on thread: $blockingPool""")
      Thread.sleep(100)
    }

    Logger.debug(s"""Done on thread: $defaultThreadPool""")
    Results.Ok
  }
}

object Contexts {
  implicit val blockingPool: ExecutionContext = Akka.system.dispatchers.lookup("blocking-pool-context")
}

All your requests are swiftly dealt with first, and then your Futures complete one by one afterwards.
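That ordering (all responses first, all Futures afterwards) can be verified with a plain-Scala sketch; the event names and the 1-thread pool are made up for illustration, standing in for the response log line and the custom blocking pool:

```scala
import java.util.concurrent.{ConcurrentLinkedQueue, Executors}
import scala.concurrent.{Await, ExecutionContext, Future}
import scala.concurrent.duration._

object ResponseOrdering {
  // Records "response-i" when the "action" returns and "future-i" when its
  // dispatched Future finishes; on a 1-thread pool the sleeps serialize.
  def run(requests: Int): Vector[String] = {
    val events = new ConcurrentLinkedQueue[String]()
    val es = Executors.newFixedThreadPool(1) // stand-in for blockingPool
    implicit val pool: ExecutionContext = ExecutionContext.fromExecutorService(es)
    val futures = (1 to requests).toList.map { i =>
      val f = Future { Thread.sleep(50); events.add(s"future-$i") }
      events.add(s"response-$i") // the "response" goes out immediately
      f
    }
    Await.ready(Future.sequence(futures), 1.minute)
    es.shutdown()
    Iterator.continually(events.poll()).takeWhile(_ != null).toVector
  }
}
```

Every "response" event lands in the queue before any "future" event, mirroring the log output above.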

So, in conclusion, if you really want to test how Play handles many simultaneous requests with only one thread handling them, you can use the following config:

play {
  akka {
    akka.loggers = ["akka.event.Logging$DefaultLogger", "akka.event.slf4j.Slf4jLogger"]
    loglevel = WARNING
    actor {
      default-dispatcher = {
        fork-join-executor {
          parallelism-min = 1
          parallelism-max = 1
        }
      }
    }
  }
}

You might also want to add a Thread.sleep to your action like this (to slow the default thread pool's lonesome thread down a bit):

    ...
    Thread.sleep(100)
    Logger.debug(s"""<<< Done on thread: $defaultThreadPool""")
    Results.Ok
}

Now you will have 1 thread for requests and 1 thread for your Futures. If you run this with many concurrent connections, you will notice that the client blocks while Play handles the requests one by one. Which is what you expected to see...
