scala.concurrent.blocking - what does it actually do?


Question



I have spent a while learning the topic of Scala execution contexts, underlying threading models and concurrency. Can you explain in what ways scala.concurrent.blocking "adjusts the runtime behavior" and "may improve performance or avoid deadlocks", as described in the scaladoc?

In the documentation, it is presented as a means to await an API that doesn't implement Awaitable. (Perhaps long-running computations should also be wrapped?)

What is it that it actually does?

Following through the source doesn't easily betray its secrets.

Solution

blocking is meant to act as a hint to the ExecutionContext that the contained code is blocking and could lead to thread starvation. This will give the thread pool a chance to spawn new threads in order to prevent starvation. This is what is meant by "adjust the runtime behavior". It's not magic though, and won't work with every ExecutionContext.

Consider this example:

import scala.concurrent._
val ec = scala.concurrent.ExecutionContext.Implicits.global

(0 to 100) foreach { n =>
    Future {
        println("starting Future: " + n)
        blocking { Thread.sleep(3000) }
        println("ending Future: " + n)
    }(ec)
}

This is using the default global ExecutionContext. Running the code as-is, you will notice that all of the Futures start executing almost immediately, but if you remove blocking, they only execute a few at a time. The default ExecutionContext reacts to blocking calls (marked as such) by spawning new threads, and thus doesn't get clogged up by the blocking Futures.
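As a quick check of that claim (my own illustration, not part of the original answer), you can print which worker thread runs each Future; the exact thread names depend on the Scala version:

import scala.concurrent._
import scala.concurrent.ExecutionContext.Implicits.global

// With blocking, many distinct worker-thread names show up because the global
// pool compensates; without it, only roughly as many as there are CPU cores.
(0 to 20) foreach { n =>
    Future {
        blocking { Thread.sleep(1000) }
        println(s"Future $n ran on ${Thread.currentThread.getName}")
    }
}
Thread.sleep(5000) // the global pool's threads are daemons, so keep main alive long enough to see the output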

Now look at this example with a fixed pool of 4 threads:

import java.util.concurrent.Executors
val executorService = Executors.newFixedThreadPool(4)
val ec = ExecutionContext.fromExecutorService(executorService)

(0 to 100) foreach { n =>
    Future {
        println("starting Future: " + n)
        blocking { Thread.sleep(3000) }
        println("ending Future: " + n)
    }(ec)
}

This ExecutionContext isn't built to spawn new threads in response, and so even with my blocking code wrapped in blocking, you can see that it will still only execute at most 4 Futures at a time. And so that's why we say it "may improve performance or avoid deadlocks": it's not a guarantee, as this latter ExecutionContext shows.
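If you did want a plain fixed pool to at least see the hint, you would have to install a BlockContext around each task yourself. The sketch below is my own illustration, not part of the original answer (LoggingBlockContext and the wrapper are made-up names): it only logs when blocking is entered, marking the spot where a smarter context could grow the pool.

import java.util.concurrent.Executors
import scala.concurrent._

// A BlockContext that merely logs; a more ambitious one could hand the work
// off or enlarge the pool here instead of just running the body inline.
object LoggingBlockContext extends BlockContext {
    def blockOn[T](thunk: => T)(implicit permission: CanAwait): T = {
        println(Thread.currentThread.getName + " entered a blocking region")
        thunk
    }
}

val underlying = ExecutionContext.fromExecutorService(Executors.newFixedThreadPool(4))

// Wrap the fixed pool so every submitted task runs with LoggingBlockContext installed.
val ec: ExecutionContext = new ExecutionContext {
    def execute(runnable: Runnable): Unit = underlying.execute(new Runnable {
        def run(): Unit = BlockContext.withBlockContext(LoggingBlockContext) { runnable.run() }
    })
    def reportFailure(cause: Throwable): Unit = underlying.reportFailure(cause)
}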

How does it work? As linked, blocking executes this code:

BlockContext.current.blockOn(body)(scala.concurrent.AwaitPermission)

BlockContext.current retrieves the BlockContext from the current thread, seen here. A BlockContext is usually just a Thread with the BlockContext trait mixed in. As seen in the source, it is either stored in a ThreadLocal, or if it's not found there, it is pattern matched out of the current thread. If the current thread is not a BlockContext, then the DefaultBlockContext is used instead.
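A hedged paraphrase of that lookup, reconstructed from the description above rather than copied from the library source (threadLocalContext and fallback are illustrative names):

import scala.concurrent.BlockContext

// Normally populated via BlockContext.withBlockContext.
val threadLocalContext = new ThreadLocal[BlockContext]

def currentBlockContext(fallback: BlockContext): BlockContext =
    threadLocalContext.get match {
        case null =>
            Thread.currentThread match {
                case ctx: BlockContext => ctx      // e.g. a worker thread of the global pool
                case _                 => fallback // DefaultBlockContext in the real code
            }
        case ctx => ctx
    }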

Next, blockOn is called on the current BlockContext. blockOn is an abstract method in BlockContext, so its implementation depends on how the ExecutionContext handles it. If we look at the implementation for DefaultBlockContext (used when the current thread is not a BlockContext), we see that blockOn actually does nothing there. So using blocking in a non-BlockContext thread means that nothing special is done at all, and the code is run as-is, with no side effects.
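In other words, the default behaviour amounts to something like this rough shape (not the actual DefaultBlockContext source):

import scala.concurrent.{BlockContext, CanAwait}

// "Do nothing special": blockOn just evaluates the body inline.
object NoOpBlockContext extends BlockContext {
    def blockOn[T](thunk: => T)(implicit permission: CanAwait): T = thunk
}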

What about threads that are BlockContexts? For instance, in the global context, seen here, blockOn does quite a bit more. Digging deeper, you can see that it's using a ForkJoinPool under the hood, with the DefaultThreadFactory defined in the same snippet being used for spawning new threads in the ForkJoinPool. Without the implementation of blockOn from the BlockContext (thread), the ForkJoinPool doesn't know you're blocking, and won't try to spawn more threads in response.
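The JDK-level mechanism behind that is ForkJoinPool.managedBlock, which lets the pool activate a spare worker while the blocker runs. A hedged sketch of how a blockOn could delegate to it (runWithManagedBlock is my own name, not a library method):

import java.util.concurrent.ForkJoinPool

// Wrap a blocking body in a ManagedBlocker so an enclosing ForkJoinPool can
// compensate by waking or creating another worker thread.
def runWithManagedBlock[T](body: => T): T = {
    var result: Option[T] = None
    ForkJoinPool.managedBlock(new ForkJoinPool.ManagedBlocker {
        def block(): Boolean = { result = Some(body); true } // finished after one call
        def isReleasable: Boolean = result.isDefined
    })
    result.get
}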

Scala's Await, too, uses blocking in its implementation.
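That is also where "avoid deadlocks" becomes visible. A small demonstration of my own (not from the answer): each outer Future below occupies a worker and then waits for an inner Future that also needs a worker. Because Await marks the wait as blocking, the global pool spawns compensation threads and everything completes; on the 4-thread fixed pool from the earlier example, the same pattern starves until the Await timeout fires.

import scala.concurrent._
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

val outers = (1 to Runtime.getRuntime.availableProcessors) map { _ =>
    Future {
        val inner = Future { Thread.sleep(500); 42 } // needs a free worker of its own
        Await.result(inner, 10.seconds)              // blocks this worker while waiting
    }
}

outers foreach { f => println(Await.result(f, 20.seconds)) }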
