System.Lazy&lt;T&gt; with different thread-safety mode
Question
.NET 4.0's System.Lazy<T> class offers three Thread-Safety modes via the enum LazyThreadSafetyMode, which I'll summarise as:
- LazyThreadSafetyMode.None - Not thread safe.
- LazyThreadSafetyMode.ExecutionAndPublication - Only one concurrent thread will attempt to create the underlying value. On successful creation, all waiting threads will receive the same value. If an unhandled exception occurs during creation, it will be re-thrown on each waiting thread, cached and re-thrown on each subsequent attempt to access the underlying value.
- LazyThreadSafetyMode.PublicationOnly - Multiple concurrent threads will attempt to create the underlying value but the first to succeed will determine the value passed to all threads. If an unhandled exception occurs during creation, it will not be cached and concurrent & subsequent attempts to access the underlying value will re-try the creation & may succeed.
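To make the exception-handling difference between the last two modes concrete, here is a small sketch. The flaky factory `FlakyCreate` is my invention for illustration; it is not part of the question.

```csharp
using System;
using System.Threading;

class LazyModeDemo
{
    static int attempts;

    // Hypothetical flaky factory: throws on the first call, succeeds afterwards.
    static string FlakyCreate()
    {
        if (Interlocked.Increment(ref attempts) == 1)
            throw new InvalidOperationException("first attempt fails");
        return "created";
    }

    static void Main()
    {
        // ExecutionAndPublication caches the exception: every later access rethrows it
        // without re-running the factory.
        attempts = 0;
        var cached = new Lazy<string>(FlakyCreate, LazyThreadSafetyMode.ExecutionAndPublication);
        try { string s = cached.Value; }
        catch (InvalidOperationException) { }
        try { string s = cached.Value; }
        catch (InvalidOperationException) { Console.WriteLine("still faulted, factory not re-run"); }

        // PublicationOnly never caches exceptions: the next access re-runs the factory.
        attempts = 0;
        var retrying = new Lazy<string>(FlakyCreate, LazyThreadSafetyMode.PublicationOnly);
        try { string s = retrying.Value; }
        catch (InvalidOperationException) { }
        Console.WriteLine(retrying.Value); // prints "created"
    }
}
```

Neither built-in mode combines single-threaded creation with uncached exceptions, which is exactly the gap the question describes.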
I'd like to have a lazy-initialized value which follows slightly different thread-safety rules, namely:
Only one concurrent thread will attempt to create the underlying value. On successful creation, all waiting threads will receive the same value. If an unhandled exception occurs during creation, it will be re-thrown on each waiting thread, but it will not be cached, and subsequent attempts to access the underlying value will re-try the creation and may succeed.
So the key difference with LazyThreadSafetyMode.ExecutionAndPublication is that if a "first go" at creation fails, it can be re-attempted at a later time.
Is there an existing (.NET 4.0) class that offers these semantics, or will I have to roll my own? If I roll my own is there a smart way to re-use the existing Lazy<T> within the implementation to avoid explicit locking/synchronization?
N.B. For a use case, imagine that "creation" is potentially expensive and prone to intermittent error, involving e.g. getting a large chunk of data from a remote server. I wouldn't want to make multiple concurrent attempts to get the data since they'll likely all fail or all succeed. However, if they fail, I'd like to be able to retry later on.
Answer
My attempt at a version of Darin's updated answer that doesn't have the race condition I pointed out... warning, I'm not completely sure this is finally completely free of race conditions.
private static int waiters = 0;
private static volatile Lazy<object> lazy = new Lazy<object>(GetValueFromSomewhere);

public static object Value
{
    get
    {
        Lazy<object> currLazy = lazy;
        if (currLazy.IsValueCreated)
            return currLazy.Value;

        Interlocked.Increment(ref waiters);
        try
        {
            return lazy.Value;
            // just leave "waiters" at whatever it is... no harm in it.
        }
        catch
        {
            // Only the last failing waiter swaps in a fresh Lazy<object>,
            // so a later access re-attempts creation.
            if (Interlocked.Decrement(ref waiters) == 0)
                lazy = new Lazy<object>(GetValueFromSomewhere);
            throw;
        }
    }
}
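One way to see the retry behavior in action: the sketch below reproduces the answer's property, but plugs in a hypothetical GetValueFromSomewhere that fails on its first call and succeeds afterwards. The flaky factory is my assumption for demonstration, not part of the original answer.

```csharp
using System;
using System.Threading;

static class RetryableLazy
{
    static int attempts;

    // Hypothetical stand-in for the answer's GetValueFromSomewhere:
    // throws on the first call, succeeds on later calls.
    static object GetValueFromSomewhere()
    {
        if (Interlocked.Increment(ref attempts) == 1)
            throw new InvalidOperationException("transient failure");
        return "payload";
    }

    static int waiters = 0;
    static volatile Lazy<object> lazy = new Lazy<object>(GetValueFromSomewhere);

    public static object Value
    {
        get
        {
            Lazy<object> currLazy = lazy;
            if (currLazy.IsValueCreated)
                return currLazy.Value;
            Interlocked.Increment(ref waiters);
            try
            {
                return lazy.Value;
            }
            catch
            {
                // Last failing waiter swaps in a fresh Lazy<object>.
                if (Interlocked.Decrement(ref waiters) == 0)
                    lazy = new Lazy<object>(GetValueFromSomewhere);
                throw;
            }
        }
    }

    static void Main()
    {
        try { object ignored = Value; }
        catch (InvalidOperationException) { Console.WriteLine("first attempt failed"); }
        Console.WriteLine(Value); // prints "payload"
    }
}
```

The first access rethrows the factory's exception and, being the last waiter, swaps in a fresh Lazy&lt;object&gt;; the second access therefore re-runs the factory and succeeds, which is exactly the semantics the question asked for.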
Update: I thought I found a race condition after posting this. The behavior should actually be acceptable, as long as you're OK with a presumably rare case where some thread throws an exception it observed from a slow Lazy<T> after another thread has already returned from a successful fast Lazy<T> (future requests will all succeed).
waiters = 0
- t1: comes in and runs up to just before the Interlocked.Decrement (waiters = 1)
- t2: comes in and runs up to just before the Interlocked.Increment (waiters = 1)
- t1: does its Interlocked.Decrement and prepares to overwrite (waiters = 0)
- t2: runs up to just before the Interlocked.Decrement (waiters = 1)
- t1: overwrites lazy with a new one (call it lazy1) (waiters = 1)
- t3: comes in and blocks on lazy1 (waiters = 2)
- t2: does its Interlocked.Decrement (waiters = 1)
- t3: gets and returns the value from lazy1 (waiters is now irrelevant)
- t2: rethrows its exception
I can't come up with a sequence of events that will cause something worse than "this thread threw an exception after another thread yielded a successful result".
Update2: declared lazy as volatile to ensure that the guarded overwrite is seen by all readers immediately. Some people (myself included) see volatile and immediately think "well, that's probably being used incorrectly", and they're usually right. Here's why I used it here: in the sequence of events from the example above, t3 could still read the old lazy instead of lazy1 if it was positioned just before the read of lazy.Value the moment that t1 modified lazy to contain lazy1. volatile protects against that so that the next attempt can start immediately.
I've also reminded myself why I had this thing in the back of my head saying "low-lock concurrent programming is hard, just use a C# lock statement!!!" the entire time I was writing the original answer.
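For comparison, the lock-based version that parenthetical alludes to is short and much easier to reason about. This is only a sketch of the obvious alternative, not code from the original answer; its semantics differ slightly from the low-lock version in that waiting threads retry the creation one at a time rather than all observing the first failure.

```csharp
using System;

static class LockedLazy
{
    static readonly object gate = new object();
    static bool created;
    static object value;

    // Hypothetical placeholder for the real, possibly-failing creation logic.
    static object GetValueFromSomewhere()
    {
        return "payload";
    }

    public static object Value
    {
        get
        {
            // The lock alone gives the desired guarantees: one creator at a
            // time, and a thrown exception caches nothing, so the next caller
            // simply re-attempts the creation.
            lock (gate)
            {
                if (!created)
                {
                    value = GetValueFromSomewhere(); // if this throws, created stays false
                    created = true;
                }
                return value;
            }
        }
    }
}
```

The cost is that every read takes the lock even after the value exists; whether that matters compared to the low-lock version depends on how hot this path is.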
Update3: just changed some text in Update2 pointing out the actual circumstance that makes volatile necessary -- the Interlocked operations used here are apparently implemented full-fence on the important CPU architectures of today and not half-fence as I had originally just sort-of assumed, so volatile protects a much narrower section than I had originally thought.