Is this enough to ensure mutual exclusion?


Problem description

lock.lock();          // lock
if (nReaders > 0) {
    readers.await();  // await
}
nReaders++;

...

nReaders--;
readers.signal();     // signal
lock.unlock();        // unlock








I tried using it so that my global variable stays consistent throughout the execution of my program, but it doesn't work for some reason. What might be the problem?

Answer

Oh yes, it does ensure that, but perhaps too much: it creates the defect opposite to non-serialized access: the possibility of deadlock :-).

This code cannot be analyzed alone, without the rest of the code, that is, the parts in other thread(s) awaiting on the same condition object and locking on the same lock object (which must always exist, otherwise the lock would be totally pointless). But as it stands, it is a well-prepared trap for a deadlock. Imagine that you have a thread that is supposed to signal the condition readers. If this never happens, the thread running the fragment shown in this code won't get woken up. Now, imagine that the signalling happens in another locked fragment of code, locked with the same lock object. If the lock call by that thread happens when the thread executing the fragment you show is already awaiting, you can get two threads waiting for each other indefinitely.
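The missed-signal hang described above can be sketched with `java.util.concurrent` primitives. This is a minimal, hypothetical reconstruction, not the asker's actual code; the timeout exists only so the demo terminates instead of hanging:

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class LostSignalDemo {
    // Returns true iff the waiter was actually woken by the signal.
    static boolean awaitAfterSignal() throws InterruptedException {
        ReentrantLock lock = new ReentrantLock();
        Condition readers = lock.newCondition();

        // The "signaller" runs to completion before the waiter starts waiting,
        // so its signal is lost: Condition.signal() wakes only threads that
        // are already waiting at that moment.
        Thread signaller = new Thread(() -> {
            lock.lock();
            try {
                readers.signal(); // no one is waiting yet
            } finally {
                lock.unlock();
            }
        });
        signaller.start();
        signaller.join();

        // The waiter now awaits a signal that will never come again; the
        // timeout stands in for "blocked forever".
        lock.lock();
        try {
            return readers.await(500, TimeUnit.MILLISECONDS);
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(awaitAfterSignal()
                ? "woken by signal"
                : "timed out: the signal was lost");
    }
}
```

Because the signal happens first, `awaitAfterSignal()` returns `false`; without the timeout the waiter would simply never be scheduled again, which is exactly the hang described here.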

Worse, in certain environments the play of probabilities may lead to a situation where the locked fragments of code I described are executed by both threads in some sequential order, just by sheer coincidence, for, say, a whole year of runtime, and then, the next year, it may finally run into the deadlock. This is not a joke: I could easily design a demonstration of this effect in which the probability of deadlock can be made as small as any preliminarily set value, and yet nonzero (one way to create such a situation is the chained deadlock known as the "dining philosophers problem", and the probability can be tuned with some delays). Again, it all depends on what else is written in your code. The fragment of code shown is potentially dangerous. Not only is it suspicious, it looks like part of a very familiar deadlock pattern I have actually seen in software products I had to inherit and fix/replace: a wait inside a mutually exclusive region.

You should understand one thing: both await and lock mean a conditional state transition of the thread into a special "wait state", in which the thread is switched off and not scheduled back for execution until it is woken up by some event, such as the release of the lock by another thread, the signalling of a condition, a timeout, or an abort. As soon as you try to "protect" access to one synchronization primitive with another synchronization primitive, you are asking for trouble.
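The kind of trouble meant here, each thread stuck in a wait state on a lock the other one holds, can be sketched as a classic two-lock cycle. All names below are hypothetical, and `tryLock` with a timeout is used so the demo reports the cycle instead of hanging forever:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class LockCycleDemo {
    // Returns true iff neither thread could acquire its second lock,
    // i.e. a wait-for cycle formed between the two threads.
    static boolean cycleDetected() throws InterruptedException {
        ReentrantLock l1 = new ReentrantLock();
        ReentrantLock l2 = new ReentrantLock();
        CountDownLatch bothHeld = new CountDownLatch(2);
        boolean[] gotSecond = new boolean[2];

        // Opposite acquisition orders: t1 takes l1 then l2, t2 takes l2 then l1.
        Thread t1 = new Thread(() -> grabBoth(l1, l2, bothHeld, gotSecond, 0));
        Thread t2 = new Thread(() -> grabBoth(l2, l1, bothHeld, gotSecond, 1));
        t1.start(); t2.start();
        t1.join(); t2.join();
        return !gotSecond[0] && !gotSecond[1];
    }

    static void grabBoth(ReentrantLock first, ReentrantLock second,
                         CountDownLatch bothHeld, boolean[] out, int idx) {
        first.lock();
        try {
            bothHeld.countDown();
            bothHeld.await(); // proceed only once both threads hold their first lock
            // A plain second.lock() here would block forever; the timeout
            // lets the demo observe the cycle and terminate.
            out[idx] = second.tryLock(300, TimeUnit.MILLISECONDS);
            if (out[idx]) second.unlock();
        } catch (InterruptedException ignored) {
        } finally {
            first.unlock();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(cycleDetected()
                ? "deadlock cycle: each thread waits on the other"
                : "no cycle");
    }
}
```

The latch guarantees both first locks are held before either thread attempts its second, so both `tryLock` calls time out deterministically.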

Please don't ask me how to "fix" it. There is nothing to fix; as it stands, this code makes no sense. I simply don't know your goals, so the whole thing is not a valid question. You need to design your code very thoroughly and prove that it does not run into a deadlock or, say, a race condition, the same way mathematical theorems are proven: using precise logical reasoning. The proof should not be based on enumerating all possible variants (which is, however, possible for very simple problems); rather, you should reach the conclusion by logical deduction. My example of a year of runtime without an actual deadlock should explain why you cannot rely on testing alone. One such analysis method is based on the Petri net formalism.

At the same time, there are many simple problems, no, entire classes of problems (they can be very complex but simple in this respect) where the analysis can easily be done just by looking at the code. One simple example: only locks are used, and the locks are strictly nested (in particular, it is also important to release all locks on all exceptions). Such models can be good or bad in terms of performance (many people heavily over-synchronize their applications without any useful effect, but that is a separate topic), but they never cause deadlocks. I know developers who use only common sense and simple threading models and never have problems with it.
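A minimal sketch of that discipline (a hypothetical example, not from the question): two locks always acquired in the same order, strictly nested, and released in `finally` so an exception cannot leak a held lock:

```java
import java.util.concurrent.locks.ReentrantLock;

public class NestedLocks {
    static final ReentrantLock lockA = new ReentrantLock();
    static final ReentrantLock lockB = new ReentrantLock();
    static int shared = 0;

    static void update() {
        lockA.lock();           // every thread takes lockA first
        try {
            lockB.lock();       // then lockB: strictly nested inside lockA
            try {
                shared++;       // critical section guarded by both locks
            } finally {
                lockB.unlock(); // released in reverse order, even on exceptions
            }
        } finally {
            lockA.unlock();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread[] threads = new Thread[4];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 1000; j++) update();
            });
            threads[i].start();
        }
        for (Thread t : threads) t.join();
        System.out.println(shared); // 4000: fully serialized, no deadlock possible
    }
}
```

With a single global acquisition order and strict nesting there can be no wait-for cycle, which is why this model never deadlocks.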

Please see:
http://en.wikipedia.org/wiki/Deadlock,
http://en.wikipedia.org/wiki/Race_condition,
http://en.wikipedia.org/wiki/Thread_synchronization,
http://en.wikipedia.org/wiki/Dining_philosophers_problem,
http://en.wikipedia.org/wiki/Petri_net.

—SA


