volatile and its harmful implications
Problem Description
I am an embedded developer and use the volatile keyword when working with I/O ports. But my project manager suggested that using the volatile keyword is harmful and has a lot of drawbacks. In most cases, however, I find volatile useful in embedded programming. As far as I know, volatile is considered harmful in kernel code because it makes the behaviour of the code unpredictable. Are there any drawbacks to using volatile in embedded systems as well?
No, volatile is not harmful. In any situation. Ever. There is no possible well-formed piece of code that will break with the addition of volatile to an object (and pointers to that object). However, volatile is often poorly understood. The reason the kernel docs state that volatile is to be considered harmful is that people kept using it for synchronization between kernel threads in broken ways. In particular, they used volatile integer variables as though access to them was guaranteed to be atomic, which it isn't.

volatile is also not useless, and particularly if you go bare-metal, you will need it. But, like any other tool, it is important to understand the semantics of volatile before using it.
What volatile is
Access to volatile objects is, in the standard, considered a side effect in the same way as incrementing or decrementing with ++ and --. In particular, this means that 5.1.2.3 (3), which says

(...) An actual implementation need not evaluate part of an expression if it can deduce that its value is not used and that no needed side effects are produced (including any caused by calling a function or accessing a volatile object)

does not apply. The compiler has to chuck out everything it thinks it knows about the value of a volatile variable at every sequence point. (Like other side effects, when access to volatile objects happens is governed by sequence points.)
The effect of this is largely the prohibition of certain optimizations. Take, for example, the code

```c
int i;

void foo(void) {
    i = 0;
    while (i == 0) {
        // do stuff that does not touch i
    }
}
```
The compiler is allowed to turn this into an infinite loop that never checks i again, because it can deduce that the value of i is not changed in the loop, and thus that i == 0 will never be false. This holds true even if there is another thread or an interrupt handler that could conceivably change i. The compiler does not know about them, and it does not care. It is explicitly allowed to not care.
Contrast this with

```c
int volatile i;

void foo(void) {
    i = 0;
    while (i == 0) { // Note: This is still broken, only a little less so.
        // do stuff that does not touch i
    }
}
```
Now the compiler has to assume that i can change at any time and cannot do this optimization. This means, of course, that if you deal with interrupt handlers and threads, volatile objects are necessary for synchronisation. They are not, however, sufficient.
What volatile isn't
What volatile does not guarantee is atomic access. This should make intuitive sense if you're used to embedded programming. Consider, if you will, the following piece of code for an 8-bit AVR MCU:
```c
uint32_t volatile i;

ISR(TIMER0_OVF_vect) {
    ++i;
}

void some_function_in_the_main_loop(void) {
    for (;;) {
        do_something_with(i); // This is thoroughly broken.
    }
}
```
The reason this code is broken is that access to i is not atomic -- and cannot be atomic on an 8-bit MCU. In this simple case, for example, the following might happen:

- i is 0x0000ffff
- do_something_with(i) is about to be called
- the high two bytes of i are copied into the parameter slot for this call
- at this point, timer 0 overflows and the main loop is interrupted
- the ISR changes i. The lower two bytes of i overflow and are now 0. i is now 0x00010000.
- the main loop continues, and the lower two bytes of i are copied into the parameter slot
- do_something_with is called with 0 as its parameter
Similar things can happen on PCs and other platforms. If anything, more opportunities for failure open up with a more complex architecture.
Takeaway
So no, using volatile is not bad, and you will (often) have to do it in bare-metal code. However, when you do use it, you have to keep in mind that it is not a magic wand, and that you will still have to make sure you don't trip over yourself. In embedded code, there's often a platform-specific way to handle the problem of atomicity; in the case of AVR, for example, the usual crowbar method is to disable interrupts for the duration, as in
```c
uint32_t x;

ATOMIC_BLOCK(ATOMIC_RESTORESTATE) {
    x = i;
}

do_something_with(x);
```
...where the ATOMIC_BLOCK macro calls cli() (disable interrupts) before and sei() (enable interrupts) afterwards if they were enabled beforehand.
With C11, which is the first C standard that explicitly acknowledges the existence of multithreading, a new family of atomic types and memory fencing operations has been introduced that can be used for inter-thread synchronisation and in many cases makes the use of volatile unnecessary. If you can use those, do it, but it will likely be some time before they reach all common embedded toolchains. With them, the loop above could be fixed like this:
```c
#include <stdatomic.h>

atomic_int i;

void foo(void) {
    atomic_store(&i, 0);
    while (atomic_load(&i) == 0) {
        // do stuff that does not touch i
    }
}
```
...in its most basic form. The precise semantics of the more relaxed memory order semantics go way beyond the scope of a SO answer, so I'll stick with the default sequentially consistent stuff here.
If you're interested in it, Gil Hamilton provided a link in the comments to an explanation of a lock-free stack implementation using C11 atomics, although I don't feel it's a terribly good write-up of the memory order semantics themselves. The C11 model does, however, appear to closely mirror the C++11 memory model, of which a useful presentation exists here. If I find a link to a C11-specific write-up, I will put it here later.