Is masking effective for thwarting side channel attacks?


Question


I'm working with some bigint public-key cryptography code. Is it safe to use bitwise masking to ensure that the calculation timing and memory addresses accessed are independent of the data values?

Is this technique vulnerable to side-channel attacks based on instruction timing, power, RF emissions, or other things I'm unaware of? (For reference, I'm aware of techniques like RSA blinding, EC Montgomery ladder, cache flushing, and such.)


Example of straightforward code (C/C++):

uint a = (...), b = (...);
if (a < b)
    a += b;

Now translated to use constant-time masking:

uint a = (...), b = (...);
uint mask = -(uint)(a < b);
a = ((a + b) & mask) | (a & ~mask);

Note that a < b is 0 or 1, and mask is 0x00000000 or 0xFFFFFFFF.


Similarly, for a high-level operation (C++):

Integer x = (...);
if (x.isFoo())
    x.doBar();

Is the following an acceptable safe translation?

Integer x = (...);
uint mask = -(uint)x.isFoo();  // Assume this is constant-time
Integer y(x);                  // Copy constructor
y.doBar();                     // Assume this is constant-time
x.replace(y, mask);            // Assume this uses masking

Solution

This technique may be safe... if the operations we assume to take constant time really do, and if the compiler doesn't change the code to do something else instead.

In particular, let's take a look at your first example:

uint a = (...), b = (...);
uint mask = -(uint)(a < b);
a = ((a + b) & mask) | (a & ~mask);

I see two somewhat plausible ways in which this could fail to run in constant time:

  1. The comparison a < b might or might not take constant time, depending on the compiler (and CPU). If it's compiled to simple bit manipulation, it may be constant-time; if it's compiled to use a conditional jump, it may well not be.

  2. At high optimization levels, it's possible that a too-clever compiler might detect what's happening (say, by splitting the code into two paths based on the comparison, and optimizing them separately before merging them back) and "optimize" it back into the non-constant time code we were trying to avoid.

    (Of course, it's also possible that a sufficiently clever compiler could optimize the naïve, seemingly non-constant time code into a constant-time operation, if it thought that would be more efficient!)

One possible way to avoid the first issue would be to replace the comparison with explicit bit manipulation, as in:

uint32_t a = (...), b = (...);
uint32_t mask = -((a - b) >> 31);
a = ((a + b) & mask) | (a & ~mask);

However, note that this is only equivalent to your original code if we can be sure that a and b differ by less than 2^31. If that is not guaranteed, we'd have to cast the variables into a longer type before the subtraction, e.g.:

uint32_t mask = (uint32_t)(( (uint64_t)a - (uint64_t)b ) >> 32);

All that said, even this is not foolproof, as the compiler could still decide to turn this code into something that is not constant-time. (For instance, 64-bit subtraction on a 32-bit CPU could potentially take variable time depending on whether there's a borrow or not — which is precisely what we're trying to hide, here.)

In general, the only way to make sure that such timing leaks don't occur is to:

  1. inspect the generated assembly code manually (e.g. looking for jump instructions where you didn't expect any), and

  2. actually benchmark the code to verify that it does, indeed, take the same time to run regardless of the inputs.

Obviously, you'll also need to do this separately for each combination of compiler and target platform that you wish to support.

