Is masking effective for thwarting side channel attacks?


Question


I'm working with some bigint public-key cryptography code. Is it safe to use bitwise masking to ensure that the calculation timing and memory addresses accessed are independent of the data values?

Is this technique vulnerable to side-channel attacks based on instruction timing, power, RF emissions, or other things I'm unaware of? (For reference, I'm aware of techniques like RSA blinding, EC Montgomery ladder, cache flushing, and such.)


Example of straightforward code (C/C++):

uint a = (...), b = (...);
if (a < b)
    a += b;

Now translated to use constant-time masking:

uint a = (...), b = (...);
uint mask = -(uint)(a < b);
a = ((a + b) & mask) | (a & ~mask);

Note that a < b is 0 or 1, and mask is 0x00000000 or 0xFFFFFFFF.
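As a quick functional sanity check (the helper names below are just for illustration), the masked form can be compared against the branchy form it replaces. This verifies values only, not timing behavior:

```c
#include <assert.h>
#include <stdint.h>

/* Branchy reference version. */
static uint32_t add_if_less_branchy(uint32_t a, uint32_t b) {
    if (a < b)
        a += b;
    return a;
}

/* Masked version: mask is all-ones when a < b, all-zeros otherwise,
 * so exactly one side of the bitwise-OR survives. */
static uint32_t add_if_less_masked(uint32_t a, uint32_t b) {
    uint32_t mask = -(uint32_t)(a < b);
    return ((a + b) & mask) | (a & ~mask);
}
```

Both versions compute the same results; the point of the masked one is that the same instructions execute regardless of which result is selected.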


Similarly, for a high-level operation (C++):

Integer x = (...);
if (x.isFoo())
    x.doBar();

Is the following an acceptable safe translation?

Integer x = (...);
uint mask = -(uint)x.isFoo();  // Assume this is constant-time
Integer y(x);                  // Copy constructor
y.doBar();                     // Assume this is constant-time
x.replace(y, mask);            // Assume this uses masking
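For illustration only, assuming the hypothetical Integer stores its magnitude as an array of 32-bit limbs, the masked `x.replace(y, mask)` step might be sketched as a limb-wise select (the function name and layout are my own, not an actual API):

```c
#include <stdint.h>
#include <stddef.h>

/* Sketch of a masked replace over a limb array: for every limb,
 * take y's limb when mask is all-ones and keep x's limb when mask
 * is all-zeros. Every limb of both arrays is read and every limb
 * of x is written either way, so the memory access pattern does
 * not depend on mask. */
static void limbs_replace(uint32_t *x, const uint32_t *y,
                          size_t n, uint32_t mask) {
    for (size_t i = 0; i < n; i++)
        x[i] = (y[i] & mask) | (x[i] & ~mask);
}
```

The design point is that the loop's loads, stores, and trip count are identical for both mask values; only the data selected differs.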

Solution

This technique may be safe... if the operations we assume to take constant time really do, and if the compiler doesn't change the code to do something else instead.

In particular, let's take a look at your first example:

uint a = (...), b = (...);
uint mask = -(uint)(a < b);
a = ((a + b) & mask) | (a & ~mask);

I see two somewhat plausible ways in which this could fail to run in constant time:

  1. The comparison a < b might or might not take constant time, depending on the compiler (and CPU). If it's compiled to simple bit manipulation, it may be constant-time; if it's compiled to use a conditional jump, it may well not be.

  2. At high optimization levels, it's possible that a too-clever compiler might detect what's happening (say, by splitting the code into two paths based on the comparison, and optimizing them separately before merging them back) and "optimize" it back into the non-constant time code we were trying to avoid.

    (Of course, it's also possible that a sufficiently clever compiler could optimize the naïve, seemingly non-constant time code into a constant-time operation, if it thought that would be more efficient!)

One possible way to avoid the first issue would be to replace the comparison with explicit bit manipulation, as in:

uint32_t a = (...), b = (...);
uint32_t mask = -((a - b) >> 31);
a = ((a + b) & mask) | (a & ~mask);

However, note that this is only equivalent to your original code if we can be sure that a and b differ by less than 2^31. If that is not guaranteed, we'd have to cast the variables to a longer type before the subtraction, e.g.:

uint32_t mask = (uint32_t)(( (uint64_t)a - (uint64_t)b ) >> 32);

All that said, even this is not foolproof, as the compiler could still decide to turn this code into something that is not constant-time. (For instance, 64-bit subtraction on a 32-bit CPU could potentially take variable time depending on whether there's a borrow or not — which is precisely what we're trying to hide, here.)
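A value-level sanity check of the two mask derivations above (again, this verifies correctness only, not timing): the 32-bit form breaks down once a and b differ by 2^31 or more, while the widened 64-bit form is correct for all inputs:

```c
#include <assert.h>
#include <stdint.h>

/* Mask from the 32-bit subtraction. Unsigned wraparound is
 * well-defined in C, but the sign bit of (a - b) only matches
 * (a < b) when a and b differ by less than 2^31. */
static uint32_t mask32(uint32_t a, uint32_t b) {
    return -((a - b) >> 31);
}

/* Mask from the widened 64-bit subtraction: the top 32 bits of
 * the 64-bit difference are all-ones exactly when a < b, for
 * every possible pair of inputs. */
static uint32_t mask64(uint32_t a, uint32_t b) {
    return (uint32_t)(((uint64_t)a - (uint64_t)b) >> 32);
}
```

For example, with a = 0x80000000 and b = 0, the 32-bit form yields an all-ones mask even though a < b is false, while the 64-bit form correctly yields zero.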

In general, the only way to make sure that such timing leaks don't occur is to:

  1. inspect the generated assembly code manually (e.g. looking for jump instructions where you didn't expect any), and

  2. actually benchmark the code to verify that it does, indeed, take the same time to run regardless of the inputs.

Obviously, you'll also need to do this separately for each combination of compiler and target platform that you wish to support.
