Security of Python's eval() on untrusted strings?
Question
If I am evaluating a Python string using eval(), and have a class like:
class Foo(object):
    a = 3
    def bar(self, x): return x + self.a
What are the security risks if I do not trust the string? In particular:
- Is eval(string, {"f": Foo()}, {}) unsafe? That is, can you reach os or sys or something unsafe from a Foo instance?
- Is eval(string, {}, {}) unsafe? That is, can I reach os or sys entirely from builtins like len and list?
- Is there a way to make builtins not present at all in the eval context?
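On the third question: CPython silently inserts the real builtins into any globals dict that lacks a "__builtins__" key, so eval(string, {}, {}) still sees len, __import__, and the rest. Hiding them requires an explicit empty entry. A minimal sketch:

```python
# CPython adds the real builtins to any globals dict that is missing the
# "__builtins__" key, so an explicit empty mapping is needed to hide them.
safe_globals = {"__builtins__": {}}

# With empty globals, builtins are silently restored:
print(eval("len([1, 2, 3])", {}, {}))  # 3

# With "__builtins__" overridden, the same lookup fails:
try:
    eval("len([1, 2, 3])", safe_globals, {})
except NameError as exc:
    print("builtins hidden:", exc)
```

Even so, this only removes convenient names from the namespace; attribute access on literals still works, so it is not a sandbox.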
There are some unsafe strings like "[0] * 100000000" I don't care about, because at worst they slow/stop the program. I am primarily concerned about protecting user data external to the program.
Obviously, eval(string) without custom dictionaries is unsafe in most cases.
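That last point is easy to demonstrate: with no namespace arguments, eval runs with the caller's full builtins, so __import__ is one expression away. A minimal sketch:

```python
# Default eval sees the real builtins, so untrusted input can import
# any module directly.
result = eval("__import__('os').path.sep")
print(result)  # the platform path separator, e.g. "/"
```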
Answer
You cannot secure eval with a blacklist approach like this. See Eval really is dangerous for examples of input that will segfault the CPython interpreter, give access to any class you like, and so on.
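One widely known illustration of why namespace stripping fails (a sketch of the general technique, not the linked article's specific payloads): every literal carries a __class__ attribute, and the type hierarchy can be walked from it to enumerate all loaded classes, builtins or not.

```python
# Even with builtins explicitly hidden, attribute access on a literal
# reaches object and, from there, every subclass in the interpreter.
expr = "().__class__.__bases__[0].__subclasses__()"
subclasses = eval(expr, {"__builtins__": {}}, {})
print(len(subclasses) > 0)  # True: a list of every loaded class
```

From that list an attacker can typically dig back to a usable __import__ or os reference, which is why no blacklist of names in the eval namespaces is sufficient.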