Exception handling and testing with pytest and hypothesis
Question
I'm writing tests for a statistical analysis with hypothesis. Hypothesis led me to a ZeroDivisionError in my code when it is passed very sparse data. So I adapted my code to handle the exception; in my case, that means logging the reason and re-raising the exception:
try:
    val = calc(data)
except ZeroDivisionError:
    logger.error(f"check data: {data}, too sparse")
    raise
I need to pass the exception up through the call stack because the top-level caller needs to know there was an exception so that it can pass an error code to the external caller (a REST API request).
Nor can I assign a reasonable value to val; essentially I need a histogram, and the error occurs when I'm calculating a reasonable bin width from the data. Obviously this fails when the data is sparse, and without the histogram the algorithm cannot proceed any further.
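For illustration, here is a minimal sketch (a hypothetical fd_bin_count helper, not the question's actual calc) of how a Freedman-Diaconis-style bin-width computation can raise ZeroDivisionError on degenerate data:

```python
import statistics

def fd_bin_count(data):
    # Freedman-Diaconis style: bin width derived from the interquartile range
    q = statistics.quantiles(data, n=4)
    iqr = q[2] - q[0]
    width = 2 * iqr / len(data) ** (1 / 3)
    # raises ZeroDivisionError when the data has zero spread (width == 0)
    return (max(data) - min(data)) / width

# well-behaved data: yields a usable bin count
n_bins = fd_bin_count([1.0, 2.5, 3.0, 4.5, 7.0, 9.0])

# sparse/degenerate data: every value identical -> ZeroDivisionError
try:
    fd_bin_count([3.0, 3.0, 3.0, 3.0])
    failed = False
except ZeroDivisionError:
    failed = True
```

With constant data the interquartile range is zero, so the width is zero and the division blows up, which matches the failure mode described above.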
Now my issue is, in my test when I do something like this:
@given(dataframe)
def test_my_calc(df):
    # code that executes the above code path
hypothesis keeps generating failing examples that trigger ZeroDivisionError, and I don't know how to ignore this exception. Normally I would mark such a test with pytest.mark.xfail(raises=ZeroDivisionError), but here I can't do that, as the same test passes for well-behaved inputs.
Something like this would be ideal:
- continue with the test as usual for most inputs, however
- when ZeroDivisionError is raised, skip it as an expected failure.
How could I achieve that? Do I need to put a try: ... except: ... in the test body as well? What would I need to do in the except block to mark it as an expected failure?
Edit: to address the comment by @hoefling, separating out the failing cases would be the ideal solution. But unfortunately, hypothesis doesn't give me enough handles to control that. At most I can control the total count and the limits (min, max) of the generated data. However, the failing cases have a very narrow spread, and there is no way for me to control that. I guess that's the point of hypothesis, and maybe I shouldn't be using hypothesis for this at all.
Here's how I generate my data (slightly simplified):
from itertools import product

import numpy as np
import pandas as pd
import hypothesis.strategies as st
from hypothesis import assume
from hypothesis.extra.pandas import column, data_frames, range_indexes

cities = [f"city{i}" for i in range(4)]
cats = [f"cat{i}" for i in range(4)]

@st.composite
def dataframe(draw):
    data_st = st.floats(min_value=0.01, max_value=50)
    df = []
    for city, cat in product(cities, cats):
        cols = [
            column("city", elements=st.just(city)),
            column("category", elements=st.just(cat)),
            column("metric", elements=data_st, fill=st.nothing()),
        ]
        _df = draw(data_frames(cols, index=range_indexes(min_size=2)))
        # my attempt to control the spread
        assume(np.var(_df["metric"]) >= 0.01)
        df += [_df]
    df = pd.concat(df, axis=0).set_index(["city", "category"])
    return df
Answer
from hypothesis import assume, given, strategies as st

@given(...)
def test_stuff(inputs):
    try:
        ...
    except ZeroDivisionError:
        assume(False)
The assume call will tell Hypothesis that this example is "bad" and it should try another, without failing the test. It's equivalent to calling .filter(will_not_cause_zero_division) on your strategy, if you had such a function. See the docs for details.