Unit-testing with dependencies between tests
Question
How do you do unit testing when you have
- some regular unit tests
- more sophisticated tests checking edge cases, depending on the general ones
To give an example, imagine testing a CSV reader (I just made up a notation for demonstration),
def test_readCsv(): ...
@dependsOn(test_readCsv)
def test_readCsv_duplicateColumnName(): ...
@dependsOn(test_readCsv)
def test_readCsv_unicodeColumnName(): ...
I expect sub-tests to be run only if their parent test succeeds. The reason behind this is that running these tests takes time. Many failure reports that go back to a single root cause wouldn't be informative, either. Of course, I could shoehorn all edge cases into the main test, but I wonder if there is a more structured way to do this.
I've found these related but different questions,
Update:
I've found TestNG, which has great built-in support for test dependencies. You can write tests like this,
@Test(dependsOnMethods = {"test_readCsv"})
public void test_readCsv_duplicateColumnName() {
...
}
Answer
Personally, I wouldn't worry about creating dependencies between unit tests. This sounds like a bit of a code smell to me. A few points:
- If a test fails, let the other tests fail too, so you learn the full extent of the problem the adverse code change caused.
- Test failures should be the exception rather than the norm, so why spend the effort and create dependencies when the vast majority of the time (hopefully!) there is no benefit? If failures happen often, your problem is not missing test dependencies but frequent test failures.
- Unit tests should run really fast. If they run slowly, then focus your efforts on making the tests faster rather than on suppressing follow-on failures. Do this by decoupling your code more and using dependency injection or mocking.
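As a minimal sketch of that last point: injecting the file-opening function lets a test feed the reader an in-memory buffer, so it never touches the disk and stays fast. The `CsvReader` class and its `open_fn` parameter are hypothetical, made up here to illustrate the dependency-injection idea:

```python
import csv
import io

class CsvReader:
    """Hypothetical CSV reader with an injectable file-opening dependency."""

    def __init__(self, open_fn=open):
        self._open = open_fn  # injected; defaults to the builtin open()

    def read(self, path):
        with self._open(path) as f:
            return list(csv.reader(f))

# In a test, inject a fake "open" backed by StringIO: no disk I/O needed.
def fake_open(path):
    return io.StringIO("name,age\nalice,30\n")

reader = CsvReader(open_fn=fake_open)
rows = reader.read("ignored.csv")
```

Because the production code receives its collaborator instead of reaching for the filesystem itself, the general test and every edge-case test run in microseconds, which removes much of the motivation for chaining them.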