BDD Anti-Patterns
The following are BDD anti-patterns I have observed that reduce the effectiveness of BDD practice:
1. Story Scenarios written in a traditional test-script manner. A common pattern for teams new to writing BDD scenarios is to think of them as test scripts and to capture many detailed steps for each Scenario. This amount of detail makes the scenarios very difficult to maintain and obscures the acceptance criteria being tested. The only detail required in a Scenario block is the set of parameters that conveys the acceptance criteria quickly and clearly to anyone reading it. Any auxiliary parameters used to set up the scenario, or to advance the test to the relevant position in the application, can be found in the detail of the Test Report produced.
2. Scaffolding (stubs, mocks and virtualisation) that reduces the amount of system under test. Some teams have learnt to do unit testing quite well, and when they start learning automated acceptance testing they reuse those patterns, which actually degrades the outcome. When unit testing we deliberately isolate a piece or unit of functionality so that we can test every code path of that function with all possible input parameters. In acceptance testing, however, we want to exercise the real system under test in order to gain confidence with all components integrated and running together.
3. Too many Scenarios written per Story in an attempt to cover every code path in the application. Scenarios should cover the most important positive, negative and edge-case behaviours that the customer cares about when using the application. More obscure test cases should be covered by unit testing and manual exploratory testing.
4. No Test Report. Unfortunately some teams put a lot of effort into doing BDD but fail to generate and publish any Test Report for review. The BDD tests then become pseudo 'unit tests' with an audience of fellow developers only, and the real benefit of greater collaboration with the business and the customer is lost.
5. Scenarios written in retrospect, after development has been completed. This usually occurs when there is a lack of collaboration with the customer, so functionality is written under assumptions. The outcome is that invalid assumptions are made and the functionality invariably needs to be redeveloped once the customer reviews it.
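To make point 1 concrete, here is a sketch in Gherkin (the feature, steps and values are invented for illustration). The first Scenario drowns the acceptance criterion in navigation detail; the second states only the parameters that convey it:

```gherkin
# Anti-pattern: a Scenario written as a detailed test script
Scenario: Transfer funds
  Given I open the login page
  And I enter "user1" in the username field
  And I enter "secret" in the password field
  And I click the "Log in" button
  And I navigate to the "Accounts" page
  And I select the account "12345"
  And I click the "Transfer" button
  And I enter "100.00" in the amount field
  When I click "Confirm"
  Then I see a confirmation message

# Better: only the parameters that convey the acceptance criterion
Scenario: Transfer is rejected when it would overdraw the account
  Given an account with a balance of 50.00
  When I transfer 100.00 from that account
  Then the transfer is rejected with an "insufficient funds" message
  And the account balance is still 50.00
```

The setup behind "Given an account with a balance of 50.00" (logging in, navigating to the account) lives in the step implementations, and its detail surfaces in the Test Report rather than cluttering the Scenario itself.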
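Point 2 can be illustrated with a minimal Python sketch (the `ExchangeRateService` and `convert` names are invented for illustration). A stub is the right tool for a unit test, but reusing it in an acceptance test would remove the very integration we want confidence in:

```python
# Hypothetical example: ExchangeRateService and convert() are invented
# names used only to illustrate the contrast between unit and
# acceptance testing.

class ExchangeRateService:
    """The 'real' component; in a real system this might call an
    external API or a database."""
    def rate(self, currency):
        return {"USD": 1.0, "EUR": 0.5}[currency]

def convert(amount, currency, service):
    """The behaviour under test."""
    return amount * service.rate(currency)

# Unit-test style: a stub isolates convert() so every code path can be
# exercised quickly and deterministically, without the real service.
class StubRateService:
    def rate(self, currency):
        return 2.0  # fixed rate; the real service is out of the picture

assert convert(10, "EUR", StubRateService()) == 20.0

# Acceptance-test style: the same flow runs against the real,
# integrated components, giving confidence they work together.
assert convert(10, "EUR", ExchangeRateService()) == 5.0
```

The stubbed assertion proves `convert` multiplies correctly; only the second assertion proves the assembled system produces the right answer.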