Agile and BDD Automation implementation
Our shop has traditionally been a waterfall shop, with automated regression tests written after most of the development for a feature was finished. We are moving to Agile with BDD and are looking at the best way to implement test automation within the sprint. We have a question about what other teams have found to work best with regard to the focus of the automation.
Our question is: once the BDD test case is created, should automation be tied to each scenario within the BDD test case (i.e., automate each scenario and test directly within the BDD test case file)? Or should our automation not be tied solely to the BDD test case, but take more of a critical-paths approach, where our tests automate the workflows of the new features being developed in the sprint but are not necessarily linked to each scenario and line in the BDD file?
I'd say it depends. If it is relatively easy to automate your tests following the BDD format (for example using Selenium and JBehave or Cucumber) then by all means go for it.
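To illustrate what "automation tied to each scenario" looks like in practice, here is a simplified, framework-free sketch in Python of how BDD tools bind scenario lines to step implementations. The step registry below is a hypothetical stand-in for what Cucumber, JBehave, or behave provide out of the box; all names are illustrative:

```python
import re

# Hypothetical minimal step registry, standing in for what
# Cucumber/JBehave/behave provide out of the box.
STEPS = []

def step(pattern):
    """Register a step implementation against a Gherkin-style pattern."""
    def decorator(func):
        STEPS.append((re.compile(pattern), func))
        return func
    return decorator

def run_scenario(lines, context):
    """Match each scenario line to a registered step and execute it."""
    for line in lines:
        for pattern, func in STEPS:
            match = pattern.fullmatch(line)
            if match:
                func(context, *match.groups())
                break
        else:
            raise AssertionError(f"No step definition for: {line}")

# Step definitions tied one-to-one to the scenario text.
@step(r"Given a logged-in user")
def given_logged_in(context):
    context["user"] = "alice"

@step(r"When the user adds (\d+) items to the cart")
def when_add_items(context, count):
    context["cart"] = int(count)

@step(r"Then the cart contains (\d+) items")
def then_cart_contains(context, count):
    assert context["cart"] == int(count)

scenario = [
    "Given a logged-in user",
    "When the user adds 3 items to the cart",
    "Then the cart contains 3 items",
]
ctx = {}
run_scenario(scenario, ctx)
print(ctx["cart"])
```

The point of the sketch is the one-to-one mapping: every line in the scenario file resolves to exactly one piece of automation code, which is the "easy" case where following the BDD format directly pays off.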
If you need to 'force' your automated tests to fit the BDD format, requiring additional logic and/or code, I'd say it's a waste of time and you'd be better off automating your tests in the most logical and straightforward format, which usually follows workflows (at a functional or end-to-end level) or code or system components (at a unit and system test level).
For example, in my current project we recently introduced the BDD notation to specify user story implementations. It makes creating test cases easier, but I decided not to adopt the BDD standard in my automated tests, as the tool I use simply doesn't work that way. I still profit from the introduction of BDD, by the way, as it makes test automation much easier (less need to constantly consult other testers with far more domain knowledge).
I would go with the first approach, the reason being that it'll be easier to run only the tests you need once you move towards continuous integration.
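To sketch why scenario-level automation helps with selective runs in CI: BDD tools typically let you attach tags to individual scenarios and filter on them from the command line (Cucumber's `--tags` option works this way). The tags and scenario names below are purely hypothetical:

```python
# Hypothetical tagged scenarios; in Cucumber these would be @smoke,
# @story-101, etc. annotations above each Scenario in the feature file.
scenarios = [
    {"name": "login succeeds",  "tags": {"smoke", "story-101"}},
    {"name": "checkout flow",   "tags": {"regression", "story-102"}},
    {"name": "password reset",  "tags": {"smoke", "story-103"}},
]

def select(scenarios, wanted):
    """Return the names of scenarios carrying at least one wanted tag."""
    return [s["name"] for s in scenarios if s["tags"] & wanted]

# In CI you might run only the smoke subset on every commit...
smoke_run = select(scenarios, {"smoke"})
# ...and only the story-specific scenarios on that story's branch.
story_run = select(scenarios, {"story-102"})
print(smoke_run)
print(story_run)
```

Because each scenario is its own addressable test, the CI pipeline can pick exactly the subset it needs; a monolithic workflow test can't be sliced this way.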
I had a situation in the past where our test automation framework was really just a series of different workflows through a system. Every time a new user story appeared, we had to decide whether the story was worth automating. If it was, we then had to retrofit the tests onto the existing test suite(s). This isn't too hard in the beginning, but further down the line it becomes increasingly difficult to achieve and maintain.
So the lesson learned from my misadventure was to stick to the user story when automating the test case and not to be tempted by the workflow approach. This will result in a pretty big collection of test cases, but on the other hand, the tests will be isolated and specific to a goal, so they'll be easier to understand, judge, and maintain.
You'd probably be doing yourself a big favor if you expose the functionality contained within the pages (or in some cases, features) of your AUT as classes, and build your test libraries on top of those classes. For us Selenium users, that's the PageObject design pattern (https://code.google.com/p/selenium/wiki/PageObjects). Something similar can be done with other tools as well.
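As a rough sketch of the idea: a Page Object wraps one page's locators and interactions behind a class, so test code reads in terms of intent rather than element lookups. The example below substitutes a trivial fake driver for a real Selenium WebDriver so it can run standalone; every class, locator, and URL here is illustrative, not real Selenium API:

```python
class FakeDriver:
    """Stand-in for a Selenium WebDriver, just enough for this sketch."""
    def __init__(self):
        self.fields = {}
        self.url = "/login"

    def type(self, locator, text):
        self.fields[locator] = text

    def click(self, locator):
        # Pretend a successful login navigates to the dashboard.
        if locator == "css=button#login" and self.fields.get("id=username"):
            self.url = "/dashboard"

class LoginPage:
    """Page Object: exposes the login page's behavior, hides its locators."""
    USERNAME = "id=username"
    PASSWORD = "id=password"
    SUBMIT = "css=button#login"

    def __init__(self, driver):
        self.driver = driver

    def login_as(self, username, password):
        self.driver.type(self.USERNAME, username)
        self.driver.type(self.PASSWORD, password)
        self.driver.click(self.SUBMIT)
        # Navigation returns the next page's object, a PageObject idiom.
        return DashboardPage(self.driver)

class DashboardPage:
    def __init__(self, driver):
        self.driver = driver

    def is_displayed(self):
        return self.driver.url == "/dashboard"

# Test code never touches a locator directly:
driver = FakeDriver()
dashboard = LoginPage(driver).login_as("alice", "s3cret")
print(dashboard.is_displayed())
```

If the login page's markup changes, only `LoginPage` is edited; every test built on it keeps working, which is exactly the maintainability win the answer describes.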