Development and QA using the same scenarios
I am new to my company as QA Lead and am not accustomed to this approach. My development team has told me that, as a group, we identify the items needing to be tested prior to doing any coding, so that the developers know what to test as exit criteria. I have tried this out a few times and am not sure if it will work for me. I wanted to get some feedback: is this something that works for others, and maybe I just have to adjust? Or is this way off base, and should I ask my team to adjust to suit my working style?
In the attempts I've made to use this approach, I have gotten negative feedback that my scenarios are causing confusion. I'm also finding that different developers want the scenarios written in different formats (one does tables, one likes Gherkin). One thing to note is that the developers will write the scenarios and then send them to me to check over (which is great, but if I want them written differently, it's more work and could create friction between myself and the developer who took the time to write them).
This approach almost seems absurd to me, as I do not expect developers to complete negative testing, usability testing, integration testing, etc., and I don't see why all of this should be documented in the card (we are agile and use Jira). My suggestion is to write separate scenarios (or test cases) and certainly share them, but I haven't yet seen the value in using developer-written scenarios for QA testing. BTW, we do not use test-driven development. If we did, I would definitely understand the need for this.
Thanks in advance for any feedback.
It's not uncommon for a new test lead to be pulled in all different directions. I remember when I was first a test lead, I had developers with several more years of experience all suggesting different things, and I got pulled in every direction. Each developer probably had his or her process from a previous job, with different positives and negatives attached to it.
My advice is not to react, but to be strategic about it. Whether you, or the QA community, thinks a process is right or wrong doesn't matter in the end. What you need to do is get everyone on the same page. From there the expectations are set, you can review and make adjustments in the retrospectives, and you have buy-in and agreement from everybody.
Starting from scratch, I would recommend the following:
1) Talk to your boss (CTO) and project management (PO or whatever role plays this part). Let them know there is some friction and that you have ideas for doing this better which conflict with the developers' view. Don't come in with an attitude of what's right or wrong; convey the message that you are taking the initiative to define a process that works.
2) Interview all the devs you will be working with. Learn as much as you can about how things worked for them, what they think the process should be, and what experiences they have had. This is very important, because no one likes having a process forced on them. By gathering feedback first, you have a better chance of buy-in on proposed process changes.
3) After interviewing everyone, compile the common themes, send them out to everyone, and schedule a workshop to discuss the process. Work out the details and come to a consensus.
4) Document everything, and push the agreed-upon process. It's very common for a less experienced person to get bullied around when they are new to a lead role. At this point stick to your guns, but don't get emotionally attached to it.
Hope that helps on that part.
On the test case part: I agree Jira is a horrible tool for that. I personally favor the BDD approach. It's very similar to TDD, but it adds a DSL (domain-specific language) that makes the scenarios clearer; Gherkin is commonly used for it. I think it's best to use a separate tool to manage test cases. In my shop, we use TestRail for that. Behavior-level descriptions we tend to automate, while the negative test cases we automate less, mainly because automated tests are expensive to maintain and we're mostly concerned about integration risk. But we'll use TestRail to record the additional negative behavior expectations for testing, and we treat those test cases as the source of truth when we're talking about how the software should behave.
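To make the split concrete, here's a sketch of what I mean (the feature and step names are just illustrative, not from any real project): the first scenario is the behavior-level description we'd automate, and the second is the kind of negative case we'd keep in TestRail as a manual check.

```gherkin
Feature: Password reset

  # Behavior-level scenario: a candidate for automation
  Scenario: Registered user requests a password reset
    Given a registered user with email "user@example.com"
    When they request a password reset
    Then a reset link is emailed to "user@example.com"

  # Negative scenario: tracked in the test management tool,
  # typically executed manually rather than automated
  Scenario: Reset request for an unknown email
    Given no account exists for "nobody@example.com"
    When a password reset is requested for "nobody@example.com"
    Then no email is sent
    And a generic confirmation message is shown
```

Written this way, developers and QA can review the same scenarios, while QA still owns the decision of which ones get automated and which stay as documented manual checks.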