I've just started a job as a Test Analyst. I was recruited internally, and I have no previous testing experience, although I do have some knowledge of testing theory. I'm finding the learning curve quite steep.
I use QC in my job to write and run manual tests. Looking at the posts in this forum, people mention a lot of features that don't seem to exist in the QC I'm using, so either those features are disabled or we're on an old version. For example, I can't import from Excel, but I can export.
1. All the testing we do is manual. Looking at the existing test scripts, there are often 10 or so that do very similar things but under different conditions or with different input data. One thing I noticed is that each test may contain a step where a dialog box opens and the expected result reads something like:
"X Dialog box opens, check that fields for Name, Address, Date of birth, [...] are shown and buttons "OK", "Cancel" are shown."
It looks like a thorough test step to me. But the exact same step appears in all of the other 9 tests. Surely it must be bad practice to have the tester check the same part of the GUI over and over, especially when a) the steps taken to reach that point are identical in each test, and b) there's no reason to expect the GUI to change from one test to the next. As my job was to review and update these scripts, I deleted anything that asked the tester to look for buttons, etc., so that the focus was on functionality (e.g. does value X appear when we do A, B and C?). I did include one extra test to check that the GUI matched the design, though.
Is my thinking correct? Is there a better way I could do this? I think ideally separate tests would be run for GUI but generally in my department it seems to be combined.
2. QC has a "call to test" feature. I've been told that its use is "discouraged" and that all steps should instead be included in a single test. Apparently the department has had problems in the past when call to test was used.
My experience when writing scripts is that a single change can affect ~80 scripts. I once spent a day and a half updating every single one of these scripts to match a change, and part way through realised that by using call to test I could save myself time in future: I would only have to change 1 script instead of 80!
I think the problem with call to test arises when multiple people call the same test and one of them edits the called test to suit their own needs, breaking everyone else's. I worked around this by making it clear that the called test may only be called by my tests, and by keeping it in the same folder as them.
I'd appreciate feedback on my thoughts on writing tests so far!
My initial thought is that your question about repeated or reusable test cases isn't really specific to QC; it's a general question about testing practice.
Writing reusable test cases is just the beginning; maintaining them when updates come along is the toughest part. And yes, there has to be some process governing who updates the reusable components, to prevent unwanted changes to them.
On the Excel question (importing test cases from Excel into QC): I'm not sure whether you have installed the Excel add-in from the QC Add-ins page.
Failing that, another option is to use OTA (QC's Open Test Architecture API) to write a custom integration, which can be very flexible and tailored to your needs.
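To give a feel for the OTA route, here is a minimal sketch in Python. To be clear about assumptions: the `rows_to_steps` helper, its column names ("Description", "Expected Result"), and the sample data are all illustrative inventions, not QC's actual schema; the upload itself would go through QC's COM objects via pywin32 on a Windows machine with the QC client installed, and that part is shown only as hedged comments since the exact factory calls depend on your QC version.

```python
# Sketch only: shaping Excel-exported rows into QC-style design steps.
# ASSUMPTIONS (not from the thread): the spreadsheet has "Description"
# and "Expected Result" columns; these names are hypothetical.

def rows_to_steps(rows):
    """Convert spreadsheet rows (dicts) into step dicts with a generated name."""
    steps = []
    for i, row in enumerate(rows, start=1):
        steps.append({
            "name": f"Step {i}",
            "description": row.get("Description", "").strip(),
            "expected": row.get("Expected Result", "").strip(),
        })
    return steps

if __name__ == "__main__":
    sample = [
        {"Description": "Open the X dialog", "Expected Result": "Dialog opens"},
        {"Description": "Click OK", "Expected Result": "Record is saved"},
    ]
    for step in rows_to_steps(sample):
        print(step)

    # Pushing these into QC would then use OTA over COM, roughly:
    #   import win32com.client
    #   td = win32com.client.Dispatch("TDApiOle80.TDConnection")
    #   td.InitConnectionEx("http://qc-server/qcbin")  # server URL is hypothetical
    #   td.Login("user", "password")
    #   td.Connect("DOMAIN", "PROJECT")
    #   ... then create Test and DesignStep objects via td.TestFactory ...
    # Consult the OTA API reference shipped with your QC version for the
    # exact object model before relying on any of the calls above.
```

The pure-Python helper keeps the spreadsheet parsing separate from the COM plumbing, which makes it easy to test the transformation on any machine even though the upload itself only runs where the QC client is installed.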