I was just hired by a company to head their QA department. I've been a QA specialist for 12 years, so I know a bit about planning; however, I have never been a QA manager before. What I have before me is an application of monstrous proportions: they have over 60,000 clients with customized configurations. How do I keep track of the versions and clients? Any ideas will be much appreciated.
What does "customized configurations" mean in this context? Does that mean your application has 60,000 unique implementations? Or that your 60,000 customers may or may not have customized a small portion? Or something else.
Many shops test the "standard implementation" intensely, test the configurable elements separately, and sometimes use "representative" client customizations as part of their test suite.
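For example, here is a very rough sketch of how a "representative configurations" suite could be wired up with pytest. The config file names and the myproduct API are invented for illustration, standing in for however your product actually loads a client configuration:

    # Sketch only: the config files and the myproduct API are hypothetical.
    import pytest

    # A handful of representative client configurations, chosen to cover
    # the options that matter most (not all 60,000 clients).
    REPRESENTATIVE_CONFIGS = [
        "configs/standard.json",          # out-of-the-box defaults
        "configs/large_enterprise.json",
        "configs/minimal_features.json",
    ]

    @pytest.fixture(params=REPRESENTATIVE_CONFIGS)
    def app(request):
        """Start the application under one representative configuration."""
        from myproduct import load_config, start_app  # hypothetical API
        application = start_app(load_config(request.param))
        yield application
        application.shutdown()

    def test_core_workflow(app):
        # The same core test runs once per representative configuration.
        assert app.open_main_window().is_visible()

Each test then runs once per representative configuration, so the standard suite doubles as coverage of the customizations you have chosen to support.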
You will want to sit down with the product architects, client support, marketing, and other stakeholders to understand how they deliver, support, and market these "customized configurations", and to understand the expectations that are either stated contractually or implied. Then you will have a better idea of how to effectively track and test them.
Thank you, Joe. BTW, I enjoy your blog.
I have only had an initial meeting with the Product Manager and Product Architect. Both tell me that every client configures the product in a different way and that there are thousands of possibilities. They also have six or seven different ways of doing the same thing (like closing a window, for example). Is there a standard for this? Where does one draw the line? At least they are aware that we can't possibly test everything in the application. They also don't have any bug-tracking tools. They do have implementation specialists who work out of the office and a handful of automated scripts. So, I was thinking of breaking up the application into its major components and then focusing on one area at a time (building test procedures). I have no idea how they got along without a QA department.
Sounds like a very interesting challenge. I would suggest breaking it down into parts:
1) Work out which parts are not configurable. Of these, decide which are the most important (you will have to decide within your place of work what "important" means), and focus on those parts.
2) If there are several ways of doing things from part 1, try to find out which ones are most used by the client base, and support (test) those ways. Make it clear what you are not testing.
The point about every client configuring the product in a different way is interesting. If all 60,000 customers really do configure the system differently, and there is no single clear way of doing things, you can only test the functionality that allows the configuration, rather than every configured end-to-end path. I.e., test that flag XYZ can be changed between 1 and 100, and check that when it is changed, the expected outcome has happened.
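To make that concrete, here is a rough pytest sketch of that kind of check. Note that set_flag, get_effective_timeout, and the flag name XYZ are made-up stand-ins for whatever configuration API and observable effect your product actually has:

    # Sketch only: set_flag, get_effective_timeout, and flag "XYZ" are
    # hypothetical stand-ins for the product's real configuration API.
    import pytest
    from myproduct import set_flag, get_effective_timeout  # hypothetical

    @pytest.mark.parametrize("value", [1, 50, 100])    # in-range values
    def test_flag_xyz_accepts_valid_values(value):
        set_flag("XYZ", value)
        # Check the expected outcome, not just that the write succeeded.
        assert get_effective_timeout() == value

    @pytest.mark.parametrize("value", [0, 101, -5])    # out-of-range values
    def test_flag_xyz_rejects_invalid_values(value):
        with pytest.raises(ValueError):
            set_flag("XYZ", value)

The idea is that each configurable element gets its own focused test of the allowed range and its effect, independent of any one client's full configuration.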
This still leaves a large gap around how to test the overall process (whatever that is for your product). You will need to come up with a way of doing that, and, as Joe says, find out what the contractual position is, which should help guide you.
Sounds fun. Good luck!
It would be interesting to see just how many of them REALLY change the app from the way it's delivered to them.
Any way to audit configurations?
I worked at a place that was very configurable, but with nowhere near that many customers. We were able to keep an audit table, so we knew how each customer had their settings. Plus, during implementations, the implementation team would guide the customers and keep track of what they wanted, so we were able to keep up with it. I agree, though, that testing each of their settings is not possible. Testing the core functionality and the basic configurations is what I see as possible. Congrats on the new position!
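For what it's worth, here is a bare-bones sketch of that kind of audit table, done in SQLite via Python; the table layout and all names are invented for illustration:

    # Minimal sketch of a configuration audit table; all names are made up.
    import sqlite3

    conn = sqlite3.connect("config_audit.db")
    conn.execute("""
        CREATE TABLE IF NOT EXISTS config_audit (
            client_id  TEXT NOT NULL,
            setting    TEXT NOT NULL,
            value      TEXT NOT NULL,
            changed_by TEXT,
            changed_at TEXT DEFAULT CURRENT_TIMESTAMP
        )
    """)

    def record_setting(client_id, setting, value, changed_by):
        """Append one row whenever a specialist changes a client setting."""
        conn.execute(
            "INSERT INTO config_audit (client_id, setting, value, changed_by) "
            "VALUES (?, ?, ?, ?)",
            (client_id, setting, str(value), changed_by),
        )
        conn.commit()

    # With the history in one place you can answer "how is this customer
    # configured?" and pick representative configurations to test.
    record_setting("client-0042", "close_window_mode", "toolbar_button", "jane")
    print(conn.execute(
        "SELECT setting, value FROM config_audit WHERE client_id = ?",
        ("client-0042",),
    ).fetchall())

Once you know which settings actually vary across the client base, picking the "representative configurations" mentioned earlier becomes much easier to justify.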