Challenges of testing multiple browser games simultaneously
A successful free-to-play browser-games developer has several games in the live environment. Some are feature-complete; 4 others are in development and are constantly being improved and extended. There are about 40 testers on the QA team - more will be hired, although finding a good fit is challenging - but that is not enough to test all 4 games equally.
Because of that, all testers have to work on all games in development, sometimes on 3 in one week. Additionally, sprints for some new features are sometimes 6 weeks long because the feature is quite large (primarily because the feature has a very high priority on the product manager's schedule and he simply wants it done as soon as possible).
This causes the following problems: working on 3 games in one week tends to "confuse" testers, their productivity decreases, and as a result they are not able to verify and validate all of the features in the test plans (the QA schedule slips).
Right now it works like this: every couple of days, or even every day, testers rotate and work on a different game. They go through the test plans they were given and mark each item as pass or fail. The rotation is necessary because the product manager of a different game may need, say, 15 testers for the next 2 days to test a new feature.

Of course, not all testers rotate at the same time, so some spend more time on a given game and become more familiar with it, which gives them an advantage over testers who join later to test new features (advantage = being able to test features more quickly thanks to greater familiarity, without having to "get used to" the game before testing). It also means there are no "game expert" testers, since everyone has to do everything. As more and more testers join the QA team, this will eventually become a big problem, i.e. chaos and confusion. I was presented with this problem a few days ago.
I came up with the following possible solutions: (in no particular order)
1. Make sure the sprints will never be longer than 4 weeks - no matter how important a feature is; if it's too big, it must be broken down.
2. If the product manager says that the feature is too important (the game is indeed a cash cow), then explain to him why the feature still needs to be broken down (otherwise crunch may become necessary if schedules slip (even for QA), milestones will be missed, and not all features can be tested).
3. Hire external QA teams (but that seems out of the question right now)
4. Keep a core team of testers on each game, so that they can quickly bring joining testers up to speed when the other members of that team are temporarily assigned to other projects.
5. Have a dedicated QA team for each game (difficult right now as the product managers require more or less testers depending on the situation)
6. Have a dedicated team for each game nonetheless; put testers who "have nothing to do" on a different team to help out, but never take testers off a team when they are needed on "their own" game.
7. Use burn-down charts and daily sprint-backlog trends to visualize each tester's progress as well as the overall progress, so the lead can give a good estimate of whether a goal can be achieved.
8. Speed up testing by making test plans more detailed, including every step, so that the time spent thinking is reduced.
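To make point 7 concrete, a burn-down chart boils down to comparing an ideal linear decline of remaining test-plan items against the actual daily counts. Here is a minimal sketch; the function name, item counts, and daily numbers are all illustrative assumptions, not data from any real tracker.

```python
# Hypothetical sketch of a sprint burn-down calculation (point 7).
# All names and numbers are illustrative.

def burn_down(total_items, sprint_days, completed_per_day):
    """Return (ideal, actual) remaining-work series for a sprint.

    ideal  - linear decline from total_items to 0 over sprint_days
    actual - remaining items after each recorded day so far
    """
    ideal = [total_items - total_items * d / sprint_days
             for d in range(sprint_days + 1)]
    actual, remaining = [total_items], total_items
    for done in completed_per_day:
        remaining -= done
        actual.append(remaining)
    return ideal, actual

# Example: 40 test-plan items, 10-day sprint, 5 days of uneven progress.
ideal, actual = burn_down(40, 10, [3, 5, 2, 4, 6])
print(actual)  # [40, 37, 32, 30, 26, 20]
```

Comparing `actual` against `ideal` at the same day index is what lets the lead estimate early whether the sprint goal is still reachable.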
This is a lot of information, and I hope the problem is clear. If any of you have encountered this problem before and/or have practical solutions up your sleeve, please do comment. Thank you very much!
1. In my experience, 2-week sprints are optimal. I worked at a company where, over the course of 2 years, we experimented with different sprint lengths, running each length for 2 or 3 sprints. We tried 4-week sprints, 2-week sprints, half-month sprints, and 1-week sprints, and eventually settled on 2 weeks. That pretty much worked out to be: 1st week researching, developing, automating tests, and integrating; 2nd week refactoring, bug fixing, and documenting. We also experimented with different scrum team sizes during this time; 5-8 members per team appeared to work best for us.
2. It's good to have things broken down and to release a piece at a time, even if it's hidden from the customers. That allows you to weed out the integration and performance impacts your changes may have on the greater system. It's a much easier call to roll back a partial feature than a full feature that's designed wrong while the rest of the organization is ready to sell it. That's one of the major causes of last-minute hackathon-style bug fixing (which most likely leads to more bugs, since hotfixes don't get as much code review and testing).
8. To add to this: it's a good idea to come up with high-level test plans. At the last company I worked for, I came up with high-level test plans that outlined what I expected, given the type of change. This reduced the amount of time spent writing boilerplate documents.
The table below is oversimplified, but the idea is to come up with a high-level table: break down what your software does, group the areas into more important and less important ones, and outline what types of testing, and at what level, should be done in each situation. The criterion could be as simple as how many lines of code were edited, or as complex as breaking the change down into areas like DB changes, configuration changes, large code changes, small code changes, etc.
                     Amount of change
                     Low      | High
Important            Plan B   | Plan A
Not as important     Plan C   | Plan B
Plan A: Full regression for that area, 2 hrs of ad-hoc testing, code review, and focused ad-hoc testing along the changes.
Plan B: Light regression for that area, 2 hrs of ad-hoc testing.
Plan C: 3 hrs of ad-hoc testing.
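The decision table above is easy to encode as a straight lookup. This is only a sketch: the plan names mirror the table, but the line-count threshold separating a "low" from a "high" amount of change is an invented assumption.

```python
# Hypothetical lookup implementing the test-plan matrix above.
# The keys and plan names mirror the table; the 200-line threshold
# for "high" change is an assumption for illustration only.

PLAN_MATRIX = {
    ("important", "low"): "Plan B",
    ("important", "high"): "Plan A",
    ("not as important", "low"): "Plan C",
    ("not as important", "high"): "Plan B",
}

def pick_plan(importance, lines_changed, high_threshold=200):
    """Map (area importance, size of change) to a test plan."""
    size = "high" if lines_changed >= high_threshold else "low"
    return PLAN_MATRIX[(importance, size)]

print(pick_plan("important", 500))        # Plan A
print(pick_plan("not as important", 50))  # Plan C
```

A tester can then look up the plan mechanically instead of re-deciding the depth of testing for every change.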
Last edited by dlai; 03-01-2013 at 12:00 AM.
It's a hard thing, because testing multiple games at one time is difficult - keeping each effort on the right track is the hard part. What I feel is that if a team can pull this off, they can surely take their QA to another level.