How to design the automation project
We're using Selenium 2 for Web UI automation testing. How should we design the automation project?
There's a lot to discuss here. Any specifics?
Originally Posted by haidai
If you don't know where to start, here are some topics you might want to look into first, then ask something specific to get opinions on the details.
* Framework setup - what frameworks to use to compose your tests
* Test and project structure - how to organize your folders to improve maintenance
* Test deployment setup - how to run your tests against different environments, such as a developer's machine, internal test systems, or the cloud
* Test platform setup - setting up supporting platforms such as Selenium Grid to run your tests
* Build/asset pipeline - commonly referred to as CI, but this topic can extend to Continuous Delivery or Continuous Deployment
* Reporting setup - how to get everything reported, tracked, and measured
* How to optimize - parallelism, speeding up test setup via promises and async programming, and strategies for scaling up test execution
I know I didn't start this thread, but I figured I'd jump in since I have similar questions. Mine relate to adding the tests to a CI tool like Jenkins. Our automated tests currently fall into two categories: monitoring tests that run roughly every hour, and regression tests that run every night.
First, how do I set up Jenkins to run scheduled tasks like this? I am a complete neophyte to Jenkins. Second, how do I control which test cases are executed? For our older testing framework, we use an Excel spreadsheet that contains all of the test cases, and we just flip "execution flags" to yes/no. This seems effective and very easy for anyone to configure. I'm just wondering if there is a better approach?
I won't go into too many details here, as there are lots of resources on the Internet that describe this way better than I can, but:
- build your Selenium tests to run as JUnit tests; this makes it easy to run them from Jenkins using Ant or Maven
- create a TestSuite (see TestSuite in the JUnit API) containing the tests you want to execute and excluding the ones you don't want to run
Running your tests as JUnit tests also makes it easy for Jenkins to pick up the results and generate reports.
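To make the second bullet concrete, here's a minimal JUnit 4 sketch. LoginTests and SearchTests are hypothetical stand-ins for your own Selenium test classes:

```java
import org.junit.runner.RunWith;
import org.junit.runners.Suite;

// Hypothetical Selenium test classes -- swap in your own.
// Anything not listed in @SuiteClasses simply does not run.
@RunWith(Suite.class)
@Suite.SuiteClasses({
    LoginTests.class,
    SearchTests.class
})
public class NightlyRegressionSuite {
    // Intentionally empty: the annotations drive which tests execute.
}
```

Point your Ant or Maven test target at this suite class and only the listed tests will run.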
For whatever reason, after about an hour of searching, I could not find much actually. That is why I came here. If you know of any such resources to help out with this, let me know.
Originally Posted by basd
I am actually using C# and the VS test runner for my tests, so I have all that squared away. Your example helps to construct an object known as a test suite, which is nice for programmatic organization. But what if I had a manual tester who wanted to execute only 15 tests out of 500? How can they do that without opening the code and modifying it? I was thinking of some kind of spreadsheet that contains all of the test cases so they can be turned on/off, external to the framework.
Jenkins is a bit of overkill for nightly builds; a simple build script and cron job will accomplish that just fine. What Jenkins and other build managers are good at is managing build dependencies: you can organize your tests as downstream jobs of your build-and-deploy job, giving you a true CI setup. I would encourage you to be more ambitious in your setup.
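For what it's worth, the cron route is just a couple of crontab lines; the script names and schedules below are made up, so adjust to taste. (And if you do stay in Jenkins, its "Build periodically" trigger accepts the same cron-style syntax.)

```
# Hypothetical crontab entries -- paths and times are examples only.
0 * * * *  /opt/tests/run-monitoring-tests.sh    # monitoring suite, hourly
0 2 * * *  /opt/tests/run-nightly-regression.sh  # full regression, 2 AM nightly
```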
Originally Posted by smartrussian24
As for controlling which tests get executed, I prefer to use tagging. Most unit test frameworks allow tags, and you can then run tests filtered on those tags. For example, I have tags for test type (test.smoke, test.lightRegression, test.fullRegression), tags for the systems exercised (sys.AuthService, sys.CommunicationService, sys.WebFrontEnd, sys.Redis, sys.RabbitMQ, etc.), and tags by feature (feat.Login, feat.Search, feat.ManageUsers, feat.Admin, etc.)
In my CI jobs, when a module builds, it publishes a set of tags in its artifacts. When a downstream test job runs, it executes the set of tests associated with those tags. For example, say the Authentication module gets built. As part of the build artifacts, a tags.txt gets generated containing "sys.AuthService". When the downstream RunTests job gets called, it sees it was delivered a "sys.AuthService" tag and goes and executes those tests. If they fail, it fails the upstream AuthService job, and the developer who last checked in code on that branch will see his name on the Wall of Shame (that's what we call our CI build monitor) and be in charge of fixing the issue.
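The poster doesn't say which test framework backs these tags, but most have an equivalent. As one illustration, JUnit 4 models tags as "categories" (marker interfaces); the category and test names below are made up:

```java
import org.junit.Test;
import org.junit.experimental.categories.Category;

// Marker interfaces acting as tags (hypothetical names).
interface Smoke {}
interface FullRegression {}

public class LoginTests {

    @Category(Smoke.class)
    @Test
    public void userCanLogIn() {
        // ... Selenium steps ...
    }

    @Category(FullRegression.class)
    @Test
    public void userIsLockedOutAfterThreeFailedAttempts() {
        // ... Selenium steps ...
    }
}
```

With Maven Surefire you can then filter at run time, e.g. mvn test -Dgroups=your.package.Smoke (Surefire wants the fully qualified name of the category interface), which is also how a downstream job could translate a published tags.txt into a concrete test run. For the C# setup mentioned earlier, MSTest's [TestCategory] attribute combined with the runner's /TestCaseFilter option gives the same on/off control without touching code.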