  1. #1
    Junior Member
    Join Date
    Mar 2002
    Location
    Bangalore, India
    Posts
    22

    Credibility of QA testers (Subset of QAGIRLS topic)


    This can be considered a subset of the topic posted by QAGirl with the subject "Finding Bugs versus Proving Lack Of Bugs".
    There were many great responses, and I agree with them. Most of them spoke about meeting the business requirements of a project. Now, consider this: meeting the business requirements of a project is only a part of QA activity. All we do here is trace each test case / test script to a requirement and ensure that each requirement is covered.
    But shouldn't we all agree that a tester's job is not as simple and blind as this? By BLIND I mean that tracing a test case to a requirement is fairly basic work. Isn't a tester's major expertise to foolproof a system by finding a greater number of unexpected/hidden defects? If testing is merely covering the requirements, I do not think that makes us good testers, and we as qualified testers add no value to the team. This can be backed up by noting that merely covering the requirements can also be done by a developer, who can use the requirements document as a checklist.
    I would like to know what the industry experts think about this.

    ------------------

  2. #2
    SQA Knight
    Join Date
    Aug 2000
    Location
    Elanora Heights, NSW, Australia
    Posts
    3,271

    Re: Credibility of QA testers (Subset of QAGIRLS topic)

    sanalmenon

    Depends..

    If it is a large project you are referring to, a tester's job may indeed be as simple as tracing a test case to a requirement. There will be several different testers all working in various parts of the overall verification process. Where I work I arrange the Systems Integration Testing, which will primarily ensure that a product is robust. Others will do the User Acceptance Testing to ensure the user requirements are being met. Others will do the Stress and Volume testing as well (we have a specialist group for that). And sometimes we call in security experts to do security testing.

    Where the term QA group and/or Test group (if it is separate) is used, I guess a test group's responsibility is in fact to:

    "fool proof a system by finding more number of unexpected/hidden defects?"

    I generally find that informal (bash) testing tends to find more bugs than formal testing. Programmers are generally ok with the identified pieces of functionality (not always), but functional areas that only become apparent later, or hidden (or private, in OO terms) functionality, may require more thorough testing: testing based on an understanding of the architecture, where that understanding may not have existed when you were planning the testing.

    An example: a project I worked on had 120 identified test cases, and some of those test cases had up to 10 different scenarios based on the input data. It took about 4 weeks (including weekends) to test the product employing 8 testers. About 80% of the tests conducted were negative tests. This level of testing is very important for a Telco meeting Telecommunications Regulatory requirements. The company was looking at a 10 million dollar fine if our software was not implemented on time and a 1 million dollar a day fine after that date. They wanted an iron-clad guarantee that the software would work and paid an absolute fortune for this level of fine-tooth-comb testing.

    Quite surprisingly, few bugs were discovered given the level of scrutiny the product had to endure.

    ------------------
    Robert Tehve
    rtehve@bigpond.com

    [This message has been edited by rtehve (edited 05-10-2002).]

  3. #3
    Senior Member
    Join Date
    Dec 1999
    Location
    Chicago,Illinois,USA
    Posts
    2,537

    Re: Credibility of QA testers (Subset of QAGIRLS topic)

    <BLOCKQUOTE><font size="1" face="Verdana, Arial, Helvetica">quote:</font><HR>Originally posted by sanalmenon:
    Now, consider this, meeting the buisness requirements of a project is only a part of QA activity. All we do here is trace each test case / test script to a requirement and ensure that each requirement is covered.

    But shouldn't we all agree to the fact that a testers job is not as simple and blind as this? By BLIND what I mean is that tracing a test case to a requirement is a more like a very basic work.
    <HR></BLOCKQUOTE>

    It is very basic work in a certain sense, just as one might say that test casing is, in its essence, very basic work. Also remember, however, that part of a good quality effort is not just tracing a given test case (or set of test cases) to a given requirement, but also making sure the requirements themselves are valid, and by "valid" I mean the slew of terms we apply to requirements: complete, quantifiable, consistent, testable, etc.
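
    As a trivial illustration of one slice of that validity checking (a sketch only; the word list and requirement strings below are invented for the example and do not come from any real tool), flagging requirement statements that use unquantified language is one cheap way to start the "is this testable?" conversation:

        # Hypothetical sketch: flag requirement statements containing unquantified words,
        # as a first pass at the "complete, quantifiable, consistent, testable" check.
        VAGUE = {"fast", "quickly", "user-friendly", "robust", "adequate", "easy", "efficient"}

        def flag_untestable(requirements):
            return [r for r in requirements
                    if any(word.strip(".,").lower() in VAGUE for word in r.split())]

        reqs = ["The search shall return results quickly.",
                "The edit box shall accept integers from 1 to 100."]
        print(flag_untestable(reqs))   # only the first statement is flagged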

    <BLOCKQUOTE><font size="1" face="Verdana, Arial, Helvetica">quote:</font><HR>Isn't a testers major expertise to fool proof a system by finding more number of unexpected/hidden defects?<HR></BLOCKQUOTE>

    Realize that finding hidden defects (i.e., via a process like hidden fault analysis, which I am an advocate of) is not a way to "foolproof" a system. Rather it is sometimes a way to show that complete "foolproofing" of a system is not possible in all cases. What a lot of hidden fault analysis methods show is that some faults may always be hidden.

    There is also a lot more, however, that might fall under the rubric of the "major expertise" of a tester. For example, an effective tester does not just generate test cases to requirements and match them up via a traceability scheme. An effective tester also knows how many test cases to apply to a given requirement in order to state that the requirement has, in fact, been tested adequately. An effective tester knows how to partition test cases effectively, such that one is not saturating the effort with useless (i.e., redundant) test cases but also is not shortchanging the effort by missing critical variances from the requirements that might otherwise go untested.
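
    As a rough illustration of that kind of partitioning (a sketch only; the range and names are invented for the example), picking one representative per equivalence class plus the boundary values keeps the test set small without missing the critical variances:

        # Hypothetical sketch: partitioning inputs for a requirement like
        # "the field accepts an integer from 1 to 100".
        LOW, HIGH = 1, 100

        def equivalence_classes(low, high):
            """One representative per class, plus the boundary values."""
            return {
                "below_range": [low - 1],            # invalid: too small
                "lower_bound": [low, low + 1],       # valid: at and just above the minimum
                "typical":     [(low + high) // 2],  # valid: a mid-range representative
                "upper_bound": [high - 1, high],     # valid: just below and at the maximum
                "above_range": [high + 1],           # invalid: too large
            }

        for name, values in equivalence_classes(LOW, HIGH).items():
            print(name, values)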

    That all really speaks to test case estimation methods. Those are usually a distinct talent of an effective tester and speak directly to the scheduling done on a project. But then consider something like error seeding or fault insertion. That traditionally is a development activity and yet, fundamentally, it is an act of testing. Thus it would fall under the idea of being part of the "expertise" of a tester. As far as finding "unexpected defects", as you mention, a lot of that would go towards the practice of test forecasting: defect density prediction or even test intensity prediction. (Personally I prefer a combination of the two that is measured under the combined term "test coverage", which takes a wider view of just how much of the application, or the requirements, you think you are testing.)
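
    To make the error-seeding idea concrete (a minimal sketch with invented numbers; this is just the classic seeded-fault ratio estimate, and it assumes seeded faults are found at roughly the same rate as real ones):

        # Hypothetical sketch: estimating the total number of real faults from seeded faults.
        def estimate_real_faults(seeded, seeded_found, real_found):
            if seeded_found == 0:
                return None  # not enough data for an estimate yet
            return real_found * seeded / seeded_found

        # Example: 20 faults seeded, 15 of them found, alongside 30 real faults found
        # -> roughly 40 real faults estimated in total, so about 10 still hidden.
        print(estimate_real_faults(seeded=20, seeded_found=15, real_found=30))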

    <BLOCKQUOTE><font size="1" face="Verdana, Arial, Helvetica">quote:</font><HR>If testing is merely covering the requirements, I do not think that makes us good testers and we as qualified testers, add no value to the team. This statement can be backed up saying that merely covering the requirement can also be done by a developer, who can use the requirements document as a check list.<HR></BLOCKQUOTE>

    By the same logic, though, one could teach a developer how to do hidden fault analysis, as just one example, and thus you end up back in the same boat - where the tester does not add value, because, after all, the developer could do the work. (This is actually more true than not with this example because hidden fault analysis is best done at the unit test stage or even prior to that. And, in actuality, one could train developers to be critical of the written requirements themselves and thus do just what a tester would do during a requirements gathering and elicitation phase.) Keep in mind, of course, that "just" testing the requirements does contribute value in and of itself. I agree though that requirements coverage, particularly of the reactive sort, is only one type of coverage and you can be an effective tester, who adds value, by covering those requirements. However there are degrees of effectiveness. And I would say that using other forms of coverage, in addition to requirements coverage, makes one an even more effective tester.

    ------------------

  4. #4
    Senior Member
    Join Date
    Aug 2001
    Location
    Atlanta, GA
    Posts
    2,693

    Re: Credibility of QA testers (Subset of QAGIRLS topic)

    To clarify my own points...

    In the context of the thread referenced, I was speaking to validating that the system does what is expected (meets its requirements) as a first priority, over implementing some of the 'formal methods' described by the author of that particular article.

    I agree with much of what everyone here has posted - In no way does 'testing to requirements' imply creating a traceability or coverage matrix and considering things 'done'. All of the various methods Jeff discussed, and that you raise, sanalmenon, are part of what I mean.

    In the other thread, I was speaking in a very general or high level sense. Validating the requirements of an application can and should be as complex as it needs to be in order to validate that the application meets the needs of the end user, and that my work meets the needs of the project. So when you stated

    "Now, consider this, meeting the business requirements of a project is only a part of QA activity"

    I would agree - and in my mind, it's a small part that is done at an organizational level just to ensure things are covered - validating or analyzing how well they are covered is key.

    You then stated "All we do here is trace each test case / test script to a requirement and ensure that each requirement is covered"

    Did you mean we, as in at your company, or "WE" as in our profession? If you mean the latter I would wholeheartedly disagree, because to me, validating that the application meets the business requirements involves much more than that.

    Much of this depends on the context of the statement that testing is merely 'covering the requirements'. I'd argue that regardless of the level of detail of tests, and how much validation is done at a code, system, and architecture level, in the end, at a base level, we are still validating the business requirements. We are ensuring that a stable, quality system is released and that it meets the needs of the customers.

    In the other thread, I was attempting to state that while I can see formal methods such as those in the referenced article as useful, I would disagree with adopting them if they require the additional project time mentioned in that particular situation, on the idea that the theorems and formulae are simply going to prove that there are NOT defects, as opposed to testing FOR those defects.

    Hopefully, that clarifies my own thoughts a bit.

    ------------------
    ** To affect the Quality of the day, that is the highest of arts ** H.D. Thoreau

    ~ Annemarie Martin ~
    annemarie[dot]martin2[at]verizon[dot]net
    Annemarie Martin
    Secretary
    Association for Software Testing

  5. #5
    Senior Member
    Join Date
    Dec 1999
    Location
    Chicago,Illinois,USA
    Posts
    2,537

    Re: Credibility of QA testers (Subset of QAGIRLS topic)

    <BLOCKQUOTE><font size="1" face="Verdana, Arial, Helvetica">quote:</font><HR>Originally posted by QAGirl:
    I'd argue that regardless of the level of detail of tests, and how much validation is done at a code, system, and architecture level, in the end, at a base level, we are still validating the business requirements. We are ensuring that a stable, quality system is released and that it meets the needs of the customers.<HR></BLOCKQUOTE>

    I would agree with this statement wholeheartedly. I think what often gets lost is that QA is acting as a sort of end-user advocate. The business has certain requirements (stated or implied) that it feels are what those end-users want and/or need. As such, the business requirements are ostensible statements of what the business feels it should deliver to the end-users. (And hopefully that information was gathered with end-user input and feedback). As such, a critical component of the QA effort is to act as the surrogate end-user and validate that what the business thinks the customer should get is, in fact, what the customer is going to get!

    Going down the old granularity chute, the business will hopefully further refine its business requirements by stating how they are going to provide the required wants and needs of the user. We already have the what (coupled, hopefully, to a why) - now we need the how. Those will manifest as another type of requirements, usually system requirements and functional specifications. We also have the how in terms of "how will the end-user actually use the functionality we are saying they wanted or needed". In this case your requirements take the form of use cases or other means of user scenarios. Even when requirements are completely absent in some sort of formal form, there is always the implied requirement that the system has to work to some degree and has to deliver functionality relative to its purpose. That is where the concept of what I call manifestly-apparent defects comes in. If that kind of stuff does not work, you have a defect - even though you are lacking formally stated requirements. So there is a lot to the notion of requirements beyond just tracing test cases to them.

    This all speaks to a form of requirements coverage, even when the requirements are strictly informal. So I agree with QAGirl: there is a lot more to requirements verification and validation than just the notion of test case traceability. Going beyond that, there are different ways to be an effective tester (or where a tester can apply their "expertise") within that context of requirements verification and validation. Some of those will be more or less useful depending upon the project and product constraints. So to consider just one "formal" method: hidden fault analysis, while always valuable as a concept, is not always valuable in practice, such as when (a) people do not know how to do it, (b) there is not enough valid information about the underlying product source to perform the analysis, or (c) time-to-market needs demand less rigorous analysis of that sort.

    However - I want to clear up a possible misconception, if it exists. (I am not sure that it does, but I just want to be sure.) It needs to be realized that hidden fault analysis (again, sticking with that "formal" method) is actually just another way of testing for defects. It is not "proving" that there are not defects. Rather it is saying that there are defects, estimating that some of those will be hidden (in a strictly defined sense), and then using those estimations as a determinative base for how much more testing should be applied to given areas of an application or what areas of an application should have more test cases applied to them. It is a statistical forecasting method for faults (defects) within a given application that is directly applicable to test case estimation and test scheduling. So, with all of that, the idea of requirements coverage itself and the idea of techniques within which requirements coverage is done are two separate subjects, both of which speak to a different aspect of the "expertise" of the tester. (And, again, I am just using one example since that is what the initial post used.)
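
    As a toy illustration of using such estimates as a determinative base (a sketch only; the module names and counts are invented, and real forecasting models are considerably more involved than this), extra test cases could be allocated in proportion to the faults each area is still estimated to hide:

        # Hypothetical sketch: allocate an additional test-case budget across modules
        # in proportion to the faults each module is estimated to still hide.
        estimated_hidden = {"billing": 12, "provisioning": 5, "reporting": 3}  # invented numbers
        extra_budget = 40  # additional test cases we have time to write

        total = sum(estimated_hidden.values())
        allocation = {m: round(extra_budget * n / total) for m, n in estimated_hidden.items()}
        print(allocation)  # {'billing': 24, 'provisioning': 10, 'reporting': 6}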

    ------------------

  6. #6
    Senior Member
    Join Date
    Aug 2001
    Location
    Atlanta, GA
    Posts
    2,693

    Re: Credibility of QA testers (Subset of QAGIRLS topic)

    <BLOCKQUOTE><font size="1" face="Verdana, Arial, Helvetica">quote:</font><HR>Originally posted by JeffNyman:
    However - I want to clear up a possible misconception, if it exists. (I am not sure that it does, but I just want to be sure.) It needs to be realized that hidden fault analysis (again, sticking with that "formal" method) is actually just another way of testing for defects. It is not "proving" that there are not defects. Rather it is saying that there are defects, estimating that some of those will be hidden (in a strictly defined sense), and then using those estimations as a determinative base for how much more testing should be applied to given areas of an application or what areas of an application should have more test cases applied to them. It is a statistical forecasting method for faults (defects) within a given application that is directly applicable to test case estimation and test scheduling. So, with all of that, the idea of requirements coverage itself and the idea of techniques within which requirements coverage is done are two separate subjects, both of which speak to a different aspect of the "expertise" of the tester. (And, again, I am just using one example since that is what the initial post used.)<HR></BLOCKQUOTE>

    Jeff, I'm not sure there is or isn't, but the article I'm referencing back to is actually at the top of this thread, and a comment the author made therein stating:

    "There is an often quoted remark that "Program testing can be used to show the presence of bugs, but never to show their absence!" This seems to imply that something else - proving - can show the absence of bugs.

    That is more what I was responding to than the information you've posted in your Hidden Fault Analysis thread and others - the methods you describe are, I believe, MUCH easier/simpler than his, in that they are flexible in what is necessary to make that proof (you explained this to us in several places, discussing the use of requirements or any of a number of resources to determine numbers). He is speaking of a very formal system, in my opinion, one that advocates creating even system-level requirements in a mathematical sense. I can see the need or benefit, but many of my comments were in the context of application development that is pressured by time-to-market and other 'outside of technology' concerns, and the fact that the kind of timeline necessary for the project described in the article above is unrealistic for most projects.

    ------------------
    ** To affect the Quality of the day, that is the highest of arts ** H.D. Thoreau

    ~ Annemarie Martin ~
    annemarie[dot]martin2[at]verizon[dot]net
    Annemarie Martin
    Secretary
    Association for Software Testing

  7. #7
    Senior Member
    Join Date
    Dec 1999
    Location
    Chicago,Illinois,USA
    Posts
    2,537

    Re: Credibility of QA testers (Subset of QAGIRLS topic)

    Ah ha! I see. I tend to agree with you. The author of the article is describing Z, which I have used before and have never been too thrilled with. I much prefer Tom Gilb's Planguage to something like Z, which can become very problematic even in large-scale environments where time-to-market may not be as pressing. The basis of his paper is good, I think, but he does not really show a practical application and, in fact, glosses over a few points in terms of how Z (like something such as LOTOS or VDM) can become cumbersome.

    An excellent book that I highly recommend for those who want a viewpoint into how to practically apply formal specification methods is Automating Specification-Based Software Testing by Robert M. Poston. It is not about automation tools, per se. What it talks about is the use of more formal methods of specification and how those can be utilized even in environments that are driven by time-to-market, particularly because automating specification-based testing (which is requirements coverage) can be of the greatest help in exactly those environments: where there are external pressures and time-to-market priorities. I do not believe Anthony Hall's article truly conveys how helpful that is, although I will grant that his focus was a little narrower. However, I agree with you: he takes a much harder stance than I do in terms of the viability of approaches that are "mathematical". He would be what I might provisionally (and with tongue-firmly-in-cheek) refer to as a mathematical extremist.

    ------------------

  8. #8
    Junior Member
    Join Date
    Mar 2002
    Location
    Bangalore, India
    Posts
    22

    Re: Credibility of QA testers (Subset of QAGIRLS topic)

    <BLOCKQUOTE><font size="1" face="Verdana, Arial, Helvetica">quote:</font><HR>Originally posted by QAGirl:
    You then stated "All we do here is trace each test case / test script to a requirement and ensure that each requirement is covered"

    Did you mean we, as in at your company, or "WE" as in our profession? If you mean the latter I would wholeheartedly disagree, because to me, validating that the application meets the business requirements involves much more than that.
    <HR></BLOCKQUOTE>
    A little clarification on what I meant by this statement. Yes, by WE I did mean our profession. I don't know whether you, QAGirl, and Jeff have come across similar situations. I have verified many business requirements documents and found that there are many parts of the requirements which are not explicitly mentioned in the document. A very simple example: the document does not specify that an edit box in an application must be validated for the maximum integer value. All it says is that "The edit box takes an input of a numeric integer". It is the tester who decides whether to pass it through equivalence class partitioning, BVA, negative/positive tests, etc. Of course, I know that this is a very basic example. What I am trying to convey is that a business requirements document can be looked at as a checklist, and there are certain tests which are implicitly performed. Now, deciding what tests to perform and how to perform them forms one of the many areas of expertise of the tester.
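
    For that edit box example, here is a sketch of the kind of values a tester might derive on their own (assuming, purely for illustration, a 32-bit signed integer limit; the requirement itself says none of this):

        # Hypothetical sketch: boundary and negative values for an edit box whose only
        # stated requirement is "takes an input of a numeric integer".
        INT_MIN, INT_MAX = -2**31, 2**31 - 1   # assumed 32-bit signed limits

        boundary_values = [INT_MIN - 1, INT_MIN, INT_MIN + 1, -1, 0, 1,
                           INT_MAX - 1, INT_MAX, INT_MAX + 1]
        negative_values = ["", " ", "abc", "1.5", "1e3", "++1", None]  # non-integer inputs

        for value in boundary_values + negative_values:
            print(repr(value))
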
    I have not seen or heard many testers talking about forecasting a defect in the production environment. And as a fact, not all tests are performed on a test environment which is a 100% replica of the production environment. The question here is: have we thought about deriving a methodology to predict a failure of a project when deployed on site, instead of waiting for a report from the client? This is not a simple task and involves (I suppose) a lot of statistical analysis. I haven't done it yet, or rather, I am unable to do it with my limited knowledge, though I have given it serious thought. What bothers me is that as the technology advances over time (which is very rapid), we testers must be geared up to face this challenge. Why can't we treat these software applications as critical functions, like testing an aircraft engine, where a slight mistake from the testing team would cause a major loss?
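
    One very rough way to start on that kind of prediction (a sketch only, not a validated methodology; the weekly counts are invented, and a simple exponential defect-discovery curve is assumed purely because it is easy to fit):

        import math

        # Hypothetical sketch: fit mu(t) = a * (1 - exp(-b * t)) to cumulative weekly
        # defect counts, then read off how many defects the model still expects to find.
        weeks = [1, 2, 3, 4, 5, 6]
        found = [12, 21, 27, 31, 33, 34]   # cumulative defects found by the end of each week

        def sse(a, b):
            return sum((a * (1 - math.exp(-b * t)) - y) ** 2 for t, y in zip(weeks, found))

        # Crude grid search instead of a proper fitting routine, to keep this self-contained.
        best_sse, a_hat, b_hat = min(((sse(a, b / 100.0), a, b / 100.0)
                                      for a in range(30, 80) for b in range(1, 200)),
                                     key=lambda x: x[0])
        print("estimated total defects:", a_hat)
        print("estimated still hidden :", a_hat - found[-1])
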
    ------------------


    [This message has been edited by sanalmenon (edited 05-11-2002).]

  9. #9
    Junior Member
    Join Date
    Mar 2002
    Location
    Bangalore, India
    Posts
    22

    Re: Credibility of QA testers (Subset of QAGIRLS topic)

    Thank you Jeff/QAGirl/Robert. The responses definitely clarify my views. I also discovered certain interesting topics within the responses, which I would love to get more clarity on and try implementing on the projects that I work on.
    <BLOCKQUOTE><font size="1" face="Verdana, Arial, Helvetica">quote:</font><HR>Originally posted by JeffNYMan:
    It is very basic work in a certain sense, just as one might say that test casing is, in its essential essence, very basic work. Also remember, however, that part of a good quality effort is not just tracing a given test case (or set of test cases) to a given requirement, but also making sure the requirements themselves are valid, and by "valid" I mean the slew of terms we apply to requirements: complete, quantifiable, consistent, testable, etc.
    <HR></BLOCKQUOTE>
    How do we effectively validate a requirement to ensure that it is testable? Is there any base guide?

    <BLOCKQUOTE><font size="1" face="Verdana, Arial, Helvetica">quote:</font><HR> Originally posted by JeffNYMan:
    Realize that finding hidden defects (i.e., via a process like hidden fault analysis, which I am an advocate of) is not a way to "foolproof" a system. Rather it is sometimes a way to show that complete "foolproofing" of a system is not possible in all cases. What a lot of hidden fault analysis methods show is that some faults may always be hidden.
    <HR></BLOCKQUOTE>
    How do I do a Hidden Fault Analysis? What are the steps involved? Is there any book or web site to learn more about it?
    Per your thought, I plan to take this session to the developers and testers to ensure better quality output.
    <BLOCKQUOTE><font size="1" face="Verdana, Arial, Helvetica">quote:</font><HR>Originally posted by QAGirl:
    Much of this depends on the context of the statement that testing is merely 'covering the requirements'. I'd argue that regardless of the level of detail of tests, and how much validation is done at a code, system, and architecture level, in the end, at a base level, we are still validating the business requirements. We are ensuring that a stable, quality system is released and that it meets the needs of the customers.
    <HR></BLOCKQUOTE>
    I very much agree with you, QAGirl, that whatever level of testing we do, we are validating the business requirements. My second posting in this thread puts my views on it from the quality perspective.
    <BLOCKQUOTE><font size="1" face="Verdana, Arial, Helvetica">quote:</font><HR>Originally posted by QAGirl:
    In the other thread, I was attempting to state that while I can see formal methods such as those in the referenced article as useful, if they require the additional project time that was mentioned in that particular situation, on the idea that the theorums and formulae are simply going to prove that there are NOT defects, as opposed to testing FOR those defects, I would disagree.
    <HR></BLOCKQUOTE>
    I am not sure whether you mean this too. As testers, our aim is to prove that there are defects in the software. The more defects we find, the better it is for the product. It is the role of Project Management and QA (clear distinction: QA = process/product quality and QC = testing) to determine whether these defects are of high priority for the release and whether to correct them or not prior to deployment.

    ------------------


    [This message has been edited by sanalmenon (edited 05-11-2002).]

  10. #10
    Senior Member
    Join Date
    Dec 1999
    Location
    Chicago,Illinois,USA
    Posts
    2,537

    Re: Credibility of QA testers (Subset of QAGIRLS topic)

    <BLOCKQUOTE><font size="1" face="Verdana, Arial, Helvetica">quote:</font><HR>Originally posted by sanalmenon:
    How do we effectively validate a requirement to ensure that it is testable? Is there any base guide?<HR></BLOCKQUOTE>

    One way, as a base, is to apply the requirement to the concept of "being quantifiable". When you can quantify something, to any degree, you have a baseline with which to work, at least in terms of looking for that specific quantity during an actual testing phase. A requirement that is quantifiable goes a long way towards being testable right out of the starting gate, so to speak.

    Testability, in the traditional sense, speaks to the ease with which a given software application, or a given component within it, can be tested to find defects. (Note: the notion of testability has broadened somewhat.) In that same kind of way, testability for a requirement states the ease with which a given requirement can be tested in order to determine if it has been satisfied. So what it means is looking at a requirement and saying: "How can I prove whether this requirement has been met or not?" If you find that you cannot really "prove" it in a fairly rigorous sense (i.e., if there is a lot of gray area or fuzziness), then the requirement is not testable (to a certain degree).

    So, one of the best ways to make sure requirements are testable is to make sure they are quantifiable. And the key to that: realize that all requirements are quantifiable. What people often do not realize is that even if you cannot come up with exact numbers, you can come up with scales of measure. Part of this also means getting rid of language like "non-functional requirements" and doing a breakdown of something like: functional requirements (what the system does), quality requirements (how well it does it), cost requirements (budgets for resources to help the function perform at specified quality levels), and constraints (delineations between allowed and not-allowed specific requirements).
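
    As a small illustration of a quantified quality requirement (a sketch; the scale, meter, and target below are invented, loosely in the spirit of the Gilb-style Planguage mentioned earlier in this thread):

        # Hypothetical sketch: a quality requirement expressed with a scale of measure,
        # so that "has it been met?" becomes a question the test effort can actually answer.
        requirement = {
            "name":   "Search responsiveness",
            "scale":  "seconds from submitting a query to the first result being displayed",
            "meter":  "95th percentile over 100 scripted queries on the test environment",
            "target": 2.0,   # must be at or under this value
        }

        def is_met(measured_value, req):
            return measured_value <= req["target"]

        print(is_met(1.7, requirement))   # True  -> requirement satisfied for this measurement
        print(is_met(3.2, requirement))   # False -> a quantified, arguable failure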

    I sort of talked about this in the thread Operational Requirements. That thread is not entirely devoted to what I think you are talking about, but it may be something just to check out and see. It might give you a few other ideas.

    <BLOCKQUOTE><font size="1" face="Verdana, Arial, Helvetica">quote:</font><HR>How do I do a Hidden Fault Analysis? What are the steps involved? Is there any book or web site to learn more about it?<HR></BLOCKQUOTE>

    Hidden fault analysis is a large subject in itself and there are a couple of different ways to do it. Keep in mind this is just one type of forecasting method. Check out my thread TOOLBOX: Hidden-Fault Analysis. This is not a tutorial thread (although you just gave me another idea!) but it is a general thread that shows a practical application of how hidden-fault analysis can work, at least at a high level. See if that helps. If not, we can certainly discuss it further.

    ------------------

 

 