We currently use RoboHelp to create our help files, and we are in the process of adding context-sensitive help to all screens. Does anyone know of areas where this might be a problem, so that I don't miss them when testing? Or any suggested test cases?
Re: Context Sensitive
A couple of things to look at:
1) RoboHelp produces a warnings and errors report when you compile the help, which in turn is fed through to the error wizard. This catches things such as broken links, duplicate links, unused help IDs, non-portable picture formats, etc. The first thing I would do to test a RoboHelp-generated help file is to thoroughly analyze this report.
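If the report is long, a quick script can help make sure no warning slips past a manual read-through. This is only a sketch: the log lines below are made up for illustration, since the exact format varies by RoboHelp version.

```python
import re

# Hypothetical compiler-log excerpt; the message codes and wording here
# are assumptions, not actual RoboHelp output.
SAMPLE_LOG = """\
Compiling helpfile.hpj...
HC3015: Warning: Unresolved jump to topic id 'IDH_PRINT_SETUP'
HC5013: Error: Bitmap 'toolbar.bmp' is in a non-portable format
Compile finished.
"""

def triage(log_text):
    """Collect every Warning/Error line so none are missed during review."""
    issues = []
    for line in log_text.splitlines():
        if re.search(r"\b(Warning|Error)\b", line):
            issues.append(line.strip())
    return issues

for issue in triage(SAMPLE_LOG):
    print(issue)
```

In practice you would point this at the saved compiler log and walk the resulting list item by item, ticking each one off as fixed or accepted.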
2) When testing help files, I usually print out a paper copy of the help and compare it to what I get on screen. If you are using RoboHelp for Word, you can catch a few nasties here. For example, section breaks are treated as new topics by RoboHelp, so if the author has inadvertently included a few of these, they can leave portions of the online help in a position that you can't navigate to. Make sure that you can navigate to every help item in the printed document.
3) Check the machine specs for your software, particularly the minimum screen resolution, and try testing your help at that resolution. Sometimes the help author will forget to convert bitmaps to resizable (SHG) formats, which means they go off screen at lower resolutions or fail to display at lower colour depths.
4) Check that there is a context-sensitive link for every menu option, toolbar button and dialog. For dialogs, this is done by pressing F1; for other controls, it is achieved by placing the help item on a toolbar button or control.
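One way to make that coverage check systematic is to diff the application's list of dialogs against the help map file. WinHelp map files are typically C headers of `#define` lines; the specific IDs below (and the idea that the dev team can supply the dialog list) are illustrative assumptions.

```python
import re

# Illustrative map file: #define lines of the kind referenced from a
# WinHelp [MAP] section. The names and numbers are made up.
MAP_FILE = """\
#define HIDD_OPEN_DIALOG    1001
#define HIDD_SAVE_DIALOG    1002
#define HIDD_PRINT_DIALOG   1003
"""

# Hypothetical list of every dialog the application can show,
# as you would collect it from the developers or the resource script.
APP_DIALOGS = {"HIDD_OPEN_DIALOG", "HIDD_SAVE_DIALOG",
               "HIDD_PRINT_DIALOG", "HIDD_OPTIONS_DIALOG"}

def mapped_ids(map_text):
    """Extract every identifier that has a help context number assigned."""
    return set(re.findall(r"#define\s+(\w+)\s+\d+", map_text))

missing = sorted(APP_DIALOGS - mapped_ids(MAP_FILE))
print("Dialogs with no help context ID:", missing)
```

Anything in `missing` is a screen where F1 will do nothing (or land on the wrong topic), so those are the first places to press F1 during testing.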
5) An obvious one here: check that the help documentation matches the program functionality. Often the programmer will change the functionality or appearance of the software, and the help author will not update the help.
6) Check that your install script includes any additional DLLs required by your help system on all your targeted platforms, e.g. RHMMPLAY.DLL, ROBOEX32.DLL, etc.
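This one is also easy to automate as a smoke test against the install file list. The two DLL names come from the point above; the manifest format (one shipped filename per line) is an assumption about how your install script is organized.

```python
# Help-runtime DLLs the install must ship (from the tip above).
REQUIRED_DLLS = {"RHMMPLAY.DLL", "ROBOEX32.DLL"}

# Hypothetical install manifest, one filename per line.
INSTALL_MANIFEST = """\
MYAPP.EXE
MYAPP.HLP
MYAPP.CNT
ROBOEX32.DLL
"""

# Normalize case so the comparison is not tripped up by lowercase entries.
shipped = {line.strip().upper()
           for line in INSTALL_MANIFEST.splitlines() if line.strip()}
missing = sorted(REQUIRED_DLLS - shipped)
print("Missing from install:", missing)
```

Run a check like this once per targeted platform's file list, since the set of required DLLs can differ between them.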
IMO, testing online help can be very time consuming and duplicates much of the functional testing. As a small outfit with limited resources, we use our help documentation to generate functional and regression test cases. This not only removes the duplicated effort, but also helps ensure consistency between the documentation and the rest of the product.