Plan Your Project's Testing - or Plan to Fail
Failing to plan testing carefully in the early stages of a project can doom it. What are the client's goals for a project? What are the critical success factors for everyone involved? What resources are available for testing? These are just a sampling of the questions that must be answered before opportunities for testing are missed -- and deadly errors slip through.
Does testing require a plan? That may sound like a silly question, but many organizations leave testing to the end of the project and spend on it only whatever time happens to be left.
That philosophy brings up many questions: What is the purpose of the test plan? What is to be accomplished with the testing? How are resources allocated for testing? How much time should be spent on testing? And, of course, how much testing is enough? In the event that defects are uncovered, will they be resolved prior to shipment? Will the client be made aware of these problems? Have resources been allocated to resolve the critical defects?
Many people assume that testing will simply continue until the product ships, but there must be a plan for doing the required testing that takes into account the resources that will be available and when they will be needed. Testing, like any other part of a project, must be carefully planned in order to accomplish its objectives. Budgets, people, time, required testing -- all have to be identified up front and scheduled during the project cycle.
What Is the Purpose of the Test Plan?
The test plan identifies what needs to be tested. It is essential to identify what needs to be tested and prioritize those tasks before beginning to allocate resources such as budget, time, people, and tools. So, how are the client's needs communicated to the test group? It should be the same way all resources are identified -- by understanding the requirements.
Requirements are the must-haves. Clients may request more than a project is capable of delivering, so it's critical to separate the "needs" from the "wants." This must be done before any design or coding begins. Is there a clear understanding of requirements prior to the beginning of the project? The requirements-gathering process should eliminate most of the scope creep associated with a project.
Keep in mind that the overall purpose of testing is to demonstrate that the product will meet the client's needs, and not just to demonstrate that the program runs. If those needs are not clearly identified prior to the design phase, time may be spent on less important features and the required functions may be missed. If a project fails to meet requirements, then it's a complete waste of time. The objective of any project should be customer satisfaction. That can be achieved only if everyone is in agreement on what the final outcome of the project should be.
Testing must be done in parallel with development. Programs should be designed modularly so they can be tested as the project moves along, not just at the end. The best time to test is immediately after a module is finished -- when it's still fresh in the programmers' heads, and any problems encountered can be readily addressed and resolved.
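The point above -- test each module the moment it is finished, while it is still fresh in the programmer's head -- can be sketched as follows. This is a minimal illustration; the module being tested (`parse_order`) and its cases are hypothetical examples, not taken from the article.

```python
# Hypothetical module under test: parse "item,qty" into a (str, int) pair.
def parse_order(line):
    item, qty = line.split(",")
    return item.strip(), int(qty)

# Module-level checks run immediately after the module is written,
# not deferred to the end of the project.
def test_parse_order():
    # Positive case: a well-formed order line parses cleanly.
    assert parse_order("widget, 3") == ("widget", 3)
    try:
        # Negative case: a non-numeric quantity must be rejected.
        parse_order("widget, three")
        return False  # no error raised -- the module is broken
    except ValueError:
        return True   # rejected as expected

module_ok = test_parse_order()
```

Running the checks alongside development means a failure points at code written hours ago, not months ago, and the programmer who wrote it is still on hand to fix it.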
That's also the time to resolve any misunderstandings about the requirements and then share them with the other project members. It's highly likely that programmers will be removed from a project when their work is done, and it is often difficult to get those resources back in the event of a problem. The same problem won't arise again if it is uncovered early in a project.
It is much cheaper to do it correctly the first time than to redo it later on. This is part of the "lessons learned" during the project. It is also very likely that some of the original staff will be reassigned during the project cycle and if they are, then who will be resolving the problems? Where will those resources come from?
The phrase, "It works on my machine" is heard all too often. The program may be right -- that is, it runs as designed -- but does it meet the user's needs? Is it the right program? This must be demonstrated early in the project, not only at the end. If it is necessary to fix problems later on, then how much of the additional coding and testing will have to be subjected to regression testing to demonstrate that nothing has been negatively affected?
Regression testing is rarely done on a full system, since there is never enough time to retest everything -- but any code that is touched must be retested, along with any other features affected by the change. Regression testing must be done throughout the project cycle, so there are no surprises at the end.
What Are Scripts and Test Cases?
The two are often confused as being the same thing. The script is the scenario of what events will happen, and the test case is the parameter set that is passed to the script. Scripts are reusable; the test cases verify that the program does or does not work. There will always be a minimum of two test cases, one positive and one negative, and the same script can be run with either parameter passed to it.
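The script/test-case distinction above can be made concrete with a small sketch: one reusable script, driven by parameterized cases. The system under test (`validate_age`) and the case data are hypothetical examples chosen for illustration.

```python
# Hypothetical system under test: accepts ages 0-120, rejects everything else.
def validate_age(age):
    return 0 <= age <= 120

def age_script(case):
    """The script: one reusable scenario. Each test case is just the
    parameter set passed in -- an input plus its expected outcome."""
    actual = validate_age(case["input"])
    return actual == case["expected"]

# The minimum of two cases -- one positive, one negative -- reusing
# the same script with different parameters.
cases = [
    {"name": "positive: typical age",   "input": 35, "expected": True},
    {"name": "negative: impossible age", "input": -1, "expected": False},
]

results = {c["name"]: age_script(c) for c in cases}
```

Because the cases live in plain data rather than inside the script, they can be filed by function and replayed against future versions of the program, exactly as the folder-per-function advice below suggests.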
Again, it is not the quantity of testing but the quality that counts. The number of test cases will be determined by the importance of the particular feature and the ingenuity of the tester. Test scripts and test cases should be saved in a folder that is labeled by function, so they can be reused on other projects that are testing for the same functionality. It is not a good idea to bury scripts and test cases in the test plan, as they will be very difficult to recall when needed. Having them reside in a folder makes them more easily accessible and reusable. This will save time and money -- especially when enhancement or maintenance is required. Regression testing will use these files as well.
Test specifications, not the test plan, will be used to develop test cases and test scripts. The test specification will define the functionality in greater detail. Any testing that is done needs to be documented so that in the event of a defect or enhancement, the recertification of the system will be much easier.
What Are Critical Success Factors?
Critical success factors are those features or functions that are absolutely necessary for the success of the program. They should be identified as early as possible, prioritized and scheduled. Because they matter most, they need to be coded and tested as soon as possible. If they are not done early, and time runs out, then they might be skipped.
All the parties to a project will have their own sets of critical success factors. They can be identified by the clients, programmers, project managers, etc. All of them must be identified and addressed. This is very difficult to do when a project team is not communicating well. Again, everyone involved in a project must work together as a team -- and include the test group as a part of that team.
How Much Testing Is Enough?
Quality and quantity do not necessarily go hand in hand. Have you not seen people work for eight hours and do only four hours of work? It goes the other way, also. Many people can do the same amount of work in less time than others. This is one of the main reasons for the test plan. Have all the objectives of the test plan been achieved? Will the client be pleased with the product? Will it meet the client's needs? All of the critical success factors must be demonstrated.
The System Test will be performed by the black box test team. This test is often referred to as the "systems and integration testing cycle." However, integration testing should have been done all along the project cycle and not left to the end. If it fails at this time, there is little time to fix it before the product ships. This testing should be an accumulation of all the module testing, but now done as a complete system. Confidence in the system should be high before system testing begins. No critical success factor errors should be uncovered at system test time; if any are, then there is something wrong with the process. There should be no surprises during this testing. "What you see is what you get" is the motto of the system test cycle. The system test is a dry run of the Acceptance Test.
In addition to the standard black box testing, which is done by the test team, there will be additional testing beyond the test team's scope. The extent of this testing will be determined by the needs of each organization. These are some of the additional types of testing that may be done on the product:
- Alpha Testing. This testing will be done by people outside of the test group, which may include the project managers, business analysts, support center and internal users. This additional testing allows others to review and test according to their understanding of the system. Since these people are external to the test group, they will provide a different set of eyes and possibly different test scripts and test cases. These test cases should be saved for future use by the test team.
- Beta Testing. This testing is done at the client's site. Often referred to as "pilot testing," this testing will help to verify that the product will work in the client's environment. Sometimes, the client will have features or functions not fully identified prior to the start of the project that may present problems unanticipated by the team. They need to be identified and addressed prior to delivery. The beta testing should also have a test plan to ensure that testing is complete and applicable.
- Parallel Testing. This testing compares old versus new. Does the new system still have the features of the old -- and if not, was that the intent? If not, the discrepancies need to be addressed prior to the delivery of the system. Rarely will the new work like the old. That is why the system was upgraded. Have some of the previous features stopped working? Was that the plan, or did something go wrong? Was a particular feature inadvertently changed?
- Acceptance Testing. The acceptance test should have been identified prior to the beginning of development and testing. This test will constantly be referred to during the project cycle to ensure that the overall objective will be achieved and there will be no surprises. The acceptance test, sometimes referred to as the "UAT," will be performed by the client with the cooperation of the business analyst.
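The parallel-testing comparison described above -- run the same inputs through the old and new systems and classify every discrepancy as either intended or a defect -- can be sketched in a few lines. Both implementations and the business rule are hypothetical examples.

```python
# Hypothetical old and new versions of the same calculation. The upgrade
# intentionally raises the discount threshold from 100 to 150.
def old_discount(total):
    return total * 0.10 if total > 100 else 0.0

def new_discount(total):
    return total * 0.10 if total > 150 else 0.0

def parallel_test(inputs):
    """Run identical inputs through both versions and record every
    discrepancy so each one can be reviewed: was that the plan,
    or did something go wrong?"""
    discrepancies = []
    for x in inputs:
        old, new = old_discount(x), new_discount(x)
        if old != new:
            discrepancies.append((x, old, new))
    return discrepancies

diffs = parallel_test([50, 120, 200])
# diffs flags the 120 case diverging -- here an intended change,
# but the comparison is what forces the question to be asked.
```

The value of the harness is not that old and new agree -- they rarely will, since that is why the system was upgraded -- but that every divergence is surfaced for a deliberate decision before delivery.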
All of the test scripts and test cases used during these additional testing phases should be captured and added to the test bed repository. Since these tests will not be run by testers, they should add valuable input on how to improve succeeding test cycles. This information will enhance the test bed, and errors identified here can be prevented in future applications. Lessons learned.
There is a need to demonstrate that what used to work still works during each and every one of these test cycles. Too often, testing is concentrated on the changes and does not include recertification that what used to work still does. Many projects fail because the fix caused a different function to stop working. This is why it is so important to have a library of regression tests.
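A library of regression tests, as called for above, can be as simple as saved cases keyed by function name and replayed after every change. The functions and expected values below are hypothetical; the point is the structure, not the arithmetic.

```python
# Hypothetical functions that "used to work."
def apply_tax(amount):
    return round(amount * 1.07, 2)

def format_invoice(amount):
    return f"${amount:.2f}"

# The regression bed: expected outputs captured when each feature
# first passed, filed by function so they can be reused across releases.
regression_bed = {
    "apply_tax":      [((100.0,), 107.0), ((19.99,), 21.39)],
    "format_invoice": [((107.0,), "$107.00")],
}

def run_regression(functions, bed):
    """Replay every saved case; report anything that used to work
    but no longer does."""
    failures = []
    for name, cases in bed.items():
        for args, expected in cases:
            if functions[name](*args) != expected:
                failures.append((name, args))
    return failures

failures = run_regression(
    {"apply_tax": apply_tax, "format_invoice": format_invoice},
    regression_bed,
)
```

After any fix, an empty failure list recertifies that the fix did not break a different function -- the exact failure mode the paragraph above warns about.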
There must be test plans for each of these testing cycles. It is not the responsibility of the test group to design all of these test plans; each testing group must design its own. These plans should also include scripts and test cases. Testing by the seat of the pants will invariably produce a product that is not going to meet the client's needs.
There is a lot more to testing than meets the eye. It is a reusable process that will constantly be modified and enhanced. The test plan should be a template that can more easily be accessed and customized to a specific project.
Good test planning will result in cost savings not only during the project cycle, but will greatly reduce ongoing maintenance costs.
James F. York is president of C/J System Solutions, which provides consulting and training for computer professionals.