There are three levels of quality assurance: testing, validation, and certification.
In system testing, the common view is to eliminate program errors. This is extremely difficult and time-consuming, since designers cannot prove 100 percent accuracy. Therefore, all that can be done is to put the system through a "fail test" cycle: determine what will make it fail. A successful test, then, is one that finds errors. The test strategies discussed earlier are used in system testing.
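The "fail test" mindset can be sketched in code. Below is a minimal, hypothetical example (the function `parse_quantity` and its rules are assumptions, not from the text): each test case deliberately tries to make the routine fail, so a test that triggers an error is counted as a success.

```python
import unittest

def parse_quantity(text):
    """Parse an order quantity, rejecting input that would corrupt later steps."""
    value = int(text)  # raises ValueError on non-numeric input
    if value <= 0:
        raise ValueError("quantity must be positive")
    return value

class FailTests(unittest.TestCase):
    """Each test attempts to make parse_quantity fail; finding an error is the goal."""

    def test_rejects_non_numeric(self):
        with self.assertRaises(ValueError):
            parse_quantity("ten")

    def test_rejects_zero(self):
        with self.assertRaises(ValueError):
            parse_quantity("0")

    def test_rejects_negative(self):
        with self.assertRaises(ValueError):
            parse_quantity("-5")

if __name__ == "__main__":
    unittest.main()
```

Note that none of these tests proves the function correct; they only probe the conditions under which it breaks, which is exactly the point of a fail test cycle.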
System validation checks the quality of the software in both simulated and live environments. First the software goes through a phase (often referred to as alpha testing) in which errors and failures based on simulated user requirements are verified and studied. The modified software is then subjected to phase two (called beta testing) at the actual user's site, in a live environment. The system is used regularly with live transactions. After a scheduled time, failures and errors are documented, and final corrections and enhancements are made before the package is released for use.
The third level of quality assurance is to certify that the program or software package is correct and conforms to standards. With a growing trend toward purchasing ready-to-use software, certification has become more important. A package that is certified is examined by a team of specialists who test it, review it, and determine how well it meets the vendor's claims. Certification is issued only after the package passes the test. Certification, however, does not assure the user that it is the best package to adopt; it only attests that it will perform what the vendor claims.
In summary, the quality of an information system depends on its design, testing, and implementation. One aspect of system quality is its reliability, or the assurance that it does not produce costly failures. The strategy of error tolerance (detection and correction) rather than error avoidance is the basis for successful testing and quality assurance.
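The error-tolerance strategy mentioned above can be illustrated with a short sketch. This is an assumed example, not drawn from the text: instead of aborting when a bad record appears (error avoidance), the routine detects the bad record and sets it aside for correction while the rest of the batch continues.

```python
def process_records(records):
    """Error-tolerant batch processing: detect bad records and quarantine
    them for later correction rather than failing the whole run.
    (Illustrative sketch; record layout is an assumption.)"""
    processed, quarantined = [], []
    for rec in records:
        try:
            amount = float(rec["amount"])   # detection: bad data raises here
        except (KeyError, ValueError, TypeError):
            quarantined.append(rec)         # correction path: hold for repair
            continue
        processed.append({**rec, "amount": amount})
    return processed, quarantined

good, bad = process_records([{"amount": "12.50"}, {"amount": "oops"}, {}])
```

Here a single malformed transaction no longer causes a costly failure of the entire run, which is the reliability property the summary describes.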