
Friday, April 24, 2009

Levels Of Quality Assurance

There are three levels of quality assurance: testing, validation, and certification.

In system testing, the common view is to eliminate program errors. This is extremely difficult and time-consuming, since designers cannot prove 100 percent accuracy. Therefore, all that can be done is to put the program through a "fail test" cycle to determine what will make it fail. A successful test, then, is one that finds errors. The test strategies discussed earlier are used in system testing.

System validation checks the quality of the software in both simulated and live environments. First the software goes through a phase (often referred to as alpha testing) in which errors and failures based on simulated user requirements are verified and studied. The modified software is then subjected to phase two (called beta testing) at the actual user's site or in a live environment. The system is used regularly with live transactions. After a scheduled time, failures and errors are documented, and final corrections and enhancements are made before the package is released for use.

The third level of quality assurance is to certify that the program or software package is correct and conforms to standards. With a growing trend toward purchasing ready-to-use software, certification has become more important. A package that is certified goes through a team of specialists who test, review, and determine how well it meets the vendor's claims. Certification is issued only after the package passes the test. Certification, however, does not assure the user that it is the best package to adopt; it only attests that it will perform what the vendor claims.

In summary, the quality of an information system depends on its design, testing, and implementation. One aspect of system quality is its reliability, or the assurance that it does not produce costly failures. The strategy of error tolerance (detection and correction) rather than error avoidance is the basis for successful testing and quality assurance.

Maintenance And Support

This phase provides the necessary software adjustment for the system to continue to comply with the original specifications. The quality assurance goal is to develop a procedure for correcting errors and enhancing software. 

This procedure improves quality assurance by encouraging complete reporting and logging of problems, ensuring that reported problems are promptly forwarded to the appropriate group for resolution, and reducing redundant effort by making known problem reports available to the department that handles complaints.
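
As a rough sketch of what such a procedure might look like in code (the record fields and the routing table below are illustrative assumptions, not part of the original text), problem reports are logged centrally and forwarded to a resolution group as soon as they arrive:

    from dataclasses import dataclass, field
    from datetime import datetime

    # Hypothetical routing table: problem category -> group that resolves it.
    ROUTING = {"data": "database group", "screen": "front-end group", "crash": "systems group"}

    @dataclass
    class ProblemReport:
        category: str
        description: str
        logged_at: datetime = field(default_factory=datetime.now)
        assigned_to: str = "unassigned"

    class ProblemLog:
        """Central log that keeps known problems visible to the complaints department."""

        def __init__(self):
            self.reports = []

        def file(self, report):
            # Forward the report to the appropriate group as soon as it is logged.
            report.assigned_to = ROUTING.get(report.category, "triage")
            self.reports.append(report)
            return report

        def known_problems(self, category):
            # Check for known duplicates before raising a redundant report.
            return [r for r in self.reports if r.category == category]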

Software Testing And Implementation

The quality assurance goal of the testing phase is to ensure the completeness and accuracy of the system and to minimize the retesting process. In the implementation phase, the goal is to provide a logical order for the creation of the modules and, in turn, the creation of the system.

Software Design Specifications

In this stage, the software design document defines the overall architecture of the software that provides the functions and features described in the software requirements document. It addresses the question: How will it be done? The document describes the logical subsystems and their respective physical modules. It ensures that all conditions are covered.

Software Requirements Specifications

The quality assurance goal of this stage is to generate the requirements document that provides the technical specifications for the design and development of the software. This document enhances the system's quality by formalizing communication between the system developer and the user and by providing the proper information for accurate documentation.

Quality Factors Specifications

The goal of this stage is to define the factors that contribute to the quality of the candidate system. Several factors determine the quality of a system:

   1. Correctness-the extent to which a program meets system specifications and user objectives.
   2. Reliability-the degree to which the system performs its intended functions over a period of time.
   3. Efficiency-the amount of computer resources required by a program to perform a function.
   4. Usability-the effort required to learn and operate a system.
   5. Maintainability-the ease with which program errors are located and corrected.
   6. Testability-the effort required to test a program to ensure its correct performance.
   7. Portability-the ease of transporting a program from one hardware configuration to another.
   8. Accuracy-the required precision in input editing, computations, and output.
   9. Error tolerance-error detection and correction versus error avoidance.
  10. Expandability-the ease of adding to or expanding the existing database.
  11. Access control and audit-control of access to the system and the extent to which that access can be audited.
  12. Communicativeness-how descriptive or useful the inputs and outputs of the system are.

Quality Assurance Goals In The Systems Life Cycle

The software life cycle includes various stages of development, and each stage has its own quality assurance goal. The goals and their relevance to the quality assurance of the system are summarized next.

Quality Assurance

The amount and complexity of software produced today stagger the imagination. Software development strategies have not kept pace, however, and software products fall short of meeting application objectives. Consequently, controls must be developed to ensure a quality product. Basically, quality assurance defines the objectives of the project and reviews the overall activities so that errors are corrected early in the development process. Steps are taken in each phase to ensure that there are no errors in the final software.

User Acceptance Testing

An acceptance test has the objective of selling the user on the validity and reliability of the system. It verifies that the system's procedures operate to system specifications and that the integrity of vital data is maintained. Performance of an acceptance test is actually the user's show. User motivation and knowledge are critical for the successful performance of the system. Then a comprehensive test report is prepared. The report indicates the system's tolerance, performance range, error rate, and accuracy.
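
As an illustrative sketch (the function and field names are assumptions for this example, not from the original text), the figures in such a report can be summarized from per-case pass/fail results:

    def summarize_acceptance_test(results):
        """Summarize pass/fail outcomes; `results` holds one boolean per test case."""
        total = len(results)
        failures = results.count(False)
        error_rate = failures / total if total else 0.0
        return {
            "cases_run": total,
            "failures": failures,
            "error_rate": error_rate,      # proportion of cases that failed
            "accuracy": 1.0 - error_rate,  # proportion of cases that passed
        }

    # Example: 2 failures out of 50 cases -> error rate 0.04, accuracy 0.96.
    print(summarize_acceptance_test([True] * 48 + [False] * 2))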

System Documentation

All design and test documentation should be finalized and entered in the library for future reference. The library is the central location for maintenance of the new system. The format, organization, and language of each document should be in line with system standards.

System Testing

System testing is designed to uncover weaknesses that were not found in earlier tests. This includes forced system failure and validation of the total system as it will be implemented by its user(s) in the operational environment. Generally, it begins with low volumes of transactions based on live data. The volume is increased until the maximum level for each transaction type is reached. The total system is also tested for recovery and fallback after various major failures to ensure that no data are lost during the emergency. All this is done with the old system still in operation. After the candidate system passes the test, the old system is discontinued.
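
A minimal sketch of the volume ramp described above, assuming a hypothetical process_transaction entry point; the step size and the transaction generator are invented for illustration:

    import random

    def process_transaction(txn):
        """Stand-in for the candidate system's entry point (hypothetical)."""
        return txn["amount"] >= 0  # reject malformed transactions

    def ramp_test(max_volume, step=100):
        """Increase the transaction volume stepwise until the maximum level is reached."""
        volume = step
        while volume <= max_volume:
            batch = [{"amount": random.uniform(-10, 1000)} for _ in range(volume)]
            failures = sum(not process_transaction(t) for t in batch)
            print(f"volume={volume}: {failures} rejected")
            volume += step

    ramp_test(max_volume=500)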

String Testing

Programs are invariably related to one another and interact in a total system. Each program is tested to see whether it conforms to related programs in the system. Each portion of the system is tested against the entire module with both test and live data before the entire system is ready to be tested.

Program Testing

A program represents the logical elements of a system. For a program to run satisfactorily, it must compile and test data correctly and tie in properly with other programs. Achieving an error-free program is the responsibility of the programmer. Program testing checks for two types of errors: syntax and logic. A syntax error is a program statement that violates one or more rules of the language in which it is written. Improperly defined field dimensions or omitted keywords are common syntax errors. These errors are shown through error messages generated by the computer. A logic error, on the other hand, deals with incorrect data fields, out-of-range items, and invalid combinations. Since diagnostics do not detect logic errors, the programmer must examine the output carefully for them.
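
To make the distinction concrete, here is a small invented example rather than anything from the original text: the first version cannot even be compiled, while the second runs but produces a wrong answer that only inspection of the output reveals.

    # Syntax error: violates the rules of the language and is caught by the
    # compiler/interpreter before the program ever runs.
    #
    #     def average(values)          # missing colon -> SyntaxError
    #         return sum(values) / len(values)

    # Logic error: the program runs, but the result is wrong.
    def average(values):
        # Bug: divides by a fixed 10 instead of len(values), so the result
        # is out of range for any list that does not contain 10 items.
        return sum(values) / 10

    print(average([2, 4, 6]))  # prints 1.2; the correct average is 4.0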

When a program is tested, the actual output is compared with the expected output. When there is a discrepancy, the sequence of instructions must be traced to determine the problem. The process is facilitated by breaking the program down into self-contained portions, each of which can be checked at certain key points. The idea is to compare program values against desk-calculated values to isolate the problem.
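
A minimal sketch of that idea, using an invented payroll calculation: each self-contained portion is checked at a key point against a desk-calculated value, so a discrepancy points at the failing portion rather than at the program as a whole.

    def gross_pay(hours, rate):
        return hours * rate

    def tax(gross, tax_rate=0.2):
        return gross * tax_rate

    def net_pay(hours, rate):
        g = gross_pay(hours, rate)
        return g - tax(g)

    # Key-point checks against desk-calculated values.
    assert gross_pay(40, 10.0) == 400.0   # portion 1: 40 * 10
    assert tax(400.0) == 80.0             # portion 2: 400 * 0.2
    assert net_pay(40, 10.0) == 320.0     # whole program: 400 - 80
    print("all key points match the desk-calculated values")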

Thursday, April 23, 2009

Types Of System Tests

After a test plan has been developed, system testing begins by testing program modules separately, followed by testing "bundled" modules as a unit. A program module may function perfectly in isolation but fail when interfaced with other modules. The approach is to test each entity with successively larger ones, up to the system test level.

System testing consists of the following steps:
1. Program(s) testing.
2. String testing.
3. System testing.
4. System documentation.
5. User acceptance testing.
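
As a hedged illustration of steps 1 and 2 (the modules and their interface are invented for this sketch), two modules can each pass in isolation and must then be tested as a bundle, since a defect may appear only at their interface:

    def parse_amount(text):
        """Module A: parse a currency string such as '$12.50'."""
        return float(text.strip().lstrip("$"))

    def apply_discount(amount, percent):
        """Module B: apply a percentage discount."""
        return amount * (100 - percent) / 100

    # Step 1: program (module) testing in isolation -- both pass.
    assert parse_amount(" $100.00 ") == 100.0
    assert apply_discount(100.0, 10) == 90.0

    # Step 2: string testing -- the modules are exercised together, since a
    # module that works alone can still fail when interfaced with another.
    assert apply_discount(parse_amount("$200.00"), 25) == 150.0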

System Testing

The purpose of system testing is to identify and correct errors in the candidate system. As important as this phase is, it is one that is frequently compromised. Typically, the project is behind schedule or the user is eager to go directly to conversion.

        In system testing, performance and acceptance standards are developed. Substandard performance or service interruptions that result in system failure are checked during the test. The following performance criteria are used for system testing:

1. Turnaround time is the elapsed time between the receipt of the input and the availability of the output. In online systems, high-priority processing is handled during peak hours, while low-priority processing is done later in the day or during the night shift. The objective is to decide on and evaluate all the factors that might have a bearing on the turnaround time for handling all applications.
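
A bare-bones sketch of measuring turnaround time (the application handler is a stand-in invented for this example):

    import time

    def handle_application(job):
        """Stand-in for processing one application (hypothetical)."""
        time.sleep(0.05)  # simulate the work
        return f"output for {job}"

    received = time.monotonic()            # input received
    result = handle_application("payroll run")
    available = time.monotonic()           # output available
    print(f"turnaround time: {available - received:.3f} s")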

2. Backup relates to procedures to be used when the system is down. Backup plans might call for the use of another computer. The software for the candidate system must be tested for compatibility with a backup computer.

         In case of a partial system breakdown, provisions must be made for dynamic reconfiguration of the system. For example, in an online environment, when the printer breaks down, a provisional plan might call for automatically "dumping" the output on tape until the service is restored.
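
A hedged sketch of such a provisional plan (the device check and spool file are invented for illustration): output destined for an unavailable printer is dumped to a spool until service is restored.

    from pathlib import Path

    SPOOL = Path("printer_spool.txt")

    def printer_available():
        """Stand-in check; a real system would query the device."""
        return False  # simulate a printer breakdown

    def emit(line):
        if printer_available():
            print(line)  # normal path: send to the printer
        else:
            # Dynamic reconfiguration: "dump" the output to a spool file
            # (the tape of the original example) until service is restored.
            with SPOOL.open("a") as f:
                f.write(line + "\n")

    emit("invoice #1001")
    emit("invoice #1002")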

3. File protection pertains to storing files in a separate area for protection against fire, flood, or natural disaster. Plans should also be established for reconstructing files damaged through a hardware malfunction. 

4. The human factor applies to the personnel of the candidate system. During system testing, lighting, air conditioning, noise, and other environmental factors are evaluated along with people's desks, chairs, CRTs, and so on. Hardware should be designed to match human comfort. This is referred to as ergonomics. It is becoming an extremely important issue in system development.

Prepare Operational Documents

During the test plan stage, all operational documents are finalized, including copies of the operational formats required by the candidate system. Related to operational documentation is a section on the experience, training, and educational qualifications of personnel for the proper operation of the new system.

Prepare Job Performance Aids

In this activity the materials to be used by personnel to run the system are specified and scheduled. This includes a display of materials such as program codes, a list of input codes attached to the CRT terminal, and a posted instruction schedule for loading the disk drive. These aids reduce training time and make it possible to employ personnel in lower-level positions.

Compile/Assemble Programs


All programs have to be compiled/assembled for testing. Before this, however, a complete program description should be available. Included are the purpose of the program, its use, the programmer(s) who prepared it, and the amount of computer time it takes to run. Program and system flowcharts of the project should also be available for reference.

In addition to these activities, desk checking the source code uncovers programming errors or inconsistencies. Before actual program testing, the run order schedule and test schemes are finalized. A run order schedule specifies the transactions to test and the order in which they should be tested. High-priority transactions that make special demands on the candidate system are tested first. In contrast, a test scheme specifies how the program software should be debugged. A common approach, called bottom-up programming, tests small-scale program modules, which are linked to a higher-level module, and so on until the program is completed. An alternative is the top-down approach, where the general program is tested first, followed by the addition of program modules, one level at a time, down to the lowest level.
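
As a hedged sketch of the top-down scheme (the module names and the stub are invented for illustration), the general program is tested first against a stub, and the real lower-level module is then added one level at a time:

    def compute_tax_stub(gross):
        """Stub: returns a fixed, desk-calculated value so the general
        program can be tested before the real module exists."""
        return 50.0

    def payroll(gross, tax_fn=compute_tax_stub):
        """General (top-level) program: tested first in top-down testing."""
        return gross - tax_fn(gross)

    # Level 1: test the general program against the stub.
    assert payroll(500.0) == 450.0

    # Next level down: the real module replaces the stub.
    def compute_tax(gross):
        return gross * 0.1

    assert payroll(500.0, tax_fn=compute_tax) == 450.0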

Plan User Training

User training is designed to prepare the user for testing and converting the system. User involvement and training take place in parallel with programming for three reasons:

   1. The system group has time available to spend on training while programs are being written.
   2. Initiating a user-training program gives the systems group an image of the user's interest in the new system.
   3. A trained user participates more effectively in system testing.

For user training, preparation of a checklist is useful. Included are provisions for developing training materials and other documents to complete the training activity. In effect, the checklist calls for a commitment of personnel, facilities, and efforts for implementing the candidate system.

         The training plan is followed by preparation of the user training manual and other text materials. Facility requirements and the necessary hardware are specified and documented. A common procedure is to train supervisors and department heads who, in turn, train their staff as they see fit. The reasons are obvious:

   1. User supervisors are knowledgeable about the capabilities of their staff and the overall operation.
   2. Staff members usually respond more favorably and accept instructions better from supervisors than from outsiders.
   3. Familiarity of users with their particular problems (bugs) makes them better candidates for handling user training than the systems analyst. The analyst gets feedback to ensure that proper training is provided.

Prepare Test Data For Transaction Path Testing

This activity develops the data required for testing every condition and transaction to be introduced into the system. The path of each transaction from origin to destination is carefully tested for reliable results. The test verifies that the test data are virtually comparable to the live data used in conversion.
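
A hedged sketch of preparing such test data (the transaction types, fields, and path stages are invented for illustration): one record is prepared per condition and traced through each stage from origin to destination.

    # One test record per condition to be exercised (illustrative fields).
    TEST_TRANSACTIONS = [
        {"type": "deposit",    "amount": 100.0},  # normal case
        {"type": "deposit",    "amount": 0.0},    # boundary case
        {"type": "withdrawal", "amount": -5.0},   # invalid case, must be rejected
    ]

    def validate(txn):                   # origin: input editing
        return txn["amount"] >= 0

    def trace_path(txn):
        """Follow one transaction from origin to destination."""
        ledger = []
        if not validate(txn):
            return "rejected at input editing"
        ledger.append(txn)               # intermediate stage: posting
        return "posted to ledger"

    for txn in TEST_TRANSACTIONS:
        print(txn["type"], txn["amount"], "->", trace_path(txn))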