
11 May 2010

Efficient Software Testing



Testing, as with all validation activities, is performed not only to demonstrate regulatory compliance but also to ensure that the system operates correctly.

The level of testing performed should be dependent on the complexity, maturity and criticality of the system.

A standard laboratory instrument should not need the same level of testing as a bespoke automated control system. This is where the GAMP software categories can define the level of testing. Other mechanisms, such as initial risk assessments, can also be used.

This article looks at testing bespoke systems (GAMP Category 5).

Plan It


Test plans should be developed that document how the testing will be structured. Testing should be driven by the risks identified in the Functional Risk Assessments, which identify the critical attributes.

During the planning phase, the following test phases should be considered:

  • Module Testing
  • Factory Acceptance Testing (FAT)
  • Commissioning and Site Acceptance Testing (SAT)
  • Installation Qualification (IQ)
  • Operational Qualification (OQ)
  • Performance Qualification (PQ)

Planning the testing is critical. A Requirements Traceability Matrix (RTM) provides a suitable tool: it initially records where each function and attribute will be tested, and is later updated with the actual test references to support inspections.

The RTM is also a useful tool for ensuring that each function is tested, and tested only where it is most efficient to do so. For example, on a dissolved oxygen (DO) controller, if standard alarm modules are being utilised, that module of software should not require retesting at every phase of the testing.

The associated controller, however, may require testing at a number of phases: at FAT to demonstrate operation (simulated); at SAT, once connected to site services, for configuration (PID settings) and accuracy; and finally at OQ, where verification is performed to ensure that the final accuracy and control meet specification. Note that the OQ verification may be a quality review of the SAT results.
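The RTM described above can be sketched as a simple data structure. This is a minimal illustration only; the requirement IDs, phase names and test references are hypothetical, not taken from any real project or from the article.

```python
# Minimal sketch of a Requirements Traceability Matrix (RTM).
# Requirement IDs and test references below are illustrative assumptions.

PHASES = ["Module", "FAT", "SAT", "IQ", "OQ", "PQ"]

# Each requirement maps to the phases where testing is planned;
# actual test references are filled in as testing is executed.
rtm = {
    "URS-010 DO alarm limits": {"planned": ["Module", "FAT"], "executed": {}},
    "URS-011 DO control accuracy": {"planned": ["FAT", "SAT", "OQ"], "executed": {}},
}

def record_result(req_id, phase, test_ref):
    """Record the actual test reference once a phase has been executed."""
    rtm[req_id]["executed"][phase] = test_ref

def outstanding(req_id):
    """Planned phases not yet covered by an executed test."""
    entry = rtm[req_id]
    return [p for p in entry["planned"] if p not in entry["executed"]]

# The DO control accuracy requirement has been tested at FAT;
# the RTM shows SAT and OQ still to come.
record_result("URS-011 DO control accuracy", "FAT", "FAT-TC-042")
print(outstanding("URS-011 DO control accuracy"))  # ['SAT', 'OQ']
```

Keeping both the planned phases and the executed references in one place is what lets the RTM support inspections: at any point it shows what remains untested.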

Reduce the Risk


When planning the testing, ensure that as much of it as possible is performed early in the project. Testing at FAT, at the supplier's premises, means that failures can be handled by the supplier before shipping to site. The later in the project lifecycle a test failure is detected, the more expensive it is to fix: supplier resources must be brought on site to rectify the fault, the failure and its rectification must go through quality review, and the test must be repeated.

During the planning phase, consider front-loading the testing (test as much as possible at FAT) and then leveraging the test data forward so that unnecessary repeat testing is not performed.

See the test diagram below.

[Test diagram]
Where the change in environment (FAT to SAT) does not impact the test results, retesting should not be performed.

In the DO control example above, the system is only connected to the site services once installed on site.

The FAT only demonstrates that the controller exists and that it can be controlled during the correct phases. Once installed on site, the system is commissioned and its accuracy verified during SAT, with a quality review performed at OQ.
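The leverage decision above can be expressed as a simple rule: repeat a test on site only when the change in environment affects its result. The sketch below illustrates this; the function names and the `environment_sensitive` flag are assumptions for illustration, not terminology from the article.

```python
# Sketch of a test-leverage decision: a FAT result is carried forward
# to SAT unless the function is sensitive to the change in environment
# (supplier simulation -> connection to real site services).
# Function names and flags are illustrative assumptions.

def must_retest_at_sat(function):
    """Retest on site only when the environment change affects the result."""
    return function["environment_sensitive"]

functions = [
    {"name": "Standard alarm handling", "environment_sensitive": False},
    {"name": "DO control loop accuracy", "environment_sensitive": True},
]

# Only the environment-sensitive function enters the SAT scope;
# the alarm handling result from FAT is leveraged forward.
sat_scope = [f["name"] for f in functions if must_retest_at_sat(f)]
print(sat_scope)  # ['DO control loop accuracy']
```

In practice this judgement would be recorded against each requirement in the RTM, so the rationale for not repeating a test survives inspection.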

Summary

  • Plan the testing
  • Test as early as possible in the project lifecycle
  • Do not repeat testing unnecessarily
  • Use supplier testing wherever possible

Further articles with more detail about testing will be posted; please feel free to leave comments.
