IT and automated systems are used for data collection, storage, management, reporting, labeling, tracking and other key operational processes in the conduct of a clinical trial. Because these systems “touch” an FDA-regulated product (pharmaceutical, biologic, medical device, etc.), they must be validated in accordance with agency requirements for Computer System Validation (CSV).
IT and automated “GxP” systems include those used in development, clinical research and study management, manufacturing, warehouse storage, distribution, adverse event monitoring and post-marketing surveillance.
FDA Regulatory Oversight
Companies regulated by the Food and Drug Administration (FDA) have good reason to meet the compliance guidelines issued by the Agency: customer safety, product quality and data integrity; efficient business operations without unnecessary time and effort spent responding to issues and concerns; good relations with FDA and any other regulatory agency; and a positive company image and reputation.
The FDA operates on two key premises:
• If you didn’t document it, you didn’t do it.
• If you could have committed fraud, you did commit fraud.
There is no recourse: the notion that “you are innocent until proven guilty” does not hold up when dealing with FDA. The FDA can issue fines, confiscate materials and/or products, and can shut down some or all of your operations. The classic example is the Consent Decree Schering-Plough signed in 2002. It cost $500 million in fines and resulted in the shut-down of more than one manufacturing plant. It required 10 years to recover from the losses.
Since 2007, overall FDA inspections have increased 20%. At the same time, 54% of companies inspected were cited for violations (with average fines of $50,000), and 80 major drug companies failed more than half of their inspections. Any system that “touches” product in any way is fair game. Agents can padlock your doors, confiscate product and shut down your operation.
Computer System Validation (CSV)
The FDA guidance for Computer System Validation (CSV), also known as the FDA “Blue Book,” was issued in 1983. CSV assures that a system does what it purports to do and has been thoroughly tested and validated. It requires that the system follow the standard System Development Life Cycle (SDLC) methodology, covering all phases from “cradle to grave.” It assures the system remains in a validated state throughout its life and requires rigorous planning, testing and documentation.
The SDLC supports FDA requirements for GxP system validation and is integral to the CSV methodology. It is the methodology used for selecting, implementing, maintaining and retiring a system (“cradle to grave”) and includes a series of life cycle “phases”.
The SDLC includes:
• Validation strategy/plan
• GAMP 5 software categorization
• Risk assessment and mitigation
• User Requirements Specification (URS)
• Functional Requirements Specification (FRS)
• System Design Specification (SDS) and system configuration specification
• Implementation (configuration, custom development)
• Test plan, scenarios and scripts
• Installation Qualification (IQ), Operational Qualification (OQ) and Performance Qualification (PQ) [User Acceptance Testing (UAT)]
• Test summary reports, Validation Summary Report (VSR), system acceptance report and notification of system release
• Operational maintenance and support procedures and logs: system support, security and access monitoring, change control, and disaster recovery and business continuity planning
• Periodic review (at least annually)
• System retirement plan
Once a system is selected, a strategic approach should be developed for testing and validating it:
• Is there an overall company approach?
• What rationale will be used to demonstrate the system is fully tested and validated to meet FDA compliance?
• Who will be involved in the validation process?
• How will the documentation and approvals be completed?
• How will training be incorporated into the project?
• How will organizational change management be handled?
GAMP 5 Software Categorization
All categories of systems must undergo testing in order to be properly validated in accordance with FDA requirements. The type or category of system you select determines the level of validation, and therefore the amount of testing, that will be required. This determination is based on specific criteria and documented rationale; thus it must start with requirements and system selection.
ISPE’s GAMP 5 guidance defines categories to classify software types:
• Category 1: Operating Systems. Record the information that identifies the operating system software or other component, including the version number.
• Category 2: Instruments and Controllers. Record configuration and calibration information, plus the Installation Qualification (IQ) results: the date installed and any screen shots that will help demonstrate it was installed properly.
• Category 3: Off-the-shelf. Typically the simplest category to validate; there will be few, if any, changes that need to be tested and challenged.
• Category 4: Configured. Requires a moderate increase in validation activity, but this is balanced by increased functionality and specialized features.
• Category 5: Custom. Requires a large increase in testing; these systems meet specific requirements tailored to your business processes and must be adequately challenged.
Risk-Based Approach
A risk-based approach to validation is an industry best practice. FDA does not have adequate staffing to inspect every system in every company it visits, so it expects companies to categorize their regulated systems based on risk. A standard risk approach should be developed for the company and used consistently, based on a carefully thought-out rationale. Each system’s risk profile should include probability, severity and mitigation components.
To understand the impact on the process, clinical trial, or product if a supporting system were to fail: identify risk scenarios, assess the severity of impact, assess the likelihood that each scenario will occur, and assess how detectable it would be if it did occur. Identify ways to mitigate the risk. From these factors, determine the risk priority.
Create a system inventory for inspection, i.e., a prioritized list of the company’s GxP-regulated systems, including:
• The level of risk assigned to each system. Systems used to support clinical trials are likely to be high risk: there is potential for harming a patient or participant, and the value of the clinical trial itself may be at risk should any data become invalidated.
• The approach to validation that will be taken to assure risk is minimized: requirements definition, testing to ensure every requirement is met, and monitoring to ensure the system remains in a state of validation over its life cycle.
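The risk-scoring and inventory-prioritization steps above can be sketched as follows. The 1-5 scales, the RPN-style product, and the system names are illustrative assumptions only; a company must define and justify its own scoring rationale.

```python
# Illustrative sketch of risk-based system prioritization.
# The 1-5 scales and the multiplicative score are assumptions for the
# example, not an FDA-mandated formula.

def risk_priority(probability: int, severity: int, detectability: int) -> int:
    """Risk priority number: higher means riskier.

    probability, severity: 1 (low) to 5 (high).
    detectability: 1 (easily detected) to 5 (hard to detect).
    """
    return probability * severity * detectability

# Hypothetical GxP system inventory entries.
inventory = [
    {"system": "EDC (clinical data capture)", "prob": 3, "sev": 5, "det": 4},
    {"system": "Warehouse temperature monitor", "prob": 2, "sev": 4, "det": 2},
    {"system": "Document management", "prob": 2, "sev": 2, "det": 1},
]

for entry in inventory:
    entry["rpn"] = risk_priority(entry["prob"], entry["sev"], entry["det"])

# Prioritized list for inspection readiness: highest risk first.
prioritized = sorted(inventory, key=lambda e: e["rpn"], reverse=True)
```

As the text notes, a clinical-trial-facing system would typically score high on severity, pushing it to the top of the prioritized inventory.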
System Validation Plan
Having completed a validation strategy, it is necessary to develop a very specific validation plan. You need to understand which SDLC phases and steps are required for the system and how they will be determined and rationalized, who is going to perform specific tasks, what level of testing is required, and what the project management schedule will look like.
The User Requirements Specification (URS) must address all system functionality at a high level and is usually written in business terminology. The users must define and approve the URS. The URS should be the basis for developing the detailed Functional Requirements Specification (FRS) and should be maintained as a current (“living”) document. Care should be taken to include only those requirements that represent functionality that will actually be used, as each one requires specific testing, which can become time-consuming.
The detailed Functional Requirements Specification (FRS) must address all system functionality at a detailed level. The users must define and approve the functional requirements. Every requirement must be unique and testable. The FRS should likewise be maintained as a current (“living”) document, and should include only requirements representing functionality that will actually be used.
A requirement can be defined as “constraints, demands, necessities, needs or parameters that must be met or satisfied, usually within a certain timeframe.”
Requirements must be unique (each with a unique identifier), testable, technically feasible, able to support a business process, complete and exhaustive, well thought out, balanced, clearly understood and the basis of a user commitment.
What the software does is directly perceived by its users, whether human users or integrated software systems. When a user performs some action, the software responds in a particular way; when an external system submits a request of a certain form, it gets a particular response. Therefore, you and the users must agree on the actions they can perform and the responses they should expect.
How the software responds to an agreed-upon request is addressed in the design specification document, which might include screen layouts, database schemas, and descriptions of communication layers. To illustrate the distinction: it is a requirement that a laboratory application allow the user to open a data file for which they have approved access; it is a design issue whether to build a customized file-selection tool or use a platform-standard one.
A functional requirement specifies something the system should do, such as a behavior or function.
• High-level Requirement Example: Allow entry of product information
• Detailed Requirement Example: Allow entry of a product number as an 8-character numeric (plus many more to satisfy the high-level requirement)
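A detailed requirement like the one above is testable precisely because it can be expressed as a concrete check. A minimal sketch (the function name and regex are assumptions for illustration):

```python
import re

def is_valid_product_number(value: str) -> bool:
    """Check the detailed requirement: exactly 8 numeric characters."""
    return bool(re.fullmatch(r"\d{8}", value))
```

A test script for this requirement would then exercise valid entries, entries of the wrong length, and entries containing non-numeric characters.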
These include business rules, transactions, authorization levels, data entry requirements, administrative functions, reporting requirements, archival of historical data, regulatory requirements and certification requirements.
A non-functional requirement specifies how the system should behave; it is a constraint upon the system’s behavior (a quality attribute), a criterion by which the operation of the system is judged rather than a specific behavior. For example: allow users to be authorized to access an application.
These include performance, scalability, capacity, availability, reliability, recoverability, maintainability, security, data integrity, environmental and interoperability.
Each business requirement must be translated into one or more user requirements at a high level, and these in turn must be broken down into functional requirements at a more detailed level. It is always more cost-effective to capture requirements up front rather than later in the project; avoid scope creep, and do not change any requirements without the users’ permission. Know who asked for each requirement and track their name along with it.
There are key requirements that should never be missed: security permissions, error messaging, error logging, system shutdown and system overload handling.
Categorize requirements for ease of review and tracking, e.g., business, technical, environment, quality and service. If you do not think you are going to use a piece of functionality, consider whether to include it, as it will cost time and money to test, validate and maintain. For every requirement, there must be one or more design specification elements. For every requirement, there must be one or more test scripts that, when executed, prove the requirement is met. The requirements, design specifications and test scripts that are linked together should be documented in the Requirements Traceability Matrix (RTM).
There must be one or more design elements for each functional requirement. There must be one or more test elements for each functional requirement. This is one of the most critical documents to have available during an FDA inspection. The requirements definition can be a time-consuming and costly proposition, and usually depends on the complexity of the system functionality, number of stakeholders and number of business processes. Every company must determine how much time and money they want to invest in capturing requirements, as there is a level of risk associated with scaling the effort back.
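The traceability rule above lends itself to a simple coverage check. A minimal RTM sketch follows; the requirement, design and test IDs are hypothetical, and a real RTM would normally live in a validated tool or controlled spreadsheet.

```python
# Minimal sketch of a Requirements Traceability Matrix (RTM) coverage check.
# Each functional requirement must link to at least one design element and
# at least one test script.

rtm = {
    "FRS-001": {"design": ["SDS-010"], "tests": ["TS-101", "TS-102"]},
    "FRS-002": {"design": ["SDS-011"], "tests": []},          # gap: untested
    "FRS-003": {"design": [],          "tests": ["TS-103"]},  # gap: no design
}

def coverage_gaps(matrix: dict) -> list[str]:
    """Return requirement IDs lacking a design element or a test script."""
    return [req for req, links in matrix.items()
            if not links["design"] or not links["tests"]]
```

An inspector reviewing the RTM would flag exactly the requirements this check returns: any requirement without a linked design element or executed test script.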
System Design Specification (SDS)
A detailed System Design Specification (SDS) must address all defined functional requirements and should be maintained as a current (“living”) document. In the case of COTS (commercial off-the-shelf) software, the design specification is replaced with a configuration specification. The users must sign off on the design specifications.
Business Process Re-engineering
When implementing a system, there is often an opportunity to modify one or more current business processes to reflect more efficient and effective ways of operating. Current business processes should be adequately documented. Workshops can be used to engage users and identify opportunities for process improvement. Process changes should be those that are feasible, given the system’s functionality.
System Testing: IQ/OQ/PQ
Testing is one of the most critical steps required before placing a system in production.
Installation Qualification (IQ) should be performed on hardware, operating software and applications. Operational Qualification (OQ) should be performed on any code (unit and integration testing) and Performance Qualification (PQ) should be specific to the way the system will be used and must be executed by the users.
The test cases should be written and executed against documented requirements and/or design specifications. Different types of requirements/specifications are subject to different kinds of testing: IQ verifies the design specification (DS), OQ verifies the FRS and DS, and PQ verifies the URS.
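The phase-to-specification mapping just described can be captured as a simple lookup, which is useful when planning which test protocols must exist before a specification change is approved. This is only a restatement of the text in code form:

```python
# Which specifications each qualification phase verifies, per the mapping
# in the text: IQ -> DS, OQ -> FRS and DS, PQ -> URS.
VERIFIES = {
    "IQ": {"DS"},
    "OQ": {"FRS", "DS"},
    "PQ": {"URS"},
}

def phases_verifying(spec: str) -> list[str]:
    """Return the qualification phases that exercise a given specification."""
    return [phase for phase, specs in VERIFIES.items() if spec in specs]
```

For example, a change to the design specification implies re-executing both IQ and OQ protocols, while a URS change implies re-executing PQ.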
The testing approach should be set forth in a test plan. Develop a detailed test plan, including test scenarios and scripts. Segregation of duties should be followed.
Preparing a Test Protocol (IQ, OQ and PQ)
A Test Protocol must include:
Signature Page: Printed name for tester, reviewer and approver, signature of each person, date and meaning of each signature.
Introduction: Background that includes the reason for testing and the rationale for the chosen approach to testing.
Purpose: Set expectations for what you are trying to prove.
Scope: The included items and the excluded items.
Responsibilities: All levels involved in the effort. Remember to segregate duties for all activities- Prepare/Execute, review, and approve.
System Description: Diagram of the system and its components, and a flow chart of the process(es) involved.
Test Procedure: Steps required to complete testing, test data to be included/excluded, notes on how to test or what to watch for.
Test Results: Pass/fail – it either meets the expected results or does not. Indicate any deviation from expected results
Deviations and Resolutions:
System error – “glitch” that must be fixed,
Script error – must rewrite script and re-test,
Tester error – Executed incorrectly, documented incorrectly. Must re-execute testing and document.
Revision History: Always maintain the history of document versions.
Test Protocol Re-Use: Can be modified for subsequent re-testing or re-validation.
Test Protocol “Cases” and “Scenarios” are used to demonstrate the effectiveness of a system in a real business situation:
A “Test Case” is HOW to be tested:
A set of conditions or variables under which the tester will determine whether an application, software system or one of its features is working as originally intended.
A “Test Scenario” is WHAT to be tested:
Making sure that end-to-end functionality of an application or system under test is working as expected; requires checking of the business flows. It requires client input to make sure the testing reflects how they would actually use the system in a working environment.
Testing should account for both “positive” and “negative” scenarios:
Positive Scenarios test that a system does what it is intended to do and Negative Scenarios test that a system does not do what it is not intended to do.
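A minimal positive/negative scenario pair, using the earlier laboratory file-access example. The user list, function name and error type are assumptions for illustration, not part of any real system:

```python
# Hypothetical access check: only users with approved access may open a file.
AUTHORIZED_USERS = {"analyst1", "qa_reviewer"}

def open_data_file(user: str) -> str:
    """Allow access only to users with approved access."""
    if user not in AUTHORIZED_USERS:
        raise PermissionError(f"{user} is not authorized")
    return "file opened"

# Positive scenario: the system does what it is intended to do.
assert open_data_file("analyst1") == "file opened"

# Negative scenario: the system does not do what it is not intended to do.
try:
    open_data_file("intruder")
    negative_passed = False  # access was wrongly granted
except PermissionError:
    negative_passed = True   # access was correctly refused
```

Both scenarios belong in the protocol: a system that passes only positive scenarios has not been adequately challenged.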
Testing should include boundary and stress testing:
Testing should force the user to exercise the boundaries of functionality. It should demonstrate that maximum use of the system does not negatively affect performance.
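Boundary testing means exercising values at, just inside, and just outside the edges of a specification. A sketch, assuming a hypothetical 2-8 deg C storage specification with inclusive limits:

```python
# Boundary-testing sketch for an assumed 2-8 deg C storage specification.
def in_spec(value: float, low: float = 2.0, high: float = 8.0) -> bool:
    """True when a reading is within the inclusive specification limits."""
    return low <= value <= high

# Boundary cases: the exact limits pass; values just beyond them fail.
boundary_results = {v: in_spec(v) for v in (1.9, 2.0, 8.0, 8.1)}
```

Whether the limits are inclusive or exclusive must come from the specification itself; boundary tests exist precisely to catch off-by-one errors in how those limits were implemented.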
A test protocol for a system used in distribution of clinical trial samples may include the product or material specification, when it was manufactured, the lot and batch numbers, who handled it (demonstrating chain of custody) and the environmental storage conditions.
A test protocol for an alarm system used to monitor temperature/humidity/etc. related to clinical trial sample preparation/storage may include setting the alarm at a certain level, changing the conditions to cause the alarm to go off, the specification for those conditions, the length of the alarm and whether it is repeated and the impact the alarm has on any other processes.
A test protocol for a lab system used in the conduct of testing clinical trial samples may include sample identification, method of analysis, testing instrument(s) involved, environmental conditions, sampling procedure, sample specification and expected range for results.
Test “Scripts” are the specific procedural steps to be executed in support of the test scenario. For the lab system, for example: record the sample number as “NN-NNNNN”; record the specific temperature (deg F) and humidity; record the instrument serial number as “N-NNNNNNNN”; set the instrument to the value “NNN”; run the test; manually verify the results recorded as output; click on the “Analysis” button; and verify the analyzed results are within a specified range, “NNN” to “NNN”.
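The verification steps in such a script can be sketched in code. The ID formats follow the “NN-NNNNN” and “N-NNNNNNNN” placeholder patterns from the text; the numeric result range is an assumption, standing in for the unspecified “NNN to NNN”:

```python
import re

# Sketch of the scripted verification steps for the lab-system example.
SAMPLE_ID = re.compile(r"\d{2}-\d{5}")    # "NN-NNNNN" pattern from the text
SERIAL_ID = re.compile(r"\d-\d{8}")       # "N-NNNNNNNN" pattern from the text
RESULT_RANGE = (100.0, 200.0)             # assumed specification range

def verify_step(sample_id: str, serial: str, result: float) -> bool:
    """Verify the recorded IDs match the expected formats and the analyzed
    result falls within the specified range."""
    if not SAMPLE_ID.fullmatch(sample_id):
        return False
    if not SERIAL_ID.fullmatch(serial):
        return False
    low, high = RESULT_RANGE
    return low <= result <= high
```

In an executed protocol, each of these checks would map to a numbered script step with its own recorded pass/fail result.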
Every deviation from an expected result must be assigned a consecutive number and investigated, and the reason for failure determined (root cause analysis). It should be classified as a tester error, script error or system error. If there is a tester or script error, conduct a re-test to ensure accuracy and completeness. If there is a system error or defect, testing must be stopped until it is resolved, which may require contacting the vendor for correction of code (or correcting it internally if the system is home-grown). The vendor may refer to specific unit and integration testing already completed, but must resolve the issue promptly. If code must be corrected, the entire testing process must be redone: unit and integration testing by the vendor, and installation testing onward by the company. Once the defect has been corrected, the resolution must be recorded in the deviations and resolutions section of the Test Protocol. All defects must be resolved.
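The numbering-and-classification workflow above can be sketched as a simple deviation log. The field names and required actions mirror the three error categories in the text; the data structure itself is an illustrative assumption:

```python
import itertools

# Sketch of the deviation-numbering and classification workflow.
_counter = itertools.count(1)   # consecutive deviation numbers
deviation_log = []

def log_deviation(category: str, description: str) -> dict:
    """Record a deviation with a consecutive number and its required action."""
    actions = {
        "tester error": "re-execute testing and document",
        "script error": "rewrite script and re-test",
        "system error": "stop testing until the defect is resolved",
    }
    entry = {
        "number": next(_counter),
        "category": category,            # must be one of the three categories
        "description": description,
        "required_action": actions[category],
        "resolved": False,               # set True once resolution is recorded
    }
    deviation_log.append(entry)
    return entry
```

At protocol close-out, no entry in the log may remain with `resolved` set to False, matching the rule that all defects must be resolved.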