It has been identified that ten adverse events were not reported in the trial prior to the database lock. What action should be taken to determine the next step?
Get the AE data entered immediately so the database can be locked again.
Evaluate the potential effect of the omission on the validity of the safety and efficacy analysis.
Notify upper management immediately so the monitor can contact the site.
Check the data from all sites again before relocking the database.
When adverse events (AEs) are discovered after a database lock, the appropriate first step is to evaluate the impact of the missing data on the integrity, safety analysis, and regulatory validity of the study results.
According to the GCDMP (Chapter: Data Quality Assurance and Control), any post-lock data discovery requires a root cause assessment and impact analysis before deciding whether to unlock the database. The key question is whether the missing AEs:
Affect primary safety endpoints,
Introduce bias in safety reporting, or
Alter efficacy conclusions.
Based on the assessment, the Data Management and Biostatistics teams determine if unlocking and correction are justified. Simply entering data immediately (A) or repeating checks (D) without analysis may violate data control procedures.
Hence, option B is correct — the first step is to assess the impact on data validity and analysis.
Reference (CCDM-Verified Sources):
SCDM GCDMP, Chapter: Data Quality Assurance and Control, Section 5.5 – Post-Lock Findings and Impact Assessment
ICH E6(R2) GCP, Section 5.1.1 – Quality Management and Risk Assessment
FDA Guidance for Industry: Computerized Systems Used in Clinical Investigations, Section 6.5 – Post-Lock Data Management
During a database audit, it was determined that there were more errors than expected. Who is responsible for assessing the overall impact on the analysis of the data?
Data Manager
Statistician
Quality Auditor
Investigator
The Statistician is responsible for assessing the overall impact of data errors on the analysis and study results.
According to the Good Clinical Data Management Practices (GCDMP, Chapter: Data Quality Assurance and Control) and ICH E9 (Statistical Principles for Clinical Trials), while the Data Manager ensures data accuracy and completeness through cleaning and validation, the Statistician determines whether the observed data discrepancies are statistically significant or if they may affect the validity, power, or interpretability of the study’s outcomes.
The Quality Auditor (C) identifies and reports issues but does not quantify analytical impact. The Investigator (D) is responsible for clinical oversight, not statistical assessment. Thus, after a database audit, the Statistician (B) performs a formal evaluation to determine whether the magnitude and nature of the errors could bias results or require reanalysis.
Reference (CCDM-Verified Sources):
SCDM Good Clinical Data Management Practices (GCDMP), Chapter: Data Quality Assurance and Control, Section 7.3 – Data Audit and Impact Assessment
ICH E9 – Statistical Principles for Clinical Trials, Section 3.2 – Data Quality and Analysis Impact Assessment
FDA Guidance for Industry: Computerized Systems Used in Clinical Investigations – Data Validation and Analysis Review
A study is collecting pain levels three times a day. Which is the best way to collect the data?
Using paper pain diary cards completed by study subjects
Sites calling patients daily and administering a pain questionnaire
Study subjects calling into an IVRS three times a day to enter pain levels
Using ePRO with reminders for data collection at each time point
The optimal method for collecting frequent patient-reported pain data is through electronic Patient-Reported Outcomes (ePRO) with built-in reminder functionality.
According to the GCDMP (Chapter: Electronic Data Capture Systems), ePRO systems provide a validated, real-time, and user-friendly interface for subjects to record time-sensitive data accurately. The use of automated reminders ensures compliance with protocol-specified data collection times, improving data completeness and accuracy.
Paper diaries (option A) are prone to recall bias and backfilling, while daily site calls (option B) are resource-intensive and introduce human error. IVRS systems (option C) are acceptable but less efficient and user-friendly than modern ePRO applications, which can integrate timestamp validation, compliance monitoring, and real-time alerts.
ePRO systems also comply with FDA 21 CFR Part 11 and ICH E6 (R2) for audit trails, authentication, and validation, making them the preferred solution for repeated PRO data collection.
Reference (CCDM-Verified Sources):
SCDM Good Clinical Data Management Practices (GCDMP), Chapter: Electronic Data Capture (EDC) Systems, Section 6.1 – Use of ePRO for Repeated Measures
FDA Guidance for Industry: Electronic Source Data in Clinical Investigations, Section 5 – ePRO Compliance and Validation
ICH E6 (R2) GCP, Section 5.5.3 – Electronic Data Systems and Recordkeeping
Which Clinical Study Report section would be most useful for a Data Manager to review?
Clinical narratives of adverse events
Enumeration and explanation of data errors
Description of statistical analysis methods
Rationale for the study design
The section of the Clinical Study Report (CSR) that is most useful for a Data Manager is the one that includes the enumeration and explanation of data errors. This section provides a summary of the data quality control findings, including error rates, missing data summaries, and any issues identified during data review, validation, or database lock.
According to the GCDMP (Chapter: Data Quality Assurance and Control), post-study reviews of data errors and quality findings are essential for evaluating process performance, identifying recurring issues, and informing continuous improvement in future studies.
Other sections, such as clinical narratives (A) or statistical methods (C), are outside the core scope of data management responsibilities. The data error enumeration section directly reflects the quality and integrity of the data management process and is therefore the most relevant for review.
Reference (CCDM-Verified Sources):
SCDM GCDMP, Chapter: Data Quality Assurance and Control, Section 6.4 – Quality Reporting and Error Analysis
ICH E3 – Structure and Content of Clinical Study Reports, Section 14.3 – Data Quality Evaluation
Which metrics report listed below would best help identify trends in the clinical data?
Percent of data/visits cleaned
Last patient/last visit date to data lock date
Number of subjects screened/enrolled
Query frequency counts per data element
Query frequency counts per data element (Option D) are the best metric for identifying data trends and potential systemic data issues in clinical trials.
According to the Good Clinical Data Management Practices (GCDMP, Chapter: Data Quality Assurance and Control), trend analysis involves identifying recurring data issues across subjects, sites, or variables to detect training gaps, protocol misinterpretation, or CRF design flaws. A high number of queries generated for specific fields (e.g., visit date, lab values, or dosing information) may indicate systemic problems such as unclear CRF instructions or site-level misunderstandings.
While metrics such as percent of data cleaned (A) and time to database lock (B) reflect overall progress and efficiency, they do not identify specific data pattern issues. The number of subjects screened/enrolled (C) pertains to recruitment rather than data quality.
Therefore, query frequency per data element provides actionable insights for quality improvement, process refinement, and early identification of potential risks.
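As an illustration only, the minimal sketch below (assuming a hypothetical query-log export with site_id, form, and field columns, which are not a standard format) tallies query counts per data element so that fields generating disproportionate queries stand out.

import pandas as pd

# Hypothetical query-log export; the column names are assumptions for this sketch.
queries = pd.DataFrame({
    "site_id": ["101", "101", "102", "103", "103", "103"],
    "form":    ["VS",  "VS",  "AE",  "VS",  "VS",  "CM"],
    "field":   ["SYSBP", "SYSBP", "AESTDTC", "SYSBP", "DIABP", "CMTRT"],
})

# Query frequency per data element (form + field), highest first.
freq = (queries.groupby(["form", "field"])
               .size()
               .rename("query_count")
               .sort_values(ascending=False)
               .reset_index())

print(freq)
# A field such as VS.SYSBP appearing far more often than others may point to a
# CRF-design or site-training issue worth investigating.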
Reference (CCDM-Verified Sources):
SCDM Good Clinical Data Management Practices (GCDMP), Chapter: Data Quality Assurance and Control, Section 6.3 – Metrics and Trend Analysis
ICH E6 (R2) Good Clinical Practice, Section 5.18.4 – Risk-Based Quality Review and Data Trends
FDA Guidance for Industry: Oversight of Clinical Investigations – Risk-Based Monitoring, Section 6 – Data Metrics and Trend Evaluation
The serious adverse event (SAE) database should be reconciled against the clinical trial database prior to which occasion?
Case report form data entry
Expedited safety reporting
Database quality audit
Database closure or locking
SAE reconciliation must be completed before database lock or closure to ensure all safety data are consistent between the clinical database and the pharmacovigilance (safety) database.
According to the GCDMP (Chapter: Safety Data Handling and Reconciliation), SAE reconciliation involves verifying that all adverse events reported in the clinical trial database are also captured and accurately recorded in the safety system (and vice versa). This is essential to confirm that no SAE is missing, misclassified, or inconsistently dated or coded between the two systems.
Performing this reconciliation before database lock ensures that any discrepancies are corrected and that both databases reflect consistent, verified information for regulatory submission. Conducting this after closure (or only at audit time) would risk data inconsistencies in the final submission datasets.
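A minimal sketch of one reconciliation step is shown below, assuming hypothetical extracts from the clinical database and the safety database keyed on subject ID and event term; a real reconciliation also compares dates, seriousness criteria, and coded terms.

import pandas as pd

# Hypothetical extracts; the key fields and layouts are assumptions for this sketch.
clinical = pd.DataFrame({
    "subject_id": ["101-001", "101-002", "102-001"],
    "event_term": ["Myocardial infarction", "Sepsis", "Fall"],
})
safety = pd.DataFrame({
    "subject_id": ["101-001", "102-001", "102-001"],
    "event_term": ["Myocardial infarction", "Fall", "Pulmonary embolism"],
})

# Outer merge flags SAEs present in only one of the two systems.
recon = clinical.merge(safety, on=["subject_id", "event_term"],
                       how="outer", indicator=True)
discrepancies = recon[recon["_merge"] != "both"]
print(discrepancies)
# "left_only" rows exist only in the clinical database; "right_only" rows exist
# only in the safety database. Both kinds must be resolved before lock.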
Reference (CCDM-Verified Sources):
SCDM Good Clinical Data Management Practices (GCDMP), Chapter: SAE Reconciliation, Section 6.1 – Timing and Procedures for Reconciliation
ICH E2A/E2F – Clinical Safety Data Management: Definitions and Standards
FDA Guidance for Industry: E2A – Clinical Safety Data Management: Processing Standards for Safety Reports
For a study, body mass index is calculated from weight and height. Which information is needed to document the transformation?
Algorithm and algorithm version associated with the calculated value
Algorithm associated with the calculated value
User ID making the change and reason for change
Algorithm documented in the Data Management Plan
When derived or calculated variables (like Body Mass Index) are created, it is essential to document the algorithm used and its version to ensure full data traceability and reproducibility.
According to the GCDMP (Chapter: Database Design and Derived Data), every derived field must include metadata describing:
The derivation algorithm (e.g., BMI = weight [kg] / height² [m²])
The version of the algorithm (if updates or revisions occur)
Any associated data sources or transformation rules
This ensures consistent calculation across systems, prevents discrepancies during regulatory submissions, and aligns with FDA and CDISC documentation expectations.
Option B lacks version control, which is critical for traceability. Option C describes audit trail data (not derivation metadata), and option D refers to broader documentation, not specific algorithm traceability.
Hence, option A (Algorithm and algorithm version associated with the calculated value) is the correct and compliant answer.
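The sketch below illustrates the idea: the derived value is stored together with an algorithm identifier and version so the transformation remains traceable. The function and metadata field names are illustrative, not a prescribed structure.

# Illustrative derivation of BMI with algorithm and version metadata attached.
BMI_ALGORITHM = "BMI = weight_kg / height_m**2"
BMI_ALGORITHM_VERSION = "1.0"   # hypothetical version label

def derive_bmi(weight_kg: float, height_m: float) -> dict:
    """Return the derived value together with its derivation metadata."""
    bmi = weight_kg / (height_m ** 2)
    return {
        "value": round(bmi, 1),
        "algorithm": BMI_ALGORITHM,
        "algorithm_version": BMI_ALGORITHM_VERSION,
        "source_fields": ["weight_kg", "height_m"],
    }

print(derive_bmi(70.0, 1.75))
# {'value': 22.9, 'algorithm': 'BMI = weight_kg / height_m**2',
#  'algorithm_version': '1.0', 'source_fields': ['weight_kg', 'height_m']}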
Reference (CCDM-Verified Sources):
SCDM GCDMP, Chapter: Derived Data and Algorithms, Section 5.3 – Documentation and Metadata Requirements
ICH E6(R2) GCP, Section 5.5.3 – Derived Data and Validation Traceability
FDA Guidance for Industry: Providing Regulatory Submissions in Electronic Format – Data Definitions (Define.xml)
A study numbers subjects sequentially within each site and does not reuse site numbers. Which information is required when joining data across tables?
Subject number and site number
Subject number
Study number and subject number
Site number
When subjects are numbered sequentially within each site, it means that the subject identification numbers (Subject IDs) restart from 001 at each site. For example, Site 101 may have Subject 001, and Site 102 may also have a Subject 001. In such cases, the subject number alone is not globally unique across the entire study. Therefore, when integrating or joining data across multiple database tables (for example, linking demographic, adverse event, and laboratory data), both the site number and the subject number are required to create a unique key that accurately identifies each record.
According to the Good Clinical Data Management Practices (GCDMP, Chapter on CRF Design and Data Collection), every data record in a clinical trial database must be uniquely and unambiguously identified. This is typically achieved through a composite key, combining identifiers such as site number, subject number, and sometimes study number. The GCDMP specifies that a robust data structure must prevent duplication or mislinking of records across domains or tables.
Furthermore, FDA and CDISC standards (SDTM model) also emphasize the importance of unique subject identifiers (USUBJID), which are derived from concatenating the study ID, site ID, and subject ID. This ensures traceability, integrity, and accuracy of subject-level data during database joins, data exports, and regulatory submissions.
Thus, in the described scenario, since subject numbering restarts at each site, both the site number and subject number are required to uniquely identify and correctly join subject data across different datasets or tables.
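A minimal sketch, assuming hypothetical demographics and adverse event tables: joining on the subject number alone mislinks records, while the composite key of site number and subject number joins them correctly.

import pandas as pd

# Hypothetical tables; subject numbering restarts at 001 within each site.
dm = pd.DataFrame({
    "site": ["101", "102"],
    "subject": ["001", "001"],
    "sex": ["F", "M"],
})
ae = pd.DataFrame({
    "site": ["101", "102"],
    "subject": ["001", "001"],
    "ae_term": ["Headache", "Nausea"],
})

# Joining on subject alone produces a many-to-many mismatch (4 rows).
wrong = dm.merge(ae, on="subject")

# Joining on the composite key (site + subject) links each record correctly (2 rows).
right = dm.merge(ae, on=["site", "subject"])

print(len(wrong), len(right))  # 4 2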
Reference (CCDM-Verified Sources):
SCDM Good Clinical Data Management Practices (GCDMP), Chapter: CRF Design and Data Collection, Section 4.1 – Unique Subject Identification
CDISC SDTM Implementation Guide, Section 5.2 – Subject and Site Identification (Variable: USUBJID)
FDA Guidance for Industry: Computerized Systems Used in Clinical Investigations, Section 6 – Data Integrity and Record Identification
A study team member suggests that data for a small, 50-patient, 2-year study can be entered and cleaned in two weeks before lock. Which are important other considerations?
Processing the data in two weeks after the study is over would save money because the data manager would not be involved until the end
Without the ability to capture the data electronically, the data cannot be checked or used to monitor and manage the study
Processing the data in two weeks after the study is over would save money because the EDC system would only be needed for a month
It would take more than two weeks to get second iteration queries generated and resolved
The most critical consideration is that data cleaning is an iterative process, and completing all necessary steps — including query generation, site resolution, and second-pass validation — cannot realistically be accomplished within two weeks after study close.
According to the Good Clinical Data Management Practices (GCDMP, Chapter: Data Validation and Cleaning), data cleaning must occur continuously throughout the study, not only at the end. End-of-study activities before database lock typically include running final validation checks, resolving outstanding queries, performing reconciliation (e.g., SAEs, labs, coding), and conducting final quality review.
Even in small studies, query turnaround and response cycles from sites take time — typically 2–4 weeks per iteration — making a two-week total cleaning period unrealistic.
Therefore, Option D is correct: it would take more than two weeks to handle second-round (follow-up) queries and confirm final resolutions prior to database lock.
Reference (CCDM-Verified Sources):
SCDM Good Clinical Data Management Practices (GCDMP), Chapter: Data Validation and Cleaning, Section 5.4 – Ongoing vs. End-of-Study Data Cleaning
ICH E6 (R2) Good Clinical Practice, Section 5.5.3 – Data Quality and Timeliness
FDA Guidance for Industry: Computerized Systems Used in Clinical Investigations – Data Management and Cleaning
ePRO data are collected for a study using study devices given to subjects. Which is the most appropriate quality control method for the data?
Programmed edit checks to detect out of range values after submission to the database
Manual review of data by the site study coordinator at the next visit
Data visualizations to look for site-to-site variation
Programmed edit checks to detect out of range values upon data entry
When electronic patient-reported outcome (ePRO) devices are used, data are captured directly by subjects through validated devices and transmitted electronically to the study database. To ensure real-time data quality control, programmed edit checks should be implemented at the point of data entry — that is, as subjects input data into the device.
According to Good Clinical Data Management Practices (GCDMP, Chapter: Data Validation and Cleaning), front-end programmed edit checks are the optimal method to prevent entry of invalid or out-of-range values in ePRO systems. This helps maintain data accuracy at the source, minimizing downstream queries and data cleaning workload.
Options A and B involve post-submission or manual review, which is less efficient and not compliant with the principle of first-pass data validation. Option C (visualization) is a valuable secondary QC method for trends, but not for immediate data validation.
Therefore, option D is correct — programmed edit checks upon data entry ensure immediate validation and higher data integrity.
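A minimal sketch of a front-end range check is shown below, assuming a hypothetical 0–10 pain scale; in a real ePRO device this logic runs inside the validated application at the moment of entry.

def validate_pain_score(raw_value: str):
    """Front-end edit check: reject out-of-range or non-numeric entries at entry time."""
    try:
        value = int(raw_value)
    except ValueError:
        return None, "Please enter a whole number between 0 and 10."
    if not 0 <= value <= 10:
        return None, "Pain score must be between 0 and 10."
    return value, None  # accepted value, no error message

print(validate_pain_score("7"))    # (7, None)
print(validate_pain_score("12"))   # (None, 'Pain score must be between 0 and 10.')
print(validate_pain_score("bad"))  # (None, 'Please enter a whole number between 0 and 10.')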
Reference (CCDM-Verified Sources):
SCDM GCDMP, Chapter: Data Validation and Cleaning, Section 5.3 – Automated Edit Checks and Front-End Validation
ICH E6(R2) GCP, Section 5.5.3 – Computerized System Controls and Validation
FDA Guidance for Industry: Electronic Source Data in Clinical Investigations (2013), Section 6 – Real-Time Data Quality Control
Who has primary responsibility for ensuring accurate completion of the CRF?
Clinical Data Manager
Site Coordinator
Clinical Research Associate
Investigator
The Investigator holds the primary responsibility for ensuring the accuracy, completeness, and timeliness of Case Report Form (CRF) entries. This responsibility is mandated by regulatory requirements under ICH E6(R2) Good Clinical Practice (GCP).
The investigator may delegate CRF completion to a qualified designee (e.g., site coordinator), but the ultimate accountability remains with the investigator. The investigator’s signature (electronic or manual) on the CRF serves as certification that the data accurately reflect the source documents and the patient’s participation.
The GCDMP (Chapter: CRF Design and Data Collection) reinforces this by stating that while data managers ensure design quality and CRAs verify consistency with source data, the investigator is legally responsible for CRF accuracy.
Thus, option D (Investigator) is correct, as it aligns with both GCP and CCDM standards.
Reference (CCDM-Verified Sources):
ICH E6(R2) GCP, Section 4.9 – Records and Reports (Investigator Responsibilities)
SCDM GCDMP, Chapter: CRF Design and Data Collection, Section 5.1 – Investigator’s Role in Data Accuracy
FDA 21 CFR Part 312.62 – Investigator Recordkeeping and Record Retention
Which is the best reason why front-end checks are usually kept minimal, when compared to back-end checks, in a paper-based clinical study?
Data entry staff should be able to enter a value into the database just as it appears in the paper CRF
There is no need to alert the site personnel immediately about a data issue, as the study has happened already
There are approvals required to raise a Data Clarification Form which could take time
Data review can be performed at a later time due to the paper-based studies being smaller in size
In paper-based clinical studies, front-end data checks (those performed during data entry) are intentionally kept minimal to ensure that data are entered exactly as recorded on the paper CRF. This principle ensures data integrity by maintaining fidelity between source and electronic records before any cleaning or edit validation occurs.
The GCDMP (Chapter: Data Validation and Cleaning) explains that data entry operators should input values as written, even if they appear incorrect or inconsistent, because the purpose of front-end checks is not to interpret but to capture data faithfully. The back-end edit checks — performed later by data managers — are designed to identify inconsistencies, out-of-range values, or logical errors that require clarification through queries.
This approach separates data capture from data cleaning, minimizing bias and preserving original investigator input. Hence, option A accurately states the rationale for keeping front-end checks minimal in paper-based studies.
Reference (CCDM-Verified Sources):
SCDM GCDMP, Chapter: Data Validation and Cleaning, Section 4.2 – Data Entry, Edit Checks, and Query Process
ICH E6(R2) GCP, Section 5.5.3 – Data Handling and System Controls
FDA Guidance for Industry: Computerized Systems Used in Clinical Investigations, Section 6.1 – Data Entry and Verification Processes
Which is the best way to identify sites with high subject attrition?
Proportion of patients for which two visit periods have passed without data by site
Number of late visits per site
Proportion of late visits by site
Number of patients for which two visit periods have passed without data
The best method to identify sites with high subject attrition is to calculate the proportion of patients for which two visit periods have passed without data, by site.
According to the GCDMP (Chapter: Data Quality Assurance and Control), subject attrition is an important performance indicator for data completeness and site compliance. Evaluating missing or delayed data across multiple consecutive visit periods allows for early detection of potential dropouts or site-level operational issues.
By assessing this proportion at the site level, the Data Manager can distinguish between random missing data and systematic site underperformance. Counting or proportioning late visits (options B and C) identifies scheduling delays, not attrition. Looking at missing data without site context (option D) fails to identify site-specific patterns, limiting corrective action.
This metric aligns with risk-based monitoring (RBM) practices recommended by ICH E6 (R2) and FDA RBM Guidance, which promote proactive identification of sites at risk of data loss.
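A minimal sketch, assuming a hypothetical per-subject flag indicating whether two consecutive visit periods have passed without data; the proportion is then computed per site. How the flag itself is derived is study-specific.

import pandas as pd

# Hypothetical subject-level summary; column names are assumptions for this sketch.
subjects = pd.DataFrame({
    "site": ["101", "101", "101", "102", "102", "103"],
    "two_periods_missed": [True, False, False, True, True, False],
})

# Proportion of subjects per site with two consecutive visit periods without data.
attrition = (subjects.groupby("site")["two_periods_missed"]
                     .mean()
                     .rename("prop_two_periods_missed")
                     .sort_values(ascending=False))

print(attrition)
# Sites with a high proportion (here site 102) are candidates for follow-up.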
Reference (CCDM-Verified Sources):
SCDM Good Clinical Data Management Practices (GCDMP), Chapter: Data Quality Assurance and Control, Section 5.4 – Site Performance Metrics
ICH E6 (R2) Good Clinical Practice, Section 5.18 – Monitoring and Site Performance Evaluation
FDA Guidance for Industry: Oversight of Clinical Investigations – Risk-Based Monitoring, Section 6 – Site Performance Metrics
In a study, data are key entered by one person after which a second person enters the data without knowledge of or seeing the values entered by the first. The second person is notified during entry if an entered value differs from first entry and the second person's decision is retained as the correct value. Which type of entry is being used?
Blind verification
Manual review
Third-party compare
Single entry
The described process is Blind Verification, also known as double data entry with blind verification. In this method, two independent operators enter the same data. The second operator is blinded to the first entry to avoid bias. When discrepancies arise, the system flags them for review, and the second entry (or an adjudicated value) is retained as the correct one.
According to the GCDMP (Chapter: Data Entry and Data Tracking), blind double data entry is used primarily in paper-based studies to minimize transcription errors and ensure data accuracy.
Single entry (D): Only one operator enters data.
Manual review (B): Involves post-entry checking, not during entry.
Third-party compare (C): Used for reconciling external data sources, not CRF data.
Hence, option A (Blind verification) is the correct and CCDM-defined process.
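A minimal sketch of the comparison step follows: the second entry is checked field by field against the first, a discrepancy is flagged during entry, and the second operator's confirmed value is retained, as described in the question. The data structures are illustrative only.

# First-pass entries keyed by (subject, field); the second operator cannot see them.
first_pass = {("101-001", "SYSBP"): "120", ("101-001", "DIABP"): "80"}

def verify_entry(key, second_value, first_entries):
    """Blind verification: flag a discrepancy if the second entry differs from the first.

    Per the scenario's convention, the second operator's confirmed value is retained."""
    first_value = first_entries.get(key)
    discrepancy = first_value is not None and first_value != second_value
    return {"key": key, "retained_value": second_value, "discrepancy_flagged": discrepancy}

print(verify_entry(("101-001", "SYSBP"), "120", first_pass))  # no discrepancy
print(verify_entry(("101-001", "DIABP"), "88", first_pass))   # discrepancy flagged, "88" retained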
Reference (CCDM-Verified Sources):
SCDM GCDMP, Chapter: Data Entry and Data Tracking, Section 5.1 – Double Data Entry and Verification Methods
ICH E6(R2) GCP, Section 5.5.3 – Data Entry and Verification Controls
FDA Guidance for Industry: Computerized Systems Used in Clinical Investigations, Section 6.2 – Data Accuracy and Verification
An asthma study is taking into account local air quality and receives that data from the national weather bureau. Which information is needed to link research subject data to the air-quality readings?
Location identifier
Location and time identifiers
Location, time and subject identifiers
Location, time, subject and site identifiers
When integrating external environmental data such as air quality readings with clinical study data, it is essential to use location and time identifiers to properly align the environmental data with subject-level data.
According to the Good Clinical Data Management Practices (GCDMP, Chapter: Data Management Planning and Study Start-up), external data sources (like national weather or pollution databases) must be merged using common linkage variables that allow synchronization without breaching subject confidentiality. In this case:
Location identifiers (e.g., city, postal code, or region) align the subject’s study site or residential area with the environmental dataset.
Time identifiers (e.g., date and time of data collection) ensure that the environmental readings correspond to the same period as the subject’s clinical observations.
Including subject identifiers (option C or D) is unnecessary and would pose privacy and data protection risks. Instead, linkage is typically done at the aggregate (site or regional) level, maintaining compliance with HIPAA and GDPR.
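A minimal sketch, assuming hypothetical tables: each visit record carries a site-level location code and a date, which are merged against the air-quality feed on those two keys, without sharing any subject identifiers with the external source.

import pandas as pd

# Hypothetical clinical visit data; the location code is at site/region level.
visits = pd.DataFrame({
    "subject_id": ["101-001", "101-002"],
    "region": ["NE-01", "NE-01"],
    "date": pd.to_datetime(["2024-05-02", "2024-05-03"]),
})

# Hypothetical air-quality readings from the weather bureau, keyed the same way.
air = pd.DataFrame({
    "region": ["NE-01", "NE-01"],
    "date": pd.to_datetime(["2024-05-02", "2024-05-03"]),
    "pm25": [12.4, 30.1],
})

# Link on location + time identifiers only; no subject identifiers leave the study.
linked = visits.merge(air, on=["region", "date"], how="left")
print(linked[["subject_id", "date", "pm25"]])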
Reference (CCDM-Verified Sources):
SCDM Good Clinical Data Management Practices (GCDMP), Chapter: Data Integration and External Data Handling, Section 4.3 – Linking External Data Sources
ICH E6 (R2) GCP, Section 5.5.3 – Data Traceability and External Data Management
FDA Guidance for Industry: Use of Electronic Health Record Data in Clinical Investigations, Section 5.2 – Linking and Integration Principles
Data characterizing the safety profile of a drug are collected to provide information for which of the following?
Survival curves
Efficacy meta-analyses
Product labeling
Quality of life calculations
Safety data collected during a clinical trial are used primarily to support product labeling, ensuring accurate communication of a drug’s risks, contraindications, and adverse reactions to healthcare providers and patients.
According to the GCDMP (Chapter: Safety Data Handling and Reconciliation) and ICH E2A/E2F guidelines, all adverse events (AEs), serious adverse events (SAEs), and laboratory abnormalities are analyzed and summarized to define the safety profile of an investigational product. These data form the basis for regulatory submissions such as the Clinical Study Report (CSR) and product labeling (e.g., prescribing information), as required by the FDA and other regulatory authorities.
While safety data may contribute indirectly to analyses such as survival curves (option A) or quality of life metrics (option D), their primary regulatory function is to inform product labeling and post-marketing surveillance documentation.
Reference (CCDM-Verified Sources):
SCDM Good Clinical Data Management Practices (GCDMP), Chapter: Safety Data Handling and Reconciliation, Section 4.3 – Use of Safety Data in Regulatory Submissions
ICH E2A – Clinical Safety Data Management: Definitions and Standards for Expedited Reporting
FDA Guidance for Industry: Adverse Event Reporting and Labeling Requirements
Which of the following data verification checks would most likely be included in a manual or visual data review step?
Checking an entered value against a valid list of values
Checking adverse event treatments against concomitant medications
Checking mandatory fields for missing values
Checking a value against a reference range
Manual or visual data review is used to identify complex clinical relationships and contextual inconsistencies that cannot be detected by automated edit checks.
According to the GCDMP (Chapter: Data Validation and Cleaning), automated edit checks are ideal for structured validations, such as missing fields (option C), reference ranges (option D), or predefined value lists (option A). However, certain clinical cross-checks — such as verifying adverse event treatments against concomitant medication records — require clinical judgment and contextual understanding.
For example, if an adverse event of "severe headache" was reported but no analgesic appears in the concomitant medication log, the data may warrant manual review and query generation. These context-based checks are best performed by trained data reviewers or medical data managers during manual data review cycles.
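Automated logic can at most assemble a listing for human review; a minimal sketch is below, assuming hypothetical AE and concomitant-medication extracts. The reviewer, not the program, judges whether a reported AE plausibly lacks a corresponding treatment.

import pandas as pd

# Hypothetical extracts; a real cross-check also considers dates and indications.
ae = pd.DataFrame({
    "subject_id": ["101-001", "101-002"],
    "ae_term": ["Severe headache", "Nausea"],
    "treatment_given": ["Yes", "No"],
})
cm = pd.DataFrame({
    "subject_id": ["101-002"],
    "medication": ["Ondansetron"],
})

# Listing for manual review: AEs reported as treated but with no conmed on file.
listing = ae[(ae["treatment_given"] == "Yes") &
             (~ae["subject_id"].isin(cm["subject_id"]))]
print(listing)
# A clinical reviewer decides whether each row warrants a query to the site.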
Reference (CCDM-Verified Sources):
SCDM Good Clinical Data Management Practices (GCDMP), Chapter: Data Validation and Cleaning, Section 6.3 – Manual Review and Clinical Data Consistency Checks
ICH E6 (R2) Good Clinical Practice, Section 5.18.4 – Clinical Data Review Responsibilities
FDA Guidance for Industry: Computerized Systems Used in Clinical Investigations – Data Verification Principles
A Clinical Data Manager reads a protocol for a clinical trial to test the efficacy and safety of a new blood thinner for prevention of secondary cardiac events. The stated endpoint is all-cause mortality at 1 year. Which data element would be required for the efficacy endpoint?
Drug level
Coagulation time
Cause of death
Date of death
The efficacy endpoint of all-cause mortality at one year directly depends on the date of death for each subject, making Option D – Date of death the required data element.
According to the GCDMP (Chapter: Clinical Trial Protocols and Data Planning) and ICH E3/E9 Guidelines, the primary efficacy analysis must be based on time-to-event data, particularly when the endpoint involves mortality or survival. The date of death allows accurate calculation of time from randomization to event, essential for survival analysis (e.g., Kaplan-Meier curves).
While cause of death (C) may be collected for safety or secondary analyses, all-cause mortality specifically includes any death regardless of cause. Drug levels (A) and coagulation times (B) may serve as pharmacodynamic or exploratory endpoints but do not directly measure mortality.
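A minimal sketch of why the date of death is the operative data element: the analysis needs the time from randomization to death (or censoring), computed below from hypothetical dates with illustrative column names.

import pandas as pd

# Hypothetical subject-level dates; column names are illustrative only.
df = pd.DataFrame({
    "subject_id": ["101-001", "101-002"],
    "randomization_date": pd.to_datetime(["2023-01-10", "2023-02-01"]),
    "death_date": pd.to_datetime(["2023-08-15", pd.NaT]),
    "last_contact_date": pd.to_datetime(["2023-08-15", "2024-02-01"]),
})

# Event indicator: death observed; time runs from randomization to death or last contact.
df["event"] = df["death_date"].notna()
end_date = df["death_date"].fillna(df["last_contact_date"])
df["days_to_event_or_censor"] = (end_date - df["randomization_date"]).dt.days

print(df[["subject_id", "event", "days_to_event_or_censor"]])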
Reference (CCDM-Verified Sources):
SCDM Good Clinical Data Management Practices (GCDMP), Chapter: Data Management Planning and Protocol Review, Section 5.4 – Defining Data Required for Endpoints
ICH E9 – Statistical Principles for Clinical Trials, Section 2.3 – Time-to-Event Endpoints
FDA Guidance for Industry: Clinical Trial Endpoints for Drug Development and Approval
When implementing a study utilizing an EDC application, it would be appropriate to use free text fields for which of the following?
Urine sedimentation rate
Adverse event verbatim term
Date of birth
Body Mass Index
In Electronic Data Capture (EDC) systems, free text fields should be used only when a predefined list of acceptable responses cannot accommodate the full variability of input data — most notably for Adverse Event (AE) verbatim terms.
According to the Good Clinical Data Management Practices (GCDMP, Chapter: CRF Design and Data Collection), AE verbatim terms are initially entered as free text by site staff to accurately capture the investigator’s exact medical description of the event. These verbatim terms are later coded using standardized dictionaries such as MedDRA during medical coding, ensuring both flexibility and standardization in reporting.
Conversely, fields such as urine sedimentation rate (A), date of birth (C), and Body Mass Index (D) require structured numeric or date formats to enable validation, range checks, and consistency across datasets. Free text would compromise data integrity, accuracy, and validation efficiency for these structured data elements.
Reference (CCDM-Verified Sources):
SCDM Good Clinical Data Management Practices (GCDMP), Chapter: CRF Design and Data Collection, Section 4.3 – Use of Free Text and Coded Fields
ICH E6 (R2) Good Clinical Practice, Section 5.5.3 – Data Structure and Validation
MedDRA Introductory Guide, Section 2.3 – Verbatim Entry and Coding Requirements
The Scope of Work would answer which of the following information needs?
To look up which visit PK samples are taken
To look up the date of the next clinical monitoring visit for a specific site
To determine the number of database migrations budgeted for a project
To find the name and contact information of a specific clinical data associate
The Scope of Work (SOW) is a contractual document that outlines the specific deliverables, responsibilities, timelines, and budgetary details for a given project between the sponsor and the contract research organization (CRO).
According to the Good Clinical Data Management Practices (GCDMP, Chapter: Project Management and Communication), the SOW defines what work will be performed, how many resources are allocated, and the expected deliverables. This includes detailed information such as:
The number of database builds or migrations,
Timelines for deliverables (e.g., database lock),
Responsibility distribution between sponsor and CRO, and
Budget parameters for defined activities.
Therefore, if a Data Manager needs to determine how many database migrations are budgeted for a project, the SOW is the correct document to reference.
Information such as PK sample scheduling (option A), site monitoring dates (option B), or staff contact details (option D) would be found in operational plans or contact lists, not in the SOW.
Reference (CCDM-Verified Sources):
SCDM Good Clinical Data Management Practices (GCDMP), Chapter: Project Management and Communication, Section 6.2 – Scope of Work Definition and Deliverables
ICH E6 (R2) GCP, Section 5.5.3 – Documentation and Responsibilities for Data Management Tasks
FDA Guidance for Industry: Oversight of Clinical Investigations – Sponsor and CRO Agreements
A study team member wants to let sites enroll patients before the system is ready. Which are important considerations?
Without the ability to capture the data electronically, the data cannot be checked or used to monitor and manage the study
If the study were audited, enrolling subjects prior to having the EDC system ready would become an audit finding
There is no way to identify, report and track adverse events and serious adverse events without the EDC system in place
Starting the study prior to the EDC system being ready will delay processing of milestone-based site payments
Enrolling subjects before the Electronic Data Capture (EDC) system is ready poses major data integrity and compliance risks. The primary issue is that data cannot be accurately captured, validated, or monitored without the system in place.
Per the GCDMP (Chapter: Data Management Planning and Study Start-up), data collection systems must be fully validated, tested, and released before enrollment begins to ensure:
Real-time data entry and quality control
Proper tracking of adverse events (AEs/SAEs)
Audit trails and traceability for regulatory compliance
Option A highlights the most critical consequence — without an operational EDC, data collection and verification processes cannot occur, compromising data quality and study oversight.
While options B, C, and D may be partially true, they are secondary effects. The fundamental consideration is data capture capability and monitoring control, making option A correct.
Reference (CCDM-Verified Sources):
SCDM GCDMP, Chapter: Data Management Planning and Study Start-up, Section 4.2 – EDC Readiness and System Validation
ICH E6(R2) GCP, Section 5.5.3 – Computerized Systems Validation Before Use
FDA Guidance for Industry: Computerized Systems Used in Clinical Investigations, Section 6.1 – System Qualification Prior to Data Entry
Which attribute is NOT a characteristic of a standardized data collection element?
An unambiguous definition for the data element
A strictly enforced requirement for the positioning of each data element on a case report form
A standard set of values used to respond to a data collection question
A unique set of data storage metadata, including a variable name and data type
A standardized data collection element has well-defined metadata, consistent naming conventions, and controlled terminology to ensure uniform data collection and interoperability across studies.
Key attributes, as per GCDMP and CDISC standards, include:
A clear definition of meaning (A)
A controlled set of response values (C)
Metadata specifications like variable names, formats, and data types (D)
However, the physical positioning of a data element on a case report form (B) is a matter of form layout design, not a characteristic of data standardization. While consistent form structure aids usability, it is not part of data standardization or metadata management principles.
Hence, option B is correct — form positioning is not a standardized data element attribute.
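As an illustration, a standardized element can be represented purely by its definition, permissible values, and storage metadata; the hypothetical structure below deliberately has no attribute for where the field sits on the CRF page.

# Hypothetical metadata record for a standardized data collection element.
# Note what is present (definition, codelist, variable name/type, mirroring options
# A, C, and D) and what is absent: there is no CRF page-position attribute (option B).
sex_element = {
    "variable_name": "SEX",
    "data_type": "char",
    "definition": "Sex of the subject as collected on the CRF.",
    "codelist": {"M": "Male", "F": "Female", "U": "Unknown"},
}

def validate_response(element: dict, value: str) -> bool:
    """Check a collected value against the element's controlled terminology."""
    return value in element["codelist"]

print(validate_response(sex_element, "F"))   # True
print(validate_response(sex_element, "X"))   # False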
Reference (CCDM-Verified Sources):
SCDM GCDMP, Chapter: Standards and Data Integration, Section 4.1 – Data Element Standardization
CDISC CDASH Implementation Guide, Section 3.2 – Standardized Data Collection Elements and Metadata
ICH E6(R2) GCP, Section 5.5.3 – Data Handling and Standardization
Which statement is true regarding User Acceptance Testing (UAT) in an EDC application?
System tools in EDC do not remove the need for UAT
Data should not be collected in a production environment until UAT is completed
Every rule should be tested with at least one "pass" and one "fail" scenario
The extent of UAT (i.e., the number of test cases and rules) cannot be risk-based
In Electronic Data Capture (EDC) system validation, User Acceptance Testing (UAT) is a mandatory phase that must be completed before data collection begins in the production environment.
According to the GCDMP (Chapter: Database Design, Validation, and Testing) and FDA 21 CFR Part 11, UAT ensures that the EDC system meets all protocol-specific, functional, and regulatory requirements before it is deployed for live use. The goal is to verify that the system performs exactly as intended by simulating real-world user interactions with test data in a validated test environment.
Data collection prior to UAT completion would violate validation requirements and risk noncompliance with ICH E6 (R2) GCP Section 5.5.3, which mandates that all computerized systems be validated and tested before use.
While options A and C describe correct components of testing strategy, the key regulatory requirement is that UAT must be completed and approved before live data entry begins. Option D is incorrect — risk-based UAT is an accepted modern validation approach under both FDA and GAMP 5 principles.
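The pass/fail principle mentioned in option C can be sketched as below, using a hypothetical edit check and made-up test values: each rule gets at least one scenario expected to pass silently and one expected to raise a query.

# Hypothetical UAT cases for one edit check: "systolic BP must be 60-250 mmHg".
def sysbp_check_fires(value: float) -> bool:
    """Return True if the edit check should raise a query for this value."""
    return not (60 <= value <= 250)

test_cases = [
    {"input": 120, "expect_query": False},  # pass scenario
    {"input": 300, "expect_query": True},   # fail scenario
]

for case in test_cases:
    actual = sysbp_check_fires(case["input"])
    status = "OK" if actual == case["expect_query"] else "MISMATCH"
    print(f"value={case['input']}: query fired={actual}, expected={case['expect_query']} -> {status}")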
Reference (CCDM-Verified Sources):
SCDM Good Clinical Data Management Practices (GCDMP), Chapter: Database Design and Validation, Section 5.3 – User Acceptance Testing
FDA 21 CFR Part 11 – Validation of Electronic Systems (Section 11.10(a))
ICH E6 (R2) GCP, Section 5.5.3 – Validation Before Use in Production Environment
The result set from the query below would be which of the following?
SELECT * FROM patient WHERE medical_record_number > 9000
Longer than the patient table
Shorter or of equal length than the patient table
Narrower than the patient table
Wider than the patient table
In Structured Query Language (SQL), the WHERE clause is used to filter records based on specified criteria. The query retrieves all columns from the patient table (SELECT *) but only those rows where the medical_record_number value is greater than 9000.
This means:
The number of columns (fields) remains the same as the original table.
The number of rows (records) will be equal to or less than the number of rows in the patient table, depending on how many patients meet the filter condition.
Hence, the result set can only be shorter than or equal in length to the original table. It cannot be longer, wider, or narrower, since no rows or columns are added or removed.
Therefore, option B — “Shorter or of equal length than the patient table” — is correct.
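The same behavior can be seen with a small filtered table; the sketch below is a pandas analogue of the SQL statement (not part of the question), using made-up data, where the column count is unchanged and the row count can only shrink or stay the same.

import pandas as pd

# Hypothetical patient table.
patient = pd.DataFrame({
    "medical_record_number": [8500, 9001, 9500],
    "name": ["A", "B", "C"],
})

# Analogue of: SELECT * FROM patient WHERE medical_record_number > 9000
result = patient[patient["medical_record_number"] > 9000]

print(patient.shape)  # (3, 2) -- original rows and columns
print(result.shape)   # (2, 2) -- same columns, fewer (or equal) rows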
Reference (CCDM-Verified Sources):
SCDM GCDMP, Chapter: Database Design and Build, Section 5.2 – Relational Database Queries and Filtering Logic
ICH E6(R2) GCP, Section 5.5.3 – Data Retrieval, Filtering, and Storage Principles
FDA Guidance for Industry: Computerized Systems Used in Clinical Investigations, Section 6.4 – Query Logic and Record Subsetting
Which is the most important reason for why a data manager would review data before a monitor reviews it?
Data managers write the Data Management Plan that specifies the data cleaning workflow.
Data can be viewed and discrepancies highlighted prior to a monitor's review.
Data managers have access to programming tools to identify discrepancies.
The GCDMP recommends that data managers review data prior to a monitor's review.
The primary reason data managers review data before a monitor’s review is to identify and flag discrepancies or inconsistencies so that site monitors can focus their efforts more efficiently during on-site or remote source data verification (SDV).
According to the Good Clinical Data Management Practices (GCDMP, Chapter on Data Validation and Cleaning), proactive data review by data management staff ensures data completeness and accuracy by identifying missing, inconsistent, or out-of-range values. This pre-review helps streamline the monitoring process, reduces the volume of open queries, and enhances data quality.
Option A is true but not the main reason for pre-monitor review. Option C highlights a capability rather than a rationale. Option D is partially correct, but the GCDMP emphasizes process purpose, not prescriptive order. Thus, option B correctly captures the practical and process-oriented reason for early data review — to ensure data are ready and accurate for the monitor’s review phase.
Reference (CCDM-Verified Sources):
SCDM GCDMP, Chapter: Data Validation and Cleaning, Section 5.3 – Data Review Timing and Purpose
ICH E6(R2) GCP, Section 5.18 – Monitoring and Data Verification Requirements
Which of the following actions is particularly important in merging data from different trials?
Use of a common software platform
Enrollment of investigative sites with similar patient populations
Exclusion of studies that use a cross-over design
Use of a common adverse event dictionary
When merging data from different clinical trials, the use of a common adverse event (AE) dictionary (such as MedDRA) is essential to ensure consistency and comparability across datasets; a common drug dictionary (such as WHO Drug) plays the analogous role for medications.
According to the GCDMP (Chapter: Standards and Data Mapping) and the CDISC SDTM Implementation Guide, data integration across studies requires standardized terminology for adverse events, medications, and clinical outcomes. Using the same AE dictionary ensures that similar terms are coded consistently, allowing accurate cross-study analysis, pooled summaries, and safety reporting.
A shared software platform (option A) is not necessary if data are mapped to standard formats (e.g., CDISC SDTM). Patient population similarity (option B) affects interpretation but not technical data merging. Study design differences (option C) may influence statistical analysis but not data integration mechanics.
Therefore, Option D – Use of a common adverse event dictionary – is the correct and most critical action for consistent multi-study data integration.
Reference (CCDM-Verified Sources):
SCDM Good Clinical Data Management Practices (GCDMP), Chapter: Standards and Data Mapping, Section 5.1 – Use of Standardized Coding Dictionaries
CDISC SDTM Implementation Guide, Section 4.3 – Controlled Terminology and Cross-Study Integration
ICH E3 and E2B – Clinical Data Standards and Safety Coding Requirements
A protocol amendment adds three data elements to the vital signs screen and two additional data-collection time points. Which is best practice for handling changes to the form completion guidelines?
Update the guidelines and notify sites of changes prior to implementing the change
Update the guidelines and post the new version on the trial portal
Rely on the revised CRF to enforce the changes without updating guidelines or notifying sites
Notify sites of the change without a guideline update
The best practice when implementing a protocol amendment that affects CRF content or data collection timing is to update the eCRF completion guidelines and notify sites before implementing the change.
According to the GCDMP (Chapter: CRF Design and Data Collection), the eCRF Completion Guidelines (eCRF CG) are an essential study tool that instructs site personnel on accurate and consistent data entry. When new data elements or collection time points are added, the guidelines must be revised, version-controlled, and communicated to all users prior to implementation to ensure sites collect and enter data correctly.
Simply relying on the revised CRF (option C) or updating the document without notification (option B) violates communication and training standards. Likewise, notifying sites without updating the documentation (option D) leaves insufficient reference material for data entry compliance.
Reference (CCDM-Verified Sources):
SCDM Good Clinical Data Management Practices (GCDMP), Chapter: CRF Design and Data Collection, Section 5.5 – Managing CRF Revisions and Site Communication
ICH E6 (R2) GCP, Section 5.18.4 – Communication of Protocol Amendments and Documentation Updates
FDA Guidance for Industry: Electronic Source Data in Clinical Investigations, Section 4.3 – Site Communication and Documentation Management
If a data manager generated no additional manual queries on data in an EDC system and the data were deemed clean, why could the data appear to be not clean during the next review?
The study coordinator can change the data due to re-review of the source.
The CRA can change the data during a quality review of source to database.
The medical monitor can override safety information entered in the system.
The data manager may have accidentally changed the data.
In an Electronic Data Capture (EDC) system, even after a data manager completes all manual queries and marks data as "clean," the data may later appear unclean if the site (study coordinator) makes subsequent updates in the system after re-reviewing the source documents.
According to the Good Clinical Data Management Practices (GCDMP, Chapter: Electronic Data Capture Systems), site users maintain the authority to modify data entries as long as the system remains open for data entry. The EDC system audit trail captures such changes, which can automatically invalidate prior data reviews, triggering new discrepancies or changing system edit-check statuses.
This situation commonly occurs when the site identifies corrections in the source (e.g., wrong date or lab result) and updates the EDC form accordingly. These post-cleaning changes require additional review cycles to ensure the database reflects accurate and verified information before final lock.
Options B, C, and D are incorrect — CRAs and medical monitors cannot directly change EDC data; they can only raise queries or request updates.
Reference (CCDM-Verified Sources):
SCDM Good Clinical Data Management Practices (GCDMP), Chapter: Electronic Data Capture Systems, Section 6.3 – Post-Cleaning Data Changes and Audit Trails
ICH E6 (R2) GCP, Section 5.5.3 – Data Integrity and Change Control
FDA 21 CFR Part 11 – Electronic Records: Change Documentation Requirements
QA is conducting an audit on a study for ophthalmology which is ready for lock. Inconsistencies are found between the database and the source. Of the identified fields containing potential data errors, which fields are considered critical for this particular study?
Subject Identifier
Concomitant Medications
Weight
Medical History
In an ophthalmology clinical study, data criticality is determined by how directly a data element affects safety evaluation, efficacy assessment, and regulatory decision-making. According to the Good Clinical Data Management Practices (GCDMP, Chapter on Data Validation and Cleaning), critical data fields are those that:
Have a direct impact on the primary and secondary endpoints, or
Are essential for safety interpretation and adverse event causality assessment.
Among the listed options, Concomitant Medications (Option B) are considered critical data for ophthalmology studies. This is because many ocular treatments and investigational products can interact with systemic or topical medications, potentially affecting ocular response, intraocular pressure, corneal healing, or visual function outcomes. Any inconsistency in concomitant medication data could directly influence safety conclusions or efficacy interpretations.
Other options, while important, are less critical for this study type:
Subject Identifier (A) is essential for data traceability and audit purposes but is not directly related to safety or efficacy outcomes.
Weight (C) may be relevant in dose-dependent drug trials but is rarely a pivotal variable in ophthalmology, where local administration (eye drops, intraocular injections) is common.
Medical History (D) provides contextual background but does not have the same immediate impact on endpoint analysis as current concomitant treatments that can confound the therapeutic effect or cause ocular adverse events.
Per GCDMP and ICH E6 (R2) GCP guidelines, data validation plans must define critical data fields during study setup, reflecting therapeutic area–specific priorities. For ophthalmology, concomitant medications, ocular assessments (visual acuity, intraocular pressure, retinal thickness, etc.), and adverse events are typically designated as critical fields requiring heightened validation, source verification, and reconciliation accuracy before database lock.
Thus, when QA identifies discrepancies between the CRF and source, the Concomitant Medications field (Option B) is the most critical to address immediately to ensure clinical and regulatory data integrity.
Reference (CCDM-Verified Sources):
Society for Clinical Data Management (SCDM), Good Clinical Data Management Practices (GCDMP), Chapter: Data Validation and Cleaning, Section 6.4 – Critical Data Fields and Data Validation Prioritization
ICH E6 (R2) Good Clinical Practice, Section 5.18 – Monitoring and Source Data Verification
FDA Guidance for Industry: Oversight of Clinical Investigations — A Risk-Based Approach to Monitoring, Section 5.3 – Identification of Critical Data and Processes
SCDM GCDMP Chapter: Data Quality Assurance and Control – Therapeutic Area–Specific Data Criticality Examples (Ophthalmology Studies)
What action should be taken regarding the clinical database when MedDRA releases a new version of its dictionary?
Evaluate the extent and impact of the changes.
Continue using the existing version to code.
Upgrade the version immediately and recode.
Identify an alternative dictionary.
When a new version of MedDRA (Medical Dictionary for Regulatory Activities) is released, the correct action is to evaluate the extent and impact of the changes before implementation.
According to the GCDMP (Chapter: Medical Coding and Dictionaries), MedDRA updates are published twice yearly (March and September). Each release may introduce new terms, modify hierarchies, or retire old ones. Prior to adopting a new version, the Data Manager and Medical Coder must:
Assess the number and type of term changes,
Determine the potential effect on ongoing coding consistency, and
Decide whether migration to the new version is warranted mid-study or deferred until database lock.
Immediate recoding (option C) without evaluation may cause inconsistencies and require additional validation. Continuing with the existing version (option B) may be acceptable short-term but must be justified. Using an alternative dictionary (option D) is noncompliant, as MedDRA is the regulatory standard for safety reporting.
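One piece of the impact assessment can be sketched as below, assuming a hypothetical extract of currently coded preferred terms and a term list from the new release; real assessments use the MSSO version reports and the full hierarchy. Terms no longer present indicate records that would need review if the study migrates.

import pandas as pd

# Hypothetical inputs; term values and layouts are assumptions for this sketch.
coded = pd.DataFrame({
    "subject_id": ["101-001", "101-002", "102-001"],
    "pt_current": ["Headache", "Drug hypersensitivity", "Old retired term"],
})
new_version_pts = {"Headache", "Drug hypersensitivity", "Some new term"}

# Records whose current preferred term is absent from the new release.
affected = coded[~coded["pt_current"].isin(new_version_pts)]
print(f"{len(affected)} of {len(coded)} coded records would need review on migration")
print(affected)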
Reference (CCDM-Verified Sources):
SCDM Good Clinical Data Management Practices (GCDMP), Chapter: Medical Coding and Dictionaries, Section 6.3 – Version Control and Impact Assessment
MedDRA Term Selection: Points to Consider (MSSO, Latest Version), Section 3 – Versioning and Maintenance
ICH E2B(R3) – Clinical Safety Data Management: Data Elements for Transmission of Individual Case Safety Reports
At a cross-functional study team meeting, a statistician suggests collecting blood gases electronically through the existing continuous hemodynamic monitoring system at sites rather than having a person record the values every five minutes during the study procedure. Assuming that sending, receiving, and integrating these data are possible, what is the best response?
Manual recording is preferred because healthcare devices are not validated to 21 CFR Part 11 standards
Manual recording is preferred because the sites may forget to turn on the machine and lose data
Electronic acquisition is preferable because more data points can be acquired
Electronic acquisition is preferable because the chance for human error is removed
Assuming the data transfer, integration, and validation processes are properly controlled and compliant, electronic acquisition of clinical data from medical devices is preferred because it allows more frequent and accurate data collection, leading to higher data resolution and integrity.
Per the GCDMP (Chapter: Technology and Data Integration), automated data collection minimizes manual transcription and reduces latency in data capture, ensuring both efficiency and completeness. While manual processes introduce human transcription errors and limit frequency, continuous electronic data capture can record thousands of accurate, time-stamped measurements, improving the study’s analytical power.
However, option D slightly overstates the case — human error is reduced, not entirely eliminated, since setup, calibration, and integration still involve human oversight. Therefore, option C is the best and most precise response, emphasizing the advantage of more robust and complete data capture.
Reference (CCDM-Verified Sources):
SCDM GCDMP, Chapter: Technology and Data Integration, Section 5.4 – Automated Data Acquisition and Validation
ICH E6(R2) GCP, Section 5.5.3 – Validation of Computerized Systems and Electronic Data Sources
FDA Guidance for Industry: Computerized Systems Used in Clinical Investigations, Section 6.3 – Direct Data Capture from Instruments and Devices
A Data Manager receives an audit finding of missing or undocumented training for two database developers according to the organization's training SOP and matrix. Which is the best response to the audit finding?
Identify the root cause and improve the process to prevent it
Remove the training items from the training matrix
Reprimand the person responsible for maintaining training documentation
Send the two developers to the required training
When an audit identifies missing or undocumented training, the most appropriate and compliant response is to identify the root cause of the issue and implement corrective and preventive actions (CAPA) to ensure that similar findings do not recur.
According to Good Clinical Data Management Practices (GCDMP, Chapter: Quality Management and Auditing), effective quality systems require root cause analysis (RCA) for all audit findings. The process involves:
Investigating why the documentation gap occurred (e.g., poor tracking, outdated SOP, or lack of oversight).
Correcting the immediate issue (e.g., ensuring the developers complete or document training).
Updating processes, training systems, or oversight mechanisms to prevent recurrence.
While sending the two developers to training (D) addresses the symptom, it does not resolve the systemic issue identified by the audit. Options B and C are non-compliant and do not address quality system improvement.
Therefore, option A (Identify the root cause and improve the process) is the best and CCDM-compliant response.
Reference (CCDM-Verified Sources):
SCDM GCDMP, Chapter: Quality Management and Auditing, Section 6.2 – Corrective and Preventive Actions (CAPA)
ICH E6(R2) GCP, Section 5.1.1 – Quality Management and Continuous Process Improvement
FDA 21 CFR Part 820.100 – Corrective and Preventive Action (CAPA) Requirements
Which is the MOST appropriate flow for EDC set-up and implementation?
CRF “wire-frames” created, CRFs reviewed, CRFs printed, CRFs distributed to sites
Protocol finalized, Database created, Edit Checks created, Database tested, Sites trained
Database created, Subjects enrolled, Database tested, Sites trained, Database released
Database created, Database tested, Sites trained, Protocol finalized, Database released
The correct and compliant sequence for EDC system setup and implementation begins only after the study protocol is finalized, as all case report form (CRF) designs, database structures, and validation rules derive directly from the finalized protocol.
According to the GCDMP (Chapter: EDC Systems Implementation), the proper order is:
Protocol finalized – defines endpoints and data requirements.
Database created – built according to the protocol and CRFs.
Edit checks created – programmed to validate data entry accuracy.
Database tested (UAT) – ensures functionality, integrity, and compliance.
Sites trained and system released – only then can data entry begin.
Option B follows this logical and regulatory-compliant sequence. Other options (A, C, D) are either paper-based workflows or violate GCP-compliant timelines (e.g., enrolling subjects before database validation).
Reference (CCDM-Verified Sources):
SCDM GCDMP, Chapter: Electronic Data Capture (EDC) Systems, Section 5.2 – System Setup and Implementation Flow
ICH E6(R2) GCP, Section 5.5.3 – Computerized Systems Validation and User Training Before Use
FDA 21 CFR Part 11 – Validation and System Release Requirements
What is the purpose of providing the central laboratory vendor with a complete listing of subjects' demographic data?
To provide for an independent reconciliation of the patient and remote databases after database lock
To assure that all subjects have lab data for valid visits
To provide for an independent reconciliation of the patient and remote databases during study conduct
To assure that lab data for screening failure subjects have not been included in the lab data transmission
Providing the central laboratory vendor with a complete subject demographic listing allows ongoing reconciliation between the sponsor's EDC system and the vendor's laboratory database during study conduct.
The GCDMP (Chapter: External Data Transfers and Integration) emphasizes that subject reconciliation ensures all laboratory data correspond to valid enrolled subjects and visits. Regular reconciliation throughout the study prevents data mismatches, missing results, and misassigned lab reports.
This proactive measure supports timely query resolution and data integrity across systems. Waiting until after database lock (as in option A) would delay corrections and risk inconsistencies. Options B and D describe secondary benefits, not the primary purpose of ongoing subject-level reconciliation.
Thus, option C is correct.
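For illustration, a minimal reconciliation sketch in Python is shown below. It assumes simple CSV exports with hypothetical file names and column headers (SUBJID, VISIT); the actual keys and layouts would be defined in the data transfer agreement.

```python
# Minimal sketch of subject-level reconciliation between an EDC demographics
# export and a central-lab transfer file. File names and column headers
# ("SUBJID", "VISIT") are hypothetical; real transfer specs vary by vendor.
import csv

def load_keys(path, id_col="SUBJID", visit_col="VISIT"):
    """Return the set of (subject, visit) keys present in a CSV export."""
    with open(path, newline="") as f:
        return {(row[id_col], row[visit_col]) for row in csv.DictReader(f)}

def reconcile(edc_path, lab_path):
    edc_keys = load_keys(edc_path)   # valid enrolled subjects/visits per EDC
    lab_keys = load_keys(lab_path)   # records received from the lab vendor

    # Lab results with no matching enrolled subject/visit (e.g., typos,
    # screen failures, mis-assigned accessions) -> query the vendor.
    orphan_lab = lab_keys - edc_keys
    # Enrolled visits with no lab data received -> query the site or vendor.
    missing_lab = edc_keys - lab_keys
    return orphan_lab, missing_lab

if __name__ == "__main__":
    orphans, missing = reconcile("edc_demographics.csv", "lab_transfer.csv")
    print(f"{len(orphans)} lab records without a matching enrolled subject/visit")
    print(f"{len(missing)} enrolled visits without lab data")
```

Running this reconciliation at each scheduled transfer, rather than once after lock, is what allows mismatches to be queried while the sites and vendor can still correct them.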
Reference (CCDM-Verified Sources):
SCDM GCDMP, Chapter: External Data Transfers, Section 4.4 – Reconciliation and Vendor Communication
ICH E6(R2) GCP, Section 5.5.3 – Data Management, Reconciliation, and Integration
FDA Guidance for Industry: Computerized Systems Used in Clinical Investigations, Section 6.3 – External Data Management
In a cross-functional team meeting, a monitor mentions performing source data verification (SDV) on daily diary data entered by patients on mobile devices. Which of the following is the best response?
All diary data should be source data verified
The diary data should not be source data verified
Diary data to be source data verified should be selected using a risk-based approach
Diary data to be source data verified should be randomly selected
The best response is that diary data to be source data verified should be selected using a risk-based approach.
According to the GCDMP (Chapter: Data Quality Assurance and Control) and the FDA Guidance on Risk-Based Monitoring (RBM), not all data require full SDV. Electronic patient-reported outcome (ePRO) or mobile diary data are typically direct electronic source data (eSource) captured at the time of entry, which already ensures authenticity and traceability.
A risk-based SDV approach focuses verification efforts on data critical to subject safety and primary efficacy endpoints, as defined in the study's risk assessment plan or monitoring plan. Random or full verification of low-risk data (such as diary compliance metrics) adds unnecessary effort and cost.
Thus, option C aligns with current regulatory expectations and data management best practices.
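The sketch below is an illustration only of how a risk-based selection of diary records for SDV might be expressed; the criticality flags and sampling rate are hypothetical, and the real criteria come from the study's risk assessment and monitoring plan.

```python
# Illustrative sketch of risk-based selection of diary records for SDV.
# CRITICAL_ITEMS and the spot-check rate are hypothetical examples.
import random

CRITICAL_ITEMS = {"PAIN_SCORE"}   # hypothetical primary-endpoint item: always verify
LOW_RISK_SAMPLE_RATE = 0.05       # small spot-check of low-risk items

def select_for_sdv(records, rng=random.Random(2024)):
    """Return the subset of diary records targeted for SDV."""
    selected = []
    for rec in records:
        if rec["item"] in CRITICAL_ITEMS:
            selected.append(rec)              # risk-based: always verify critical data
        elif rng.random() < LOW_RISK_SAMPLE_RATE:
            selected.append(rec)              # occasional spot-check of low-risk data
    return selected

records = [
    {"subject": "1001", "item": "PAIN_SCORE", "value": 6},
    {"subject": "1001", "item": "DIARY_COMPLIANCE", "value": "Y"},
]
print(select_for_sdv(records))
```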
Reference (CCDM-Verified Sources):
SCDM Good Clinical Data Management Practices (GCDMP), Chapter: Data Quality Assurance and Control, Section 7.3 – Risk-Based Monitoring and SDV
ICH E6 (R2) Good Clinical Practice, Section 5.18 – Risk-Based Quality Management
FDA Guidance for Industry: Oversight of Clinical Investigations — A Risk-Based Approach to Monitoring (2013)
In a study conducted using paper CRFs, a discrepancy is discovered in a CRF to database QC audit. What is the reason why this discrepancy would be considered an audit finding?
Discrepancy not explained by the protocol
Discrepancy not explained by the CRF completion guidelines
Discrepancy not explained by the data handling conventions
Discrepancy not explained by the data quality control audit plan
In a CRF-to-database quality control (QC) audit, auditors compare data recorded on the paper case report form (CRF) with data entered in the electronic database. Discrepancies that cannot be explained by documented data handling conventions are classified as audit findings.
Per GCDMP (Chapter: Data Quality Assurance and Control), data handling conventions define acceptable data entry practices, transcription rules, and allowable transformations. These conventions ensure that CRF data are consistently interpreted and entered.
If a discrepancy deviates from these established rules, it indicates a process gap or error in data entry, validation, or training. Discrepancies justified by protocol design or CRF guidelines would not constitute findings.
Therefore, option C (discrepancy not explained by the data handling conventions) correctly identifies the criterion for a true QC audit finding.
Reference (CCDM-Verified Sources):
SCDM GCDMP, Chapter: Data Quality Assurance and Control, Section 6.1 – Data Handling Conventions and QC Auditing
ICH E6(R2) GCP, Section 5.1 – Quality Management and Documentation of Deviations
FDA Guidance for Industry: Computerized Systems Used in Clinical Investigations, Section 6.5 – Data Verification and Audit Findings
Which of the following statements would be BEST included in a data management plan describing the process for making self-evident corrections in a clinical database?
A senior level data manager may make audited changes to the database without further documentation.
Self-evident corrections made in the database will be reviewed and approved by a team leader or manager.
No changes will be made in the database without a query response signed by the investigator.
Self-evident changes may be made per the listed conventions and documented to the investigative site.
A self-evident correction (SEC) is a data correction that is obvious, logical, and unambiguous, such as correcting an impossible date (e.g., 31-APR-2024) or standardizing a known abbreviation (e.g., "BP" to "Blood Pressure"). According to the Good Clinical Data Management Practices (GCDMP), SECs may be applied by data management staff following pre-approved conventions defined in the Data Management Plan (DMP).
The DMP should explicitly describe the criteria for SECs, including the types of errors eligible for this correction method, the required documentation, and the communication procedure to inform the investigative site. The process must maintain audit trail transparency and ensure that all changes are traceable and justified.
Option A permits changes without further documentation, which conflicts with audit-trail requirements; option B adds a review step but omits the pre-defined conventions and site notification that GCDMP expects. Option C is too restrictive, as it prevents the efficient correction of obvious transcription or formatting errors.
Therefore, option D is correct: "Self-evident changes may be made per the listed conventions and documented to the investigative site." This approach aligns with CCDM expectations for balancing efficiency, accuracy, and regulatory compliance.
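As a hedged illustration, the sketch below applies a hypothetical pre-approved convention for impossible calendar dates in DD-MMM-YYYY format; the convention text and notification note are examples only and would be taken from the study's DMP.

```python
# Minimal sketch of applying a hypothetical self-evident correction (SEC)
# convention for impossible day-of-month values (e.g., 31-APR-2024).
# Assumes DD-MMM-YYYY input; anything else should be raised as a query.
from datetime import datetime

SEC_CONVENTION = "DMP SEC-01: impossible day-of-month corrected to last valid day"

def correct_impossible_date(raw, fmt="%d-%b-%Y"):
    """Return (corrected_value, log_entry). Only obvious day overflows are touched."""
    try:
        datetime.strptime(raw, fmt)
        return raw, None                      # valid date: no correction needed
    except ValueError:
        _, mon, year = raw.split("-")
        for last_day in (30, 29, 28):         # walk back to the last valid day
            candidate = f"{last_day:02d}-{mon}-{year}"
            try:
                datetime.strptime(candidate, fmt)
                log = f"{raw} -> {candidate} per {SEC_CONVENTION}; site notified"
                return candidate, log
            except ValueError:
                continue
        return raw, None                      # not self-evident: raise a query instead

value, audit_note = correct_impossible_date("31-APR-2024")
print(value, "|", audit_note)   # 30-APR-2024 | 31-APR-2024 -> 30-APR-2024 per DMP SEC-01 ...
```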
Reference (CCDM-Verified Sources):
SCDM GCDMP, Chapter: Data Validation and Cleaning, Section 6.2 – Self-Evident Corrections
FDA 21 CFR Part 11 – Electronic Records; Audit Trails and Traceability Requirements
Which metric will identify edit checks that may not be working properly?
Count by edit check of the number of times the check fired
Count by site of the number of times any edit check fired
Average number of edit check identified discrepancies per form
Average number of times each edit check has fired
The best metric for identifying malfunctioning or ineffective edit checks is the count, by edit check, of the number of times each check fired. This allows data managers to assess whether specific edit checks are performing as intended.
According to the GCDMP, Chapter: Data Validation and Cleaning, edit checks are programmed logic conditions that identify data inconsistencies or potential errors during data entry. A properly functioning edit check should trigger only when data fall outside acceptable or logical limits. If an edit check fires too frequently or not at all, it may indicate a logic error in the check's programming or configuration.
By analyzing counts by individual edit checks, data managers can:
Identify checks that never trigger (potentially inactive or incorrectly written),
Detect overactive checks (poorly designed parameters causing excessive false positives), and
Optimize system performance and review efficiency.
This metric supports continuous improvement in data validation logic and contributes to cleaner, higher-quality clinical databases.
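A minimal sketch of this metric is shown below; the discrepancy-record structure, check identifiers, and over-firing threshold are hypothetical and would normally come from the EDC system's discrepancy reports.

```python
# Minimal sketch of the "count by edit check" metric: surface checks that
# never fire (possibly inactive or mis-specified) or fire far more often
# than expected (possibly over-sensitive). Inputs and threshold are examples.
from collections import Counter

def edit_check_firing_report(discrepancies, all_check_ids, overfire_threshold=500):
    fired = Counter(d["check_id"] for d in discrepancies)
    never_fired = sorted(set(all_check_ids) - set(fired))
    overactive = sorted(cid for cid, n in fired.items() if n >= overfire_threshold)
    return {"counts": dict(fired), "never_fired": never_fired, "overactive": overactive}

discrepancies = [{"check_id": "VS001"}, {"check_id": "VS001"}, {"check_id": "AE010"}]
report = edit_check_firing_report(discrepancies, all_check_ids=["VS001", "AE010", "LB020"])
print(report["never_fired"])   # ['LB020'] -> review whether the check is active and correct
```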
Reference (CCDM-Verified Sources):
SCDM Good Clinical Data Management Practices (GCDMP), Chapter: Data Validation and Cleaning, Section 6.2 – Edit Check Design and Performance Metrics
FDA Guidance: Computerized Systems Used in Clinical Investigations – Section on Validation of Electronic Data Systems
A sponsor may transfer responsibility for any or all of their obligations to a contract research organization. Which of the following statements is true?
Any written description is not transferred to the contract research organization.
A description of each of the obligations being assumed by the contract research organization is required.
A description of each of the obligations being transferred to the contract research organization is not required.
A general statement that all obligations have been transferred is acceptable.
Under ICH E6(R2) Good Clinical Practice and 21 CFR 312.52, when a sponsor delegates or transfers obligations for a clinical trial to a contract research organization (CRO), there must be a written description of each specific obligation being assumed by the CRO.
According to the Good Clinical Data Management Practices (GCDMP), while sponsors may outsource responsibilities such as data management, monitoring, or biostatistics, ultimate accountability remains with the sponsor. Documenting the transfer of responsibilities ensures regulatory transparency and compliance.
This written agreement, often referred to as a Transfer of Obligations (TOO) document, defines exactly which duties the CRO is responsible for (e.g., CRF design, data cleaning, database lock), as well as any retained sponsor oversight. A general statement that "all obligations are transferred" (option D) is insufficient per regulatory expectations, as sponsors must retain traceability of responsibility.
Therefore, option B is correct: a detailed written description of the transferred obligations is required.
Reference (CCDM-Verified Sources):
SCDM Good Clinical Data Management Practices (GCDMP), Chapter: Regulatory Compliance and Oversight, Section 5.2 – Sponsor and CRO Responsibilities
ICH E6 (R2) Good Clinical Practice, Section 5.2.1 – Transfer of Trial-Related Duties and Functions
FDA 21 CFR 312.52 – Transfer of Obligations to a Contract Research Organization
Which of the following processes is the most likely to remain in a study that utilizes electronic data capture?
Tracking case report forms
Updating the in-house database
Resolving queries
Retrieving case report forms
In studies using electronic data capture (EDC) systems, many traditional paper-based processes, such as tracking and retrieving CRFs, are eliminated or automated. However, query management and resolution remain essential because discrepancies, missing data, and protocol deviations still require clarification and correction, regardless of the data collection medium.
According to the GCDMP (Chapter: Data Validation and Cleaning), data queries are generated automatically or manually when inconsistencies are detected by edit checks. Sites must still respond to these queries electronically to ensure the integrity and completeness of the data.
A and D are obsolete with EDC (no physical CRFs).
B refers to manual data entry updates, which are replaced by direct EDC entry.
C (resolving queries) continues as a key part of the data management workflow, even in fully electronic environments.
Thus, option C is correct.
Reference (CCDM-Verified Sources):
SCDM GCDMP, Chapter: Data Validation and Cleaning, Section 5.4 – Query Generation and Resolution in EDC Systems
ICH E6(R2) GCP, Section 5.5.3 – Data Review and Query Resolution Requirements
FDA 21 CFR Part 11 – Electronic Records: Audit Trails and Query Documentation
What does 21 CFR Part 11 dictate in regards to a minimum expectation of EDC training prior to access?
Training must be performed
Training must include an exam
Training must be in the user's native language
Training must be face to face
Under FDA 21 CFR Part 11, organizations using electronic systems must ensure that all system users are trained to perform their assigned functions before gaining access to the system. The regulation requires documented evidence of training but does not specify how it should be conducted (e.g., exam-based, in person, or language-specific).
The GCDMP (Chapter: Computerized Systems and Compliance) further clarifies that personnel training should include instruction on system functionality, audit trails, data entry procedures, and electronic signatures to maintain compliance and data integrity. Training must be performed and documented, but no specific format or delivery method is required.
Therefore, option A (training must be performed) is correct, as it reflects the minimum regulatory expectation per FDA and SCDM standards.
Reference (CCDM-Verified Sources):
FDA 21 CFR Part 11, Section 11.10(i) – Personnel Training Requirements
SCDM GCDMP, Chapter: Computerized Systems and Compliance, Section 5.4 – System Training and Documentation
ICH E6(R2) GCP, Section 2.8 – Qualified Personnel and Training Requirements
The best example of a protocol compliance edit check is:
An edit check that fires when a visit date is outside the specified window
An edit check that fires when a value is outside of the normal range for vital signs
An edit check that fires when a field is left blank
An edit check that fires when an invalid date is entered
A protocol compliance edit check is designed to ensure that the data collected adhere to the specific requirements defined in the study protocol, such as visit timing, procedure windows, and eligibility criteria.
The example in option A, an edit check that triggers when a visit date falls outside the protocol-specified window, directly verifies compliance with the study design. This type of check supports real-time monitoring of protocol adherence, a critical quality and regulatory requirement under GCDMP and ICH E6(R2).
The other options are examples of general data validation checks, not protocol compliance:
B: Ensures clinical plausibility (data range check).
C: Ensures completeness (missing data check).
D: Ensures format correctness (system validation check).
Thus, option A best represents a protocol compliance edit check, confirming that collected data conform to the visit schedule defined in the protocol.
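For illustration, a minimal sketch of such a visit-window check follows; the visit schedule, window width, and message text are hypothetical and would be taken from the protocol and the study's edit check specifications.

```python
# Minimal sketch of a protocol compliance edit check: raise a discrepancy when
# the actual visit date falls outside the protocol-specified window.
# The schedule (day 28 +/- 3) and message wording are hypothetical examples.
from datetime import date, timedelta

VISIT_SCHEDULE = {"WEEK4": (28, 3)}   # target day after enrollment, allowed window in days

def visit_window_check(visit, enrollment_date, visit_date):
    target_day, window = VISIT_SCHEDULE[visit]
    target = enrollment_date + timedelta(days=target_day)
    if abs((visit_date - target).days) > window:
        return (f"{visit} date {visit_date} is outside the protocol window "
                f"({target - timedelta(days=window)} to {target + timedelta(days=window)})")
    return None   # within window: no discrepancy raised

# Enrollment 01-Jan-2024, WEEK4 visit performed 05-Feb-2024 -> check fires.
print(visit_window_check("WEEK4", date(2024, 1, 1), date(2024, 2, 5)))
```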
Reference (CCDM-Verified Sources):
SCDM GCDMP, Chapter: Data Validation and Cleaning, Section 5.4 – Protocol Compliance Edit Checks
ICH E6(R2) GCP, Section 5.1.1 – Quality Management and Compliance Controls
FDA Guidance for Industry: Computerized Systems Used in Clinical Investigations, Section 6.3 – Edit Check Design and Validation
What are the key deliverables for User Acceptance Testing?
Project Plan
Training
Test Plan/Script/Results
eCRF Completion Guidelines
The key deliverables for User Acceptance Testing (UAT) are the test plan, test scripts, and test results.
According to the GCDMP (Chapter: Database Design and Validation), UAT is the final validation step before a clinical database is released to production. It confirms that the system performs according to user requirements and protocol specifications.
The deliverables include:
UAT Test Plan: Defines testing objectives, scope, acceptance criteria, and responsibilities.
UAT Test Scripts: Provide step-by-step instructions for testing database functionality, edit checks, and workflows.
UAT Test Results: Document actual versus expected outcomes, including any deviations and their resolutions.
These deliverables form part of the system validation documentation required under FDA 21 CFR Part 11 and ICH E6(R2) to demonstrate that the database has been properly validated.
Project Plans (option A) and Training (option B) occur in earlier phases, while eCRF Completion Guidelines (option D) support site data entry, not system validation.
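As an illustration only, the sketch below shows one way a single UAT test-script step and its documented result might be structured; the field names and the example check are hypothetical, and the actual format follows the organization's validation SOPs.

```python
# Hypothetical structure for one executed UAT test-script step and its result.
# The retained script, outcome, tester, and date form part of the validation record.
uat_step = {
    "script_id": "UAT-EC-012",
    "objective": "Verify the systolic BP range check fires for out-of-range values",
    "steps": [
        "Open the Vital Signs form for the test subject",
        "Enter SYSBP = 400",
        "Save the form",
    ],
    "expected_result": "Query generated: 'Systolic BP outside expected range (60-250)'",
    "actual_result": "Query generated as expected",
    "pass_fail": "Pass",
    "tester": "J. Doe",
    "test_date": "2024-05-02",
}

print(f"{uat_step['script_id']}: {uat_step['pass_fail']}")
```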
Reference (CCDM-Verified Sources):
SCDM Good Clinical Data Management Practices (GCDMP), Chapter: Database Design and Validation, Section 5.3 – User Acceptance Testing Deliverables
FDA 21 CFR Part 11 – Validation Documentation Requirements
ICH E6 (R2) Good Clinical Practice, Section 5.5.3 – System Validation Records
For ease of data processing, the study team would like the database codes for a copyrighted rating scale preprinted on the CRF. What is the most critical task that the CRF designer must do to ensure the data collected on the CRF for the scale are reliable and will support the results of the final analysis?
Consult the independent source and determine database codes will not influence subject responses.
Consult the study statistician regarding the change and determine that database codes will not influence the analysis.
Consult the independent source of the rating scale for approval and document that continued validity of the tool is not compromised.
Complete the requested changes to the instrument and ensure the correct database codes are associated with the appropriate responses.
When using a copyrighted or validated rating scale (e.g., the Hamilton Depression Scale or a visual analog pain scale), any modification to the original instrument, including preprinting database codes on the CRF, must be approved by the instrument's owner or licensing authority to ensure that the validity and reliability of the instrument are not compromised.
According to the GCDMP (Chapter: CRF Design and Data Collection), validated rating scales are psychometrically tested tools. Any visual or structural modification (such as adding codes, changing layout, or rewording questions) can invalidate prior validation results. Therefore, the CRF designer must consult the independent source (the copyright holder) for approval and document that the validity of the tool remains intact.
Merely consulting the statistician (option B) or verifying database alignment (option D) does not ensure compliance. Thus, option C ensures scientific and regulatory integrity.
Reference (CCDM-Verified Sources):
SCDM Good Clinical Data Management Practices (GCDMP), Chapter: CRF Design and Data Collection, Section 6.1 – Use of Validated Instruments and Rating Scales
ICH E6 (R2) GCP, Section 5.5.3 – Validation of Instruments and Data Capture Tools
FDA Guidance for Industry: Patient-Reported Outcome Measures – Use in Medical Product Development to Support Labeling Claims, Section 4 – Instrument Modification and Validation
Which of the following ensures that the trials are conducted and the data are generated, documented (recorded), and reported in compliance with the protocol, GCP, and the applicable regulatory requirement(s)?
Standard Operating Procedures (SOP)
Statistical Analysis Plan (SAP)
Data Management Plan (DMP)
CRFs
Standard Operating Procedures (SOPs) are formal, controlled documents that define standardized processes to ensure clinical trials are conducted in compliance with Good Clinical Practice (GCP), the study protocol, and regulatory requirements (such as ICH and FDA).
According to Good Clinical Data Management Practices (GCDMP) and ICH E6(R2) GCP, SOPs are fundamental to quality management systems. They describe how tasks are performed, ensuring consistency, accountability, and traceability across all studies and team members. Adherence to SOPs ensures that data are accurately generated, documented, and reported in compliance with ethical and regulatory standards.
Other options serve different purposes:
SAP (B) defines statistical methodology, not compliance control.
DMP (C) focuses on study-specific data handling, not organizational compliance.
CRFs (D) are tools for data collection but do not enforce compliance by themselves.
Therefore, option A (SOP) is correct.
Reference (CCDM-Verified Sources):
SCDM GCDMP, Chapter: Quality Management and Compliance, Section 5.1 – Role of SOPs in Regulatory Compliance
ICH E6(R2) GCP, Section 2.13 and 5.1.1 – Quality Management Systems and SOP Requirements
FDA 21 CFR Part 312.50 – Sponsor Responsibilities and Compliance Systems