Question 1
When a data manager runs a report on resolution types of discrepancy status, which of the following would NOT be a part of resolution types?
Correct Answer: B
In a discrepancy management workflow, "Received from site and not yet reviewed" is not a resolution type; it represents a status, not a final resolution outcome. According to the GCDMP (Chapter: Data Validation and Cleaning), resolution types describe how a data discrepancy was finalized or addressed, such as: resolved with data correction, confirmed as correct (no change required), self-evident correction applied by data management, or unresolvable discrepancy documented. In contrast, statuses describe the stage of a query (e.g., open, sent, answered, pending review, closed). "Received from site and not yet reviewed" indicates an intermediate workflow state in which the site's response awaits validation by data management. Proper classification of resolution types is essential for performance reporting, audit readiness, and traceability of query management actions under ICH E6(R2) and FDA 21 CFR Part 11.
References (CCDM-Verified Sources):
SCDM Good Clinical Data Management Practices (GCDMP), Chapter: Data Validation and Cleaning, Section 5.3 - Discrepancy Resolution Lifecycle
ICH E6(R2) Good Clinical Practice, Section 5.5.3 - Data Handling and Record Management
FDA 21 CFR Part 11 - Electronic Records; Audit Trails and Discrepancy Tracking Requirements
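The status-versus-resolution distinction can be illustrated with a small sketch. The enumerations and the report function below are hypothetical (CDMS vocabularies vary by system); the point is only that a resolution report counts final outcomes of closed queries, so workflow states such as "received from site and not yet reviewed" never appear in it.

```python
from enum import Enum

# Hypothetical vocabularies; actual CDMS terms vary by system.
class QueryStatus(Enum):
    OPEN = "open"
    SENT_TO_SITE = "sent"
    ANSWERED = "answered"
    PENDING_REVIEW = "received from site and not yet reviewed"
    CLOSED = "closed"

class ResolutionType(Enum):
    DATA_CORRECTED = "resolved with data correction"
    CONFIRMED_CORRECT = "confirmed as correct (no change required)"
    SELF_EVIDENT_CORRECTION = "self-evident correction applied"
    UNRESOLVABLE = "unresolvable discrepancy documented"

def resolution_report(queries):
    """Count closed queries by resolution type; statuses never appear here."""
    counts = {}
    for q in queries:
        if q["status"] is QueryStatus.CLOSED:
            r = q["resolution"]
            counts[r] = counts.get(r, 0) + 1
    return counts
```

A query still in PENDING_REVIEW contributes nothing to the report, which is exactly why that value cannot be a resolution type.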
Question 2
Which is the best reason why front-end checks are usually kept minimal, when compared to back-end checks, in a paper-based clinical study?
Correct Answer: A
In paper-based clinical studies, front-end data checks (those performed during data entry) are intentionally kept minimal to ensure that data are entered exactly as recorded on the paper CRF. This principle protects data integrity by maintaining fidelity between source and electronic records before any cleaning or edit validation occurs. The GCDMP (Chapter: Data Validation and Cleaning) explains that data entry operators should input values as written, even if they appear incorrect or inconsistent, because the purpose of front-end checks is not to interpret but to capture data faithfully. The back-end edit checks, performed later by data managers, are designed to identify inconsistencies, out-of-range values, or logical errors that require clarification through queries. This approach separates data capture from data cleaning, minimizing bias and preserving original investigator input. Hence, option A accurately states the rationale for keeping front-end checks minimal in paper-based studies.
References (CCDM-Verified Sources):
SCDM GCDMP, Chapter: Data Validation and Cleaning, Section 4.2 - Data Entry, Edit Checks, and Query Process
ICH E6(R2) GCP, Section 5.5.3 - Data Handling and System Controls
FDA Guidance for Industry: Computerized Systems Used in Clinical Investigations, Section 6.1 - Data Entry and Verification Processes
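A minimal sketch of a back-end edit check can make the separation concrete. The function below is illustrative only (field names and ranges are assumed, not from any specific system): it flags out-of-range or missing values for query generation but never alters the data, which stay exactly as entered from the paper CRF.

```python
# Hypothetical back-end range check, run after faithful data entry.
def range_check(records, field, low, high):
    """Flag out-of-range or missing values for queries; never modify the data."""
    queries = []
    for rec in records:
        value = rec.get(field)
        if value is None or not (low <= value <= high):
            queries.append({
                "subject": rec["subject_id"],
                "field": field,
                "value": value,
                "message": f"{field} outside expected range {low}-{high}; please clarify",
            })
    return queries
```

The front-end counterpart would do none of this: it simply accepts the transcribed value, leaving interpretation to the query process.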
Question 3
Which of the following is a best practice for creating eCRFs for a study?
Correct Answer: C
The best practice for developing electronic Case Report Forms (eCRFs) is to involve cross-functional team members throughout the design process. According to the GCDMP (Chapter: CRF Design and Data Collection), eCRFs should be collaboratively developed by data management, clinical operations, biostatistics, medical, and regulatory teams. Each function provides a unique perspective: data managers focus on data capture and validation; statisticians ensure alignment with analysis requirements; clinicians ensure medical relevance and protocol compliance. Collaborative development ensures that the eCRFs are fit-for-purpose, capturing all required data accurately, minimizing redundancy, and supporting downstream data analysis. Options A and B violate good data management practice because sites should not directly access coded terms (to prevent bias), and fields should never auto-populate without explicit source verification. Option D is outdated; while paper CRFs may inform structure, EDC-optimized eCRFs should leverage system functionality rather than mimic paper.
References (CCDM-Verified Sources):
SCDM Good Clinical Data Management Practices (GCDMP), Chapter: CRF Design and Data Collection, Section 4.2 - Collaborative CRF Development
ICH E6(R2) GCP, Section 5.5.3 - Data Collection and System Validation
FDA Guidance for Industry: Electronic Source Data in Clinical Investigations, Section 3.4 - CRF Design Considerations
Question 4
What significant difference is there in the DM role when utilizing an EDC application?
Correct Answer: A
The most significant difference in the Data Manager's role when using an Electronic Data Capture (EDC) system is that data updates are implemented directly by site personnel (Option A). According to the GCDMP (Chapter: Electronic Data Capture Systems), EDC technology shifts responsibility for data entry and correction from the sponsor or CRO to the investigator site, enabling real-time data entry and validation. This eliminates the need for double entry or remote data transcription, allowing Data Managers to focus on system validation, query management, and data quality oversight rather than physical data handling. However, the EDC system still requires full validation (contrary to Option B). Metrics generation (Option C) and CRF tracking (Option D) are important but not unique to EDC-based workflows. Thus, the correct answer is Option A: data updates are implemented by the sites, reflecting the most fundamental operational shift introduced by EDC systems.
References (CCDM-Verified Sources):
SCDM Good Clinical Data Management Practices (GCDMP), Chapter: Electronic Data Capture (EDC) Systems, Section 4.1 - Role of the Data Manager in EDC
ICH E6(R2) GCP, Section 5.5.3 - Electronic Data Entry and Responsibilities
FDA 21 CFR Part 11 - Electronic Records and Signatures: Data Entry Responsibilities
Question 5
An asthma study is taking into account local air quality and receives that data from the national weather bureau. Which information is needed to link research subject data to the air-quality readings?
Correct Answer: B
When integrating external environmental data such as air quality readings with clinical study data, it is essential to use location and time identifiers to properly align the environmental data with subject-level data. According to the Good Clinical Data Management Practices (GCDMP, Chapter: Data Management Planning and Study Start-up), external data sources (like national weather or pollution databases) must be merged using common linkage variables that allow synchronization without breaching subject confidentiality. In this case:
Location identifiers (e.g., city, postal code, or region) align the subject's study site or residential area with the environmental dataset.
Time identifiers (e.g., date and time of data collection) ensure that the environmental readings correspond to the same period as the subject's clinical observations.
Including subject identifiers (option C or D) is unnecessary and would pose privacy and data protection risks. Instead, linkage is typically done at the aggregate (site or regional) level, maintaining compliance with HIPAA and GDPR.
References (CCDM-Verified Sources):
SCDM Good Clinical Data Management Practices (GCDMP), Chapter: Data Integration and External Data Handling, Section 4.3 - Linking External Data Sources
ICH E6(R2) GCP, Section 5.5.3 - Data Traceability and External Data Management
FDA Guidance for Industry: Use of Electronic Health Record Data in Clinical Investigations, Section 5.2 - Linking and Integration Principles
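The location-plus-time linkage can be sketched as a simple join. The field names below (site_region, visit_date, aqi) are assumptions for illustration, not from any named dataset; note that the external readings carry no subject identifiers, only region and date.

```python
from datetime import date

# Hypothetical linkage sketch: attach daily air-quality readings to
# subject observations on (region, date). The external dataset holds
# no subject identifiers, preserving confidentiality.
def link_air_quality(observations, readings):
    """Return observations with the matching AQI reading, or None if absent."""
    lookup = {(r["region"], r["date"]): r["aqi"] for r in readings}
    linked = []
    for obs in observations:
        key = (obs["site_region"], obs["visit_date"])
        linked.append({**obs, "aqi": lookup.get(key)})
    return linked
```

Keying the join on region and date rather than on subject IDs is exactly the aggregate-level linkage described above: the weather bureau never needs to know who the subjects are.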