Posts

Featured Post

Cross-domain SDTM QC Checks You Should Automate, With SAS Snippets

SDTM Programming · SAS · Submission QC · Pinnacle 21 · Define.xml

Most SDTM QC still stops at domain-level review and Pinnacle 21 output. That leaves a big gap. A study can be structurally clean and still fail basic cross-domain logic. AE timing can conflict with EX. Death can exist in DM without a matching DS record. RFSTDTC can disagree with the earliest exposure date. None of that is rare. None of it should be left to manual review. If you want stronger SDTM, automate the checks that validate how domains work together, not just whether each domain looks correct in isolation. These are not meant to replace protocol review, medical review, or P21. They are meant to catch the quiet, cross-domain failures that sit between them.

Why Cross-domain QC Matters

P21 validates conformance. Cross-domain QC validates coherence. That is the difference between: a dataset that follows SDTM rules and a dataset that actually represents the study correc...
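One of the checks named above, a DM death flag with no matching DS disposition record, takes only a few lines of PROC SQL. This is an illustrative sketch, not the post's actual snippet: the library name `sdtm` is a placeholder, and it assumes standard SDTM variables (DM.DTHFL, DM.DTHDTC, DS.DSDECOD).

```sas
/* Sketch: flag subjects with DTHFL='Y' in DM but no DS record   */
/* whose disposition term indicates death. Library name "sdtm"   */
/* and the DSDECOD value are placeholder assumptions.            */
proc sql;
  create table qc_death_mismatch as
  select dm.usubjid, dm.dthfl, dm.dthdtc
  from sdtm.dm as dm
  where dm.dthfl = 'Y'
    and dm.usubjid not in
      (select usubjid from sdtm.ds where dsdecod = 'DEATH');
quit;
```

Any row in `qc_death_mismatch` is a cross-domain finding that a domain-level review of DM or DS alone would never surface.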

The Illusion of P21 Clean: Why Passing Validation Is Not Enough

SDTM Programming · Pinnacle 21 · Submission QC · Define.xml

Most SDTM teams still treat a clean run in Pinnacle 21 Enterprise as the finish line. It isn't. It tells you one thing: your datasets passed a rule-based conformance check aligned to published standards such as the SDTM model, the SDTM IG, controlled terminology, and the define.xml schema. It does not tell you:

- if the data is clinically interpretable
- if the relationships across domains make sense
- if a reviewer can actually use the package without stopping to question it

That gap matters most in SDTM, because SDTM is the base layer of the submission. If SDTM distorts the study, everything downstream inherits that distortion, including define.xml, reviewer traceability, and the regulatory review itself. P21 clean means the package passed rules. It does not mean the package is correct. This is not a knock on Pinnacle 21 Enterprise. It is an essential to...

Character Encoding, Japanese Text, and Why Your SDTM Package Can Fail Even When the Data Logic Is Fine

StudySAS • SDTM • Define.xml • Regulatory Submissions

Clean derivations and a valid define.xml are not enough if the transport layer, XML encoding, and metadata pipeline are not controlled end to end. Your SDTM derivations are correct. Your P21 run is clean. Your define.xml opens and looks fine. And yet, the package still trips up in review. Not because of the data. Because of encoding.

What PMDA expects, and why it trips teams

PMDA's Technical Conformance Guide states that if languages other than English are used, including Japanese, the character set and encoding scheme must be documented in the reviewer's guide. Source: PMDA Technical Conformance Guide on Electronic Study Data Submissions, April 2024. This is not a footnote. It shows up ...
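As a quick illustration of the kind of end-to-end control the excerpt describes, both the session encoding and a dataset's stored encoding can be inspected from within SAS before anything is packaged. This is a generic sketch, not the post's own code; the library name `sdtm` is a placeholder.

```sas
/* Report the encoding of the current SAS session to the log.  */
%put NOTE: Session encoding is %sysfunc(getoption(ENCODING));

/* PROC CONTENTS shows the encoding stored with a dataset,     */
/* worth confirming before transport-file creation for a       */
/* package containing Japanese text.                           */
proc contents data=sdtm.dm;
run;
```

If the session encoding, the dataset encoding, and what the reviewer's guide documents do not all agree, the logic of the data can be perfect and the package can still fail.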

PMDA's Extra Documentation Burden — What Programmers Should Prepare Before Handoff

StudySAS • SDTM • Define.xml • Regulatory Submissions

For PMDA, a clean validation run is only part of the story. The real pressure is in the documentation layer, rule-version timing, reviewer guide detail, and how clearly the package explains itself at submission. Most SDTM teams have a handoff checklist. Datasets locked. define.xml generated. Reviewer guide drafted. P21 run clean. Done. For PMDA, that checklist is not complete. The submission package is not just what you validated. It is:

- how you validated
- what you validated with
- what changed between runs
- how clearly you explained every finding that was not corrected

That last part is where PMDA feels fundamentally different from FDA. Not in the data standards. In the documentation standards. ...

FDA vs PMDA Submissions: What Really Changes for SDTM Programmers and Define.xml Teams

StudySAS • SDTM • Define.xml • Regulatory Submissions

The real differences are usually not in domain structure. They show up in validation timing, metadata discipline, reviewer guide structure, rule-version control, encoding, and how clearly the submission package explains itself.

Most teams say they have "global submission-ready SDTM." That usually means the datasets validate, define.xml opens, and the reviewer guides exist. But "submission-ready" is not the same as being ready for every agency. The FDA and PMDA overlap a lot. Both expect standardized study data. Both expect define.xml. Both run conformance checks. But the habits that work for one agency can still create extra work, or extra risk, for the other. For senior programmers, the real difference is usually not domain structure. It shows up ...

A Define.xml Review Checklist I Actually Use Before Submission

StudySAS Blog

An SDTM-focused practical checklist for reviewing define.xml before submission, with emphasis on reproducibility, traceability, consistency, and the reviewer-facing problems that weak metadata creates.

If you work on SDTM submissions long enough, you learn that define.xml is never just a metadata file. It is the reviewer's map to the datasets, the controlled terminology, the derivations, the value-level rules, and the awkward corners of the study that never fully fit the standard. Over time, I stopped treating validation as the only sign-off gate. I started using a review checklist that asks a harder question: If I were a reviewer opening this package for the first time, would I understand the SDTM data without asking the sponsor what they meant? ...