Posts

Showing posts from March, 2026

Character Encoding, Japanese Text, and Why Your SDTM Package Can Fail Even When the Data Logic Is Fine

Your SDTM derivations are correct. Your P21 run is clean. Your define.xml opens and looks fine. And yet, the package still trips up in review. Not because of the data. Because of encoding.

What PMDA expects, and why it trips teams

PMDA’s Technical Conformance Guide states that if languages other than English are used, including Japanese, the character set and encoding scheme must be documented in the reviewer’s guide. Source: PMDA Technical Conformance Guide on Electronic Study Data Submissions, April 2024.

This is not a footnote. It shows up in real submissions when XML, metadata, or reviewer tools fail to render text consistently.

The key misunderstanding

Many teams assume: “We’ll just use ASCII.” The actual expectation is:

Use Unicode, typically UTF-8, as the working encoding
Restrict dataset content to A...
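The ASCII-versus-UTF-8 gap above can be made concrete with a small check. This is a minimal Python sketch, not anything from the post itself: the record layout and variable names (USUBJID, AETERM) are illustrative stand-ins for SDTM dataset content.

```python
# Minimal sketch: flag non-ASCII text values so the character set and
# encoding can be documented in the reviewer's guide, per the PMDA guide.
# Records are a simple list-of-dicts stand-in for an SDTM dataset.

def non_ascii_values(records, fields):
    """Return (field, value) pairs whose text falls outside ASCII."""
    flagged = []
    for rec in records:
        for field in fields:
            value = rec.get(field, "")
            if not value.isascii():  # str.isascii(), Python 3.7+
                flagged.append((field, value))
    return flagged

rows = [
    {"USUBJID": "STUDY01-001", "AETERM": "Headache"},
    {"USUBJID": "STUDY01-002", "AETERM": "頭痛"},  # Japanese free text
]

hits = non_ascii_values(rows, ["USUBJID", "AETERM"])
for field, value in hits:
    # UTF-8 byte length differs from character count for Japanese text,
    # which matters for fixed-width transport formats like XPT.
    print(field, value, len(value), len(value.encode("utf-8")))
```

Running this flags only the Japanese AETERM value: two characters, but six bytes once encoded as UTF-8, which is exactly the kind of detail a reviewer's guide should spell out.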

Passing Validation Isn’t Enough: What PMDA Actually Reviews in Your Submission Package

For PMDA, a clean validation run is only part of the story. The real pressure is in the documentation layer: rule-version timing, reviewer guide detail, and how clearly the package explains itself at submission.

Most SDTM teams have a handoff checklist. Datasets locked. define.xml generated. Reviewer guide drafted. P21 run clean. Done. For PMDA, that checklist is not complete.

The submission package is not just what you validated. It is:

how you validated
what you validated with
what changed between runs
how clearly you explained every finding that was not corrected

That last part is where PMDA feels fundamentally different from FDA. Not in the data standards. In the documentation standards.

The core difference in validation philosophy

Both FDA and PMDA use Pinnacle 21 Enterprise. But they do not treat the results the sam...

FDA vs PMDA Submissions: What Really Changes for SDTM Programmers and Define.xml Teams

The real differences are usually not in domain structure. They show up in validation timing, metadata discipline, reviewer guide structure, rule-version control, encoding, and how clearly the submission package explains itself.

Most teams say they have “global submission-ready SDTM.” That usually means the datasets validate, define.xml opens, and the reviewer guides exist. But “submission-ready” is not the same as being ready for every agency.

The FDA and PMDA overlap a lot. Both expect standardized study data. Both expect define.xml. Both run conformance checks. But the habits that work for one agency can still create extra work, or extra risk, for the other.

For senior programmers, the real difference is usually not domain structure. It shows up in how metadata is described, how validation is explained, how rule versions are tracked, how text is encoded, and how the package is documented.

What stays the same ...

A Define.xml Review Checklist I Actually Use Before Submission

An SDTM-focused practical checklist for reviewing define.xml before submission, with emphasis on reproducibility, traceability, consistency, and the reviewer-facing problems that weak metadata creates.

If you work on SDTM submissions long enough, you learn that define.xml is never just a metadata file. It is the reviewer’s map to the datasets, the controlled terminology, the derivations, the value-level rules, and the awkward corners of the study that never fully fit the standard.

Over time, I stopped treating validation as the only sign-off gate. I started using a review checklist that asks a harder question: if I were a reviewer opening this package for the first time, would I understand the SDTM data without asking the sponsor what they meant?

A strong define.xml does two jobs at once. It tells the reviewer what is in the submission, and...
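One checklist-style pass can be automated: confirming that every dataset-level ItemGroupDef carries a Description a reviewer can actually read. This is a minimal Python sketch against a toy Define-XML fragment; a real define.xml carries far more metadata, and the exact elements checked would depend on your Define-XML version.

```python
# Rough sketch of one review-checklist pass: list ItemGroupDefs in a
# define.xml that lack a readable Description. Uses only the standard
# library; the inline XML below is a deliberately tiny illustration.
import xml.etree.ElementTree as ET

ODM = "http://www.cdisc.org/ns/odm/v1.3"  # standard ODM namespace
NS = {"odm": ODM}

define_xml = """<ODM xmlns="http://www.cdisc.org/ns/odm/v1.3">
  <Study OID="ST1"><MetaDataVersion OID="MDV1" Name="MDV">
    <ItemGroupDef OID="IG.DM" Name="DM">
      <Description>
        <TranslatedText xml:lang="en">Demographics</TranslatedText>
      </Description>
    </ItemGroupDef>
    <ItemGroupDef OID="IG.AE" Name="AE"/>
  </MetaDataVersion></Study>
</ODM>"""

root = ET.fromstring(define_xml)
missing = []
for igd in root.iter(f"{{{ODM}}}ItemGroupDef"):
    # A Description with non-empty TranslatedText is what a reviewer sees.
    desc = igd.find("odm:Description/odm:TranslatedText", NS)
    if desc is None or not (desc.text or "").strip():
        missing.append(igd.get("Name"))

print("Datasets missing a Description:", missing)
```

Here the AE dataset is flagged because its ItemGroupDef has no Description. The same pattern extends to other checklist items, such as variables without an Origin or codelists that are referenced but never defined.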

Five Define.xml Phrases That Sound Fine, But Trigger Review Questions

A practical look at the wording patterns that pass internal review, validate cleanly, and still create trouble when a reviewer tries to understand your SDTM logic from metadata alone.

Some define.xml wording looks perfectly acceptable during internal review. Then the same wording creates questions during submission review. Not because the data is wrong. Not because the programming is broken. But because the description leaves too much room for interpretation.

That gap matters more than many teams realize. Define.xml is the reviewer’s first structured view of your SDTM package. If the metadata is thin, the reviewer starts guessing. And once guessing starts, questions follow.

One useful standard

A good define.xml description ...