FDA vs PMDA Submissions: What Really Changes for SDTM Programmers and Define.xml Teams

The real differences are usually not in domain structure. They show up in validation timing, metadata discipline, reviewer guide structure, rule-version control, encoding, and how clearly the submission package explains itself.

Most teams say they have “global submission-ready SDTM.” That usually means the datasets validate, define.xml opens, and the reviewer guides exist.

But “submission-ready” is not the same as being ready for every agency.

The FDA and PMDA overlap a lot. Both expect standardized study data. Both expect define.xml. Both run conformance checks. But the habits that work for one agency can still create extra work, or extra risk, for the other.

For senior programmers, the real difference is usually not domain structure. It shows up in how metadata is described, how validation is explained, how rule versions are tracked, how text is encoded, and how the package is documented.

Core SDTM discipline does not change

At the core, your SDTM package still needs to be CDISC-conformant, traceable, and reviewable. That does not change between FDA and PMDA; the fundamentals are the same for both agencies:

  • Consistent derivations
  • Stable controlled terminology handling
  • Clean date logic
  • Correct SUPP usage
  • Complete value-level metadata
  • Reviewer guides that explain what a validator alone cannot explain

If your SDTM is not stable, no agency-specific packaging step is going to save you.
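
For example, "clean date logic" usually means ISO 8601 --DTC values that stay partial when the source date is partial, instead of being quietly completed at the SDTM layer. A minimal SAS sketch, assuming hypothetical raw date components AESTYY, AESTMM, and AESTDD:

* Minimal sketch: derive AESTDTC as ISO 8601 text, keeping partial
  dates partial rather than imputing them at the SDTM layer
  (raw variable names are hypothetical). ;
data sdtm_ae;
  set raw_ae;
  length aestdtc $10;
  if nmiss(aestyy, aestmm, aestdd) = 0 then
    aestdtc = put(mdy(aestmm, aestdd, aestyy), yymmdd10.);
  else if nmiss(aestyy, aestmm) = 0 then
    aestdtc = catx('-', put(aestyy, z4.), put(aestmm, z2.));
  else if not missing(aestyy) then
    aestdtc = put(aestyy, z4.);
run;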

Documentation gets operational, not just technical

The biggest shift is documentation depth.

PMDA expects teams to be clear about the validation setup itself, not just the final outcome. That means the reviewer guide is no longer just background text. It becomes part of the operational record of how the package was checked.

  • Validation tool and version
  • Rule version used
  • Explanation of findings with rule IDs
  • Issue handling based on PMDA severity categories

PMDA pushes teams to demonstrate how they validated, not just that they did.

FDA also expects reviewer guides, but PMDA makes the validation process itself far more visible. If your team validates with one engine, fixes with another, and submits after a later engine becomes current, that gap needs to be visible and defensible.

What can actually break your submission

One important distinction is often underplayed. PMDA validation findings are not just documentation items. They can directly affect review acceptance.

  • PMDA classifies findings by severity: Reject, Error, Warning
  • Reject-level findings can halt review until fixed
  • Reviewer guide issue summaries need to reflect that structure

This is not just a documentation problem. It is a submission acceptance risk.
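
One practical consequence: the cSDRG issue summary is easier to write when findings are already grouped in severity order. A minimal SAS sketch, assuming a hypothetical dataset FINDINGS exported from the validator with SEVERITY, RULEID, and NREC columns:

* Minimal sketch: list findings Reject first, then Error, then Warning,
  so the issue summary mirrors the PMDA severity model
  (dataset and column names are assumptions). ;
proc sql;
  select severity, ruleid, sum(nrec) as n_findings
    from findings
    group by severity, ruleid
    order by case severity
               when 'Reject' then 1
               when 'Error'  then 2
               else 3
             end, ruleid;
quit;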

FDA works differently. It does not apply the same severity model, and the focus is on explaining unresolved issues rather than on a PMDA-style Reject gate.

This changes how aggressively you fix findings before submission and how you structure the issue summary in the reviewer guide.

FDA and PMDA do not use the same rule system

One common mistake is treating validation as a single system. It is not.

  • FDA uses FDA Validator Rules
  • PMDA uses its own published, versioned rule sets

You are not running one validation. You are running two different rule systems.

This affects which findings appear, how those findings are grouped, and what must be fixed versus explained.

The rule-version issue is not small

PMDA applies the latest acceptable validation rules at submission, but follow-up data may use the rule version active when the application was filed.

Validation is not a one-time milestone.

It is time-sensitive. For programmers, that changes behavior in a practical way.

  • You need a rerun close to submission
  • You need traceability of engine and rule versions
  • You need alignment between validator output and the reviewer guide
  • You need to check whether your planned submission date changes which engine is acceptable

Operational reality: PMDA publishes the acceptable validation engines and rule versions. Check which engine is acceptable on the planned submission date, not just the one used earlier in study closeout.

Example: a team validated with Pinnacle 21 using one PMDA engine during closeout, but the final submission happened after a newer acceptable engine became current. The new rules flagged findings that did not exist earlier, so the reviewer now sees issues the team never documented. That is not a data problem; it is a submission timing problem.

If the engine used during validation is no longer acceptable at submission time, validation may need to be rerun and the reviewer guide updated.

Document rule versions directly in your cSDRG, and archive validation logs at each rerun point. That one step prevents a lot of avoidable review confusion.
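
One lightweight way to keep that traceability is a run-level validation log stored next to the archived validator output. A minimal SAS sketch; the engine names, rule versions, dates, and paths are placeholders, not real release numbers:

* Minimal sketch: one row per validation run, pointing at archived
  validator output (all values are illustrative placeholders). ;
data val_runs;
  length run_id 8 run_date 8 engine $40 rule_version $40
         stage $24 archive_path $100;
  infile datalines dlm='|';
  informat run_date yymmdd10.;
  format run_date yymmdd10.;
  input run_id run_date engine rule_version stage archive_path;
datalines;
1|2025-01-15|engine A|PMDA rules vX|study closeout|/val/run01/
2|2025-06-02|engine B|PMDA rules vY|pre-submission rerun|/val/run02/
;
run;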

Character encoding can become a late-stage submission issue

FDA-centered workflows often run with English-only assumptions. PMDA submissions can make character encoding much more visible, especially when Japanese text appears in supporting material, annotations, comments, or linked documentation.

This affects more than just the dataset itself.

  • SAS session encoding
  • XML generation
  • External file exports
  • Stylesheet rendering
  • Round-trip handling between tools

Unicode, typically a UTF-8 session encoding, is the safer working setup; dataset content still needs to stay ASCII-compatible where required.

Character encoding issues often surface late, during stylesheet rendering or define.xml validation, when they are hardest to fix.
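
Two checks are cheap to automate: confirm the session encoding, and scan character variables for non-ASCII content before define.xml generation. A minimal SAS sketch against a hypothetical SDTM.AE:

* Report the current session encoding in the log. ;
%put NOTE: session encoding is %sysfunc(getoption(encoding));

* Minimal sketch: flag values containing non-ASCII characters
  (library and dataset names are assumptions). ;
data nonascii;
  set sdtm.ae;
  array chrvars {*} _character_;
  length varname $32;
  do _i = 1 to dim(chrvars);
    if prxmatch('/[^\x00-\x7F]/', strip(chrvars{_i})) then do;
      varname = vname(chrvars{_i});
      output;
    end;
  end;
  keep usubjid varname;
run;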

Define.xml needs to do more than exist

Many teams still treat define.xml as a final publishing step. That is where trouble starts, especially for dual-agency submissions.

Define.xml is not just a technical artifact. It is part of what reviewers actually read. If the datasets have moved but your metadata still reflects an earlier state, you are going to create confusion even when the package technically opens.

Define.xml isn’t output decoration. It’s part of the submission.

For PMDA work, another practical point is often missed: Analysis Results Metadata (ARM). Teams often need to think more carefully about where ARM sits and whether the ADaM definition package tells the reviewer enough without forcing extra cross-referencing.

If your metadata lags behind your datasets, you are not submission-ready.
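
A cheap guard is to diff the real dataset structure against the metadata source that feeds define.xml. A minimal SAS sketch, assuming a SDTM library and a hypothetical metadata dataset META with DATASET and VARIABLE columns:

* Minimal sketch: variables in the datasets but not in the define
  metadata, and vice versa (META is an assumption). ;
proc contents data=sdtm._all_ noprint
              out=actual(keep=memname name);
run;

proc sql;
  create table in_data_not_meta as
    select upcase(memname) as dataset, upcase(name) as variable
      from actual
    except
    select upcase(dataset), upcase(variable)
      from meta;

  create table in_meta_not_data as
    select upcase(dataset) as dataset, upcase(variable) as variable
      from meta
    except
    select upcase(memname), upcase(name)
      from actual;
quit;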

Small differences that show up in datasets

Not every FDA versus PMDA difference is structural. Some show up in day-to-day programming details.

  • Units: FDA-facing work more often expects conventional units, while PMDA-facing work makes SI-unit expectations more visible (see the sketch after this list)
  • Reviewer guide issue layout: PMDA severity categories change how issue summaries are written
  • ADaM metadata packaging: ARM handling can be more visible in PMDA-oriented builds

These are not always massive coding changes. But they affect how datasets and metadata are interpreted during review.
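
For units, the usual pattern is a conversion step keyed on test code, so one derivation pipeline can serve either presentation. A minimal SAS sketch for glucose, where 1 mg/dL corresponds to 0.0555 mmol/L; dataset and variable names are illustrative:

* Minimal sketch: add SI results alongside conventional results for
  glucose (conversion factor 0.0555 mmol/L per mg/dL). ;
data lb_si;
  set lb_conv;
  length lbstresu_si $20;
  if lbtestcd = 'GLUC' and lbstresu = 'mg/dL' then do;
    lbstresn_si = round(lbstresn * 0.0555, 0.001);
    lbstresu_si = 'mmol/L';
  end;
run;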

FDA vs PMDA: what actually differs

Area | FDA | PMDA
--- | --- | ---
Validation rules | FDA Validator Rules | PMDA-specific published rule sets
Define.xml | Expected as part of the submission metadata package | Expected with stylesheet and checked closely against datasets
Reviewer guide | Expected and important for review context | More operational; should document validation setup and findings clearly
Issue classification | Focus on explanation of unresolved issues | Severity model matters: Reject/Error/Warning
Submission risk | Findings generally drive questions and clarification | Reject findings can block or suspend review until fixed
Rule version handling | Usually less visible in the submission narrative | Timing matters; the acceptable engine and rule context can change
Encoding | Often English-only in practice | Needs more care when non-English content is present
Units | Conventional-unit expectations more common | SI-unit expectations more visible
Validation scope | Datasets and metadata must be reviewable | Cross-checks across datasets, metadata, and XML structure matter more visibly

Recommended workflow

For efficiency, separate at the packaging and documentation layer, not the derivation layer.

  • One SDTM derivation pipeline
  • One controlled metadata source
  • One conformance issue log (sketched below)
  • Agency-specific reviewer guide wording
  • PMDA-specific engine and rule-version tracking
  • Explicit encoding checks
  • Final validation rerun close to submission

This keeps programming unified while allowing submission differences where they actually matter.
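
The single conformance issue log can stay one object with agency-specific disposition columns, so each finding is tracked once but explained per agency. A minimal SAS sketch; rule IDs, wording, and column names are illustrative:

* Minimal sketch: one issue log, two agency dispositions per finding
  (all values are placeholders). ;
data issue_log;
  length rule_id $12 domain $8 finding $60
         fda_disposition $40 pmda_severity $10 pmda_disposition $40;
  infile datalines dlm='|';
  input rule_id domain finding fda_disposition pmda_severity pmda_disposition;
datalines;
RULE-001|DM|Placeholder finding text|Explained in SDRG|Error|Fixed before rerun
RULE-002|AE|Placeholder finding text|Explained in SDRG|Warning|Explained in cSDRG
;
run;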

Example cSDRG excerpt for rule-version documentation

Validation Summary: All SDTM datasets were validated using Pinnacle 21 Enterprise with the PMDA engine and rule version acceptable at the time of final submission. Initial validation was performed earlier in study closeout using a prior acceptable engine. A final rerun was conducted prior to submission to align with the current acceptable engine and rule set. Any new findings introduced in the final rerun were reviewed and assessed before submission. Issue details, rationale, and resolution status are documented in Section 6.3. For FDA-facing review, unresolved issues are explained in the Issue Summary. For PMDA-facing review, issues are grouped and described using the applicable severity structure.

The point is not just to say what was used. The point is to show that the final submission package was checked against the acceptable rule context at the time of submission.

Example define.xml snippet showing value-level clarity

<def:ValueListDef OID="VL.AE.AESTDTC">
  <ItemRef ItemOID="IT.AE.AESTDTC" OrderNumber="1" Mandatory="Yes">
    <def:WhereClauseRef WhereClauseOID="WC.AE.PARTIAL"/>
  </ItemRef>
</def:ValueListDef>

<def:WhereClauseDef OID="WC.AE.PARTIAL">
  <RangeCheck Comparator="EQ" SoftHard="Soft" def:ItemOID="IT.AE.AESTDTC">
    <CheckValue>PARTIAL</CheckValue>
  </RangeCheck>
</def:WhereClauseDef>

<ItemDef OID="IT.AE.AESTDTC" Name="AESTDTC" DataType="text">
  <Description>
    <TranslatedText xml:lang="en">Start date of adverse event. Partial dates
      are imputed to the first day of the month when day is missing.</TranslatedText>
  </Description>
</ItemDef>

This kind of wording reduces reviewer confusion when partial date handling differs across domains or when imputation rules need to be stated plainly.

Final point

The difference is not really about standards versions alone.

It is about submission narration: what you validated, with what, when, under which rule set, and how clearly your metadata explains the data.

PMDA makes these expectations explicit and enforces them through validation outcomes. FDA expects the same clarity, but relies more on explanation and reviewer interpretation.