Posts

Featured Post

Define.xml for SUPPQUAL — Getting QNAM-Level Metadata Right

If you have worked on SDTM submissions long enough, you know SUPPQUAL define.xml is where packages start to break down. Not because the data is wrong, but because the metadata does not fully explain what the data represents. This is not a recap of SUPPQUAL structure. This is about how define.xml actually fails in submission and how to fix it before a reviewer points it out. SUPPQUAL is not difficult because of structure. It is difficult because the meaning of the data exists only in define.xml.

Table of Contents

- The SUPPQUAL ItemGroupDef — What the Spec Actually Requires
- Value-Level Metadata — Why SUPPQUAL Demands It
- Building QNAM-Level VLM Entries Correctly
- WhereClauseDef Construction — Mechanics and Traps
- Origin Tracing for SUPPQUAL Variables
- Controlled Terminology in SUPPQUAL QVAL — Who Owns the Codelist?
- Common Submission Rejection Patterns
- PMDA-Specific Considerations
- SAS Utility: Gene...

Dynamic SUPPQUAL Generation Using Metadata-Driven SAS Macros

If you have managed SDTM deliverables across multiple studies at the same time, you already know what happens to SUPPQUAL programs. You start with one clean macro per domain. Then a protocol amendment adds three new supplemental variables to AE. A PMDA query forces a second QNAM into EX. The DMC needs something non-standard from LB. Six months later you have domain-specific programs with no shared logic, no central control, and every change requires manual updates across multiple files.

The metadata-driven approach fixes that. One control dataset. One macro family. All SUPP domains generated in a single call. This post walks through the full design: control dataset structure, macro architecture, the edge cases that cause real production issues, and the validation checks you should run before submission.

The Problem with Hard-Coded SUPPQUAL Programs

Hard-coded programs break in predictable ways. A QNAM gets added mid-study, someone has to find the right program, und...