Mastering Duplicates Removal in SAS: A Comprehensive Guide to Using PROC SQL, DATA STEP, and PROC SORT

In data analysis, it's common to encounter datasets with duplicate records that need to be cleaned up. SAS offers several methods to remove these duplicates, each with its strengths and suitable scenarios. This article explores three primary methods for removing duplicate observations: using PROC SQL, the DATA STEP, and PROC SORT. We will provide detailed examples and discuss when to use each method.

Understanding Duplicate Observations

Before diving into the methods, let's clarify what we mean by duplicate observations. Duplicates can occur in different forms:

  • Exact Duplicates: All variables across two or more observations have identical values.
  • Key-Based Duplicates: Observations are considered duplicates based on the values of specific key variables (e.g., ID, Date).

The method you choose to remove duplicates depends on whether you are dealing with exact duplicates or key-based duplicates.
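
To make the distinction concrete, here is a small illustrative dataset (the variables ID, Name, and Age and their values are made up for this example). The first two observations are exact duplicates, while the third and fourth share an ID but differ in Age, so they are duplicates only when ID is treated as the key.

data original_data;
    input ID Name $ Age;
    datalines;
1 Anna 34
1 Anna 34
2 Ben 41
2 Ben 42
3 Cara 29
;
run;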

Approach 1: Removing Duplicates with PROC SQL

PROC SQL is a versatile tool in SAS, allowing you to execute SQL queries to manipulate and analyze data. When removing duplicates, you can use the SELECT DISTINCT statement or apply more complex conditions.

Example 1: Removing Exact Duplicates

proc sql;
    create table no_duplicates as
    select distinct *
    from original_data;
quit;

This code removes all exact duplicates, creating a new dataset no_duplicates that contains only unique records. The SELECT DISTINCT * statement ensures that every unique combination of variable values is retained only once.

Example 2: Removing Duplicates Based on Key Variables

proc sql;
    create table no_duplicates as
    select distinct ID, Name, Age
    from original_data;
quit;

Here, duplicates are removed based on the combination of the ID, Name, and Age variables. Note that only the variables listed in the SELECT clause appear in the output dataset, so this approach is useful when you want unique records for specific key variables and do not need to keep the remaining variables.

Advantages of PROC SQL:

  • Flexibility: PROC SQL can handle complex queries, allowing you to remove duplicates based on multiple or complex criteria (see the sketch after this list).
  • Powerful Filtering: SQL allows you to apply conditions and filters easily, making it easier to control the exact duplicates you want to remove.
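
As an illustration of that flexibility, the sketch below keeps only the most recent record per ID by comparing each row against the maximum date within its ID group. It assumes a hypothetical Visit_Date variable in original_data; that variable name is not part of the examples above.

proc sql;
    create table latest_per_id as
    select a.*
    from original_data as a
    /* keep the row whose Visit_Date equals the maximum Visit_Date for that ID */
    where a.Visit_Date = (select max(b.Visit_Date)
                          from original_data as b
                          where b.ID = a.ID);
quit;

If two records within an ID share the same maximum Visit_Date, both are kept, so this sketch resolves duplicates only down to the latest date rather than to a single row.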

Disadvantages of PROC SQL:

  • Performance: The SELECT DISTINCT statement can be slower with very large datasets, as it requires scanning the entire dataset to identify unique records.
  • Complexity: SQL syntax may be less intuitive for those who are more comfortable with traditional SAS programming.

Approach 2: Removing Duplicates with the DATA STEP

The DATA STEP in SAS provides a programmatic approach to removing duplicates, giving you fine-grained control over the process. This method typically involves sorting the dataset first and then using conditional logic to remove duplicates.

Example 1: Removing Exact Duplicates

To remove exact duplicates, first sort the data by all variables. In the example below, the NODUPRECS option on PROC SORT is what actually drops the duplicate records; the DATA STEP then simply writes the deduplicated data to the final dataset and is where any additional processing logic would go.

proc sort data=original_data noduprecs out=sorted_data;
    by _all_;
run;

data no_duplicates;
    set sorted_data;
run;

The NODUPRECS option removes observations that are identical to the previous observation; because the data are sorted by every variable, all exact duplicates end up adjacent and are therefore removed. The DATA STEP then saves the result as no_duplicates.
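
If you prefer to keep the deduplication logic inside the DATA STEP itself, you can list every variable on the BY statement and keep the first record of each BY group. This is a sketch that assumes the dataset contains only the hypothetical variables ID, Name, and Age; FIRST. of the last BY variable is true only for the first record of each unique combination of all the BY variables.

proc sort data=original_data out=sorted_data;
    by ID Name Age;
run;

data no_duplicates;
    set sorted_data;
    by ID Name Age;
    /* first.Age = 1 only on the first record of each unique ID/Name/Age combination */
    if first.Age;
run;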

Example 2: Removing Duplicates Based on Key Variables

If you want to remove duplicates based on specific key variables, you can sort the data by those variables and use the FIRST. and LAST. automatic variables in the DATA STEP to control which duplicates are kept.

proc sort data=original_data;
    by ID;
run;

data no_duplicates;
    set original_data;
    by ID;
    if first.ID;
run;

In this example, the dataset is first sorted by the ID variable. The first.ID statement ensures that only the first occurrence of each ID is kept, removing any subsequent duplicates.

Advantages of the DATA STEP:

  • Fine-Grained Control: The DATA STEP allows you to apply custom logic to the deduplication process, such as retaining the first or last occurrence based on additional criteria (see the sketch after this list).
  • Efficiency: When dealing with large datasets, this method can be more efficient, especially if you need to apply complex logic.
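
For example, the sketch below keeps the most recent record per ID instead of the first one. It assumes a hypothetical Visit_Date variable; sorting by ID and Visit_Date puts the latest record last within each ID group, and LAST.ID selects it.

proc sort data=original_data out=sorted_by_date;
    by ID Visit_Date;
run;

data latest_per_id;
    set sorted_by_date;
    by ID;
    /* last.ID = 1 on the final record within each ID group, i.e. the latest Visit_Date */
    if last.ID;
run;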

Disadvantages of the DATA STEP:

  • Manual Sorting: You need to sort the data before removing duplicates, adding an extra step to the process.
  • Complexity: The logic required to remove duplicates can be more complex and less intuitive than using PROC SORT.

Approach 3: Removing Duplicates with PROC SORT

PROC SORT is one of the simplest and most commonly used methods for removing duplicates in SAS. This approach sorts the data and can automatically remove duplicates during the sorting process.

Example 1: Removing Exact Duplicates

proc sort data=original_data noduprecs out=no_duplicates;
    by _all_;
run;

Here, PROC SORT with the NODUPRECS option removes exact duplicates. The BY _ALL_ statement sorts on every variable, so the deduplication is based on the entire record.

Example 2: Removing Duplicates Based on Key Variables

proc sort data=original_data nodupkey out=no_duplicates;
    by ID;
run;

In this case, PROC SORT uses the nodupkey option to remove duplicates based on the ID variable. The out= option specifies that the sorted and deduplicated data should be saved to the no_duplicates dataset.
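
If you also want to examine the records that were removed rather than discard them, the DUPOUT= option writes them to a separate dataset (this is the same technique mentioned in the comments below):

proc sort data=original_data nodupkey dupout=dups_only out=no_duplicates;
    by ID;
run;

Here no_duplicates contains one record per ID, and dups_only contains every record that was dropped as a duplicate.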

Advantages of PROC SORT:

  • Simplicity: PROC SORT is straightforward and easy to use, requiring minimal code to remove duplicates.
  • Efficiency: PROC SORT is optimized for sorting and deduplication, making it very fast, especially for large datasets.

Disadvantages of PROC SORT:

  • Limited Flexibility: PROC SORT can only remove duplicates based on sorted keys, which might not be suitable for more complex deduplication needs.
  • No Complex Logic: Unlike the DATA STEP, PROC SORT does not allow you to apply custom logic or conditions during the deduplication process.

Comparison Summary

Each method for removing duplicates in SAS has its strengths and weaknesses:

  • Use PROC SQL when you need flexibility and the ability to apply complex conditions for deduplication, especially when working within a SQL-based framework.
  • Use the DATA STEP if you require precise control over the deduplication process and need to apply custom logic to determine which duplicates to keep.
  • Use PROC SORT for its simplicity and efficiency when dealing with large datasets, particularly when you only need to remove duplicates based on simple keys.

Conclusion

Removing duplicates is a crucial step in data cleaning and preparation, and SAS provides multiple tools to accomplish this task. By understanding the differences between PROC SQL, the DATA STEP, and PROC SORT, you can choose the most appropriate method for your specific data processing needs. Whether you need flexibility, control, or efficiency, SAS offers the right approach to ensure your data is clean and ready for analysis.

Comments

  1. Hi Sarath,

    what is the best way to do update/insert with large data?

    Thanks

  2. Hi Sarath,
    I think the solution to remove duplicates using the SAS data step will not work, because
    if not first.usubjid and last.usubjid;
    will actually give you duplicates.
    My answer is to use if first.usubjid; this condition will give you only the unique observations.

  3. If dataset is sorted then data step will be the fastest otherwise go by proc sort.

  4. Proc SQL noprint;
    create table unique as select distinct (*) from dsn;
    quit;

    should be as follows
    Proc SQL noprint;
    create table unique as select distinct * from dsn;
    quit;

  5. Hello all,
    This has been really helpful! I had a related question. What if you have a single file that data was double-entered into (there are 2 rows for each individual) and you want to separate it into 2 new files, each with one copy of each individual? Basically, I want to dedup on a single variable ID, but keep the dups in a separate file instead of just deleting them. Any suggestions?

  6. Hi Nicole, Use DUPOUT option to create a new dataset with only duplicate records.

    proc sort data=hasdups nodupkey dupout=dupsonly;
    by vars;
    run;

