Concerned about Coder Agreement in ICD-10?

Results of some recent studies evaluating the percentage of coder agreement in ICD-10 both intrigued and concerned me. It was a topic of conversation at three national conferences I attended recently, and several of the speakers addressed it. One such study was the HIMSS “ICD-10 National Pilot Program: Outcomes Report,” released in October 2013, which details findings from 200 patient records coded by two independent ICD-10-CM/PCS AHIMA Approved Trainers. The average accuracy between the two coders was 63 percent. These results made me wonder whether the study’s outcome was due to a lack of ICD-10 coding knowledge or to something else.

In reviewing the study results, I noted that accuracy was determined by assigning a one (1) for each correct answer and a zero (0) for each incorrect answer, resulting in a percentage of correct coding for each of the two independent coders. This was calculated by comparing the coding results from the two independent coders with the final coding summary agreed on by the HIMSS Testing Scenarios and Coding Work Group coders. I also noted variation in what was and was not coded. For example, some of the coders assigned codes for family history of disease and others did not. This automatically skewed the results. A little more investigation revealed that the coders in the study received no study guidelines about what should and should not be coded. My evaluation of the study brought to mind the following questions:

  • Were the variations in codes due to the lack of study guidelines for the coders to follow (e.g., to code or not to code personal and family history)?
  • Were the variations in codes due to the inconsistent reporting of procedures such as blood transfusions, EEGs, radiology procedures, etc.?
  • Were the variations in codes due to errors in ICD-9 coding? Even though the study was designed to code the cases natively in ICD-10, did the coders start to code in ICD-10 using ICD-9 as the basis?
  • Were there actual ICD-10-CM/PCS knowledge deficits on the part of the individual coders that caused the variations?
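
Before going further, it may help to make the study’s scoring method concrete. Below is a minimal Python sketch of one plausible reading of the 1-and-0 accuracy calculation described above; the code lists are hypothetical examples, not data from the HIMSS study.

    # A minimal sketch of the 1-and-0 accuracy scoring described above.
    # The code lists are hypothetical and not taken from the HIMSS study.
    def accuracy(coder_codes, gold_codes):
        """Score 1 for each agreed-upon code the coder assigned, 0 for each missed."""
        gold = set(gold_codes)
        assigned = set(coder_codes)
        hits = sum(1 for code in gold if code in assigned)
        return hits / len(gold) if gold else 0.0

    gold_summary = ["I10", "E11.9", "Z80.3"]        # agreed-on final coding summary (includes a family history code)
    coder_a = ["I10", "E11.9"]                      # omitted the family history code
    coder_b = ["I10", "E11.9", "Z80.3", "Z87.891"]  # added a personal history code

    print(f"Coder A: {accuracy(coder_a, gold_summary):.0%}")   # 67%
    print(f"Coder B: {accuracy(coder_b, gold_summary):.0%}")   # 100%
    # Note: this reading does not penalize the extra personal history code. Whether it
    # should is exactly the kind of question that coding guidelines need to answer.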

At this point, you may be thinking, “How does this discussion matter to me, my coders, and my hospital?” Well, even in ICD-9, two or more coders often do not agree on the codes that should be reported for a medical record. For example, should code V45.77, ‘acquired absence of genital organ,’ be assigned for a patient who previously underwent a bilateral oophorectomy? I am not advocating for or against assigning codes for personal history. However, I am advocating for policies that establish what should and should not be coded and reported. The same goes for procedure coding. Coders should know what they are expected to code and report.

Given the extra time provided by another ICD-10 delay, we should work to make sure there is agreement about what data is collected, so that the data we supply is accurate, complete, and compliant. I have several recommendations that might help us all get to the most accurate ICD-10 code:

  • Establish or refine coding policies and procedures regarding coding and reporting of personal and family history, allergies, external cause status, etc.
  • Review the Uniform Hospital Discharge Data Set (UHDDS) guidelines on reporting of significant procedures to determine which procedures should be coded and reported based on the definitions, billing requirements, and institutional need
  • Work now to improve ICD-9 coding accuracy. In some cases, an inaccurate ICD-9 code is driving the inaccurate ICD-10 code, even when the record is coded natively in ICD-10
  • Work now to review the accuracy of ICD-10 codes
  • Have more than one coder code the same case, then compare the results and discuss any variations; this will help identify and resolve coding discrepancies among coders (see the sketch after this list)
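
For teams that want to operationalize that last recommendation, here is a minimal Python sketch of one way to line up two coders’ results for the same record and surface the variations to discuss. The codes shown are hypothetical.

    # A minimal sketch of a double-coding comparison; the codes are hypothetical.
    def compare_coders(coder_a, coder_b):
        """Return the codes both coders agree on and the discrepancies to discuss."""
        a, b = set(coder_a), set(coder_b)
        return {
            "agreed": sorted(a & b),
            "only coder A": sorted(a - b),
            "only coder B": sorted(b - a),
        }

    result = compare_coders(
        ["I10", "E11.9", "Z80.3"],    # coder A's codes for the record
        ["I10", "E11.9", "Z87.891"],  # coder B's codes for the same record
    )
    for group, codes in result.items():
        print(f"{group}: {', '.join(codes) or 'none'}")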

We have time to fine-tune the accuracy of coding and reporting in ICD-10 before go-live in October 2015.