
An Experimental Comparison of a Document Deception Detection Policy Using Real and Artificial Deception

J. Data and Information Quality, 3(3): 6:1--6:25 (August 2012)
DOI: 10.1145/2287714.2287716

Abstract

Developing policies to screen documents for deception is often hampered by the cost of data collection and the inability to evaluate policy alternatives due to a lack of data. To lower data collection costs and increase the amount of data, artificially generated deception data can be used, but its impact is not well understood. This article studies the impact of artificially generated deception on document screening policies. The deception and truth data were collected from financial aid applications, a document-centric area with limited resources for screening. Real deception was augmented with artificial data generated by noise and deception generation models. Using the real and artificially generated data, we designed an experiment with deception type and deception rate as factors, and harmonic mean and cost as outcome variables. We used two budget models (fixed and variable) typically employed by financial aid offices to measure the cost of noncompliance in financial aid applications. The analysis included an evaluation of a common policy for deception screening using both fixed and varying screening rates. The results of the experiment provided evidence that the screening policy performed similarly with real and artificial deception, suggesting the possibility of using artificially generated deception to reduce the costs associated with obtaining training data.
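A minimal sketch of the outcome measures named in the abstract, assuming the "harmonic mean" outcome is the standard F-measure (the harmonic mean of a policy's precision and recall); the fixed-budget cost function below is a hypothetical illustration, not the paper's actual budget model:

```python
# Hedged sketch of the abstract's outcome variables.
# Assumption: "harmonic mean" refers to the F-measure of precision and recall.
# The fixed-budget cost function is a hypothetical illustration only.

def harmonic_mean_f(precision: float, recall: float) -> float:
    """F-measure: harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

def screening_cost_fixed_budget(n_flagged: int, cost_per_review: float,
                                budget: float) -> float:
    """Illustrative fixed-budget model: review flagged applications
    only until the screening budget is exhausted."""
    affordable = int(budget // cost_per_review)
    reviewed = min(n_flagged, affordable)
    return reviewed * cost_per_review

# Example: a screening policy flags 120 applications for manual review.
print(harmonic_mean_f(precision=0.62, recall=0.48))    # ~0.54
print(screening_cost_fixed_budget(120, 35.0, 3000.0))   # 2975.0
```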
