We can correct corrupt or inaccurate records in a recordset or database. Data cleaning proceeds through data auditing, workflow specification, workflow execution, and post-processing and controlling, and draws on techniques such as parsing, data transformation, duplicate elimination, and statistical methods. By summarizing the data with the mean, range, and standard deviation, or by applying clustering algorithms, we can flag values that are unexpected and therefore potentially erroneous. In a regression setting, common screening rules flag any standardized residual greater than about 3 in absolute value, any hat (leverage) value greater than 3p/n (where p = k + 1 for k predictors and n observations), any Cook's distance greater than 1, and any unusually large Mahalanobis distance (often judged against a chi-square critical value). Graphical outlier checks include the run-sequence plot, scatter plot, histogram, and box plot. We can also assess reliability (Cronbach's alpha, test-retest reliability, split-half reliability, and inter-rater reliability) and validity (content validity, construct validity, criterion validity, internal validity, and external validity).
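The regression screening rules above (|standardized residual| > 3, hat value > 3p/n, Cook's distance > 1, plus the Mahalanobis distance of each row of predictors) can be sketched with plain NumPy. This is a minimal illustration, not a production diagnostic suite; the function name and the dict of masks are my own choices, and the Mahalanobis distances are returned unthresholded because the cutoff (typically a chi-square critical value) depends on context.

```python
import numpy as np

def regression_outlier_diagnostics(X, y):
    """Flag suspect observations using the rules of thumb described above.

    X : (n, k) array of predictors (no intercept column); y : (n,) response.
    Returns boolean masks for the first three rules and raw Mahalanobis
    distances for the fourth.
    """
    n, k = X.shape
    p = k + 1                                  # parameters incl. intercept
    Xd = np.column_stack([np.ones(n), X])      # design matrix with intercept

    # Hat matrix H = Xd (Xd'Xd)^{-1} Xd'; its diagonal gives the leverages.
    H = Xd @ np.linalg.inv(Xd.T @ Xd) @ Xd.T
    h = np.diag(H)

    beta = np.linalg.lstsq(Xd, y, rcond=None)[0]
    resid = y - Xd @ beta
    mse = resid @ resid / (n - p)

    # Standardized (internally studentized) residuals.
    std_resid = resid / np.sqrt(mse * (1 - h))

    # Cook's distance combines residual size and leverage.
    cooks = std_resid**2 * h / (p * (1 - h))

    # Mahalanobis distance of each predictor row from the centroid.
    diff = X - X.mean(axis=0)
    cov_inv = np.linalg.inv(np.atleast_2d(np.cov(X, rowvar=False)))
    mahal = np.sqrt(np.einsum('ij,jk,ik->i', diff, cov_inv, diff))

    return {
        "residual": np.abs(std_resid) > 3,   # |standardized residual| > 3
        "leverage": h > 3 * p / n,           # hat value > 3p/n
        "cooks": cooks > 1,                  # Cook's distance > 1
        "mahalanobis": mahal,                # distances; threshold is context-dependent
    }
```

Planting a single gross error in an otherwise clean response (say, adding 50 to one y value) makes the `"residual"` mask flag that observation while leaving the rest unflagged.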
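Of the reliability measures listed, Cronbach's alpha is the easiest to compute directly: alpha = k/(k-1) * (1 - sum of item variances / variance of the total score), for k items. A minimal sketch (the function name is mine; sample variances with ddof=1 are assumed):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # per-item sample variance
    total_var = items.sum(axis=1).var(ddof=1)    # variance of summed score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)
```

When every item is a copy of the same score vector, the items are perfectly consistent and alpha equals 1; weakly related items drive alpha toward 0.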