The multiple-choice (MC) item format has been adapted to the cognitive diagnosis (CD) framework. Early approaches simply dichotomized the responses and analyzed them with a CD model for binary responses. Obviously, this strategy cannot exploit the additional diagnostic information provided by MC items. De la Torre's (2009, Applied Psychological Measurement, 33, 163–183) MC-DINA model was the first model for the explicit analysis of MC items. However, the q-vectors of the distractors were constrained to be nested within the key and within each other, which imposes serious restrictions on item development. Yet relaxing the nestedness constraint comes at a price. First, distractors may become redundant: they do not improve the classification of examinees beyond the response options already available for an item. Second, undesirable diagnostic ambiguity can arise from distractors that are equally likely to be chosen by an examinee but have distinct attribute profiles pointing to different diagnostic classifications. In this article, two criteria, termed plausible and proper, are developed for detecting these problematic cases. Two theorems that permit the detection and amendment of improper and implausible items are presented. An R function serving this purpose is used in several practical applications. Results of simulation studies and a real-data analysis are also reported.
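
To make the two failure modes concrete, the following is a minimal R sketch, assuming the class-conditional option-selection probabilities of a single item are available as a matrix P (options in rows, latent classes in columns) along with an option-by-attribute matrix Q holding the q-vectors. The function flag_item(), its arguments, and the toy numbers are hypothetical illustrations; the sketch operationalizes redundancy as proportional probability rows (diagnostically interchangeable options) and does not reproduce the article's formal criteria, theorems, or its R function.

    ## Hypothetical inputs:
    ##   P : options x classes matrix, P[o, c] = Pr(option o chosen | class c)
    ##   Q : options x attributes binary matrix, row o = q-vector of option o
    flag_item <- function(P, Q, tol = 1e-8) {
      pairs <- t(combn(nrow(P), 2))
      ambiguous <- list(); redundant <- list()
      for (k in seq_len(nrow(pairs))) {
        i <- pairs[k, 1]; j <- pairs[k, 2]
        ## Ambiguity: some class finds options i and j (nearly) equally
        ## likely, yet their q-vectors point to different attribute profiles.
        if (any(Q[i, ] != Q[j, ]) && any(abs(P[i, ] - P[j, ]) < tol))
          ambiguous[[length(ambiguous) + 1]] <- c(option = i, option = j)
        ## Redundancy (one operationalization): proportional rows yield the
        ## same posterior update in every class, so one option adds nothing.
        if (all(abs(P[i, ] / sum(P[i, ]) - P[j, ] / sum(P[j, ])) < tol))
          redundant[[length(redundant) + 1]] <- c(option = i, option = j)
      }
      list(ambiguous = do.call(rbind, ambiguous),
           redundant = do.call(rbind, redundant))
    }

    ## Toy item: key (option 1) plus three distractors, two attributes,
    ## classes ordered (00, 10, 01, 11); all numbers are invented.
    Q <- rbind(c(1, 1), c(1, 0), c(0, 1), c(0, 0))
    P <- rbind(c(.05, .20, .20, .70),
               c(.35, .55, .15, .14),
               c(.35, .15, .55, .10),
               c(.25, .10, .10, .06))
    flag_item(P, Q)  # flags distractors 2 and 3: equiprobable for class
                     # (0, 0) despite distinct q-vectors

A check of this kind mirrors the intended use of the criteria: a flagged pair signals that the q-vectors of the item's options should be revised before the item is fielded.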