The quality of information that informs decisions in expert domains such as law enforcement and national security often must be assessed on the basis of meta-informational attributes such as source reliability and information credibility. Across 2 experiments with intelligence analysts (n = 74) and nonexperts (n = 175), participants rated the accuracy, informativeness, trustworthiness, and usefulness of information that varied in source reliability and information credibility. These latter 2 attributes were communicated using ratings from the Admiralty Code, an information-evaluation system widely used in the defence and security domain since the 1940s. Ratings of accuracy, informativeness, and likelihood of use were elicited as repeated measures to examine intraindividual reliability. Across experiments, intraindividual reliability was higher when levels of source reliability and information credibility were moderately consistent than when they were maximally inconsistent (i.e., one low and one high) or maximally consistent (i.e., both high or both low). Moreover, trustworthiness ratings depended more on source reliability than on information credibility. Finally, the likelihood of using information was consistently predicted by accuracy ratings but not by judged informativeness or trustworthiness. The current findings offer insight into the ability of experts and novices to use information-evaluation systems reliably for structuring human judgments about intelligence.