This paper presents a Maximum Entropy learner of grammars and lexicons (MaxLex), and demonstrates that MaxLex has an emergent preference for minimally abstract underlying representations. In order to keep the weights of faithfulness constraints low, the learner attempts to fill gaps in the lexical distribution of segments, making the underlying segment inventory more feature-economic. Even when the learner only has access to individual forms, properties of the entire system are implicitly available through the relative weighting of constraints. These properties lead to a preference for some abstract underlying representations over others, mitigating the computational difficulty of searching a large space of abstract forms. Simulations based on the [i]~[Ø] alternation in Klamath verbs show that MaxLex successfully learns certain abstract underlying forms. The Klamath pattern cannot be represented or learned with concrete underlying representations, yet MaxLex acquires both the phonotactic patterns and minimally abstract underlying representations.
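To make the framing concrete, the core of any Maximum Entropy grammar of this kind is the mapping from weighted constraint violations to candidate probabilities: each candidate's harmony is the negative weighted sum of its violations, and probabilities are obtained by exponentiating and normalizing. The sketch below is a minimal, hypothetical illustration of that computation, not the paper's actual model; the toy tableau, constraint names, and weights are invented for exposition.

```python
import math

def maxent_probs(candidates, weights):
    """Return each candidate's MaxEnt probability.

    candidates: dict mapping candidate name -> list of violation counts,
                one count per constraint (same order as `weights`).
    weights:    list of non-negative constraint weights.
    """
    # Harmony = negative weighted sum of violations.
    harmonies = {c: -sum(w * v for w, v in zip(weights, viols))
                 for c, viols in candidates.items()}
    # Normalize exp(harmony) over the candidate set.
    z = sum(math.exp(h) for h in harmonies.values())
    return {c: math.exp(h) / z for c, h in harmonies.items()}

# Hypothetical toy tableau: constraints are [Markedness, Faithfulness].
# The faithful candidate violates Markedness; the unfaithful one violates
# Faithfulness. With Markedness weighted higher, the unfaithful candidate
# receives more probability mass.
cands = {"faithful": [1, 0], "unfaithful": [0, 1]}
probs = maxent_probs(cands, [2.0, 1.0])
```

In this setting, lowering a faithfulness weight shifts probability toward unfaithful mappings, which is the pressure the abstract describes: the learner restructures the lexicon so that high faithfulness weights become unnecessary.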