People still outperform computational methods on a diverse set of problems, ranging from language learning to recognizing objects in a scene. To explain human success on these problems, cognitive scientists appeal to representation as a key explanatory device. Puzzlingly, people are remarkably flexible, changing their representation of the same data depending on various factors, including context and experience. Most formal methods, however, use a fixed set of units (whether endowed or learned) to encode inputs, without accounting for this flexibility in feature representations. I will present a computational framework based on Bayesian nonparametrics that adaptively constructs primitive symbolic units to represent its input. I will explore how different factors, such as statistical co-occurrence, transformational invariance, and compositional form, are naturally accommodated by models in this framework. The models make novel predictions, which are supported by empirical results.
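The defining property of a Bayesian-nonparametric prior is that the number of representational units is not fixed in advance but grows with the data. As a generic illustration (not the specific models of the talk), the sketch below draws cluster assignments from a Chinese restaurant process, the canonical such prior; the function name and parameters are illustrative.

```python
import random

def crp_assignments(n_items, alpha=1.0, seed=0):
    """Assign items to an open-ended set of units via the Chinese
    restaurant process: item i joins existing unit k with probability
    proportional to its current count, or founds a new unit with
    probability proportional to the concentration parameter alpha."""
    rng = random.Random(seed)
    counts = []       # counts[k] = number of items assigned to unit k
    assignments = []
    for i in range(n_items):
        r = rng.random() * (i + alpha)
        acc = 0.0
        choice = len(counts)          # default: create a new unit
        for k, c in enumerate(counts):
            acc += c
            if r < acc:
                choice = k            # join an existing unit
                break
        if choice == len(counts):
            counts.append(1)
        else:
            counts[choice] += 1
        assignments.append(choice)
    return assignments

units = crp_assignments(20, alpha=1.5)
print(len(set(units)))  # number of units is inferred, not preset
```

Larger values of `alpha` yield more units for the same data, which is one simple way such models let the granularity of a representation adapt to experience.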