Recent research on speech perception and word recognition has shown that fine-grained sub-phonemic as well as speaker- and episode-specific characteristics of the speech signal are integrally connected with segmental (phonemic) information; all of these are most probably processed in a non-distinct manner and stored in lexical memory. This view contrasts with the traditional approach, which holds that we operate on abstract phonemic representations extracted from a particular acoustic signal, without the need to process and store the multitude of its individual features. In this paper, I want to show that this turn towards the "particulars" of a speech event was in fact quite predictable, and that the so-called traditional view would most probably never have been formulated if studies on language variation and language change in progress had been taken into account when constructing models of speech perception. In part one, I briefly discuss the traditional view ("abstract representations only") and its theoretical background, and outline some problems, internal to speech perception theory, that this view encounters. In part two, I demonstrate that what we know about the implementation of sound changes has long made it possible to answer, once and for all, the question of the integrated processing and storage of extralinguistic, phonemic and sub-phonemic characteristics of the speech signal.