
Center for Cognitive Science

The Puzzle of the Mind



Spring 2010 Colloquia

Last Update: 30 March 2010, 9:00 A.M.

Note: NEW or UPDATED material is highlighted

Regular colloquia are

Wednesdays, 2:00 P.M. – 4:00 P.M.,


280 Park Hall

(unless otherwise noted), North Campus,

and are open to the public.

To receive email announcements of each event, please subscribe to our Listserv mailing lists.

Background readings for each lecture are available to UB faculty and students on UB Learns.

Once you have logged in to UB Learns, select "Center for Cognitive Science" → "Course Documents" → "Background Readings for Spring 2010".
(Or you can link directly to the background readings.)

If you are affiliated with UB and do not have access to our UBLearns website, please contact Gail Mauner.

13 January 2010

Orientation meeting for students enrolled in SSC 391

27 January 2010

Micheal Dent

Comparative Bioacoustics Laboratory,
Department of Psychology, and Center for Cognitive Science
University at Buffalo

An Avian Cocktail Party:
Masking and Unmasking in Birds


Although laboratory experiments on hearing in animals generally begin with studies of absolute sensitivity in quiet environments, the reality of an animal's life is that it is rarely communicating in that type of sterile situation. There are auditory studies that more closely approximate ecologically-relevant conditions of an animal's life, such as discrimination, localization, and masking experiments, but even those can have limitations. Studies on the cocktail-party problem in birds have utilized various techniques to describe how well animals can effectively communicate in noisy environments and whether spatial separation of signals and noise aids in hearing out those signals. In humans, spatially separating a signal from a noise source significantly increases the audibility of that signal. Various species of birds also show this "release from masking" under natural field conditions as well as in controlled lab studies, using orienting responses as well as conditioned responses, and using simple pure tones embedded in broadband noises as well as calls and songs embedded in birdsong choruses. As a whole, these experiments suggest that the cocktail-party effect is a basic auditory process used by many animals to aid in signal detection under difficult and complex listening situations and that excellent localization skills are not necessary for the task.


Dent, Micheal L.; McClaine, Elizabeth M.; Best, Virginia; Ozmeral, Erol; Narayan, Rajiv; Gallun, Frederick J.; Sen, Kamal; & Shinn-Cunningham, Barbara G. (2009), "Spatial Unmasking of Birdsong in Zebra Finches (Taeniopygia guttata) and Budgerigars (Melopsittacus undulatus)", Journal of Comparative Psychology 123(4): 357–367.

3 February 2010

Business Meeting for Faculty Members of the Center

10 February 2010

Eon-Suk Ko

Department of Linguistics and Center for Cognitive Science
University at Buffalo

Children's Acquisition of Vowel Duration
Conditioned by Consonantal Voicing


The finding that English vowels are longer before voiced than voiceless consonants by a ratio of about 3:2 has been known for a long time. In this talk, I address the question of when and how young children learning American English develop their knowledge of this phonetic pattern. In the first part, I present an acoustic analysis of corpus data that was conducted to find out how early children begin to produce different vowel durations as a function of post-vocalic voicing (Ko 2007). The age range covered by the data was from 0;11 to 4;0. It was found that children control the vowel duration conditioned by voicing before the age of 2, and that there is no developmental trend in the acquisition of the vowel duration conditioned by post-vocalic voicing within the age range examined. In the second part, I present a study where 8- and 14-month-old infants' perceptual sensitivity to vowel duration conditioned by post-vocalic consonantal voicing was examined (Ko et al. 2009). Half the infants heard CVC stimuli with short vowels; half heard stimuli with long vowels. In both groups, stimuli with voiced and voiceless final consonants were compared. Older infants showed significant sensitivity to mismatching vowel duration and consonant voicing in the short condition, but not the long condition; younger infants were not sensitive to such mismatching in either condition. The results suggest that infants' sensitivity to extrinsic vowel duration begins to develop between 8 and 14 months. Taken together, these results suggest that the presence of the vowel-duration difference conditioned by post-vocalic consonantal voicing in early speech may reflect children's knowledge of English phonotactics in the perceptual domain. The current study thus provides some concrete data to corroborate the idea that the development of speech production is preceded by the development of perceptual sensitivity.


  1. Ko, Eon-Suk (2007), "Acquisition of Vowel Duration in Children Speaking American English", Proceedings of Interspeech 2007 (Antwerp): 1881–1884.
  2. Ko, Eon-Suk; Soderstrom, Melanie; & Morgan, James (2009), "Development of Perceptual Sensitivity to Extrinsic Vowel Duration in Infants Learning American English", Journal of the Acoustical Society of America 126(5) (November): 134–139.

17 February 2010

Veena D. Dwivedi

Department of Applied Linguistics, Brock University

Underspecification in Semantic Processing


Recent work in language processing suggests that interpretive processes are often incomplete, such that comprehenders do not commit to a particular meaning during a parse. Underspecified representations have implications for understanding ambiguity at the syntax-semantics interface, particularly for scope ambiguous sentences, such as

(i) Every kid climbed a tree.

Is the meaning of (i) underspecified, or is a particular scope assignment preferred? Also, how would this representation impact anaphoric resolution downstream? Previous behavioral studies are equivocal regarding the interpretation of (i). Kurtzman & MacDonald (1993) showed that plural anaphors in continuation sentences (e.g., The trees were in the park), consistent with a surface scope interpretation of (i), are preferred over singular continuations (e.g., The tree was in the park), consistent with the inverse scope interpretation. This is precisely what one would expect on theoretical grounds. However, this effect was not replicated in Tunstall (1998). Moreover, Kemtes & Kemper (1999) showed that judgments for sentences like (i) are modulated by age and working-memory span. In this talk, I discuss recent experiments investigating the interpretation of scope-ambiguous sentences using both EEG/ERP and self-paced reading paradigms. I show that, in fact, sentences such as (i) are left unresolved until further information arrives for disambiguation. Furthermore, findings regarding individual differences are discussed, suggesting that underspecification is a strategic use of allocational resources.


Dwivedi, Veena D.; Phillips, Natalie A.; Einagel, Stephanie; & Baum, Shari R. (2009, in press), "The Neural Underpinnings of Semantic Ambiguity and Anaphora", Brain Research, doi:10.1016/j.brainres.2009.09.102.

24 February 2010

Paul Luce

Language Perception Laboratory,
Department of Psychology, and Center for Cognitive Science
University at Buffalo

Competition among Variant Word Forms in Spoken Word Recognition


Traditionally, much of the research on the perception of spoken words has employed carefully produced, isolated words as experimental stimuli. However, words in casual speech exhibit considerable variation in articulation. For example, alveolar stop consonants (/t/ and /d/) in certain phonetic environments may be produced as taps, glottal stops, careful /t/s and /d/s, or they may be deleted altogether. We have been examining the representation and processing of variants of spoken words. In particular, we have attempted to determine whether words containing non-word-initial alveolar stops may be represented in memory as multiple specific variants, by comparing processing time for monosyllabic words that end in either alveolar or non-alveolar (bilabial or velar) stops. Alveolar-ending words were responded to more slowly than carefully matched, non-alveolar-ending words, in a variety of experimental tasks. This result did not hold for similarly composed nonwords. The results suggest that variant word forms compete at a stage beyond sublexical processing. Implications for characterizing competition in spoken word recognition are discussed. (Work done with Micah Geer.)


McLennan, Conor T.; Luce, Paul A.; & Charles-Luce, Jan (2005), "Representation of Lexical Form: Evidence from Studies of Sublexical Ambiguity", Journal of Experimental Psychology: Human Perception and Performance 31(6): 1308–1314.

3 March 2010

Kevan Edwards

Department of Philosophy, Syracuse University

Representation and Mental Processes:
Unity amidst Heterogeneity in the Study of Concepts


In the past few decades, a significant amount of work has taken place in both philosophy and (cognitive and developmental) psychology under the rubric of theorizing about the nature of concepts. As is often the case with relatively embryonic work on a topic at the intersection of academic disciplines, there has been a lot of conceptual confusion and cross-talk. Notably, it isn't easy to come up with an uncontroversial statement of exactly what a theory of concepts is supposed to do—never mind how to evaluate how well various candidate theories do it. Presumptive starting points vary from the assumption that concepts are the basic building blocks of cognitive states to the assumption that concepts are cognitive capacities, to the assumption that the aforementioned building blocks just are the aforementioned capacities.

The anchoring focus of the talk will be what I will describe as a fundamental tension between (i) various reasons—largely theoretically motivated—for maintaining that any viable concept (so to speak) of concepts needs to be robust across contexts, agents, and uses and (ii) the manifest flexibility of concepts as applied in practice—here the support is largely empirical and/or just good common sense. I want to sketch, in admittedly broad brushstrokes, an approach to concept individuation centered on the notion of what a concept refers to or represents. This approach has the virtue of tackling the tension between robustness and flexibility head on. However, the approach requires a significant departure from how the vast majority of philosophers and psychologists approach the topic. Moreover, the relatively impoverished nature of reference leads to some obvious prima facie stumbling blocks. These problems, I will claim, are just manifestations of the "fundamental tension" to which I want to draw attention. The key to resolving the problems (and the tension in general) is to embrace substantial restrictions on a theory of concepts per se, and to acknowledge that a theory of concepts is only part of a much richer account of the structure and function of (the relevant components of) the cognitive mind.

In the process of getting all of this on the table, I hope to draw attention to some broader issues, in particular issues having to do with the nature of the contribution that even a relatively traditional philosopher of mind can hope to make to cooperative, inter-disciplinary research projects in cognitive science.


31 March 2010


Holly L. Storkel

Word and Sound Learning Lab,
Department of Speech-Language-Hearing: Sciences & Disorders
University of Kansas

Word Learning by Typically Developing Preschool Children:
Effects of Phonotactic Probability, Neighborhood Density, and Semantic Set Size


This talk explores how phonological (i.e., individual sound), lexical (i.e., whole-word), and semantic (i.e., meaning) representations contribute to word learning. Past work has shown that retrieval and retention of phonological information is influenced by phonotactic probability (the likelihood of occurrence of a sound sequence), whereas retrieval and retention of lexical information is influenced by neighborhood density (the number of similar-sounding words). Moreover, emerging work suggests that visual and auditory word-recognition is influenced by semantic-set size (the number of words that are meaningfully related to or frequently associated with a given word). In this series of studies, we explore how these three variables influence the creation of new representations during word learning by typically developing preschool children. Results showed that children learned low-phonotactic-probability nonwords more accurately than high-phonotactic-probability nonwords. In contrast, the effect of neighborhood density and semantic-set size varied across test points. In particular, children learned low-density nonwords more accurately than high-density nonwords at an early test point but then showed the reverse pattern, learning high-density nonwords more accurately than low-density, at a later retention test. Turning to semantic-set size, children showed no effect of set size at an early test point, but learned low-set-size nonwords more accurately than high at a later retention test. These results are discussed in terms of the potential effect of phonotactic probability, neighborhood density, and semantic-set size on different word-learning processes (i.e., triggering vs. configuration vs. engagement).


This reading explores the effects of phonotactic probability, neighborhood density, and semantic-set size in a database of words learned by infants. The reading provides an introduction to the variables that will be used in the talk and to the different word-learning processes (i.e., triggering vs. configuration vs. engagement) that will be discussed in the talk. The reading, in conjunction with the talk, illustrates the different methods that can be used to examine language acquisition.

7–8 April 2010

Distinguished Speaker
Sponsored by the Center for Cognitive Science
and the Department of Psychology Donald Tremaine Fund

Jeffrey L. Elman

Dean, Division of Social Science
Co-Director, Kavli Institute for Brain & Mind
Distinguished Professor of Cognitive Science
Chancellor's Associates Endowed Chair
University of California, San Diego

Recipient of the David E. Rumelhart Prize for Theoretical Contributions to Cognitive Science

UB Center for Cognitive Science Colloquium
Wednesday, 7 April 2010, 2:00 P.M.
280 Park Hall

Event Knowledge and Sentence Processing:
A Blast from the Past


Research on language processing has often focused on how language users comprehend and produce sentences. Although fluent use obviously requires integrating information across multiple sentences, the syntactic and semantic processes necessary for comprehending sentences have (with some important exceptions) largely been seen as self-contained. That is, it was assumed that these processes were largely insensitive to factors lying outside the current sentence's boundaries. This assumption is not universally shared, however, and remains controversial. In this talk, I shall present a series of experiments that suggest that knowledge of events and situations—often arising from broader context—plays a critical role in many intrasentential phenomena often thought of as purely syntactic or semantic. The data include findings from a range of methodologies, including reaction time, eye tracking (both in reading and in the visual-world paradigm), and event-related potentials. The timing of these effects implies that sentence processing draws in a direct and immediate way on a comprehender's knowledge of events and situations (or, the "blast from the past", on knowledge of scripts, schemas, and frames).

UB Center for Cognitive Science Distinguished Speaker Lecture
Thursday, 8 April 2010, 2:00 P.M.
Student Union Theater
(Room 106 if entering from ground floor; Room 201 if entering from 2nd floor)

Words and Dinosaur Bones:
Knowing about Words without a Mental Dictionary


For many years, language researchers were not overly interested in words. After all, words vary across languages in mostly random and unsystematic ways. Language learners simply had to learn them by rote. Words were uninteresting. Rules were where the exciting action lay, and considerable effort was invested in trying to figure out what the rules of languages are, whether they come from a universal toolbox, and how language learners could acquire them. Over the past decade, however, there has been increasing interest in the lexicon as the locus of users' language knowledge. There is now a considerable body of linguistic and psycholinguistic research that has led many researchers to conclude that the mental lexicon contains richly detailed information about both general and specific aspects of language. Words are in again, it seems. But this very richness of lexical information poses representational challenges for traditional views of the lexicon. In this talk, I will present a body of psycholinguistic data, involving both behavioral and event-related-potential experiments, suggesting that event knowledge plays an immediate and critical role in the expectancies that comprehenders generate as they process sentences. I argue that this knowledge is, on the one hand, precisely the sort of stuff that on standard grounds one would want to incorporate in the lexicon but, on the other hand, cannot reasonably be placed there. I suggest that, in fact, lexical knowledge (which I take to be real) may not properly be encoded in a mental lexicon, but rather realized through a very different computational mechanism.


Elman, Jeffrey L. (2009), "On the Meaning of Words and Dinosaur Bones: Lexical Knowledge Without a Lexicon", Cognitive Science 33(4): 547–582.

14 April 2010

LouAnn Gerken

Tweety Language Development Lab,
Department of Psychology, Department of Linguistics,
and Director, Cognitive Science Program
University of Arizona

Predicting and Explaining Babies


The past 50 years or so of research in language and cognitive development have alternated between construing the learner's job as that of merely predicting new data in a particular domain from already-experienced input data, and that of explaining the state of affairs in the world that gave rise to the input data in the first place. Thus, in the domain of language development, researchers have debated about whether infants and children are learning linguistic grammars (explanations for linguistic data) or whether they are storing input in such a way that they can generalize to new instances without ever representing a grammar (prediction). The long back-and-forth about the nature of the English past-tense rule is an example of such a debate. One reason why the field has made somewhat less headway on this issue than we might hope is that the debate has been about the nature of stored representations, which are notoriously difficult to distinguish based on infant and child behavioral data. More recently, and largely under the banner of Bayesian inference, several labs have begun to approach the question of prediction vs. explanation in development in a different way. In my talk, I will briefly review some of the history of the prediction-vs.-explanation debate and discuss several examples of new studies supporting the view that infants and children are, in many domains, driven to explain.


Gerken, LouAnn (2010), "Infants Use Rational Decision Criteria for Choosing among Models of their Input", Cognition 115(2) (May): 362–366.

21 April 2010

Anna Papafragou

Department of Psychology and Department of Linguistics & Cognitive Science
University of Delaware

Space in Language and Thought


The linguistic expression of space draws from, and is constrained by, basic, probably universal, elements of perceptual/cognitive structure. Nevertheless, there are considerable cross-linguistic differences in how these fundamental space concepts are segmented and packaged into sentences. This cross-linguistic variation has led to the question whether the language one speaks could affect the way one thinks about space—hence whether speakers of different languages differ in the way they see the world. This talk addresses this question through a series of cross-linguistic experiments comparing the linguistic and non-linguistic representation of motion and space in both adults and children. Taken together, the experiments reveal remarkable similarities in the way space is perceived, remembered, and categorized, despite differences in how spatial scenes are encoded cross-linguistically.


Papafragou, Anna; Hulbert, Justin; & Trueswell, John (2008), "Does Language Guide Event Perception? Evidence from Eye Movements", Cognition 108: 155–184.

Copyright © 2010 by Prof. Gail Mauner, Director, UB Center for Cognitive Science
Prof. William J. Rapaport, UB CogSci webmaster