elisakreiss / rrre_rt

Reaction Time in Rationally Redundant Referring Expressions

Response to Engelhardt (2011)

Engelhardt 2011

  • Experimental setup
    • visual displays with two stimuli
    • referring expressions presented as auditory cues
    • timing starts at modifier onset
    • stimuli were not color-diagnostic
    • reaction time was measured at the button press indicating which of the two objects was the target
  • Result
    • reaction times were higher when the modifier was used overinformatively than when it was used informatively
  • Potential problems
    • did not use color-diagnostic stimuli (the more atypical an object's color, the more often people use a modifier overinformatively, perhaps to guide the listener's visual search; a listener has no strong expectation about, e.g., a circle's color, which could trigger a search for a contrast set)
    • very low scene variation
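The result above amounts to comparing mean reaction times across the two modifier conditions. A minimal analysis sketch (the trial records and `meanRT` helper below are invented for illustration, not data or code from this repo):

```javascript
// Hypothetical sketch: mean reaction time per condition from
// (condition, rt) trial records. All numbers are made up.
function meanRT(trials, condition) {
  const rts = trials.filter(t => t.condition === condition).map(t => t.rt);
  return rts.reduce((a, b) => a + b, 0) / rts.length;
}

const trials = [
  { condition: "informative", rt: 640 },
  { condition: "informative", rt: 660 },
  { condition: "overinformative", rt: 720 },
  { condition: "overinformative", rt: 700 },
];

// a positive difference corresponds to the reported slowdown
// for overinformative modifiers
const diff = meanRT(trials, "overinformative") - meanRT(trials, "informative");
```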

Our experiment

  • pilot studies for displays with 3 and 6 stimuli
  • utterances are chosen probabilistically, inspired by the typicality study
  • so far, utterances are only presented visually
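The probabilistic utterance choice could be sketched as follows (a minimal sketch; `sampleUtterance` and the probabilities are hypothetical, not taken from the pilot code):

```javascript
// Hypothetical sketch: include the redundant color modifier with a
// given probability, e.g. a higher one for color-atypical objects.
// `rand` is injectable so the choice can be made deterministic in tests.
function sampleUtterance(noun, color, pModifier, rand = Math.random) {
  return rand() < pModifier ? `${color} ${noun}` : noun;
}

// e.g. an atypical orange pear would get the modifier most of the time
sampleUtterance("pear", "orange", 0.9);
```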

To Do
  • Improve stimuli (e.g., improve orange pear, get rid of avocado)
  • Use written and auditory cues (in separate experiments)
  • Open questions
    • how to encode the selection of an item (which keys to press)
    • when to set the time onset?
    • idea: take utterances from the bda to obtain a "human-like" referring-expression distribution (and perhaps tell participants that the expressions come from a previous experiment, so they expect human-like, rational utterances)
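One possible answer to the key-encoding question above, sketched under the assumption that display positions map left-to-right onto keyboard keys (the key sets below are hypothetical, not a decision recorded in these notes):

```javascript
// Hypothetical key-to-item encoding: display positions map
// left-to-right onto adjacent keys, so the motor mapping
// mirrors the visual layout.
const KEYS_BY_SETSIZE = {
  3: ["j", "k", "l"],
  6: ["s", "d", "f", "j", "k", "l"],
};

// returns the index of the selected display position, or null
// for a key outside the response set
function itemForKey(key, setSize) {
  const idx = KEYS_BY_SETSIZE[setSize].indexOf(key.toLowerCase());
  return idx === -1 ? null : idx;
}
```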
