CUO|Social Spaces will present a full paper on the first phases of the ALADIN design process at the NordiCHI conference, 26-30 October 2014, in Helsinki. The paper describes our user-centered approach to designing the self-learning ALADIN speech engine: finding out how people address such a speech system, and which words they use to talk to it.
To give you an idea, you can find the (preliminary) abstract below:
This paper describes the user-centered design of ALADIN, a speech recognition system targeted at people with physical disabilities, many of whom also have speech impairments. ALADIN is a self-learning system: it lets users choose their own words and sentences, and adapts to the voice characteristics of each individual user. The design process described in this paper focuses on the interaction issues specific to this type of voice interaction. In particular, the tests examined how users with speech impairments address a speech interface, determining which types of variation in wording and sentence structure occur, and when they occur. The results provide a detailed analysis of the observed variation. Based on these results, we discuss potential causes of this variation, and how careful interaction design can better align users' expectations with the capabilities of the ALADIN speech system.