You are viewing the website for the aac-rerc, which was funded by NIDRR from 2008-2013.
For information on the new RERC on AAC, funded by NIDILRR from 2014-2019, please visit rerc-aac.psu.edu.
Dynamically Capturing and Representing Communication Environments
David Beukelman (University of Nebraska)
Dynamic communication environments present unique challenges to people who rely on AAC strategies. This is particularly true in fast-paced contexts such as classrooms, workplaces, and medical settings. The content of interactions in these settings changes constantly, making it difficult for people who rely on AAC to introduce topics, make comments, and exert control over conversational content.
Recent research has also shown that for some individuals (e.g., those with aphasia), interactions can be supported in ways that symbol sets do not by highly contextual, personally relevant scenes captured with digital photography, as well as by actual objects, projected images, photographs, messages on blackboards and whiteboards, and visual displays such as maps. Thus, it appears that if relevant content (visual and perhaps auditory) can be captured and immediately displayed on AAC technologies, individuals, including those with cognitive/linguistic limitations, may gain immediate access to content they want to talk about.
This project has three goals:
(1) identify the type, form, frequency and transience of dynamic communication content in three dynamic environments;
(2) develop design features that enable AAC users to dynamically capture environmental content (visual and auditory) and display it on their AAC device; and
(3) evaluate the acceptance, use and impact of prototype(s) that enable AAC users to capture information across a range of environments.
This project is organized into four phases:
Phase 1: Environmental Assessment. We will investigate the communication environments of 30 people with complex communication needs (CCN). Participants include adolescents and adults with CCN resulting from cerebral palsy (CP), traumatic brain injury (TBI), stroke, and degenerative diseases, in medical settings (10), educational settings (10), and work (employment/volunteer) settings (10). We will obtain photographs and video recordings that document the specific types of content (personal, family, procedural, etc.), the form of content (objects, pictures, drawings, print, photos, video), and the transience of content change (duration of display and frequency of change) in each environment.
Phase 2: Design Specifications. Based on results from Phase 1, a research team made up of AAC experts (users and practitioners), individuals from dynamic communication environments (educators, nurses, and work supervisors) and AAC users who participate in education, medical, and work environments will develop a list of desirable features for the prototype(s).
Phase 3: Development of Prototype(s). The project engineer/co-investigator (Jakobs), with support from the corporate collaborator, will coordinate this phase. Prototype(s) will be developed that incorporate the design features identified during Phase 2. Upon completion, their effectiveness will be evaluated in Phase 4.
Phase 4: Prototype Evaluation and Use. We anticipate that the dynamic capture prototypes developed in Phase 3 will support more effective communication and participation in targeted settings than each participant’s current communication technology. Evaluation of the prototype(s) will occur at the Madonna Rehabilitation Hospital; the Lincoln Public Schools and the Lincoln Public Schools Employment Transition Program; and in specific work settings.
Annual Update (May 2011)
To date, medical settings (patient rooms in long-term medical care hospitals and acute rehabilitation hospitals) and educational settings have been investigated. On average, 110 visual communication content items were identified in each of these settings. In educational settings, more content was displayed in elementary and middle school classrooms than in high school classrooms. Also, nearly twice as much content was displayed in regular classrooms as in resource classrooms.
Design features were developed that include (1) built-in cameras; (2) display of images on the AAC screen, with touch interaction with images that had not yet been stored; (3) storage of images so that they can be retrieved onto the screen and enlarged (zoom feature), allowing the person who relies on AAC to identify specific content items; and (4) storage of images so that they can be loaded into an AAC application efficiently (drag and drop). Evaluation of prototype applications hosted on mobile technology (iPad 2), the DynaVox Maestro, and the InvoTek LaserCam is ongoing.
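The four design features above describe a capture/display/store/retrieve workflow. As a minimal sketch of how such a workflow could fit together, the following Python example models each feature as a method; the class and method names are illustrative assumptions, not the actual prototype design.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class CapturedImage:
    """Hypothetical record for one dynamically captured scene."""
    image_id: int
    label: str
    pixels: bytes          # stand-in for raw camera data
    zoom_level: float = 1.0


class DynamicCaptureDisplay:
    """Illustrative sketch of the capture/display/store/retrieve
    workflow described above (not the actual prototype)."""

    def __init__(self) -> None:
        self._store: dict[int, CapturedImage] = {}
        self._next_id = 0
        self.on_screen: Optional[CapturedImage] = None

    def capture(self, label: str, pixels: bytes) -> CapturedImage:
        # Features 1-2: the built-in camera image goes straight to the
        # screen, so it can be touched before it is ever stored.
        img = CapturedImage(self._next_id, label, pixels)
        self._next_id += 1
        self.on_screen = img
        return img

    def store(self, img: CapturedImage) -> None:
        # Feature 3: keep the image for later retrieval.
        self._store[img.image_id] = img

    def retrieve(self, image_id: int, zoom: float = 1.0) -> CapturedImage:
        # Feature 3: bring a stored image back, enlarged so the user
        # can point out a specific content item.
        img = self._store[image_id]
        img.zoom_level = zoom
        self.on_screen = img
        return img

    def drop_into_app(self, image_id: int) -> CapturedImage:
        # Feature 4: "drag and drop" into the AAC application, modeled
        # here as simply handing the stored record over.
        return self._store[image_id]
```

For example, a captured photo of a classroom whiteboard would be displayed immediately, stored, and later retrieved at twice the zoom so the user can point to one item on the board.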
Nordness, A., Beukelman, D., & Ullman, C. (in press). Impact of alphabet supplementation on speech and pause durations of dysarthric speakers with traumatic brain injury: A research note. Journal of Medical Speech Language Pathology.
Hanson, E., Beukelman, D., Heidemann, J., & Shutts, E. (2010). Impact of alphabet supplementation and word prediction on sentence intelligibility of electronically distorted speech. Speech Communication, 52, 99-105.
Nordness, A., Ball, L., Fager, S., Beukelman, D., & Pattee, G. (2010). Late AAC assessment for individuals with amyotrophic lateral sclerosis. Journal of Medical Speech Language Pathology, 18, 48-54.