An emotion-inferencing interface? An interesting project when you consider how often our current AAC technologies fail to represent the emotional / affiliative aspects of our communications! :>) (link)
Ok, maybe I’m going off the deep end here, but this bolsters my view about the effect of split-attention tasks – something suffers. All you scanners and eye trackers out there – what kinds of compromises do you have to make when trying to operate an AAC device and communicate at the same time? […]
(from futurity.org) Children with brain injuries may use gesture to signal that they need help developing language, research shows. The children who make the fewest gestures early in development also develop spoken vocabulary more slowly. Findings were published in the March issue of Child Development. (link)
A new book on speech synthesis in AT and AAC by John Mullennix and Steven Stern. A great resource for those of us interested in speech synthesis, covering the spectrum from history and technical developments to new clinical research findings and societal impacts. (link)
Just when we thought the world was safe … (link)
Humans use the world around us to make meaning. Now researchers provide experimental evidence that emblematic gestures are represented in the brain similarly to spoken language. (link) (article abstract)