Lois Jean Brady | Matthew Guggemos – Multi-Sensory Tools for Autism

Apr 30, 2018 | Medical & Health Sciences, Psychology and Neuroscience

For children with autism, communication can be a challenge. Drawing from a wealth of clinical experience, speech pathologists Lois Brady and Matthew Guggemos at iTherapy, LLC are developing innovative, engaging multi-sensory communication tools with the aim of improving quality of life for individuals with autism.

Autism spectrum disorder is a life-long neurodevelopmental condition. Currently, one in 59 children in the USA and over one in 100 people in the UK are diagnosed with autism spectrum disorder – and the incidence is rising. It manifests as a number of symptoms and behaviours that affect the way in which individuals understand and react to the world around them. People on the autism spectrum often have problems with social interaction and communication because they find it hard to understand and use spoken language. In fact, around 40% of people diagnosed with autism are non-verbal.

Speech and language therapy, as well as behavioural, educational and occupational interventions, can help to provide support. People on the autism spectrum each learn and develop in different ways, so their needs and levels of support vary considerably. A range of tools is available to help individuals overcome their communication barriers – including pictures, gestures, sign language, visual aids, and speech-output devices such as computers.

Finding your InnerVoice

One constraint of communication systems is that they often fail to convey emotions. This is where the work of Lois Brady and Matthew Guggemos at iTherapy LLC comes in. Brady and Guggemos lead a team of speech-language pathologists, communication scientists and assistive technology specialists who are dedicated to helping people on the autism spectrum.

Drawing on their clinical experience, the group aims to improve technology-based interventions for autism spectrum disorder. Through their evidence-based approach, they have developed a powerful software application, InnerVoice. This innovative tool is an emotionally expressive, software-based speech-generating communication system, specifically designed for individuals who struggle with social communication. It combines facial expressions, emotions, written words, and actions with speech, offering a complete multi-sensory learning experience.

Brady, a speech pathologist with over 25 years of experience, says: ‘We at InnerVoice are dedicated to improving quality of life for people who struggle with communication challenges. We believe that the current technology is ready but underutilised and that communication can be mastered if people are provided with the right tools. We make those tools.’

The team has researched key areas of language development and skill acquisition and applied this knowledge to influence and shape the design of InnerVoice, truly addressing the complex communication needs of people on the autism spectrum.


Say What You Feel

Our emotions, whether they be sadness, anger or happiness, are important communication tools that help us to express our ideas, feelings, and wishes. Brady and Guggemos recognised that the ability to convey emotions, through tone of voice or facial expressions, could have far-reaching benefits for individuals with communication challenges. In their project, ‘Say What You Feel’, the researchers explored the idea that synthesised emotional communication could be exploited to enhance educational communication systems. The team created and tested an emotionally expressive software-based speech-generating system, specifically designed for individuals with little or no verbal communication ability.

In a series of experiments, the team measured the ability of the software to portray the human emotions of anger, happiness, sadness and neutrality. A combination of facial-recognition software and emotional speech algorithms was found to convey emotions most accurately in the InnerVoice tool.
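To give a flavour of how such a system can work, the sketch below maps emotions to prosody settings – speaking rate, pitch and volume – expressed in SSML, the W3C mark-up language accepted by most modern text-to-speech engines. The preset values and function are illustrative assumptions for this article, not iTherapy's actual algorithm.

```python
# Hypothetical emotion-to-prosody presets; the numbers are illustrative,
# not InnerVoice's real settings. SSML <prosody> attributes (rate, pitch,
# volume) are part of the W3C speech-synthesis standard.
EMOTION_PRESETS = {
    "happy":   ("115%", "+15%", "+2dB"),
    "sad":     ("80%",  "-10%", "-3dB"),
    "angry":   ("110%", "-5%",  "+6dB"),
    "neutral": ("100%", "+0%",  "+0dB"),
}

def to_ssml(text: str, emotion: str = "neutral") -> str:
    """Wrap text in SSML prosody tags that approximate an emotional tone."""
    rate, pitch, volume = EMOTION_PRESETS[emotion]
    return (
        f'<speak><prosody rate="{rate}" pitch="{pitch}" volume="{volume}">'
        f'{text}</prosody></speak>'
    )

print(to_ssml("I want to play outside!", "happy"))
```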

 

Engaging Animated Avatars

Children with autism have difficulty understanding facial expressions, recognising familiar faces and mimicking social-communicative behaviour. This is in part due to slow-developing activity of mirror neurons in the brain, which play a key role in learning. One treatment intervention that has been shown to help stimulate mirror neuron activity in people with autism is video self-modelling. This intervention helps individuals imitate social behaviour, such as facial expressions, or imitate movement, such as speech production.

An innovative solution offered by InnerVoice is to animate the image of the user. This unique system features interactive video self-modelling using animated self-avatars – digital characters that incorporate the user’s face – giving users a greater opportunity to understand, predict and mimic social behaviour that they see in everyday situations. The avatars are intrinsically engaging to a child with autism. Since engagement has been shown to be vital for successful learning, this ability to pique the interest of the child makes the app especially effective.

Giving a Voice to Children with Autism

Joint attention is a pivotal skill that influences communication development and social interaction, enabling children to communicate with adults as well as their peers, sharing what is in their minds. Another crucial communication skill is theory of mind – a social-cognitive ability that involves a number of skills, such as understanding a person’s intentions, sharing attention, and tracking eye gaze.

Although people on the autism spectrum may find it difficult to share attention with another person, research has shown that they can be highly motivated and focused when they use computers or mobile devices as learning tools. The InnerVoice avatar attracts the interest of people with autism and motivates them to engage in communication. InnerVoice’s interface coordinates the point the user touches on the screen with the avatar’s eye gaze, thereby helping to teach the relationship between pointing and eye gaze – two crucial factors in learning word meaning and social behaviour.
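As a concrete illustration of that coordination, the sketch below computes the direction an avatar’s eyes should turn so that its gaze lands on the point the user touches. The coordinates and function are hypothetical – a minimal geometric example, not InnerVoice’s actual interface code.

```python
import math

def gaze_direction(eye: tuple[float, float],
                   touch: tuple[float, float]) -> float:
    """Angle in degrees from the avatar's eye position to the touched point."""
    dx = touch[0] - eye[0]
    dy = touch[1] - eye[1]
    return math.degrees(math.atan2(dy, dx))

# The user touches a picture below and to the right of the avatar's eyes,
# so the avatar looks towards it, modelling the pointing/eye-gaze link.
print(gaze_direction(eye=(160, 90), touch=(420, 300)))  # ~38.9 degrees
```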

Neurotypical Development

From an early age, neurotypical children – outside of the autistic spectrum – learn social, cognitive, and communicative skills through listening to speech and looking at faces. Infants identify socially important people, like parents or caregivers, and learn how to understand other people’s feelings and emotional well-being through recognising, interpreting, and mimicking their facial expressions.

Dr Deb Roy and his team at MIT found that children learn words by accumulating numerous interactions within linguistic, temporal, and spatial contexts. In fact, the ‘birth of a word’ is triggered by the length and speed of a caregiver’s words or sounds, produced in one or more of these three contexts.

By producing easy-to-imitate speech in a meaningful context, parents provide a model that their children can mimic. In autism spectrum disorders, the ability to imitate social behaviour develops slowly, profoundly affecting the individual’s ability to communicate. The design of the InnerVoice app draws on Roy’s findings: it can adjust the avatar’s utterance length and speed, and this can then be applied to any context a user might wish to create.
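As a rough sketch of what adjusting utterance length and speed might look like in software, the hypothetical function below trims a modelled utterance to a child-friendly word count and attaches a playback rate for the avatar. The parameter names and defaults are assumptions for illustration, not the app’s real API.

```python
def tune_utterance(sentence: str, max_words: int = 3,
                   rate: float = 0.8) -> dict:
    """Trim an utterance to max_words and attach a playback rate
    (1.0 = normal speed) for the avatar to speak."""
    words = sentence.split()[:max_words]
    return {"text": " ".join(words), "rate": rate}

# A caregiver models a short, easy-to-imitate utterance at 70% speed.
print(tune_utterance("more juice please", max_words=2, rate=0.7))
# {'text': 'more juice', 'rate': 0.7}
```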

 

 

The Art of Communication

Speech requires a lot of practice and imitation to master, much like learning a musical instrument. It is a complicated motor activity – that is, it involves movement – in which children have to imitate sounds that are combined into words and sentences. Research shows that video self-modelling is an effective way to teach skills to people with autism, possibly because it stimulates mirror neuron activity in the brain. Mirror neurons are key to learning virtually any motor activity, including speech.

When people use InnerVoice to learn to communicate, they watch themselves carry out a target behaviour while remaining engaged. Equally important, engagement activates reward centres in the brain that act to reinforce behaviours such as social interaction or eating. This reward mechanism plays a crucial role in learning – when someone likes doing something, they are likely to do it more. The InnerVoice app aims to stimulate mirror neuron and reward centre activity in the brain, encouraging imitation and increasing engagement.

Semiotics is the study of signs and symbols – how they are used and what they mean. A word is a sign that carries meaning. People on the autism spectrum experience sensory-processing difficulties with establishing a semiotic process – the process that pairs meaning with a symbol. One of the most important aspects of InnerVoice is its avatar-mediated interaction, which creates a multi-sensory semiotic experience that simultaneously represents the associations between spoken, written, or gestured symbols and the meanings they represent.

For example, pairing dynamic media – such as a short video of a dangerous situation within a home environment – with spoken and written words helps establish a semiotic process, assigning meaning to a symbol or a word. This multi-sensory semiotic experience is vital for helping teach language to people on the autism spectrum.
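One way to picture such a multi-sensory pairing in software is a single record that binds the written symbol, its spoken form, the illustrating video and a plain-language meaning, so that all of them can be presented together. The field names and file paths below are hypothetical, not InnerVoice’s actual data model.

```python
from dataclasses import dataclass

@dataclass
class SemioticPair:
    written_word: str   # the printed symbol
    audio_clip: str     # path to the spoken word
    video_clip: str     # dynamic media showing the meaning
    meaning: str        # plain-language gloss for the teacher

hot_stove = SemioticPair(
    written_word="hot",
    audio_clip="sounds/hot.mp3",
    video_clip="videos/hot_stove.mp4",
    meaning="do not touch - it can burn you",
)
print(hot_stove.written_word, "->", hot_stove.meaning)
```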

Following the philosophy of Universal Design – allowing universal access to all, whether or not they have disabilities – InnerVoice is designed to be accessed, understood and used to the greatest extent possible by all people, regardless of age or ability. The app can be used as a multi-sensory semiotics tool, a video self-modelling tool or an assistive communication device.

A powerful, well-researched and engaging tool, InnerVoice can help children become more effective communicators, support their own decision-making and allow them to become more independent – overall improving social communication and helping to improve quality of life for children with autism.

With a vision to expand their toolkit, the team at InnerVoice sees a bright future ahead. Guggemos says: ‘We continue to expand the idea of Multi-Sensory Semiotics by creating more technology that makes it easy for people of all ages and abilities to communicate.’ Through a combination of science, technology and play, the InnerVoice approach is set to have a truly positive impact on speech and language therapy for many children.


 

Meet the Researchers

Lois Jean Brady, MA CCC-SLP,
iTherapy LLC
Martinez, CA
USA

Lois Jean Brady has over 25 years’ experience as a practising Speech-Language Pathologist. She authored Apps for Autism, and co-authored Speech in Action and Speak, Move, Play and Learn with Children on the Autism Spectrum. She won two Autism Speaks App Hack-a-thons, the Benjamin Franklin Award for Apps for Autism and an Ursula Award for Autism TodayTV. Lois’s current research, developing multi-sensory products, aims to enhance communication, attention, cognition and quality of life for individuals with autism. She also co-developed the VAST Autism Apps to increase speech. She was Principal Investigator on a 2015 National Science Foundation Small Business Innovation Research project, which focused on synthesised emotional communication for mobile devices.

CONTACT

E: loisjeanbrady@gmail.com
W: http://www.itherapyllc.com/


 

Matthew Guggemos, MS CCC-SLP,
iTherapy LLC
Napa, CA
USA


Matthew Guggemos is the co-founder and chief technology officer of iTherapy. He is also a speech-language pathologist, researcher, certified autism specialist, and professional drummer. He specialises in designing multi-sensory learning products for people on the autism spectrum, drawing on his experiences as a professional drummer and literary specialist, which influence his design ideas. He is currently studying the relationships shared among semiotics, skill mastery, improvisation, and communication, and how these concepts can be applied to educational technology for autism. Matthew was director of research on a 2015 National Science Foundation Small Business Innovation Research grant that led to the development of the synthesised emotional communication software subsequently incorporated into InnerVoice. Notably, he received the 2013 Mensa Intellectual Benefits to Society Award for his design contributions to InnerVoice.

CONTACT

E: matthewguggemos@gmail.com
W: http://www.itherapyllc.com/
W: https://www.drumlanguage.com
Project W: https://www.innervoiceapp.com/
Twitter: @InnerVoiceapp (https://twitter.com/InnerVoiceapp)

KEY COLLABORATORS

Dr Tom Buggey

FUNDING

National Science Foundation
Small Business Innovation Research (SBIR)
NewSchools Ignite
Autism Speaks  

REFERENCES

S Baron-Cohen, Mindblindness: An essay on autism and theory of mind, Cambridge, MA: The MIT Press, 1995.

S Baron-Cohen, Do People with Autism Understand What Causes Emotion?, Child Development, 1991, 62, 385–395.

T Buggey, An examination of the effectiveness of videotaped self-modeling in teaching specific linguistic structures to pre-schoolers, Topics in Early Childhood Special Education, 1995, 15, 434–458.

BC Roy, MC Frank, P DeCamp, M Miller and D Roy, Predicting the birth of a spoken word, Proceedings of the National Academy of Sciences, USA, 2015, 112, 12663–12668.