When Victoria University’s Professor Jeff Sigafoos offered autistic children the chance to communicate in three ways, it perhaps wasn’t surprising that the kids did best with the system they liked the most. It was also unsurprising, maybe, that most of them liked the computer-based system best. But what is astonishing is that during the intervention some previously non-verbal children actually started to speak.
It must be amazing for these children and families.
The families all seem to be very pleased with the progress of their children. The children also seem to really enjoy participating in this research. When the research assistants enter the classroom, for example, the children often jump out of their seats to greet them! All of the children have learned new communication skills as a result of participating in the research, and we have seen about 10% of the children start to speak during the intervention. One of our recent publications has documented that the emergence of speech appears related to using the iPad-based communication device.
What is the computer system they use?
We are using a relatively new type of speech-generating device. Speech-generating devices are computer systems that include graphic symbols, which function like words. For example, if the child tapped a line drawing of a glass of water, the computer might say “I would like a drink of water please.” The iPad-based system we’re using has high-quality synthesised voice output and a large set of drawings and pictures, and is relatively low-cost compared to other devices – hundreds rather than thousands of dollars.
What were the other two ways you gave them to communicate?
For children with autism who do not develop speech (about 25% of children with autism), there are three main communication systems that have been studied. One of these is manual signing, using signs similar to those used by deaf people. The second system is called picture exchange. With picture exchange, the child hands over a plastic card with a picture and word on it. For example, if the child handed over a card with a picture of a glass of water and the printed word “water”, it would be taken to mean something like “I want a drink of water.” So we looked at how quickly children learned these two systems compared to the iPad system. We also looked at which system they preferred and how well they did after the training programme ended.
We have found that the vast majority of the children show a very strong preference for one of the communication modes, and in about 70% of the children that preference was for the computer (iPad) system. Also, one of the most consistent findings was that the children retained what they had learned much better with their preferred communication system. We suspect that this might be because the children are more motivated to communicate using their preferred system.
Does this imply that children will all learn things better using systems they prefer?
Not necessarily, but it does seem to suggest that children have preferences that might influence their motivation. Because these children have not developed speech, they are often not able to express their preferences. Our approach to assessing preferences is therefore an important contribution to the rehabilitation literature as it suggests a way to enable such individuals to express preferences and thus exert an important degree of self-determination. Assessing preferences could thus be seen as being important in its own right, and it is also one way to promote self-determination and improve quality of life.
What are your future research directions?
In our future research we are looking at the existing (pre-linguistic) gestures, vocalisations and body movements that children with developmental disabilities use to communicate. Even when these children do not have speech, they often use various informal gestures, vocalisations and body movements to communicate. The problem is that these types of pre-linguistic communication behaviours are difficult to interpret, which causes communication breakdowns. We think we might be able to solve this problem using a novel technology-based intervention. The idea is that the children’s pre-linguistic behaviours might be translated into understandable speech using motion-detecting microswitches. So when the child engages in a certain body movement that means “I need help”, for example, the microswitch will trigger relevant speech output from an iPad-based speech synthesiser (e.g., “Can I have some help please?”). The idea is to use technology to make the child’s existing communication forms easier for listeners to understand.
These interviews showcase researchers supported by the Marsden Fund which, since 1994, has been supporting fundamental, investigator-led research in New Zealand.