2017 UDL Symposium


Monday, July 31 • 1:00pm - 1:18pm
Toward Emotionally Accessible TTS Delivery of Online Learning

Until audio support for reading includes the option for emotional expression, we may not unlock the learning potential of all learners. A small study has indicated that people are affected by the choice between a human voice and a synthetic voice during learning tasks. One potential explanation is that the human voice carries a prosodic expression of emotion. Given that some students have performed better with a human voice, emotional expression could be a contributing factor. For the visually impaired population, preference offers an entry point into researching this topic: according to the Perkins School for the Blind website, when listening to an audiobook a human voice is preferred, even to the point of forgiving the reader's pronunciation mistakes. For long-form text, then, an emotional delivery may be better.

To investigate this question further, text from online courses has been analyzed for its emotional expression. This analysis shows that, across various courses, the text material inherently carries different levels of emotional expression. Three courses were sampled from three platforms (EdX, FutureLearn, Coursera), covering three content areas (Physical Science, Social Science, and Personal Development). Two different voice options were used to read the text from these samples, and the readings were recorded. The recordings were analyzed for the prosodic presence of five emotions (Neutral, Happy, Anger, Sadness, Frustration); the detection technology has a reported 66.5% accuracy when analyzing human voices. The technology's predictions across the two voices will be compared with the emotion detected in the source text.
Finally, a recording of the source text will be made using a technology designed to express emotion, so that the two earlier recordings can be compared with an alternative synthetic-speech option that attempts to appropriately express the emotion detected in the text. This will give the audience an experience of what an upcoming study will do in asking the question, "How does the emotional expression of a synthetic voice impact reading comprehension?" The upcoming pilot study will use methods applicable to learning at scale, as the goal of this research is to better understand how emotional speech affects learning outcomes and to investigate the potential for personalized learning at scale. In addition to the content analysis that sparked the pilot study, progress on the pilot work will also be detailed at the time of this presentation.
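The comparison described above can be sketched in code. The snippet below is a minimal, hypothetical illustration (not the presenter's actual pipeline or tooling): given per-segment emotion labels predicted from the source text and from a recorded reading, it computes each label's distribution over the five emotion categories and the segment-level agreement between the two prediction streams. All function names and sample labels are illustrative assumptions.

```python
from collections import Counter

# The five emotion categories named in the abstract.
EMOTIONS = ["Neutral", "Happy", "Anger", "Sadness", "Frustration"]

def emotion_distribution(labels):
    """Normalized frequency of each emotion label in a list of predictions."""
    counts = Counter(labels)
    total = len(labels)
    return {e: counts.get(e, 0) / total for e in EMOTIONS}

def agreement_rate(text_labels, audio_labels):
    """Fraction of aligned segments where the text-based and
    audio-based emotion predictions agree."""
    matches = sum(t == a for t, a in zip(text_labels, audio_labels))
    return matches / len(text_labels)

# Hypothetical per-segment predictions for one course sample.
text_preds = ["Neutral", "Happy", "Neutral", "Sadness", "Neutral"]
voice_preds = ["Neutral", "Neutral", "Neutral", "Sadness", "Anger"]

print(emotion_distribution(text_preds))
print(agreement_rate(text_preds, voice_preds))
```

A real analysis would replace the toy label lists with output from a text emotion classifier and a speech emotion recognizer, and would likely compare full probability distributions rather than hard labels.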

Speakers

Garron Hillaire

Graduate Student, The Open University
I first heard about UDL in David Rose's course while doing an Ed.M. at HGSE when taking his course on the topic. After completing the masters I joined CAST and worked there as an educational software architect for 4 years. During this time I contributed to technical implementations... Read More →



Monday July 31, 2017 1:00pm - 1:18pm EDT
Bayview Room