Addressing Deaf or Hard-of-Hearing People in Avatar-Based Mixed Reality Collaboration Systems

We propose an easy-to-integrate Automatic Speech Recognition (ASR) and textual visualization extension for an avatar-based MR remote collaboration system that visualizes speech via spatially anchored floating speech bubbles. In a small pilot study, our extension achieved a word accuracy of 97%, measured using the widely used word error rate (WER).
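Since the evaluation is based on the word error rate, the metric can be sketched as a word-level Levenshtein (edit) distance between a reference transcript and the ASR hypothesis; word accuracy is then 1 − WER. The following is a minimal illustrative implementation, not the authors' evaluation code:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / number of reference words,
    computed via word-level Levenshtein distance with dynamic programming."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between the first i reference words
    # and the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # delete all i reference words
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # insert all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(
                dp[i - 1][j] + 1,        # deletion
                dp[i][j - 1] + 1,        # insertion
                dp[i - 1][j - 1] + cost, # substitution or match
            )
    return dp[len(ref)][len(hyp)] / len(ref)


wer = word_error_rate("show me the design draft", "show me design draft")
accuracy = 1.0 - wer  # a 97% word accuracy corresponds to WER = 0.03
```

A word accuracy of 97% thus means that, on average, 3 out of every 100 reference words were substituted, deleted, or inserted by the recognizer.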