Developing Accessible Assistive Technology for the Deaf and Hard of Hearing by Deploying a Fine-Tuned Deep Neural Network to the Web

A new deep neural network and mobile application have been developed to assist the deaf and hard-of-hearing community by detecting and classifying five significant sounds: a running faucet, a dripping faucet, a car engine, a car horn, and a fridge alarm. This research builds on prior work that used a long short-term memory (LSTM) model and an enhanced self-captured dataset but did not produce a mobile application. The current study introduced a “negative” class to account for irrelevant sounds encountered in real-world settings. The refined model achieved an area under the curve (AUC) score of 0.97, effectively balancing precision and recall across both critical and benign sounds. A novel approach was taken to fine-tune the YAMNet audio classification model, employing convolutional layers to minimize performance overhead for real-time mobile use. The model has been deployed to the Web through TensorFlow.js and is available as a Progressive Web App for offline access. Future research will focus on user testing to ensure the application and model meet the needs of their intended users.
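The summary above gives no implementation details, but the general recipe it describes, fine-tuning YAMNet with a small convolutional head for six classes (the five target sounds plus the “negative” class), can be sketched as follows. This is a minimal illustration assuming the public TensorFlow Hub release of YAMNet is used as a frozen embedding extractor; the class names, layer sizes, and training settings are placeholders rather than the authors' exact configuration.

```python
import tensorflow as tf
import tensorflow_hub as hub

# Assumed backbone: the public TensorFlow Hub release of YAMNet, used as a
# frozen feature extractor (16 kHz mono waveform -> 1024-dim embeddings).
yamnet = hub.load('https://tfhub.dev/google/yamnet/1')

# Illustrative label set: the five target sounds plus the "negative" class
# described in the abstract.
CLASS_NAMES = ['running_faucet', 'dripping_faucet', 'car_engine',
               'car_horn', 'fridge_alarm', 'negative']

def embed(waveform):
    """Return per-frame YAMNet embeddings for a mono 16 kHz float32 waveform."""
    _, embeddings, _ = yamnet(waveform)
    return embeddings  # shape: (num_frames, 1024)

# Hypothetical lightweight convolutional head over the embeddings; the
# paper's exact architecture may differ.
classifier = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(1024,), name='yamnet_embedding'),
    tf.keras.layers.Reshape((1024, 1)),
    tf.keras.layers.Conv1D(32, kernel_size=8, strides=4, activation='relu'),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(len(CLASS_NAMES), activation='softmax'),
])
classifier.compile(
    optimizer='adam',
    loss='categorical_crossentropy',  # one-hot labels assumed
    metrics=[tf.keras.metrics.AUC(multi_label=True,
                                  num_labels=len(CLASS_NAMES))],
)
# classifier.fit(train_embeddings, train_labels_one_hot, epochs=...)
```

Training only a small head on fixed embeddings, rather than the full backbone on raw audio, keeps the trainable portion of the network light, which is consistent with the stated goal of minimizing performance overhead for real-time mobile use.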
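The Web deployment path mentioned in the abstract (TensorFlow.js plus a Progressive Web App) would typically involve converting the trained Keras head into the TensorFlow.js layers format. The sketch below uses the tensorflowjs Python package with hypothetical file paths; the browser-side loading and service-worker caching noted in the comments describe a common setup, not details taken from the paper.

```python
import tensorflow as tf
import tensorflowjs as tfjs

# Hypothetical path to the fine-tuned Keras classifier saved after training.
classifier = tf.keras.models.load_model('classifier.h5')

# Export to the TensorFlow.js layers format (model.json + weight shards).
tfjs.converters.save_keras_model(classifier, 'web_model/')

# Equivalent CLI (hypothetical file names):
#   tensorflowjs_converter --input_format=keras classifier.h5 web_model/
#
# In the browser, tf.loadLayersModel('web_model/model.json') loads the
# exported artifacts, and a service worker that caches them lets the
# Progressive Web App keep classifying sounds while offline.
```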