Most hearing aids on the market today employ digital noise reduction techniques. Unlike earlier analogue systems, these manufacturer-specific algorithms analyse the incoming acoustic signal and adjust the gain/output characteristics according to predefined rules. Hearing-impaired persons frequently use digital hearing aids to improve their speech intelligibility and quality of life. However, hearing aid performance is often degraded by acoustic feedback, which arises when sound leaks from the receiver (loudspeaker) back into the microphone. This feedback loop creates instability and a high-frequency oscillation that the wearer can perceive if its level exceeds the hearing threshold. It also restricts the maximum stable gain the hearing aid can provide and degrades sound quality as the gain approaches that limit. Several feedback-cancellation approaches based on adaptive algorithms have been used to minimise acoustic feedback. The purpose of this paper is to design and implement a noise reduction algorithm based on a machine learning method. The proposed system first removes stationary noise from the input audio stream in a pre-processing stage, then separates the original signal from the residual noise using a machine learning technique, and finally applies post-processing; the performance of the developed system is then evaluated.
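The abstract does not specify how the stationary-noise pre-processing stage is implemented. As an illustrative sketch only, one common technique for removing stationary noise is spectral subtraction: estimate an average noise magnitude spectrum from a noise-only segment and subtract it frame by frame. The function below (name, frame length, and hop size are all assumptions, not taken from the paper) shows a minimal overlap-add version using only NumPy:

```python
import numpy as np

def spectral_subtraction(signal, noise_profile, frame_len=512, hop=256):
    """Illustrative sketch of stationary-noise removal by spectral
    subtraction (not the paper's actual pre-processing stage).

    `noise_profile` is a noise-only segment assumed to be available,
    from which the average noise magnitude spectrum is estimated.
    """
    window = np.hanning(frame_len)

    # Estimate the average noise magnitude spectrum per frequency bin.
    noise_frames = [noise_profile[i:i + frame_len] * window
                    for i in range(0, len(noise_profile) - frame_len, hop)]
    noise_mag = np.mean([np.abs(np.fft.rfft(f)) for f in noise_frames], axis=0)

    out = np.zeros(len(signal))
    norm = np.zeros(len(signal))
    for i in range(0, len(signal) - frame_len, hop):
        frame = signal[i:i + frame_len] * window
        spec = np.fft.rfft(frame)
        mag, phase = np.abs(spec), np.angle(spec)
        # Subtract the noise estimate from each bin; floor the result
        # at a small fraction of the original magnitude to limit the
        # "musical noise" artefacts typical of plain subtraction.
        clean_mag = np.maximum(mag - noise_mag, 0.05 * mag)
        clean = np.fft.irfft(clean_mag * np.exp(1j * phase))
        # Overlap-add resynthesis with window-squared normalisation.
        out[i:i + frame_len] += clean * window
        norm[i:i + frame_len] += window ** 2
    return out / np.maximum(norm, 1e-8)
```

A deep-learning separation stage, as proposed in the paper, would replace the fixed subtraction rule with a learned mapping from noisy to clean spectra, but the frame-based analysis/synthesis scaffolding is typically similar.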
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.