The first application would be sound localization. I want to use four microphones on the arms of the glasses to do beamforming, and I plan to use an FPGA to handle the large amount of parallel computation in real time.
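To make the beamforming idea concrete, here is a minimal delay-and-sum sketch in Python/NumPy. The sample rate, microphone coordinates, and integer-sample delays are placeholder assumptions for illustration; the real computation would run in fixed point on the FPGA.

```python
import numpy as np

FS = 48_000              # output sample rate in Hz (assumed)
SPEED_OF_SOUND = 343.0   # metres per second at room temperature

# Placeholder microphone coordinates in metres, two per arm of the glasses
# (x across the head, y front-to-back); the real positions will differ.
MIC_POSITIONS = np.array([
    [-0.07,  0.00],
    [-0.07, -0.12],
    [ 0.07,  0.00],
    [ 0.07, -0.12],
])

def delay_and_sum(frames: np.ndarray, azimuth_deg: float) -> np.ndarray:
    """Steer the 4-microphone array toward azimuth_deg and sum the channels.

    frames has shape (4, n_samples), one row per microphone.
    """
    direction = np.array([np.cos(np.radians(azimuth_deg)),
                          np.sin(np.radians(azimuth_deg))])
    # Path-length difference for each microphone, rounded to whole samples.
    delays = np.round(MIC_POSITIONS @ direction / SPEED_OF_SOUND * FS).astype(int)
    delays -= delays.min()                     # make all delays non-negative

    n = frames.shape[1]
    out = np.zeros(n)
    for ch, d in enumerate(delays):
        out[: n - d] += frames[ch, d:]         # advance each channel, then sum
    return out / frames.shape[0]

def localize(frames: np.ndarray, step_deg: int = 5) -> int:
    """Return the azimuth (degrees) whose steered output has the most energy."""
    angles = list(range(0, 360, step_deg))
    energies = [np.sum(delay_and_sum(frames, a) ** 2) for a in angles]
    return angles[int(np.argmax(energies))]
```

Localization then amounts to sweeping the steering angle and reporting the direction with the most output energy.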

Once a basic application is done, more features can be added. For those with impaired hearing at high frequencies, this should be enough to let them distinguish between closely related phonemes such as 'b' and 'v'.
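As a rough illustration of the kind of cue that could be shown, the sketch below computes the fraction of each frame's energy above an assumed 4 kHz boundary; the frication noise of a 'v' carries more energy up there than the burst of a 'b'. The sample rate, frame size, and band edge are assumptions, not design decisions.

```python
import numpy as np

FS = 48_000            # sample rate in Hz (assumed)
FRAME = 1024           # samples per analysis frame (assumed)
BAND_EDGE_HZ = 4_000   # boundary between "low" and "high" bands (assumed)

def high_band_ratio(frame: np.ndarray) -> float:
    """Fraction of the frame's energy above BAND_EDGE_HZ.

    A high value suggests frication noise ('v'); a low value suggests a
    plosive burst ('b'). This ratio could drive a simple visual indicator.
    """
    windowed = frame * np.hanning(len(frame))
    spectrum = np.abs(np.fft.rfft(windowed)) ** 2
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / FS)
    total = spectrum.sum() + 1e-12             # avoid division by zero
    return float(spectrum[freqs >= BAND_EDGE_HZ].sum() / total)
```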

The big problem in audio processing is echo elimination, usually approached through decorrelation. It is not at all clear how animals do this so well.
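Just to make the problem concrete, here is a toy single-echo version: estimate the strongest delayed self-correlation in the signal and subtract that delayed copy back out. Real rooms produce dense, overlapping reflections that a one-tap model like this cannot handle, which is exactly why the problem is hard; the delay search range and the signal model are assumptions for illustration only.

```python
import numpy as np

FS = 48_000   # sample rate in Hz (assumed)

def estimate_echo(signal: np.ndarray, min_delay_ms: float = 5.0):
    """Estimate (lag, gain) for an assumed model y[n] = s[n] + gain * s[n - lag]."""
    min_lag = int(min_delay_ms * FS / 1000)
    # Autocorrelation at non-negative lags; the strongest peak past min_lag
    # is taken as the echo delay.
    corr = np.correlate(signal, signal, mode="full")[len(signal) - 1:]
    lag = min_lag + int(np.argmax(corr[min_lag:]))
    gain = corr[lag] / corr[0]
    return lag, gain

def remove_echo(signal: np.ndarray, lag: int, gain: float) -> np.ndarray:
    """Invert the single-echo model sample by sample (remove one correlated tap)."""
    cleaned = signal.astype(np.float64).copy()
    for n in range(lag, len(cleaned)):
        cleaned[n] -= gain * cleaned[n - lag]
    return cleaned
```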

I am fully aware that the hearing impaired do not like to be seen wearing bulky devices. The prototype will be for those who care more about sensing the world than about their appearance. Four wires from the PDM microphones on the glasses will run into a backpack, and another wire will run from the backpack to the display glasses.
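The PDM microphones produce a 1-bit stream at a few megahertz that has to be low-pass filtered and decimated before any of the processing above. Below is a minimal software sketch of that step, assuming a 3.072 MHz bit clock and 64x decimation down to 48 kHz; on an FPGA this would typically be a CIC (integrator/comb) structure rather than explicit convolutions.

```python
import numpy as np

PDM_RATE = 3_072_000   # 1-bit PDM clock in Hz (assumed, a common rate)
DECIMATION = 64        # 3.072 MHz / 64 = 48 kHz PCM output
STAGES = 4             # cascaded averaging stages, a CIC-like response

def pdm_to_pcm(pdm_bits: np.ndarray) -> np.ndarray:
    """Convert a 1-bit PDM stream (0/1 values) to PCM samples."""
    x = pdm_bits.astype(np.float64) * 2.0 - 1.0     # map {0, 1} -> {-1, +1}
    kernel = np.ones(DECIMATION) / DECIMATION
    for _ in range(STAGES):
        x = np.convolve(x, kernel, mode="same")     # low-pass the bitstream
    return x[::DECIMATION]                          # keep every 64th sample
```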

