No, you don’t need Machine Learning for your conference calls
A while ago a friend suggested I use Krisp to mute background noise during conference calls. I gave it a try and was very impressed! It does work very well.
Except… it will keep your CPU at 100%, your fans will likely start spinning, and your battery life will suffer. Do we really need advanced machine learning to achieve something as simple as reducing background noise? There’s a better way (if you’re willing to use an external microphone).
Keep the input volume fixed to a minimum so that your voice can be heard but the noise cannot.
Depending on where you are, you might want to set it to 5, 10, 20, etc. Personally I found a value of 10 to be good for most of my calls. macOS offers the osascript tool to set system properties such as the input volume. A simple script to periodically reset it to the desired value looks like this:
while true; do osascript -e "set volume input volume {YOUR_VALUE}"; sleep 10; done
This will prevent macOS from automatically resetting the input volume level depending on the loudness of your voice or the noise around you. You can test everything by using an app such as LineIn.
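If you want to confirm the loop is actually holding the value, osascript can also read the setting back. A minimal sketch, using the standard AppleScript volume settings syntax:

# print the current input volume (0–100)
osascript -e "input volume of (get volume settings)"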
The same can be done on Linux (using PulseAudio):
while sleep 1; do pacmd set-source-volume alsa_input.???-?????.analog-stereo {YOUR_VALUE}; done
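Note that pacmd expects volumes on PulseAudio’s 0–65536 scale (65536 = 100%), so {YOUR_VALUE} here is not the 0–100 number used on macOS. If you don’t know your source name, you can list the available sources first; a small sketch, assuming a default PulseAudio setup with pactl installed:

# list input sources and find the one matching your microphone
pactl list short sources
# pactl also accepts percentages, which may be easier to reason about
while sleep 1; do pactl set-source-volume alsa_input.???-?????.analog-stereo 10%; done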
(No idea what’s available on Windows)
Now we can go back to employing ML for more serious tasks, such as recognizing whether something is a hotdog.