The Ph.D. students in SAIGE are working as interns at various industry research labs this summer: Amazon (Sanna), LinkedIn (Kai), Qualcomm (Sunwoo), and Spotify (Aswin).
We are attending ICASSP 2019 in Brighton, UK. Minje will chair a session on source separation and speech enhancement; Sunwoo and Sanna will present their papers about bitwise recurrent neural networks and a database…
The Deep Autotuner project was featured on BBC Radio 5 Live’s “Drive” (from 1:27:09), The Times, the Daily Mail, and New Scientist magazine. It was also featured on the School’s news page.
Our papers have been accepted for publication at ICASSP 2019: “Incremental Binarization On Recurrent Neural Networks for Single-Channel Source Separation” and “Intonation: a Dataset of Quality Vocal Performances Refined by Spectral Clustering…
A fun collaboration with folks in the Sunflower State resulted in a paper at RTCSA. It’s about DeepPicar, a deep-learning-based, low-power autonomous driving system: “DeepPicar: A Low-cost Deep…
Our paper has been accepted for publication at EUSIPCO 2018: Sanna Wager and Minje Kim, “Collaborative speech dereverberation: regularized tensor factorization for crowdsourced multi-channel recordings.”
We’ve got two ICASSP 2018 papers accepted: “Bitwise Source Separation on Hashed Spectra: An Efficient Posterior Estimation Scheme Using Partial Rank Order Metrics” and “Bitwise Neural Networks for Efficient Single-Channel Source Separation.”…
At the NIPS 2017 Workshop on Machine Learning for Audio, SAIGE presented two posters on efficient speech enhancement algorithms using bitwise machine learning models.
The Intel Science and Technology Center for Pervasive Computing (ISTC-PC) has extended its support for our efficient machine learning research, entitled “Bitwise Machine Learning for Efficient AI Engines Running in the IoT…