- Assistant Professor and Director of Graduate Studies
- Dept. of Intelligent Systems Engineering, School of Informatics, Computing, and Engineering, at Indiana University
- Core faculty
- Adjunct faculty
- My research group
To Prospective Students
- Send me your CV!
The Deep Autotuner project received extensive media coverage, including BBC Radio 5 Live's "Drive" (from 1:27:09), The Times, the Daily Mail, and New Scientist. It was also featured on the School's news page.
Two papers were accepted for publication in ICASSP 2019:
"Incremental Binarization On Recurrent Neural Networks for Single-Channel Source Separation"
"Intonation: A Dataset of Quality Vocal Performances Refined by Spectral Clustering on Pitch Congruence"
Very excited to join the Department of Statistics as adjunct faculty!
Very excited to join the Cognitive Science Program!
A fun collaboration with folks in the Sunflower State resulted in a paper in RTCSA: "DeepPicar: A Low-cost Deep Neural Network-based Autonomous Car"
My paper, entitled “Collaborative speech dereverberation: regularized tensor factorization for crowdsourced multi-channel recordings,” was accepted for publication in EUSIPCO 2018.
An exciting collaboration with cognitive scientists resulted in a paper in CogSci 2018: "Creative leaps in musical ecosystems: early warning signals of critical transitions in professional jazz"
Two papers were accepted for publication in ICASSP 2018:
"Bitwise Neural Networks for Efficient Single-Channel Source Separation"
"Bitwise Source Separation on Hashed Spectra: An Efficient Posterior Estimation Scheme Using Partial Rank Order Metrics"
Intel Science and Technology Center for Pervasive Computing (ISTC-PC) extended their support for my efficient machine learning research, entitled "Bitwise Machine Learning for Efficient AI Engines Running in the IoT Edge Devices." Thank you, Intel!
I was elected as a new member of the IEEE Audio and Acoustic Signal Processing Technical Committee.
I gave an invited talk at the Workshop on Computational Intelligence and Soft Computing (CISC 2017), a satellite workshop of the Int'l Conf. on Parallel Architectures and Compilation Techniques (PACT). Title: "Using Bitwise Machine Learning Models for Resource-Constrained Edge Devices"
Good to be back in Illinois! I'm attending the Midwest Music and Audio Day at Northwestern University to present my work on "Bitwise Source Separation."
Two papers were accepted for publication in ICASSP 2017:
"Collaborative Deep Learning for Speech Enhancement: A Run-Time Model Selection Method Using Autoencoders"
"Towards Expressive Instrument Synthesis Through Smooth Frame-By-Frame Reconstruction: From String To Woodwind"
I'm joining the faculty of the School of Informatics and Computing at Indiana University Bloomington as an assistant professor starting in the fall semester. Within the school, I'll be with the new Department of Intelligent Systems Engineering.
I was selected as an Outstanding Teaching Assistant for my TAship in Fall 2015, for the course "Machine Learning for Signal Processing".
I successfully finished my PhD! Here's my dissertation.
My paper, entitled "Efficient Neighborhood-Based Topic Modeling for Collaborative Audio Enhancement on Massive Crowdsourced Recordings," was accepted in the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2016).
My paper, entitled "Adaptive Denoising Autoencoders: A Fine-tuning Scheme to Learn from Test Mixtures," was nominated for the best student paper on audio signal processing in the International Conference on Latent Variable Analysis and Signal Separation (LVA/ICA 2015).
My paper, entitled "Joint Optimization of Masks and Deep Recurrent Neural Networks for Monaural Source Separation," was accepted for publication in IEEE/ACM Transactions on Audio, Speech, and Language Processing.
Started my 4th internship at Adobe Research as a Creative Technologies Lab Intern.
My paper, entitled "Adaptive Denoising Autoencoders: A Fine-tuning Scheme to Learn from Test Mixtures," was accepted in the International Conference on Latent Variable Analysis and Signal Separation (LVA/ICA 2015).
My paper, entitled "Efficient Manifold Preserving Audio Source Separation Using Locality Sensitive Hashing," was accepted in the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2015).
My paper, entitled "Exploiting Structured Human Interactions to Enhance Estimation Accuracy in Cyber-physical Systems," was accepted for publication in the International Conference on Cyber-Physical Systems (ICCPS 2015).
My team was selected as one of the finalists for the Qualcomm Innovation Fellowship 2015.
I passed the preliminary exam (thesis proposal).
My paper, entitled "Collaborative Audio Enhancement: Crowdsourced Audio Recording," was accepted for publication in the NIPS 2014 Workshop on Crowdsourcing and Machine Learning.
My paper, entitled "Efficient Model Selection for Speech Enhancement Using a Deflation Method for Nonnegative Matrix Factorization," was accepted for publication in the IEEE Global Conference on Signal and Information Processing (GlobalSIP).
My paper, entitled "Mixture of Local Dictionaries for Unsupervised Speech Enhancement," was accepted for publication in the IEEE Signal Processing Letters.
My paper, entitled "Singing-Voice Separation From Monaural Recordings Using Deep Recurrent Neural Networks," was accepted for publication in the International Society for Music Information Retrieval Conference (ISMIR 2014).
Started to work at Adobe Research as a Creative Technologies Lab Intern.
Two of my papers, "Phase and Level Difference Fusion for Robust Multichannel Source Separation" and "Deep Learning for Monaural Speech Separation," were accepted for ICASSP 2014.
My paper, entitled "Non-Negative Matrix Factorization for Irregularly-Spaced Transforms," was accepted for publication in the IEEE Int'l Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA 2013).
My paper, entitled "Single Channel Source Separation Using Smooth Nonnegative Matrix Factorization with Markov Random Fields," was accepted for publication in the IEEE Int'l Workshop on Machine Learning for Signal Processing (MLSP 2013).
I won the Google ICASSP Student Travel Grant for the paper "Collaborative Audio Enhancement Using Probabilistic Latent Component Sharing."
I started to work at Adobe Research as a Creative Technologies Lab Intern.
My paper, entitled "Manifold Preserving Hierarchical Topic Models for Quantization and Approximation," was accepted in the International Conference on Machine Learning (ICML 2013).
My paper, entitled "Collaborative Audio Enhancement Using Probabilistic Latent Component Sharing," was nominated for the Best Student Paper Award at ICASSP 2013.
My paper, entitled "Collaborative Audio Enhancement Using Probabilistic Latent Component Sharing," was accepted in the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2013).
I passed the PhD qualifying exam!
My recent paper, "Stereophonic Spectrogram Segmentation Using Markov Random Fields," was published in the IEEE Workshop on Machine Learning for Signal Processing (MLSP 2012).
A super duper compact, yet powerful neural network running on small devices. It uses just a bit of your precious resources.
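For the curious, here is a toy sketch of the core idea behind a fully bitwise network: once weights and activations are both constrained to ±1, every dot product reduces to an XNOR-and-popcount, which is extremely cheap on small devices. This NumPy snippet is only an illustration of that principle, not the project's actual implementation or training scheme.

```python
import numpy as np

def binarize(x):
    """Map real values to {-1, +1} via the sign function."""
    return np.where(x >= 0, 1, -1)

def bitwise_layer(x, W):
    """One fully bitwise layer: inputs and weights are +/-1, so each
    dot product is an integer popcount score (XNOR-popcount in hardware),
    re-binarized by the sign activation."""
    xb, Wb = binarize(x), binarize(W)
    return binarize(Wb @ xb)

rng = np.random.default_rng(0)
x = rng.standard_normal(16)          # toy input vector
W1 = rng.standard_normal((8, 16))    # toy real-valued weights to binarize
W2 = rng.standard_normal((4, 8))
h = bitwise_layer(x, W1)
y = bitwise_layer(h, W2)
print(y)  # every entry is +/-1
```

In practice the real-valued weights exist only during training; at inference time only the binarized copies need to be stored, which is where the memory savings come from.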
Want to be a better singer but suffering from a lack of talent? Deep Autotuner is a deep learning-based solution that automatically tunes your out-of-tune singing. No worries: it keeps your expressive details, so the corrected melody still sounds like you, not a robot.
A collaborative deep learning effort toward a universal speech enhancement system. This monster takes in any speech enhancement module (e.g., an already-trained DNN for a specific task) and dynamically chooses the best one for the unseen test signal.
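One way to do run-time selection, in the spirit of the ICASSP 2017 paper title, is to pair each specialist module with an autoencoder trained on that module's domain, and pick the module whose output the paired autoencoder reconstructs best. The sketch below is a hypothetical illustration of that idea with toy linear "autoencoders" standing in for trained deep ones; it is not the paper's actual system.

```python
import numpy as np

rng = np.random.default_rng(0)

def reconstruction_error(W, signal):
    """Error under a toy linear autoencoder with orthonormal columns W
    (encode with W.T, decode with W); a stand-in for a trained deep AE."""
    recon = W @ (W.T @ signal)
    return float(np.sum((signal - recon) ** 2))

def select_module(modules, autoencoders, noisy):
    """Run every enhancement module on the noisy input, then pick the
    module whose output its paired autoencoder reconstructs best."""
    outputs = [m(noisy) for m in modules]
    errors = [reconstruction_error(W, y) for W, y in zip(autoencoders, outputs)]
    best = int(np.argmin(errors))
    return best, outputs[best]

# Toy setup: two "specialist" modules and their paired autoencoders.
modules = [lambda x: 0.5 * x, lambda x: np.tanh(x)]
autoencoders = [np.linalg.qr(rng.standard_normal((64, 8)))[0] for _ in modules]
noisy = rng.standard_normal(64)
best, enhanced = select_module(modules, autoencoders, noisy)
print("selected module:", best)
```

The appeal of this design is that the selector never needs labels at test time: low reconstruction error is used as a proxy for "this output looks like the kind of signal this module was trained to produce."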
A lightweight dictionary-based model for source separation. Once again, things are bitwise: separation is done by a cheap bitwise matching process between hashed (and therefore binary) spectra.
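The bitwise matching step can be sketched with a standard locality-sensitive hashing scheme: project spectra onto random hyperplanes to get binary codes, then find nearest dictionary entries by Hamming distance. This is a generic LSH illustration under assumed toy dimensions, not the exact hashing scheme from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def hash_spectra(S, planes):
    """Hash magnitude spectra (rows of S) into binary codes by checking
    which side of each random hyperplane they fall on."""
    return (S @ planes.T > 0).astype(np.uint8)

# Toy dictionary of 100 clean-speech spectra (64 frequency bins each).
D = np.abs(rng.standard_normal((100, 64)))
planes = rng.standard_normal((32, 64))  # 32 hyperplanes -> 32-bit codes
D_hashed = hash_spectra(D, planes)

# A test mixture frame: find its closest dictionary entries by cheap
# bitwise (Hamming-distance) matching on the hashed codes.
x = np.abs(rng.standard_normal(64))
x_hashed = hash_spectra(x[None, :], planes)
hamming = np.sum(D_hashed != x_hashed, axis=1)
nearest = np.argsort(hamming)[:5]  # indices of the best-matching spectra
print(nearest)
```

Because the codes are binary, the distance computation is just XOR-and-popcount over short codes, which is far cheaper than floating-point nearest-neighbor search over full spectra.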
Crowdsource your recording job, and we'll take care of the rest. We can handle up to a thousand crowdsourced recordings.
Wet (reverberant) speech does no good. But we have another collaborative algorithm that dries it up using multiple user recordings.