Minje Kim, Ph.D.

Bio: Minje Kim is an Associate Professor in the Siebel School of Computing and Data Science at the University of Illinois at Urbana-Champaign and an Amazon Scholar. Before that, he was an Associate Professor at Indiana University (2016-2023). He earned his Ph.D. in Computer Science at UIUC (2016) after working as a researcher at ETRI, a national lab in Korea (2006-2011). Prior to that, he received his Master's degree from the Department of Computer Science and Engineering at POSTECH (Summa Cum Laude) in 2006 and his Bachelor's degree from the Division of Information and Computer Engineering at Ajou University (with honors) in 2004. Throughout his career as a researcher, he has focused on developing machine learning models for audio signal processing applications. He is a recipient of various awards, including the NSF CAREER Award (2021), the IU Trustees Teaching Award (2021), the IEEE SPS Best Paper Award (2020), Google and Starkey grants for outstanding student papers at ICASSP 2013 and 2014, respectively, and the Richard T. Cheng Endowed Fellowship from UIUC in 2011. He is the Chair of the IEEE SPS Audio and Acoustic Signal Processing Technical Committee (2025-2026). He serves as a Senior Area Editor for IEEE Transactions on Audio, Speech, and Language Processing and IEEE Signal Processing Letters, an Associate Editor for the EURASIP Journal on Audio, Speech, and Music Processing, and a Consulting Associate Editor for the IEEE Open Journal of Signal Processing. He served as the General Chair of IEEE WASPAA 2023 and as a reviewer, program committee member, or area chair for major machine learning and signal processing venues. He is the inventor of more than 60 patents.

My Research Team

Ph.D. Candidates

Haici Yang

Intro: Haici joined my group at IU in 2019. She has introduced various innovative ideas to neural coding systems, such as source separation and predictive coding conducted in the latent code space, as well as latent diffusion models for generative de-quantization. Beyond neural coding, she has also worked on music audio projects, such as end-to-end music source remixing networks and style transfer-based upmixing (from two to five channels). She has interned at Amazon, Adobe, and MERL. Her dissertation research is on scalable diffusion models for neural speech/audio coding. Haici enjoys learning musical instruments, outdoor activities, and reading.

Homepage: https://haiciyang.github.io
Email: hy17-at-iu-dot-edu
CV: PDF

Darius Petermann

Intro: Darius came to my group at IU with prior experience in music technology. His main research focus has been on innovating neural coding architectures for high-fidelity audio signals. Along the way, he has built strong relationships with leading industry research labs through his internships at MERL, Google, and Netflix, one of which led to the Best Student Paper Award at ICASSP 2023. Besides neural coding, he has also worked extensively on spatial source separation problems. More recently, his research has extended to multimodal and generative models. I like talking with him about heavy metal music, and he likes to spend time with his dog Mötley.

Homepage: http://www.dariuspetermann.com
Email: daripete-at-indiana-dot-edu
CV: PDF

Ph.D. Students

Tsun-An Hsieh

Intro: Tsun-An joined my group at IU in 2022 and then moved with me to UIUC. He is well versed in the ML literature, including transfer learning, metric learning, and causal inference, expertise he developed during his time at Academia Sinica in Taiwan. In his Ph.D. studies, he has focused on advancing speech separation using language models, the personalization concept, and generative methods.

Homepage: https://alexiehta.github.io
Email: tsunanh2-at-illinois-dot-edu
CV: HTML

Jaesung Bae

Intro: Jaesung joined my group in 2024, bringing rich experience in speech technology gained during his master's degree at KAIST and as a researcher at NCSOFT and Samsung Research. His research focuses on the intersection of speech synthesis, generative models, and data efficiency in ML.

Homepage: https://jaesungbae.github.io/
Email: jb82-at-illinois-dot-edu
CV: PDF

Jackie Lin

Intro: Jackie joined the lab in 2024 with a master's degree in acoustics and audio technology from Aalto University. She has been working on spatial audio and is expanding her interests into generative models for room acoustics simulation and the spatial aspects of speech processing applications.

Homepage: https://www.linkedin.com/in/jackie-lin1/
Email: jackiel4-at-illinois-dot-edu
CV:

Yutong Wen

Intro: Prior to joining the lab in 2024, Yutong earned his BS degree at the University of Rochester, where he worked on HRTF representation learning. Currently, he is working on the intersection of generative methods and informed source separation, where the side information comes from users' guidance or example sounds.

Homepage: https://www.linkedin.com/in/yutong-wen-171825227/
Email: yutong12-at-illinois-dot-edu
CV:

Cameron Churchwell

Intro: Cameron joined the lab in 2024. He developed his interest in AI-based speech and audio processing tasks, such as speech representation learning, during his undergraduate studies at Northwestern University. Currently, he works on learnable DSP filters, audio segmentation, and controllable speech editing.

Homepage: https://www.cameronchurchwell.com
Email: cc178-at-illinois-dot-edu
CV:

Jayeon Yi

Intro: A new student coming in Fall 2025.

Homepage: https://www.linkedin.com/in/jayeon-yi-214114237/
Email:
CV:

Master's Students

Jocelyn Xu

Intro: Jocelyn is working on her Master's thesis on singing voice separation from real-world music recordings. Before that, she completed her BS degree in computer science at UIUC with minors in music and statistics.

Homepage: https://www.linkedin.com/in/joosxu/
Email: yuex7-at-illinois-dot-edu
CV:

Visiting Scholars

Alumni

Sanna Wager, Ph.D.

  • Ph.D. in Informatics and minor in Statistics at Indiana University (Spring 2021)
  • Dissertation: “A Data-Driven Pitch Correction Algorithm for Singing Voice” (pdf)
  • The committee: Minje Kim (chair), Christopher Raphael (IU CS), Donald Williamson (IU CS), and Daniel McDonald (U. of British Columbia Statistics)
  • Homepage: http://homes.sice.indiana.edu/scwager/
  • Summary of Ph.D. studies: Dr. Sanna Wager began working with Minje in Fall 2016 and joined the group officially in Spring 2017. During her Ph.D. study, she wrote five conference papers on various research topics, including tensor decomposition-based multichannel speech dereverberation, robust speech recognition, semi-supervised methods for collecting a large-scale singing performance dataset, and neural pitch correction algorithms for singing voice. She interned at various companies, including Google, Smule, Spotify, and Amazon. Her dissertation research on neural pitch correction algorithms for singing received extensive media coverage, including the BBC, The Times, the Daily Mail, and New Scientist. She is now at Amazon as an Applied Scientist.

Kai Zhen, Ph.D.

  • Ph.D. (dual degree) in Computer Science and Cognitive Science at Indiana University (Spring 2021)
  • Dissertation: “Neural Waveform Coding: Scalability, Efficiency, and Psychoacoustic Calibration” (pdf)
  • The committee: Minje Kim (chair), Robert Goldstone (IU Cognitive Science), Donald Williamson (IU Computer Science), and Shen Yi (U. of Washington, Speech and Hearing Sciences)
  • Homepage: http://www.kaizhen.us
  • Summary of Ph.D. studies: Dr. Kai Zhen joined the lab in Fall 2017 and led multiple neural waveform coding projects, which pioneered a new research area. His papers were published in leading signal processing and speech processing conferences and journals, such as Interspeech, ICASSP, IEEE Signal Processing Letters, and IEEE TASLP. The Cognitive Science Program at IU recognized his research by awarding the Outstanding Research Award in 2021. During his Ph.D. study, Dr. Zhen interned at LinkedIn (2018 and 2019) and Amazon (2020). During his internship at Amazon, his work on efficient neural ASR systems was selected as one of the 17 best poster presentations out of 180 internship projects. He is now at Amazon as a Senior Applied Scientist.

Sunwoo Kim, Ph.D.

  • Ph.D. in Intelligent Systems Engineering and minor in Computer Science at Indiana University (Spring 2022)
  • Dissertation: “Model Compression for Efficient Machine Learning Inference” (pdf)
  • The committee: Minje Kim (chair), Peter Todd (IU Cognitive Science), Christopher Raphael (IU Computer Science), and Fan Chen (IU ISE)
  • Summary of Ph.D. studies: Dr. Sunwoo Kim joined the lab in Fall 2017 and focused on developing efficient machine learning models, such as bitwise neural networks, boosted hashing methods, and knowledge distillation. Among the papers he published, his ICASSP 2020 paper was recognized as the Best Student Paper runner-up. He interned at Qualcomm (2019) and Amazon (2020 and 2021). He is now at Qualcomm as a Senior Machine Learning Engineer.

R. David Badger, Ph.D.

  • Ph.D. in Intelligent Systems Engineering at Indiana University (Spring 2022)
  • Dissertation: “Open-Source Classification Systems for Frequency-Domain RF Signals: Robust Physical Layer Multi-Sample Rate Processing” (pdf)
  • The committee: Minje Kim (chair), Lei Jiang (IU ISE), Lantao Liu (IU ISE), and Ariful Azad (IU ISE)
  • Summary of Ph.D. studies: Dr. R. David Badger joined the lab in Fall 2018. He worked on RF signal processing, with a focus on signal compression and classification. He open-sourced his dataset and deep learning-based RF classification system, the first of its kind aimed at multi-class RF signal classification using neural networks. He is now at the Naval Surface Warfare Center Crane Division.

Aswin Sivaraman, Ph.D.

  • Ph.D. in Intelligent Systems Engineering and minor in Computer Science at Indiana University (Spring 2024)
  • Dissertation: “Resource-Efficient Model Adaptation Methods for Personalized Speech Enhancement Systems”
  • The committee: Minje Kim (chair), Christopher Raphael (IU Computer Science), David Crandall (IU Computer Science), and Ariful Azad (IU ISE)
  • Homepage: https://actuallyaswin.github.io
  • Summary of Ph.D. studies: Dr. Aswin Sivaraman joined the lab in Fall 2017 and focused on developing efficient machine learning models and data augmentation techniques for personalized speech enhancement, such as data purification, self-supervised learning from noisy signals, and mixtures of local experts. He wrote three conference papers (Interspeech and WASPAA) and one journal article (IEEE JSTSP) as the first author. He interned at Amazon, Spotify, Microsoft, and Google. He is now at Apple as a Machine Learning Engineer.

Anastasia Kuznetsova, Ph.D.

  • Ph.D. (dual degree) in Computer Science and Linguistics at Indiana University (Spring 2025)
  • Dissertation: “Data Efficiency and Model Complexity Reduction for Speech Processing Systems”
  • The committee: Minje Kim (chair), Francis Tyers (IU Linguistics; co-chair), Damir Cavar (IU Linguistics), David Crandall (IU CS)
  • Homepage: https://ana-kuznetsova.github.io
  • Summary of Ph.D. studies: Dr. Anastasia Kuznetsova's research during her Ph.D. began in the area of speech technology, with a focus on self-supervised learning and low-resource languages. After joining my group, she expanded her interests to include generative data augmentation for speech enhancement, foundation models for multichannel audio, and neural codecs for ASR, with results published at top speech and audio conferences such as Interspeech, ICASSP, and WASPAA. Anastasia interned at Coqui.ai, Rev.com, Google, and Amazon. She is now at Rev as an Applied Research Scientist.

Mrinmoy Maity

  • MS in Intelligent Systems Engineering (Spring 2019)
  • Now at Microsoft as a Data Scientist

Jongmo Sung, Ph.D.

  • Dr. Jongmo Sung from ETRI visited the group from Aug. 2017 to Aug. 2018. We collaborated on neural speech and audio coding. Results were published in IEEE TASLP, SPL, ICASSP 2020, and Interspeech 2019.

Misuk Lee, Ph.D.

  • Dr. Misuk Lee from ETRI visited the group from July 2018 to June 2019. We collaborated on neural speech and audio coding. Results were published in IEEE TASLP, SPL, ICASSP 2020, and Interspeech 2019.

Heeyoul (Henry) Choi, Ph.D.

  • Prof. Heeyoul (Henry) Choi is an Associate Professor at Handong Global University. He visited the group from Sep. 2022 to May 2023. While teaching courses at IU, he also collaborated with my group on language model-assisted loss functions for source separation, resulting in a paper published at Interspeech 2024 [pdf].

Inseon Jang, Ph.D.

  • Dr. Inseon Jang from ETRI visited the group from Feb. 2023 to Feb. 2024. We collaborated on personalized neural speech coding and published a paper at ICASSP 2024 [pdf].

Riccardo Miccini

  • Riccardo Miccini was an industrial Ph.D. student at the Technical University of Denmark and GN Audio/Jabra when he visited UIUC during Fall 2024 and Spring 2025. We collaborated on scalable and efficient speech enhancement models, which led to a WASPAA 2025 paper [pdf].