I am a tenured Assistant Professor (Maître de Conférences) at CentraleSupélec, on the Rennes campus, and a researcher in the AIMAC team of the IETR laboratory, a joint CNRS research unit (UMR 6164).
I received my Ph.D. in 2017 from Télécom Paris, where I worked on audio source separation for reverberant and multi-microphone recordings, under the supervision of Roland Badeau and Gaël Richard. Before joining CentraleSupélec, I was a postdoctoral researcher at Inria Grenoble Rhône-Alpes, in the Perception team led by Radu Horaud.
My research focuses on signal processing and machine learning for audio and speech applications. I am mainly interested in problems of estimating a latent signal of interest from noisy and/or incomplete observations. I tackle such problems from a Bayesian perspective, as it offers a principled methodology for incorporating knowledge about the underlying (e.g., physical) generative process of the data, for developing machine learning algorithms with little or no labeled data, and for addressing the generalization and adaptation difficulties of discriminative, supervised methods. My recent work focuses on weakly supervised methods built on deep generative models, in particular dynamical variational autoencoders.
I teach machine learning, deep learning, and audio signal processing to graduate-level students at CentraleSupélec.