//research


Musical auditory scene analysis and hearing impairment

How is music listening affected by hearing loss? What happens to your perception of a complex musical scene as your hearing deteriorates? Hearing aids are currently optimized for speech. How can we improve music listening with hearing aids?

These questions are addressed by current research at the University of Oldenburg. Collaborators include Simon Doclo, Volker Hohmann, and Kirsten Wagener.

 




Musical timbre perception and cognition

Is musical timbre just a fleeting sensation, or is it fully retained in memory? And how do you define timbre anyway? What are the acoustic and cognitive factors that affect the perception of audio and timbre dissimilarity? What are the most relevant acoustic features? How can these be subsumed in reliable statistical models? Answers to these questions might not only illuminate the psychological underpinnings of this important auditory parameter but also improve our general understanding of music perception. As part of my PhD research, this work was conducted under the supervision of Stephen McAdams in the Music Perception and Cognition Lab at McGill University and involved contributions from Ichiro Fujinaga and Shawn Mativetsky. I also collaborated with Daniel Müllensiefen on an extension of this line of work.
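
To make the question of acoustic features a little more concrete, here is a minimal Python sketch of one classic descriptor, the spectral centroid, a rough acoustic correlate of perceived brightness. It is purely illustrative, not code from the studies listed below, and all names and parameter values are my own choices.

    # Spectral centroid: amplitude-weighted mean frequency per STFT frame.
    import numpy as np
    from scipy.signal import stft

    def spectral_centroid(x, fs, nperseg=2048):
        f, t, Z = stft(x, fs=fs, nperseg=nperseg)
        mag = np.abs(Z)
        denom = np.maximum(mag.sum(axis=0), 1e-12)  # guard against silent frames
        return (f[:, None] * mag).sum(axis=0) / denom

    # A 'bright' tone (added upper partial) yields a higher centroid than a 'dark' one.
    fs = 44100
    t = np.arange(fs) / fs
    dark = np.sin(2 * np.pi * 220 * t)
    bright = dark + 0.5 * np.sin(2 * np.pi * 1760 * t)
    print(spectral_centroid(dark, fs).mean(), spectral_centroid(bright, fs).mean())

Descriptors of this kind (alongside attack time, spectral flux, and many others) are the sort of acoustic predictors that statistical models of timbre dissimilarity, such as those in the publications below, try to relate to perception.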

Here’s a talk on early portions of the project (UC Berkeley Redwood Institute for Theoretical Neuroscience seminar, Dec 2013), as well as one on later portions (Berlin Interdisciplinary Workshop on Timbre, Jan 2017).

Key Publications:

K. Siedenburg & S. McAdams (2017). Four conceptual distinctions for the auditory ‘wastebasket’ of timbre. Frontiers in Psychology (Auditory Cognitive Neuroscience), doi: 10.3389/fpsyg.2017.01747

K. Siedenburg, I. Fujinaga, S. McAdams (2016): “A Comparison of Approaches to Timbre Descriptors in Music Information Retrieval and Music Psychology”, Journal of New Music Research, 45, pp. 27-42

K. Siedenburg, K. Jones-Mollerup, S. McAdams (2016): “Acoustic and categorical dissimilarity of musical timbre: Evidence from asymmetries between acoustic and chimeric sounds”. Frontiers in Psychology, 6:1977, doi: 10.3389/fpsyg.2015.01977

K. Siedenburg & S. McAdams (2017): “The role of long-term familiarity and attentional maintenance in short-term memory for timbre”. Memory, 25(4), pp. 550-564

K. Siedenburg, S. Mativetsky, S. McAdams (2016). Auditory and verbal memory in North Indian tabla drumming. Psychomusicology: Music, Mind, and Brain, 26 (4), pp. 327–336

K. Siedenburg & D. Müllensiefen (2017). Modeling timbre similarity of short music clips. Frontiers in Psychology (Section Cognition), 8:639, doi: 10.3389/fpsyg.2017.00639

 


 


Time-Frequency Processing and Audio Enhancement

This project attempts to exploit the inherent structure of music and speech signals in order to estimate their time-frequency content more reliably. It grew out of my involvement with techniques of structured sparsity during my MSc (“Diplom”) thesis and has been gravitating towards applications such as transient extraction, audio noise removal, and audio declipping.

For these projects, I have collaborated with Monika Dörfler (University of Vienna), Matthieu Kowalski (University Paris-Sud), Philippe Depalle (McGill University), and Simon Doclo (University of Oldenburg).

I’ve created a MATLAB toolbox on structured sparsity and generalized time-frequency thresholding.
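
To give a rough flavor of the underlying idea, here is a minimal Python sketch of neighborhood-based (“social”) time-frequency shrinkage for denoising. It is not the MATLAB toolbox itself, and all function names and parameter values are assumptions chosen for illustration: each STFT coefficient is attenuated according to the energy of a small time-frequency neighborhood around it, rather than its own magnitude alone, which favors structured components over isolated noise peaks.

    # Sketch: denoising by neighborhood-driven shrinkage of STFT coefficients.
    import numpy as np
    from scipy.signal import stft, istft
    from scipy.ndimage import uniform_filter

    def social_shrink_denoise(x, fs, lam=0.05, nperseg=1024, neighborhood=(3, 7)):
        f, t, Z = stft(x, fs=fs, nperseg=nperseg)
        # Local (frequency x time) energy around each coefficient.
        energy = uniform_filter(np.abs(Z) ** 2, size=neighborhood)
        # Empirical-Wiener-like gain driven by the neighborhood energy.
        gain = np.maximum(1.0 - lam ** 2 / np.maximum(energy, 1e-12), 0.0)
        _, x_hat = istft(gain * Z, fs=fs, nperseg=nperseg)
        return x_hat[: len(x)]

    # Example: attenuate white noise added to a pure tone.
    fs = 16000
    n = np.arange(fs) / fs
    noisy = np.sin(2 * np.pi * 440 * n) + 0.3 * np.random.randn(fs)
    denoised = social_shrink_denoise(noisy, fs)

Stretching the neighborhood along time favors stationary (tonal) components, while stretching it along frequency favors transients; structural priors of this kind are what the publications below exploit for denoising, declipping, and stationary/transient separation.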

Key Publications:

M. Kowalski, K. Siedenburg & M. Dörfler (2013). Social Sparsity! Neighborhood Structures Enrich Structured Shrinkage Operators. IEEE Transactions on Signal Processing, 61(10), pp. 2498–2511

K. Siedenburg & M. Dörfler (2013). Persistent Time-Frequency Shrinkage for Audio Denoising. Journal of the Audio Engineering Society (AES), 61(1/2)

K. Siedenburg, M. Kowalski, M. Dörfler et al. (2014). Audio declipping with social sparsity. In Proc. of the IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 1577–1581, Florence, Italy

K. Siedenburg & S. Doclo (2017). Iterative structured shrinkage algorithms for stationary/transient audio separation. In Proc. of the 20th International Conference on Digital Audio Effects (DAFX), Edinburgh, Sep 5–8
