Publications

2022

Afsari, Kiyan; El Barachi, May; Fasciani, Stefano & Belqasmi, Fatna (2022). A Deep Learning Approach for Real-time Detection of Epileptic Seizures using EEG, Proceedings of the 7th International Conference on Smart and Sustainable Technologies (SpliTech). ISBN 978-1-6654-8828-0. doi: 10.23919/SpliTech55088.2022.9854359.

Antonelli, M., Liski, J. & Välimäki, V. (2022). Sparse Graphic Equalizer Design, IEEE Signal Processing Letters. 29, pp. 1659-1663.

Belloch, J. A., Badía, J. M., León, G., Bank, B., & Välimäki, V. (2022). Multicore Implementation of a Multichannel Parallel Graphic Equalizer. Journal of Supercomputing, 78(14), pp. 15715-15729.

Bentsen, Lars Ødegaard; Simionato, Riccardo; Wallace, Benedikte & Krzyzaniak, Michael Joseph (2022). Transformer and LSTM Models for Automatic Counterpoint Generation using Raw Audio. Proceedings of the 19th Sound and Music Computing Conference (SMC 2022). ISSN 2518-3672. doi: 10.5281/zenodo.6572847.

Bruschi, V., Välimäki, V., Liski, J., & Cecchi, S. (2022). Linear-Phase Octave Graphic Equalizer. Journal of the Audio Engineering Society, 70(6), pp. 435-445.

Bruschi, V., Välimäki, V., Liski, J. & Cecchi, S. (2022). A Low-Latency Quasi-Linear-Phase Octave Graphic Equalizer, Proceedings of the 25th International Conference on Digital Audio Effects (DAFx20in22). Evangelista, G. & Holighaus, N. (eds.). Vienna, Austria, pp. 94-100.

Cavdir, D., Ganis, F., Paisa, R., Williams, P., & Serafin, S. (2022). Multisensory Integration Design in Music for Cochlear Implant Users. In Proceedings of the 19th Sound and Music Computing Conference (SMC 2022).

Connor, C., Kantan, P. R., & Serafin, S. (2022). A Real-time Movement Sonification Application For Bodyweight Squat Training. In Proceedings of ISon 2022, 7th Interactive Sonification Workshop, BSCC, University of Bremen, Germany, September 22–23, 2022.

Connor, C., Kantan, P. R., & Serafin, S. (2022). The Development of a Real-Time Movement Sonification Exergame for Body-Weight Squat Training. In Proceedings of the 19th Sound and Music Computing Conference (SMC 2022).

Copiaco, Abigail; Ritz, Christian; Fasciani, Stefano & AbdulAziz, Nidhal (2022). Development of a Synthetic Database for Compact Neural Network Classification of Acoustic Scenes in Dementia Care Environments. In 2021 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC). IEEE Signal Processing Society. ISBN 9789881476890. p. 1202–1209.

Dal Santo, G., Prawda, K., & Välimäki, V. (2022). Flutter Echo Modeling. Proceedings of the 25th International Conference on Digital Audio Effects (DAFx20in22), pp. 185-191.

Fagerström, J., Meyer-Kahlen, N., Schlecht, S. J. & Välimäki, V. (2022). Dark Velvet Noise, Proceedings of the 25th International Conference on Digital Audio Effects (DAFx20in22). Evangelista, G. & Holighaus, N. (eds.). Vienna, Austria, pp. 192-199.

Dourado, Mark, Henrik Gert Hassager, Jesper Udesen, and Stefania Serafin. “The Role of Lombard Speech and Gaze Behaviour in Multi-Talker Conversations.” In Audio Engineering Society Conference: AES 2022 International Audio for Virtual and Augmented Reality Conference. Audio Engineering Society, 2022.

Fasciani, Stefano & Goode, Jackson (2022). A Toolkit for the Analysis of the NIME Proceedings Archive. In McPherson, Andrew & Frid, Emma (Ed.), Proceedings of the International Conference on New Interfaces for Musical Expression. The International Conference on New Interfaces for Musical Expression. ISSN 2220-4792. doi: 10.21428/92fbeb44.58efca21.

Stojanovski, T., Zhang, H., Frid, E., Chhatre, K., Peters, C., Samuels, I., Sanders, P., Partanen, J., Lefosse, D. (2022). Rethinking Computer-Aided Architectural Design (CAAD) – From Generative Algorithms and Architectural Intelligence to Environmental Design and Ambient Intelligence. In Computer-Aided Architectural Design: Design Imperatives: The Future Is Now. (pp. 62-83). Springer Nature.

Ganis, F., Serafin, S., & Vatti, M. (2022). Tickle Tuner – Haptic Smartphone Cover for Cochlear Implant Users’ Musical Training. In Proceedings of the 19th Sound and Music Computing Conference (SMC 2022).

Ganis, F., Vatti, M., Serafin, S. (2022). Tickle Tuner – Haptic Smartphone Cover for Cochlear Implant Users’ Musical Training. In Saitis, C., Farkhatdinov, I., Papetti, S. (eds) Haptic and Audio Interaction Design. HAID 2022. Lecture Notes in Computer Science, vol 13417.

Geronazzo, Michele, and Stefania Serafin. “Sonic Interactions in Virtual Environments: the Egocentric Audio Perspective of the Digital Twin.” Sonic Interactions in Virtual Environments. Cham: Springer International Publishing, 2022. 3-45.

Gupta, R., He, J., Ranjan, R., Gan, W. S., Klein, F., Schneiderwind, C., Neidhardt, A., Brandenburg, K. & Välimäki, V. (2022). Augmented/Mixed Reality Audio for Hearables: Sensing, control, and rendering, IEEE Signal Processing Magazine. 39, 3, pp. 63-89.

Ivanyi, B., Tsalidis, C., Naylor, S., Tjemsland, T., Adjorlu, A., Kepp, N. E., & Serafin, S. (2022, August). HoloBand: An Augmented Reality Experience to Train Music Perception for the Hard of Hearing. In Audio Engineering Society Conference: AES 2022 International Audio for Virtual and Augmented Reality Conference. Audio Engineering Society.

Kandpal, D., Kantan, P. R., & Serafin, S. (2022). A Gaze-Driven Digital Interface for Musical Expression Based on Real-time Physical Modelling Synthesis. In Proceedings of the 19th Sound and Music Computing Conference (SMC 2022).

Kantan, P., Spaich, E. G., & Dahl, S. (2022). A Technical Framework for Musical Biofeedback in Stroke Rehabilitation. IEEE Transactions on Human-Machine Systems, 52(2), 220-231.

Kantan, P. R., Dahl, S., Jørgensen, H. R. M., Khadye, C., & Spaich, E. G. (2022, September). Designing Sonified Feedback on Knee Kinematics in Hemiparetic Gait Based on Inertial Sensor Data. In Proceedings of the SoniHED Conference on Sonification of Health and Environmental Data, 2022.

Kantan, P. R. (2022). Comparing Sonification Strategies Applied to Musical and Non-Musical Signals for Auditory Guidance Purposes. In Proceedings of the 19th Sound and Music Computing Conference (SMC 2022).

Kantan, P. R., Dahl, S., & Spaich, E. G. (2022, October). Sound-Guided 2-D Navigation: Effects of Information Concurrency and Coordinate System. In Nordic Human-Computer Interaction Conference (pp. 1-11).

Kantan, P., Spaich, E. G., & Dahl, S. (2022). An Embodied Sonification Model for Sit-to-Stand Transfers. Frontiers in psychology, 13.

Kantan, P. R., Dahl, S., Spaich, E. G., & Bresin, R. (2022). Sonifying Walking: A Perceptual Comparison of Swing Phase Mapping Schemes. In Proceedings of ISon 2022, 7th Interactive Sonification Workshop, BSCC, University of Bremen, Germany, September 22–23, 2022.

Adrian B. Latupeirissa, Claudio Panariello, and Roberto Bresin. 2023. Probing Aesthetics Strategies for Robot Sound: Complexity and Materiality in Movement Sonification. J. Hum.-Robot Interact. Just Accepted (March 2023).

Lindfors, J., Liski, J. & Välimäki, V. (2022). Loudspeaker Equalization for a Moving Listener, Journal of the Audio Engineering Society. 70, 9, pp. 722-730.

Meyer-Kahlen, N., Schlecht, S. & Välimäki, V. (2022). Colours of Velvet Noise, Electronics Letters. 58, 12, pp. 495-497.

Moliner, E., & Välimäki, V. (2022). A Two-Stage U-Net for High-Fidelity Denoising of Historical Recordings, Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2022, Singapore, pp. 841-845.

Moliner Juanpere, E., & Välimäki, V. (2022). Realistic Gramophone Noise Synthesis Using a Diffusion Model. In G. Evangelista, & N. Holighaus (Eds.), Proceedings of the 25th International Conference on Digital Audio Effects (DAFx20in22), pp. 240-247.

Niimura, A., Serafin, S., & Kaplanis, N. (2022, August). Real-Time Dynamic Acoustics in 6DoF VR for Amateur Singers. In Audio Engineering Society Conference: AES 2022 International Audio for Virtual and Augmented Reality Conference. Audio Engineering Society.

Prawda, K., Schlecht, S. J., & Välimäki, V. (2022). Calibrating the Sabine and Eyring formulas. Journal of the Acoustical Society of America, 152(2), pp. 1158-1169.

Prawda, K., Schlecht, S., & Välimäki, V. (2022). Robust selection of clean swept-sine measurements in non-stationary noise. The Journal of the Acoustical Society of America, 151(3), 2117-2126.

Prawda, K., Schlecht, S., & Välimäki, V. (2022). Multichannel Interleaved Velvet Noise. In G. Evangelista, & N. Holighaus (Eds.), Proceedings of the 25th International Conference on Digital Audio Effects (DAFx20in22), pp. 208-215.

Rushton, T. A., & Kantan, P. R. (2022). A Real-time Embodied Sonification Model to Convey Temporal Asymmetry During Running. In Proceedings of ISon 2022, 7th Interactive Sonification Workshop, BSCC, University of Bremen, Germany, September 22–23, 2022.

Schlecht, S. J., Fierro, L., Välimäki, V. & Backman, J. (2022). Audio peak reduction using a synced allpass filter, Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2022. Singapore, pp. 1006-1010.

Simionato, Riccardo & Fasciani, Stefano (2022). Deep Learning Conditioned Modeling of Optical Compression. Proceedings of the International Conference on Digital Audio Effects. ISSN 2413-6700.

Välimäki, V., Fierro, L., Schlecht, S., & Backman, J. (2022). Audio Peak Reduction Using Ultra-Short Chirps, Journal of the Audio Engineering Society, 70(6), pp. 485-494.

Wilczek, J., Wright, A., Välimäki, V. & Habets, E. A. P. (2022). Virtual Analog Modeling of Distortion Circuits Using Neural Ordinary Differential Equations, Proceedings of the 25th International Conference on Digital Audio Effects (DAFx20in22). Evangelista, G. & Holighaus, N. (eds.). Vienna, Austria, pp. 9-16.

Wright, A. & Välimäki, V. (2022). Grey-Box Modelling of Dynamic Range Compression, Proceedings of the 25th International Conference on Digital Audio Effects (DAFx20in22). Evangelista, G. & Holighaus, N. (eds.). Vienna, Austria, pp. 304-311.

Paisa, R., Nilsson, N.C., Serafin, S. (2022). The Relationship Between Frequency and Hand Region Actuated. In: Saitis, C., Farkhatdinov, I., Papetti, S. (eds) Haptic and Audio Interaction Design. HAID 2022. Lecture Notes in Computer Science, vol 13417. Springer, Cham.

Paisa, R., Andersen, J., Nilsson, N. C., & Serafin, S. (2022). A Comparison of Audio-to-Tactile Conversion Algorithms for Melody Recognition. In Proceedings of Euroregio/BNAM 2022.

Patel-Grosz, Pritty; Katz, Jonah; Grosz, Patrick Georg; Kelkar, Tejaswinee & Jensenius, Alexander Refsum (2022). From music to dance: The inheritance of semantic inferences. Empirical Issues in Syntax and Semantics. ISSN 1769-7158. 14, p. 219–238.

Patel-Grosz, Pritty; Grosz, Patrick Georg; Kelkar, Tejaswinee & Jensenius, Alexander Refsum (2022). Steps towards a semantics of dance. Journal of Semantics. ISSN 0167-5133. 39(4), p. 693–748. doi: 10.1093/jos/ffac009.

Remache-Vinueza, Byron; Trujillo-León, Andrés; Clim, Maria-Alena; Sarmiento-Ortiz, Fabián; Topon-Visarrea, Liliana & Jensenius, Alexander Refsum (2022). Mapping Monophonic MIDI Tracks to Vibrotactile Stimuli Using Tactile Illusions. In Saitis, Charalampos; Farkhatdinov, Ildar & Papetti, Stefano (Ed.), Haptic and Audio Interaction Design. Springer Nature. ISBN 978-3-031-15019-7.

Herrebrøden, Henrik; Gonzalez, Victor; Vuoskoski, Jonna Katariina & Jensenius, Alexander Refsum (2022). Pre-recorded sound file versus human coach: Investigating auditory guidance effects on elite rowers. In Andreopoulou, Areti; Walker, Bruce; McMullen, Kyla & Rönnberg, Niklas (Ed.), Proceedings of the 27th International Conference on Auditory Display (ICAD2022). The International Community for Auditory Display. ISBN 0-9670904-8-2. p. 25–30. doi: 10.21785/icad2022.012.

Kwak, Dongho; Krzyzaniak, Michael Joseph; Danielsen, Anne & Jensenius, Alexander Refsum (2022). A mini acoustic chamber for small-scale sound experiments. In Iber, Michael & Enge, Kajetan (Ed.), Audio Mostly 2022: What you hear is what you see? Perspectives on modalities in sound and music interaction. ACM Publications. ISBN 978-1-4503-9701-8. p. 143–146. doi: 10.1145/3561212.3561223.

Lesteberg, Mari & Jensenius, Alexander Refsum (2022). MICRO and MACRO – Developing New Accessible Musicking Technologies. In Iber, Michael & Enge, Kajetan (Ed.), Audio Mostly 2022: What you hear is what you see? Perspectives on modalities in sound and music interaction. ACM Publications. ISBN 978-1-4503-9701-8. p. 147–150. doi: 10.1145/3561212.3561231.

Swarbrick, Dana; Upham, Finn; Erdem, Cagri; Jensenius, Alexander Refsum & Vuoskoski, Jonna Katariina (2022). Measuring Virtual Audiences with The MusicLab App: Proof of Concept. In Michon, Romain; Pottier, Laurent & Orlarey, Yann (Ed.), Proceedings of the 19th Sound and Music Computing Conference. SMC Network. ISBN 9782958412609. doi: 10.5281/zenodo.6798290.

Karbasi, Seyed Mojtaba; Jensenius, Alexander Refsum; Godøy, Rolf Inge & Tørresen, Jim (2022). A Robotic Drummer with a Flexible Joint: the Effect of Passive Impedance on Drumming. In Michon, Romain; Pottier, Laurent & Orlarey, Yann (Ed.), Proceedings of the 19th Sound and Music Computing Conference. SMC Network. ISBN 9782958412609. p. 232–237. doi: 10.5281/zenodo.6797833.

Kwak, Dongho; Olsen, Petter Angell; Danielsen, Anne & Jensenius, Alexander Refsum (2022). A trio of biological rhythms and their relevance in rhythmic mechanical stimulation of cell cultures. Frontiers in Psychology. ISSN 1664-1078. 13. doi: 10.3389/fpsyg.2022.867191.

Kwak, Dongho; Combriat, Thomas Michel Daniel; Wang, Chencheng; Scholz, Hanne; Danielsen, Anne & Jensenius, Alexander Refsum (2022). Music for Cells? A Systematic Review of Studies Investigating the Effects of Audible Sound Played Through Speaker-Based Systems on Cell Cultures. Music & Science. ISSN 2059-2043. 5. doi: 10.1177/20592043221080965.

Jensenius, Alexander Refsum & Erdem, Cagri (2022). Gestures in ensemble performance. In Timmers, Renee; Bailes, Freya & Daffern, Helena (Ed.), Together in Music: Coordination, expression, participation. Oxford University Press. ISBN 9780198860761.

Panariello, C. & Bresin, R. (2022). Sonification of Computer Processes: The Cases of Computer Shutdown and Idle Mode. Frontiers in Neuroscience, 16.

Sköld, M. & Bresin, R. (2022). Sonification of Complex Spectral Structures. Frontiers in Neuroscience, 16.

Misdariis, N., Özcan, E., Grassi, M., Pauletto, S., Barrass, S., Bresin, R. & Susini, P. (2022). Sound experts’ perspectives on astronomy sonification projects. Nature Astronomy, 6(11), 1249-1255.

van den Broek, G., Bresin, R. (2022). Concurrent sonification of different percentage values: the case of database values about statistics of employee engagement. In Proceedings of ISon 2022, 7th Interactive Sonification Workshop, BSCC, University of Bremen, Germany, September 22–23, 2022.

Larson Holmgren, D., Särnell, A., Bresin, R. (2022). Facilitating reflection on climate change using interactive sonification. In Proceedings of ISon 2022, 7th Interactive Sonification Workshop, BSCC, University of Bremen, Germany, September 22–23, 2022.

Frid, E., Panariello, C. & Núñez-Pacheco, C. (2022). Customizing and Evaluating Accessible Multisensory Music Experiences with Pre-Verbal Children: A Case Study on the Perception of Musical Haptics Using Participatory Design with Proxies. Multimodal Technologies and Interaction, 6(7).

Lindetorp, H., & Falkenberg, K. (2022). Evaluating Web Audio for Learning, Accessibility, and Distribution. Journal of the Audio Engineering Society, 70(11), 951-961.

Snarberg, H., Velasquez, Ä. P., Falkenberg, K., & Johansson, S. (2022). Preparing for the future for all: The state of accessibility education at technical universities. EDULEARN Proceedings. https://doi.org/10.21125/edulearn.2022.1820

Eriksson, M. L., Otterbring, T., Frid, E., & Falkenberg, K. (2022). Sounds and Satisfaction: A Novel Conceptualization of the Soundscape in Sales and Service Settings. Proceedings of the Nordic Retail and Wholesale Conference.

Misgeld, O., Lindetorp, H., Ahlbäck, S., & Holzapfel, A. (2022). Exploring sonification as a tool for folk music-dance interactions. In SoMoS 2022-The Second Symposium of the ICTM Study Group on Sound, Movement, and the Sciences.

Misgeld, O., Holzapfel, A., Kallioinen, P., & Ahlbäck, S. (2022). The melodic beat: exploring asymmetry in polska performance. Journal of Mathematics and Music, 16(2), 138-159.

Elvar Atli Ævarsson, Thórhildur Ásgeirsdóttir, Finnur Pind, Árni Kristjánsson, and Runar Unnthorsson. 2022. Vibrotactile Threshold Measurements at the Wrist Using Parallel Vibration Actuators. ACM Trans. Appl. Percept. 19, 3, Article 10 (July 2022), 11 pages. https://doi.org/10.1145/3529259

Eric Michael Sumner, Runar Unnthorsson, and Morris Riedel. Replicating Human Sound Localization with a Multi-Layer Perceptron, Proceedings of the 19th Sound and Music Computing Conference – June 5-12, 2022 – Saint-Étienne (France). https://doi.org/10.5281/zenodo.6822204

Eric Michael Sumner, Marcel Aach, Andreas Lintermann, Rúnar Unnþórsson, Morris Riedel, “Speed-Up of Machine Learning for Sound Localization via High-Performance Computing,” 2022 26th International Conference on Information Technology (IT), 2022, pp. 1-4, https://doi.org/10.1109/IT54280.2022.9743519.

Jonas Karlberg, Alessia Milo, Finnur Pind, and Runar Unnthorsson. Preserving Auditory Cues for Human Echolocation Training: A Geometrical Acoustics Study Using a Benchmark Dataset (BRAS), Proceedings of the ASME 2022 International Mechanical Engineering Congress and Exposition. Columbus, Ohio, USA. October 30-November 3, 2022. ASME. https://doi.org/10.1115/IMECE2022-97044

Nashmin Yeganeh, Ivan Makarov, Snorri Steinn Stefánsson Thors, Hafliði Ásgeirsson, Árni Kristjánsson, and Runar Unnthorsson. Vibrotactile Sleeve to Improve Music Enjoyment of Cochlear Implant Users, Proceedings of the ASME 2022 International Mechanical Engineering Congress and Exposition. Columbus, Ohio, USA. October 30-November 3, 2022. ASME. https://doi.org/10.1115/IMECE2022-95591

Heimir Rafn Bjarkason, Saethor Asgeirsson, and Runar Unnthorsson. Design and Fabrication of a Test Rig for a New Approach on Health Monitoring of Roller Bearings Using Acoustic Emission. Proceedings of the ASME 2022 International Mechanical Engineering Congress and Exposition. Columbus, Ohio, USA. October 30-November 3, 2022. ASME. https://doi.org/10.1115/IMECE2022-95074

Jensenius, A. R. (2022). Sound Actions: Conceptualizing Musical Instruments. The MIT Press. https://mitpress.mit.edu/books/sound-actions

Onofrei, Marius George, Federico Fontana, and Stefania Serafin. “Rubbing a physics based synthesis model: From mouse control to frictional haptic feedback.” In Proceedings of the 19th Sound and Music Computing Conference (SMC 2022) (R. Michon, L. Pottier, and Y. Orlarey, eds.), St. Étienne, France, pp. 25-32. 2022.

Riddershom Bargum, A., Ingi Kristjánsson, O., Babó, P., Eske Waage Nielsen, R., Rostami Mosen, S. and Serafin, S., 2022. Spatial Audio Mixing in Virtual Reality. In Sonic Interactions in Virtual Environments (pp. 269-302). Cham: Springer International Publishing.

Serafin, S., 2022. Audio in Multisensory Interactions: From Experiments to Experiences. In Sonic Interactions in Virtual Environments (pp. 305-318). Cham: Springer International Publishing.

Willemsen, Silvin, Stefan Bilbao, Michele Ducceschi, and Stefania Serafin. “The dynamic grid: Time-varying parameters for musical instrument simulations based on finite-difference time-domain schemes.” Journal of the Audio Engineering Society 70, no. 9 (2022): 650-660.

2021

Alary, B., Massé, P., Schlecht, S. J., Noisternig, M., & Välimäki, V. (2021). Perceptual analysis of directional late reverberation. The Journal of the Acoustical Society of America, 149(5), 3189-3199.

Alary, B., & Välimäki, V. (2021, September). A Method for Capturing and Reproducing Directional Reverberation in Six Degrees of Freedom. In 2021 Immersive and 3D Audio: from Architecture to Automotive (I3DA) (pp. 1-8).

Andersen, J.S., Miccini, R., Serafin, S. and Spagnol, S., 2021, September. Evaluation of individualized HRTFs in a 3D shooter game. In 2021 Immersive and 3D Audio: from Architecture to Automotive (I3DA) (pp. 1-10). IEEE.

Ducceschi, M., Bilbao, S., Willemsen, S. and Serafin, S., 2021. Linearly-implicit schemes for collisions in musical acoustics based on energy quadratisation. The Journal of the Acoustical Society of America, 149(5), pp.3502-3516.

Egeberg, M., Lind, S., Nilsson, N. C., & Serafin, S. (2021, September). Exploring the Effects of Actuator Configuration and Visual Stimuli on Cutaneous Rabbit Illusions in Virtual Reality. In ACM Symposium on Applied Perception 2021 (pp. 1-9).

Fagerström, J., Schlecht, S. J., & Välimäki, V. (2021, September). One-to-Many Conversion for Percussive Samples. In International Conference on Digital Audio Effects (pp. 129-135).

Fierro, L., & Välimäki, V. (2021, September). SiTraNo: A MATLAB App for Sines-Transients-Noise Decomposition of Audio Signals. In International Conference on Digital Audio Effects (pp. 73-80).

Ganis, F., Dahl, S., Schmidt Câmara, G., & Danielsen, A. (2021). Beat precision and perceived danceability in drum grooves. Poster at Rhythm Production and Perception workshop, Oslo, Norway.

Kantan, P. R., Spaich, E. G., & Dahl, S. (2021, June). A metaphor-based technical framework for musical sonification in movement rehabilitation. In The 26th International Conference on Auditory Display (ICAD 2021).

Kantan, P. R., Spaich, E. G., & Dahl, S. (2021). Conveying Sit-to-Stand Kinematics Through Musical Sonification for Stroke Rehabilitation. In 16th International Conference on Music Perception and Cognition jointly organised with the 11th triennial conference of ESCOM.

Kantan, P. R., Spaich, E. G., Jørgensen, H. R. M., & Dahl, S. (2021, June). Auditory biofeedback through real-time generated music for balance and gait training of stroke patients. In The neurosciences and music-VII.

Kantan, P. R., Stefan Alecu, R., & Dahl, S. (2021). The Effect of Auditory Pulse Clarity on Sensorimotor Synchronization. In R. Kronland-Martinet, S. Ystad, & M. Aramaki (Eds.), Perception, Representations, Image, Sound, Music – 14th International Symposium, CMMR 2019, Revised Selected Papers (Vol. 12631, pp. 379-395). Lecture Notes in Computer Science. https://doi.org/10.1007/978-3-030-70210-6_25

Lindfors, J., Liski, J. & Välimäki, V. (2021). User Location-Based Loudspeaker Correction. In Nordic Sound and Music Conference. Aalborg Universitet.

Liski, J., Rämö, J., Välimäki, V., & Lähdeoja, O. (2021, June). Equalization of Wood-Panel Loudspeakers. In Sound and Music Computing Conference (pp. 3-10).

Liski, J., Mäkivirta, A. & Välimäki, V. (2021). Audibility of Group-Delay Equalization. IEEE/ACM Transactions on Audio Speech and Language Processing. 29, p. 2189-2201 13 p., 9450008.

Mancianti, A., Schlecht, S., Välimäki, V., Järvinen, R., & Kallio, E. (2021, November). Space Walk–Visiting the Solar System Through an Immersive Sonic Journey in VR. In Nordic Sound and Music Conference. Aalborg Universitet.

McCrea, M., McCormack, L., & Pulkki, V. (2021). Sound Source Localization Using Sector-Based Analysis with Multiple Receivers. In Nordic Sound and Music Conference. Aalborg Universitet.

Jensenius, Alexander Refsum (2021). Best versus Good Enough Practices for Open Music Research. Empirical Musicology Review. ISSN 1559-5749. 16(1).

Laczkó, Bálint & Jensenius, Alexander Refsum (2021). Reflections on the Development of the Musical Gestures Toolbox for Python. In Kantan, Prithvi Ravi; Paisa, Razvan & Willemsen, Silvin (Ed.), Proceedings of the Nordic Sound and Music Computing Conference. Aalborg University Copenhagen.

Jensenius, Alexander Refsum (2021). Musikkteknologiforskning [Music technology research]. Nytt Norsk Tidsskrift. ISSN 0800-336X. 38(3), p. 260–263.

Bishop, Laura; Gonzalez Sanchez, Victor Evaristo; Laeng, Bruno; Jensenius, Alexander Refsum & Høffding, Simon (2021). Move like everyone is watching: Social context affects head motion and gaze in string quartet performance. Journal of New Music Research. ISSN 0929-8215. doi: 10.1080/09298215.2021.1977338

Bishop, Laura; Jensenius, Alexander Refsum & Laeng, Bruno (2021). Musical and Bodily Predictors of Mental Effort in String Quartet Music: An Ecological Pupillometry Study of Performers and Listeners. Frontiers in Psychology. ISSN 1664-1078. doi: 10.3389/fpsyg.2021.653021

Masu, Raul; Melbye, Adam Pultz; Sullivan, John & Jensenius, Alexander Refsum (2021). NIME and the Environment: Toward a More Sustainable NIME Practice. In Dannenberg, Roger & Xiao, Xiao (Ed.), Proceedings of the International Conference on New Interfaces for Musical Expression. The International Conference on New Interfaces for Musical Expression.

Karbasi, Seyed Mojtaba; Godøy, Rolf Inge; Jensenius, Alexander Refsum & Tørresen, Jim (2021). A Learning Method for Stiffness Control of a Drum Robot for Rebounding Double Strokes. In Zhang, Dan (Eds.), 2021 7th International Conference on Mechatronics and Robotics Engineering (ICMRE). IEEE. ISBN 978-0-7381-3205-1. p. 54–58.

Fasciani, Stefano & Goode, Jackson (2021). 20 NIMEs: Twenty Years of New Interfaces for Musical Expression. In Dannenberg, Roger & Xiao, Xiao (Ed.), Proceedings of the International Conference on New Interfaces for Musical Expression. The International Conference on New Interfaces for Musical Expression. ISSN 2220-4792.

Copiaco, Abigail; Ritz, Christian; Abdulaziz, Nidhal & Fasciani, Stefano (2021). A Study of Features and Deep Neural Network Architectures and Hyper-Parameters for Domestic Audio Classification. Applied Sciences. ISSN 2076-3417. 11(11). doi: 10.3390/app11114880.

Copiaco, Abigail; Ritz, Christian; Fasciani, Stefano & AbdulAziz, Nidhal (2021). Identifying Sound Source Node Locations Using Neural Networks Trained with Phasograms. Proceedings of IEEE International Symposium on Signal Processing and Information Technology (ISSPIT). ISSN 2641-5542. doi: 10.1109/ISSPIT51521.2020.9408643.

Onofrei, M. G., Willemsen, S., & Serafin, S. (2021, September). Real-time implementation of a friction drum inspired instrument using finite difference schemes. In 24th International Conference on Digital Audio Effects (pp. 168-175).

Prawda, K., Schlecht, S., & Välimäki, V. (2021, November). Room acoustic parameters measurements in variable acoustic laboratory Arni. In Proc. Meeting of the Acoustical Society of Finland (Akustiikkapäivät) (pp. 150-155).

Välimäki, V. & Prawda, K. (2021). Late-Reverberation Synthesis using Interleaved Velvet-Noise Sequences, IEEE/ACM Transactions on Audio Speech and Language Processing. 29, p. 1149-1160 12 p., 9360485.

Wright, A., & Välimäki, V. (2021). Neural Modeling of Phaser and Flanging Effects. Journal of the Audio Engineering Society, 69(7/8), 517-529.

S. Spagnol, R. Miccini, M. G. Onofrei, R. Unnthorsson and S. Serafin, “Estimation of Spectral Notches from Pinna Meshes: Insights from a Simple Computational Model,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 29, pp. 2683-2695, 2021. https://doi.org/10.1109/TASLP.2021.3101928

Arfvidsson, G. F., Ljungdahl Eriksson, M., Lidbo, H., Falkenberg, K. (2021). Design considerations for short alerts and notification sounds in a retail environment. In Proceedings of the Sound and Music Computing Conference.

Bresin, R., Frid, E., Latupeirissa, A., & Panariello, C. (2021, March). Robust Non-Verbal Expression in Humanoid Robots: New Methods for Augmenting Expressive Movements with Sound. In Workshop on Sound in Human-Robot Interaction, ACM/IEEE International Conference on Human-Robot Interaction.

Cerioli, A., Ducceschi, M. and Serafin, S., 2021. Real-Time Implementation of Non-Linear Physical Models with Modal Synthesis and Performance Analysis.

Christensen, Pelle Juul, Silvin Willemsen, and Stefania Serafin. “Applied Physical Modeling for Sound Synthesis: The Yaybahar.” In Proceedings of the 2nd Nordic Sound and Music Computing (NordicSMC) Conference, pp. 11-16. 2021.

Falkenberg, K., Ljungdahl Eriksson, M., Frid, E., Otterbring, T., Daunfeldt, S.-O. (2021). Auditory notification of customer actions in a virtual retail environment: Sound design, awareness and attention. In Proceedings of International Conference on Auditory Displays ICAD 2021.

Frid, E. & Bresin, R. (2021). Perceptual Evaluation of Blended Sonification of Mechanical Robot Sounds Produced by Emotionally Expressive Gestures: Augmenting Consequential Sounds to Improve Non-verbal Robot Communication. International Journal of Social Robotics.

Junker, A., Hutters, C., Reipur, D., Embøl, L., Nilsson, N.C., Serafin, S. and Rosenberg, E.S., 2021, March. Revisiting Audiovisual Rotation Gains for Redirected Walking. In 2021 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW) (pp. 358-359). IEEE.

Kaloumenou, C., Lamda, S., Pouliou, P., Onofrei, M. G., Willemsen, S., & Serafin, S. (2021, November). Recreating the Amoeba Violin Using Physical Modeling and Augmented Reality. In Proceedings of the 2nd Nordic Sound and Music Computing (NordicSMC) Conference (pp. 102-107).

Lindetorp, H., Falkenberg, K. (2021). Audio Parameter Mapping Made Explicit Using WebAudioXML. In Proceedings of the Sound and Music Computing Conference. Torino.

Frid, E., Ljungdahl Eriksson, M., Otterbring, T., Falkenberg, K., Lidbo, H., Daunfeldt, S.-O. (2021). On Designing Sounds to Reduce Shoplifting in Retail Environments. Presented at Nordic Retail and Wholesale Conference.

Lasickas, Titas, Jonas Siim Andersen, Stefania Serafin, and Marianna Vatti. “Cochlea: Gamifying Ear Training for Cochlear Implant Users.” (2021).

Lindetorp, H., Falkenberg, K. (2021). Putting Web Audio API to the test: Introducing WebAudioXML as a pedagogical platform. In Web Audio Conference 2021. Barcelona.

Lindetorp, H., Falkenberg, K. (2021). Sonification For Everyone Everywhere: Evaluating The WebAudioXML Sonification Toolkit For Browsers. Presented at The 26th International Conference on Auditory Display (ICAD 2021).

Myresten, E., Larsson Holmgren, D. and Bresin, R. (2021). Sonification of Twitter Hashtags Using Earcons Based on the Sound of Vowels. In Proceedings of the Nordic Sound and Music Computing Conference 2021

Pauletto, S. & Bresin, R. (2021). Sonification Research and Emerging Topics. In Michael Filimowicz (Ed.), Doing Research in Sound Design. Routledge.

Pauletto, S., Selfridge, R., Holzapfel, A., Frisk, H. (2021) From Foley professional practice to sonic interaction design: initial research conducted within the radio sound studio project. In Proceedings of the Nordic Sound and Music Computing Conference 2021

Südholt, David, Søren V. K. Lyster, Oliver B. Winkel, and Stefania Serafin. “A Real-Time Interactive Physical Model of the Langeleik Using Finite Difference Schemes and Web Audio.” (2021).

Südholt, D., Russo, R., & Serafin, S. (2021). A Faust Implementation of Coupled Finite Difference Schemes. In Sound and Music Computing.

Valle-Pérez, G., Henter, G. E., Beskow, J., Holzapfel, A., Oudeyer, P. Y., & Alexanderson, S. (2021). Transflower: probabilistic autoregressive dance generation with multimodal attention. ACM Transactions on Graphics (TOG), 40(6).

Willemsen, S., Bilbao, S., Ducceschi, M. and Serafin, S., 2021, September. Dynamic grids for finite-difference schemes in musical instrument simulations. In 24th International Conference on Digital Audio Effects (pp. 144-151).

Willemsen, Silvin, Stefan Bilbao, Michele Ducceschi, and Stefania Serafin. “A physical model of the trombone using dynamic grids for finite-difference schemes.” In 24th International Conference on Digital Audio Effects, pp. 152-159. 2021.

2020

Bigoni, F., Grossbach, M., & Dahl, S. (2020). Characterizing Subtle Timbre Effects of Drum Strokes Played with Different Technique. In Proceedings of the 2nd International Conference on Timbre (Timbre 2020), online conference. Aristotle University of Thessaloniki.

Bishop, Laura & Jensenius, Alexander Refsum (2020). Reliability of two infrared motion capture systems in a music performance setting, In Simone Spagnol & Andrea Valle (ed.), Proceedings of the 17th Sound and Music Computing Conference. Axea sas/SMC Network. ISBN 978-88-945415-0-2.

Bresin, R., Mancini, M., Elblaus, L. & Frid, E. (2020). Sonification of the self vs. sonification of the other: Differences in the sonification of performed vs. observed simple hand movements. International Journal of Human-Computer Studies, 144.

Bresin, R., Pauletto, S., Laaksolahti, J., Erik, G. (2020). Looking for the soundscape of the future: preliminary results applying the design fiction method. Proceedings of the 17th Sound and Music Computing Conference.

Bryce, L., Sandler, M., Koreska Andersen, L., Adjorlu, A., & Serafin, S. (2020, October). The Sense of Auditory Presence in a Choir for Virtual Reality. In Audio Engineering Society Convention 149. Audio Engineering Society.

Erdem, Cagri & Jensenius, Alexander Refsum (2020). RAW: Exploring Control Structures for Muscle-based Interaction in Collective Improvisation, In Romain Michon & Franziska Schroeder (ed.), Proceedings of the International Conference on New Interfaces for Musical Expression. Birmingham City University. ISBN 978-1-949373-99-8, pp. 477–482.

Erdem, Cagri; Lan, Qichao; Fuhrer, Julian; Martin, Charles Patrick; Tørresen, Jim & Jensenius, Alexander Refsum (2020). Towards Playing in the ‘Air’: Modeling Motion-Sound Energy Relationships in Electric Guitar Performance Using Deep Neural Networks, In Simone Spagnol & Andrea Valle (ed.), Proceedings of the 17th Sound and Music Computing Conference. Axea sas/SMC Network. ISBN 978-88-945415-0-2, pp. 177–184.

Erdem, Cagri; Lan, Qichao & Jensenius, Alexander Refsum (2020). Exploring relationships between effort, motion, and sound in new musical instruments. Human Technology. ISSN 1795-6889. 16(3), pp. 310–347 . doi: 10.17011/ht/urn.202011256767.

Fagerström, J., B. Alary, S. J. Schlecht & V. Välimäki (2020). Velvet-Noise Feedback Delay Network. In Proc. 23rd International Conference on Digital Audio Effects (eDAFx20), Vienna, Austria, online, pp. 219–226.

Fierro, L. & V. Välimäki (2020). Towards Objective Evaluation of Audio Time-Scale Modification Methods. In Proc. 17th Sound and Music Computing Conference, Turin, Italy (online).

Geronazzo, M., Tissieres, J.Y. and Serafin, S., 2020, May. A Minimal Personalization of Dynamic Binaural Synthesis with Mixed Structural Modeling and Scattering Delay Networks. In ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 411-415). IEEE.

Hansen, K. F., Latupeirissa, A. B., Frid, E., Lindetorp, H. (2020). Unproved methods from the frontier in the course curriculum: A bidirectional and mutually beneficial research challenge. In INTED2020 Proceedings. (pp. 7033-7038). IATED.

Kartofelev, D., J. G. Arro & V. Välimäki (2020). Insights into the String-Barrier Interaction Dynamics Based on High-Speed Camera Measurements. In Proc. 17th Sound and Music Computing Conference, Turin, Italy (online), pp. 169–176.

Kjærbo, R. E. R., Parpal, R. R., Pérez, M. G., Correia, F. M. S. D. A. R., Guruvayurappan, V., Overholt, D., & Dahl, S. (2020). Rhythm Rangers: An evaluation of beat synchronisation skills and musical confidence through multiplayer gamification influence. In S. Spagnol & A. Valle (Eds.), Proceedings of the 17th Sound and Music Computing Conference (pp. 220-227).

Krzyzaniak, Michael Joseph; Veenstra, Frank; Erdem, Cagri; Jensenius, Alexander Refsum & Glette, Kyrre (2020). Air-Guitar Control of Interactive Rhythmic Robots, In Øyvind Brandtsegg & Daniel Buner Formo (ed.), Proceedings of the 5th International Conference on Live Interfaces. Norwegian University of Science and Technology. ISBN 0026639041. Installations (1), pp. 208–210.

Latupeirissa, A. B., Bresin, R. (2020). Understanding non-verbal sound of humanoid robots in films. Presented at Workshop on Mental Models of Robots at HRI 2020 in Cambridge, UK, Mar 23rd 2020.

Latupeirissa, A. B., Panariello, C., Bresin, R. (2020). Exploring emotion perception in sonic HRI. Proceedings of the 17th Sound and Music Computing Conference. (pp. 434-441). Torino

Miccini, R. & Spagnol, S. (2020). HRTF Individualization using Deep Learning. In Proc. 2020 IEEE Conference on Virtual Reality and 3D User Interfaces Workshops (VRW 2020), Atlanta, USA (online), pp. 390–395.

Moliner, E., J. Rämö & V. Välimäki (2020). Virtual Bass System with Fuzzy Separation of Tones and Transients. In Proc. 23rd International Conference on Digital Audio Effects (eDAFx20), Vienna, Austria, online, pp. 85–93.

Onofrei, M. G., Miccini, R., Unnthorsson, R., Serafin, S. & Spagnol, S. (2020). 3D Ear Shape as an Estimator of HRTF Notch Frequency. In Proc. 17th Sound and Music Computing Conference, Turin, Italy (online), pp. 131–137.

Panariello, C. (2020). Study in Three Phases: An Adaptive Sound Installation. Leonardo Music Journal, vol. 30, pp. 44-49.

Prawda, K., S. J. Schlecht & V. Välimäki (2020). Evaluation of Reverberation Time Models with Variable Acoustics. In Proc. 17th Sound and Music Computing Conference, Turin, Italy (online), pp. 145–152.

Prawda, K., V. Välimäki & S. Serafin (2020). Evaluation of Accurate Artificial Reverberation Algorithm. In Proc. 17th Sound and Music Computing Conference, Turin, Italy (online), pp. 247–254.

Prawda, K., S. Willemsen, S. Serafin & V. Välimäki (2020). Flexible Real-Time Reverberation Synthesis with Accurate Parameter Control. In Proc. 23rd International Conference on Digital Audio Effects (eDAFx20), Vienna, Austria, online.

Rämö, J., J. Liski & V. Välimäki (2020). Third-Octave and Bark Graphic-Equalizer Design with Symmetric Band Filters. Applied Sciences, vol. 10, no. 4, paper no. 1222. https://doi.org/10.3390/app10041222.

Alecu, R. S., Serafin, S., Willemsen, S., Parravicini, E., & Lucato, S. (2020). Embouchure Interaction Model for Brass Instruments. In Proc. 17th Sound and Music Computing Conference, pp. 153-160. Axea sas/SMC Network.

Shah, Sneha & Välimäki, Vesa (2020). Automatic Tuning of High Piano Tones. Applied Sciences, vol. 10, no. 6, paper 1983. https://doi.org/10.3390/app10061983.

Spagnol, S. (2020). Auditory Model Based Subsetting of Head-Related Transfer Function Datasets. In Proc. 45th IEEE International Conference on Acoustics, Speech, and Signal Processing, Barcelona, Spain (online), pp. 391–395.

Spagnol, S. (2020). HRTF Selection by Anthropometric Regression for Improving Horizontal Localization Accuracy. IEEE Signal Processing Letters, vol. 27, pp. 590–594.

Torre, I., Latupeirissa, A. B., McGinn, C. (2020). How context shapes the appropriateness of a robot’s voice. In 29th IEEE International Conference on Robot and Human Interactive Communication, RO-MAN 2020. (pp. 215-222). Institute of Electrical and Electronics Engineers (IEEE).

Willemsen, S., Paisa, R., & Serafin, S. (2020, June). Resurrecting the Tromba Marina: A Bowed Virtual Reality Instrument using Haptic Feedback and Accurate Physical Modelling. In 17th Sound and Music Computing Conference (pp. 300-307).

Willemsen, Silvin, Stefania Serafin, Stefan Bilbao, and Michele Ducceschi. “Real-time Implementation of a Physical Model of the Tromba Marina.” In 17th Sound and Music Computing Conference, pp. 161-168. 2020.

Wright, A., E.-P. Damskägg, L. Juvela & V. Välimäki (2020). Real-Time Guitar Amplifier Emulation with Deep Learning. Applied Sciences, vol. 10, no. 3, paper no. 766. https://doi.org/10.3390/app10030766.

Wright, A. & V. Välimäki (2020). Perceptual Loss Function for Neural Modeling of Audio Systems. In Proc. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, pp. 251–255. DOI: https://doi.org/10.1109/ICASSP40776.2020.9052944

Wright, A. & V. Välimäki (2020). Neural Modelling of LFO Modulated Time Varying Effects. In Proc. 23rd International Conference on Digital Audio Effects (eDAFx20), Vienna, Austria, online, pp. 281–288.

Zelechowska, Agata; Gonzalez Sanchez, Victor Evaristo & Jensenius, Alexander Refsum (2020). Standstill to the ‘beat’: Differences in involuntary movement responses to simple and complex rhythms, In AM ’20: Proceedings of the 15th International Conference on Audio Mostly. Association for Computing Machinery (ACM). ISBN 978-1-4503-7563-4. Musical Structure, pp. 107–113.

Zelechowska, Agata; Gonzalez-Sanchez, Victor E.; Laeng, Bruno & Jensenius, Alexander Refsum (2020). Headphones or Speakers? An Exploratory Study of Their Effects on Spontaneous Body Movement to Rhythmic Music. Frontiers in Psychology. ISSN 1664-1078. 11(698). doi: 10.3389/fpsyg.2020.00698

2019

Adjorlu, A., & Serafin, S. (2019). Teachers’ Views on how to use Virtual Reality to Instruct Children and Adolescents Diagnosed with Autism Spectrum Disorder. In Proc. IEEE Conf. Virtual Reality and 3D User Interfaces (VR) (pp. 1439-1442).

Alary, B., Politis, A., Schlecht, S. J., & Välimäki, V. (2019). Directional feedback delay network. Journal of the Audio Engineering Society, 67(10), 752-762. https://doi.org/10.17743/jaes.2019.0026

Andersson, N., Erkut, C., & Serafin, S. (2019). Immersive audio programming in a virtual reality sandbox. In Proc. AES Int. Conf. Immersive and Interactive Audio.

Andreasen, A., Geronazzo, M., Nilsson, N. C., Zovnercuka, J., Konovalov, K., & Serafin, S. (2019). Auditory feedback for navigation with echoes in virtual environments: training procedure and orientation strategies. IEEE Transactions on Visualization and Computer Graphics, 25(5), 1876-1886.

Becker, Artur; Herrebrøden, Henrik; Gonzalez Sanchez, Victor Evaristo; Nymoen, Kristian; Dal Sasso Freitas, Carla Maria; Tørresen, Jim & Jensenius, Alexander Refsum (2019). Functional Data Analysis of Rowing Technique Using Motion Capture Data. In Proc. 6th Int. Conf. Movement and Computing. Article 12.

Damgård, M. L., Ævarsson, E. A., Unnthorsson, R. & Kristjánsson, Á. (2019). Evaluation of Two Music Tactile Display Encodings for Cochlear Implant Recipients. In Proc. 1st Nordic Sound and Music Computing Conference, pp. 42-47, Stockholm, Sweden.

Damskägg, E-P., Juvela, L., Thuillier, E. & Välimäki, V. (2019). Deep Learning for Tube Amplifier Emulation. In Proc. IEEE Int. Conf. Acoustics, Speech and Signal Processing ICASSP-19, pp. 471–475, Brighton, UK.

Damskägg, E-P., Juvela, L., & Välimäki, V. (2019). Real-Time Modeling of Audio Distortion Circuits with Deep Learning. In Proc. Sound & Music Computing Conf. SMC (pp. 332-339). Best paper award.

Degli Innocenti, E., Geronazzo, M., Vescovi, D., Nordahl, R., Serafin, S., Ludovico, L. A., & Avanzini, F. (2019). Mobile virtual reality for musical genre learning in primary education. Computers & Education, 139, 102-117.

Erdem, Cagri; Schia, Katja Henriksen & Jensenius, Alexander Refsum (2019). Vrengt: A Shared Body–Machine Instrument for Music–Dance Performance. In Federico Visi (ed.), Music Proceedings of the Int. Conf. New Interfaces for Musical Expression.

Erdem, Cagri; Schia, Katja Henriksen & Jensenius, Alexander Refsum (2019). Vrengt: A Shared Body–Machine Instrument for Music–Dance Performance, In Marcelo Queiroz & Anna Xambo Sedo (ed.), Proc. Int. Conf. New Interfaces for Musical Expression. Universidade Federal do Rio Grande do Sul.

Erkut, C., & Dahl, S. (2019). Incorporating Virtual Reality with Experiential Somaesthetics in an Embodied Interaction Course. The Journal of Somaesthetics, 4(2).

Fierro, L., Rämö, J., & Välimäki, V. (2019). Adaptive Loudness Compensation in Music Listening. In Proc. Sound & Music Computing Conference SMC (pp. 135-142).

Fontana, Federico, et al. (2019). Keytar: Melodic Control of Multisensory Feedback from Virtual Strings. In Proc. Int. Conf. Digital Audio Effects DAFx.

Gentile, V., Adjorlu, A., Serafin, S., Rocchesso, D., & Sorce, S. (2019). Touch or touchless? evaluating usability of interactive displays for persons with autistic spectrum disorders. In Proc. 8th ACM International Symposium on Pervasive Displays (pp. 1-7).

Geronazzo, M., Avanzini, F., Fontana, F., & Serafin, S. (2019). Interactions in Mobile Sound and Music Computing. Wireless Communications and Mobile Computing, 2019.

Gerry, L., Dahl, S., & Serafin, S. (2019). ADEPT: Exploring the design, pedagogy, and analysis of a mixed reality application for piano training. In Proc. Sound & Music Computing Conference (pp. 2891-2892).

Godøy, Rolf Inge (2019). Musical Shape Cognition. In Mark Grimshaw; Mads Walther-Hansen & Martin Knakkergaard (ed.), The Oxford Handbook of Sound and Imagination, Volume 2. Oxford University Press. ISBN 9780190460242. Chapter 12, pp. 237–258.

Godøy, Rolf Inge (2019). Thinking Sound-Motion Objects, In Michael Filimowicz (ed.), Foundations in Sound Design for Interactive Media. Routledge. ISBN 9781138093942. Chapter 8, pp. 161–178.

Gonzalez Sanchez, Victor Evaristo; Dahl, Sofia; Hatfield, Johannes Lunde & Godøy, Rolf Inge (2019). Characterizing movement fluency in musical performance: Toward a generic measure for technology enhanced learning. Frontiers in Psychology. ISSN 1664-1078. 10 . doi: 10.3389/fpsyg.2019.00084

Gonzalez Sanchez, V. E.; Zelechowska, A. & Jensenius, A. R. (2019). Analysis of the Movement-Inducing Effects of Music through the Fractality of Head Sway during Standstill. Journal of Motor Behavior.

Götz, G. & Pulkki, V. (2019). Simplified source directivity rendering in acoustic virtual reality using the directivity sample combination. In Proceedings of 147th AES Convention.

Hoffmann, R., Brinkhuis M.A.B., Unnthorsson, Runar, Kristjansson, Arni (2019). The Intensity Order Illusion: Temporal Order of Different Vibrotactile Intensity Causes Systematic Localization Error. Journal of Neurophysiology, Volume 122, Issue 4, October 2019, pp.1810-1820.

Holbrook, Ulf A. S. (2019). A question of backgrounds: Sites of listening, In Monty Adkins & Simon Cummings (ed.), Music Beyond Airports: Appraising Ambient Music. University of Huddersfield Press. ISBN 978-1-86218-161-8. Chapter 3, pp. 51–66.

Holbrook, Ulf A. S. (2019). Sound Objects and Spatial Morphologies. Organised Sound. ISSN 1355-7718. 24(1), pp. 20-29. doi: 10.1017/S1355771819000037

Hussain, A., Modekjaer, C., Austad, N. W., Dahl, S., & Erkut, C. (2019, October). Evaluating movement qualities with visual feedback for real-time motion capture. In Proc. Int. Conf. Movement and Computing (pp. 1-9).

Kahles, J., Esqueda Flores, F., & Välimäki, V. (2019). Oversampling for Nonlinear Waveshaping: Choosing the Right Filters. Journal of the Audio Engineering Society, 67(6), pp. 440–449.

Kantan, P. R., & Dahl, S. (2019). Communicating Gait Performance Through Musical Energy: Towards an Intuitive Biofeedback System for Neurorehabilitation. In Combined Proceedings of the Nordic Sound and Music Computing Conference 2019 and the Interactive Sonification Workshop 2019 (pp. 107-114).

Kantan, P. R., Alecu, R. S., & Dahl, S. (2019). The Effect of Auditory Pulse Clarity on Sensorimotor Synchronization. In Proc. Int. Symp. Computer Music Multidisciplinary Research (CMMR).

Kantan, P., & Dahl, S. (2019). An Interactive Music Synthesizer for Gait Training in Neurorehabilitation. In Proc. Sound and Music Computing Conference (pp. 159-166).

Kartofelev, D., Arro, J. G., & Välimäki, V. (2019). Experimental Verification of Dispersive Wave Propagation on Guitar Strings. In Proc. Sound & Music Computing Conference SMC 2019 (pp. 324-331).

Lan, Qichao & Jensenius, Alexander Refsum (2019). QuaverSeries: A Live Coding Environment for Music Performance Using Web Technologies. In Proc. Int. Web Audio Conference.

Lan, Qichao; Tørresen, Jim & Jensenius, Alexander Refsum (2019). RaveForce: A Deep Reinforcement Learning Environment for Music Generation. In Proc. Sound & Music Computing Conf., paper P2.1, pp. 217–222.

Liski, J., Rämö, J., and Välimäki, V. (2019). Graphic equalizer design with symmetric biquad filters. In Proc. IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA) (pp. 55-59).

Mandanici, M., Erkut, C., Paisa, R., & Serafin, S. (2019). Movement Patterns in the Harmonic Walk Interactive Environment. In Proc. Int. Symp. Computer Music Multidisciplinary Research (p. 765).

Maunsbach, M., & Serafin, S. (2019). Non-Linear Contact Sound Synthesis for Real-Time Audio-Visual Applications using Modal Textures. In Proc. Sound and Music Computing Conference (pp. 431-436). Electrical Engineering/Electronics, Computer, Communications and Information Technology Association.

Moth-Poulsen, M., Bednarz, T., Kuchelmeister, V., & Serafin, S. (2019). Teach Me Drums: Learning Rhythms through the Embodiment of a Drumming Teacher in Virtual Reality. In Proc. Sound and Music Computing Conference.

Müller, M., Pardo, B.A., Mysore, G.J., & Välimäki, V. (2019). Recent Advances in Music Signal Processing. IEEE Signal Processing Magazine, vol. 36, no. 1, pp. 17–19.

Pajala-Assefa, H., & Erkut, C. (2019). A Study of Movement-Sound within Extended Reality: Skeleton Conductor. In Proc. Int. Conf. Movement and Computing (pp. 1-4).

Passalenti, A., Paisa, R., Nilsson, N. C., Andersson, N. S., Fontana, F., Nordahl, R., & Serafin, S. (2019). No strings attached: Force and vibrotactile feedback in a virtual guitar simulation. In Proc. IEEE Conf. Virtual Reality and 3D User Interfaces (VR) (pp. 1116-1117).

Prawda, K., Välimäki, V., & Schlecht, S. (2019). Improved Reverberation Time Control for Feedback Delay Networks. In Proc. Int. Conf. Digital Audio Effects (DAFx).

Rämö, J., & Välimäki, V. (2019). Neural third-octave graphic equalizer. In Proc. Int. Conf. Digital Audio Effects (DAFx).

Solberg, Ragnhild Torvanger & Jensenius, Alexander Refsum (2019). Group behaviour and interpersonal synchronization to electronic dance music. Musicae Scientiae. 23(1), pp. 111–134.

Spagnol, S., Purkhús, K. B., Björnsson, S. K., & Unnthórsson, R. (2019). Misure di HRTF su una testa KEMAR con padiglioni auricolari intercambiabili [HRTF measurements on a KEMAR head with interchangeable pinnae]. In Atti del XXII Colloquio di Informatica Musicale (XXII CIM) (pp. 47-52). Udine, Italy.

Spagnol, S., Purkhús, K. B., Björnsson, S. K. & Unnthórsson, R. (2019). A head-related transfer function dataset of KEMAR with various pinna shapes. In Proc. Sound & Music Computing Conference (pp. 55-60).

Suárez, S., Kaplanis, N., & Bech, S. (2019). In-Virtualis: A Study on the Impact of Congruent Virtual Reality Environments in Perceptual Audio Evaluation of Loudspeakers. In Proc. AES Int. Conf. Immersive and Interactive Audio, York.

Thuillier, E., Lähdeoja, O., & Välimäki, V. (2019). Feedback Control in an Actuated Acoustic Guitar Using Frequency Shifting. Journal of the Audio Engineering Society, 67(6), pp. 373–381.

Tuovinen, J., Hu, J., & Välimäki, V. (2019). Toward Automatic Tuning of the Piano. In Proc. Sound & Music Computing Conference SMC 2019 (pp. 143-150).

Willemsen, S., Bilbao, S., & Serafin, S. (2019). Real-time implementation of an elasto-plastic friction model applied to stiff strings using finite-difference schemes. In Proc. Int. Conf. Digital Audio Effects (DAFx).

Willemsen, S., Bilbao, S., Andersson, N., & Serafin, S. (2019). Physical Models and Real-Time Control with the Sensel Morph. In Proc. Sound & Music Computing Conference SMC 2019.

Wright, A., Damskägg, E-P., & Välimäki, V. (2019). Real-time black-box modelling with recurrent neural networks. In Proc. Int. Conf. Digital Audio Effects (DAFX).

Xambo Sedo, A.; Saue, S.; Jensenius, A. R.; Støckert, R. & Brandtsegg, Ø. (2019). NIME Prototyping in Teams: A Participatory Approach to Teaching Physical Computing, in Proc. Int. Conf. New Interfaces for Musical Expression. Universidade Federal do Rio Grande do Sul. pp. 216-221.

Xambo Sedo, Anna; Støckert, Robin; Jensenius, Alexander Refsum & Saue, Sigurd (2019). Facilitating Team-Based Programming Learning with Web Audio. In Proc. International Web Audio Conference.

2018

Baldwin, A., Serafin, S., & Erkut, C. (2018). Towards the design and evaluation of delay-based modeling of acoustic scenes in mobile augmented reality. In Proc. 2018 IEEE 4th VR Workshop on Sonic Interactions for Virtual Environments (SIVE), pp. 1-5.

Dahl, S., & Sioros, G. (2018). Rhythmic recurrency in dance to music with ambiguous meter. In Proceedings of the 5th International Conference on Movement and Computing (pp. 38:1–38:6). New York, NY, USA: ACM.

Erkut, C., & Dahl, S. (2018). Incorporating virtual reality in an embodied interaction course. In Proceedings of the 5th International Conference on Movement and Computing, pp. 45:1–45:6. New York, NY, USA: ACM.

Esqueda, F., Lähdeoja, O., & Välimäki, V. (2018). Algorithms for guitar-driven synthesis: Application to an augmented guitar. In Proc. 15th Sound and Music Computing Conference (SMC-18), pp. 444–451, Limassol, Cyprus.

Frid, E., Elblaus, L., & Bresin, R. (2018). Interactive sonification of a fluid dance movement: an exploratory study, Journal on Multimodal User Interfaces.

Frid, E., Moll, J., Bresin, R., & Sallnäs Pysander, E-L., (2018). Haptic feedback combined with movement sonification using a friction sound improves task performance in a virtual throwing task, Journal on Multimodal User Interfaces.

Geronazzo, M., Sikström, E., Kleimola, J., Avanzini, F., De Götzen, A., & Serafin, S. (2018, October). The impact of an accurate vertical localization with HRTFs on short explorations of immersive virtual reality scenarios. In Proc. 2018 IEEE International Symposium on Mixed and Augmented Reality (ISMAR) (pp. 90-97).

Gonzalez Sanchez, V. E., Martin, C. P., Zelechowska, A., Bjerkestrand, K. A. V., Johnson, V. & Jensenius, A. R. (2018). Bela-based augmented acoustic guitars for sonic microinteraction. In L. Dahl, D. Bowman & T. Martin (Eds.), Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 324-327.

Gonzalez Sanchez, V. E., Zelechowska, A. & Jensenius, A. R. (2018). Muscle activity response of the audience during an experimental music performance, In S. Cunningham (ed.), Proceedings of the Audio Mostly 2018 on Sound in Immersion and Emotion. Association for Computing Machinery (ACM).

Holbrook, U. A. S. (2018). An approach to stochastic spatialisation: A case of hot pocket. In L. Dahl, D. Bowman & T. Martin (Eds.), Proceedings of the International Conference on New Interfaces for Musical Expression. Virginia Tech, pp. 31-32.

Holzapfel, A., Sturm, B. & Coeckelbergh, M. (2018). Ethical dimensions of music information retrieval technology. Transactions of the International Society for Music Information Retrieval, vol. 1, no. 1, pp. 44-55.

Jensenius, A. R. (2018). The Musical Gestures Toolbox for Matlab. In E. Gómez, X. Hu, E. Humphrey & E. Benetos (Eds.), Proceedings of the 19th International Society for Music Information Retrieval Conference. Institut de Recherche et Coordination Acoustique/Musique.

Kelkar, T., Roy, U. & Jensenius, A. R. (2018). Evaluating a collection of sound-tracing data of melodic phrases. In E. Gómez, X. Hu, E. Humphrey & E. Benetos (Eds.), Proceedings of the 19th International Society for Music Information Retrieval Conference. Institut de Recherche et Coordination Acoustique/Musique.

Lartillot, O., Thedens, H.-H. & Jensenius, A. R. (2018). Computational model of pitch detection, perceptive foundations, and application to Norwegian fiddle music, In R. Parncutt & S. Sattmann (ed.), Proceedings of ICMPC15/ESCOM10. Centre for Systematic Musicology, University of Graz.

Lokki, T., Müller, M., Serafin, S. & Välimäki, V. (2018). Special Issue on Sound and Music Computing. Applied Sciences 8(4), 518

Mäkivirta, A., Liski, J. & Välimäki, V., (2018). Modeling and delay-equalizing loudspeaker responses. Journal of the Audio Engineering Society 66(11), pp. 922-934.

Martin, C. P., Jensenius, A. R. & Tørresen, J. (2018). Composing an ensemble standstill work for Myo and Bela. In L. Dahl, D. Bowman & T. Martin (Eds.), Proceedings of the International Conference on New Interfaces for Musical Expression. Virginia Tech.

Hoffmann, R., Spagnol, S., Kristjánsson, Á., & Unnthorsson, R. (2018). Evaluation of an audio-haptic sensory substitution device for enhancing spatial awareness for the visually impaired. Optometry and Vision Science, 95(9), 757.

Polak, R., Jacoby, N., Fischinger, T., Goldberg, D. & Holzapfel, A. (2018). Rhythmic Prototypes Across Cultures: A Comparative Study of Tapping Synchronization. Music Perception, vol. 36, no. 1, pp. 1-23.

Schlecht, S., Alary, B., Välimäki, V. & Habets, E. A. P., (2018). Optimized velvet-noise decorrelator. In Proceedings of the 21st International Conference on Digital Audio Effects (DAFx-18), pp. 87–94, Aveiro, Portugal.

Schoeller, F., Zenasni, F., Bertrand, P., Gerry, L. J., Jain, A., & Horowitz, A. H. (2018). Combining virtual reality and biofeedback to foster empathic abilities in humans. Frontiers in Psychology, 9, 2741.

Serafin, S., Geronazzo, M., Erkut, C., Nilsson, N. C., & Nordahl, R. (2018). Sonic interactions in virtual reality: State of the art, current challenges, and future directions. IEEE Computer Graphics and Applications, 38(2), 31-43.

Serafin, S., Dahl, S., Bresin, R., Jensenius, A. R., Unnþórsson, R., & Välimäki, V. (2018). NordicSMC: A Nordic university hub on sound and music computing. In Proceedings of the 15th Sound and Music Computing Conference (SMC-18), pp. 130–134, Limassol, Cyprus. https://doi.org/10.5281/zenodo.1422528

Simionato, R., Liski, J., Välimäki, V. & Avanzini, F. (2018). A virtual tube delay effect. In Proceedings of the 21st International Conference on Digital Audio Effects (DAFx-18), pp. 361–368, Aveiro, Portugal.

Spagnol, S., Hoffmann, R., Martínez, M. H., & Unnthorsson, R. (2018). Blind wayfinding with physically-based liquid sounds. International Journal of Human-Computer Studies 115, pp. 9-19.

Tissieres, J., Vaseileiou, A., Zabetian, S., Delgado, P., Dahl, S. & Serafin, S. (2018). An expressive multidimensional physical modelling percussion instrument. In Proceedings of the 15th Sound and Music Computing Conference (SMC-18), pp. 339–346, Limassol, Cyprus. https://doi.org/10.5281/zenodo.1422605

Välimäki, V., Rämö, J. & Esqueda, F. (2018). Creating endless sounds. In Proceedings of the 21st International Conference on Digital Audio Effects (DAFx-18), pp. 32–39, Aveiro, Portugal.