NordicSMC day

April 16th from 13:00 CET + MusicLab 6 from 19:00

SESSION 1: Sound design
13:00-13:30
Session chair: Stefania Serafin

5-minute presentations plus discussions

Derek Holzer

Sounds of Futures Passed: Media Archaeology and Design Fiction as NIME Methodologies

Silvin Willemsen

Physical Modelling of Musical Instruments for Real-Time Sound Synthesis and Control

Prithvi Kantan

Design of musical feedback for post-stroke movement rehabilitation

Adrian Benigno Latupeirissa

Mediating Postphenomenological Relations in Human-Robot Communication With Sound


SESSION 2: Spatial sound and virtual acoustics - Part 1
13:45-14:15
Session chair: Alexander Refsum Jensenius

Dana Swarbrick

Measuring the virtual concert experience: Surveys, social connection, and motion

Michael McCrea

Acoustic scene analysis and source extraction using sector-based sound field decomposition across multiple high order microphones

Finnur Pind

Wave-based virtual acoustics 

Riccardo Miccini

A hybrid approach to structural modeling of individualized HRTFs


SESSION 3: Music and cochlear implants
14:30-14:50
Session chair: Rúnar Unnþórsson

Elvar Atli Ævarsson 

Enhancing the music listening enjoyment of cochlear implant recipients

Razvan Paisa

Vibrotactile augmentation of music for postlingual cochlear implant users


SESSION 4: Spatial sound and virtual acoustics - Part 2
15:00-15:20
Session chair: Vesa Välimäki

Ulf Holbrook

The landscape and topographical questions in spatial audio

Karolina Prawda

Towards more accurate estimation of room acoustic parameters

Joel Lindfors

Spatial speaker correction


SESSION 5: Musical applications
15:30-16:00
Session chair: Roberto Bresin

Claudio Panariello

Adaptive Behaviour in Musical Context

Çağrı Erdem

Who Makes The Sound? Agency, Surprise and Embodiment in Performing Music with Artificial Musical Intelligence

Qichao Lan

Glicol: Graph-oriented Live Coding Language Written in Rust



MusicLab 6: Human-Machine Improvisation
19:00 - 21:00

This 6th edition of MusicLab focuses on musical interactions between humans and machines, featuring prominent musicians from Norway’s improvisation scene: Christian Winther (guitar) and Dag Erik Knedal Andersen (drums) will play with CAVI, an artificial intelligence-enabled interactive music system developed by Çağrı Erdem.


Abstracts and Bios

Derek Holzer

Sounds of Futures Passed: Media Archaeology and Design Fiction as NIME Methodologies

This paper provides a study of a cooperative learning workshop which invited composers, musicians, and sound designers to explore instruments from the history of electronic sound in Sweden. The workshop applied media archaeology methods towards analyzing one particular instrument from the past, the Dataton System 3000. It then applied design fiction methods towards imagining several speculative instruments of the future. Each stage of the workshop revealed very specific utopian ideas surrounding the design of sound instruments. Here, we lay out the intellectual, material, and social conditions of the workshop, along with a selection of the workshop’s outcomes, and some reflections on their significance for the NIME community. By presenting this method-in-progress, our intent is to inspire dialog around the premise that the linked examination of historical electronic sound technology’s affordances and ethics can inform contemporary instrument design practices.

Derek Holzer (USA 1972) is a PhD researcher in Sound and Music Computing at the Royal Institute of Technology (KTH) in Sweden, focusing on historically informed sound synthesis design. His project, "Sounds of Futures Passed", is a cooperation between KTH, the Royal College of Music (KMH), Statens Musikverket, and Elektronmusikstudion (EMS), with support from the Swedish Research Council/Vetenskapsrådet.
Holzer has performed live, taught workshops and created scores of unique instruments and installations since 2002 across Europe, North and South America, and New Zealand.
http://macumbista.net/

Silvin Willemsen

Physical Modelling of Musical Instruments for Real-Time Sound Synthesis and Control

The goal of my PhD project is to push the state of the art in the field of physical modelling for real-time musical instrument simulation. Using a physical model rather than, for example, sampling synthesis, makes the playability of the digital instrument very flexible.  

“Why not just use the real instrument?” you might ask. Many interesting instruments exist that can no longer be played, either because they are too damaged or because they are too rare and valuable as museum pieces. Physical modelling allows these instruments to be resurrected virtually and brought back to a greater audience. Furthermore, instrument simulations could potentially extend traditional instruments in ways that would be impossible in the real world. Changing the shape, size or material of the instrument over time can produce interesting sounds and could even extend the musician’s possibilities for expression.
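
As a rough illustration of what such a physical model can look like in code (a minimal sketch of an ideal string, not Willemsen’s actual models, which use far more refined schemes), consider the following finite-difference simulation:

```python
import numpy as np

# Minimal physical model: an ideal string (1-D wave equation) simulated with
# a finite-difference scheme and "plucked" with a raised-cosine displacement.
fs = 44100            # audio sample rate [Hz]
k = 1 / fs            # time step [s]
L, c = 1.0, 200.0     # string length [m] and wave speed [m/s]
h = c * k             # grid spacing at the stability (Courant) limit
N = int(L / h)        # number of grid intervals
lam2 = (c * k / h) ** 2   # Courant number squared (here exactly 1)

u = np.zeros(N + 1)        # displacement at time step n
u_prev = np.zeros(N + 1)   # displacement at time step n-1

# Pluck: raised cosine centred at one third of the string, zero velocity.
loc, width = N // 3, N // 10
u[loc - width:loc + width] = 0.5 * (1 + np.cos(np.linspace(-np.pi, np.pi, 2 * width)))
u_prev[:] = u

out = np.zeros(fs)    # one second of output from a virtual "pickup"
for n in range(fs):
    u_next = np.zeros(N + 1)
    # Update interior points; fixed (Dirichlet) ends stay at zero.
    u_next[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                    + lam2 * (u[2:] - 2 * u[1:-1] + u[:-2]))
    u_prev, u = u, u_next
    out[n] = u[int(0.8 * N)]
```

Because the model is physical, changing c, L or the excitation while the loop runs changes the sound in physically meaningful ways, which is exactly the flexibility referred to above.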

Silvin Willemsen is a PhD student at Aalborg University (AAU) and holds a BSc in Industrial Design from TU Eindhoven and an MSc in Sound and Music Computing from AAU. He is part of the Multisensory Experience Lab, where he combines his interests in music, mathematics and computer science to create real-time simulations of musical instruments, expressively controlled by devices such as the Sensel Morph and the Phantom OMNI.

Razvan Paisa

Vibrotactile augmentation of music for postlingual cochlear implant users

The potential benefits of listening to music have received widespread attention in the academic world as well as the mainstream media. While some anecdotal benefits have been repeatedly disproved (e.g. the supposed link between classical music and academic performance in children), research has focused on the psychological, emotional and social benefits. These studies indicate that music provides a platform for multifaceted self-related thoughts focusing on emotions and sentiments, escapism, coping, solace or the meaning of life. Unfortunately, not everybody has access to these benefits, and one group of people in particular is affected: the hearing impaired.

Hearing impairment is a problem of significant prevalence in the population that can affect individual personality and social life, and is linked to higher levels of isolation, seclusion and anxiety. According to the World Health Organization, almost half a billion people suffer from disabling hearing loss, with a predicted 900 million by the year 2050.

In cases of severe hearing loss, the most frequent solution is a cochlear implant (CI) – a neuro-prosthetic device that replaces the damaged inner ear with electro-acoustic devices. These implants electrically stimulate the auditory nerve, bypassing the faulty ear. The quality of post-implant hearing lies on a wide spectrum and is influenced by the size of the electrode array, the health of the patient, neuroplasticity, the quality of the surgery, the period of pre-implant hearing loss and the amount of residual hearing. While the majority of CI users achieve good speech perception in low-noise environments, music perception is particularly affected. Nevertheless, most users can enhance their listening experience to some extent by following visual cues such as the lyrics of a song, or by using haptic devices that convert music to touch.

The aim of this project is to explore the possibility of augmenting music with vibrotactile stimuli so that CI users would find more enjoyment in music. The focus will be on designing and evaluating tightly integrated hardware and software systems that aim to improve the perception of music characteristics such as melody and timbre. As a result, the project will develop novel methodologies for multisensory signal processing, focusing on the physiological and perceptual characteristics of touch and hearing.

Development will follow a rapid prototyping approach, using tools available in fab labs, including 3D printers, laser cutters, CNC milling machines and vacuum molders. Electronics prototyping will rely on off-the-shelf development boards and high-fidelity actuators.

Razvan is a PhD student coming out of the SMC program at AAU Copenhagen, focusing his efforts on improving the music listening experience of cochlear implant users. He is part of the Multisensory Experience Lab team, where he has conducted several studies in musical haptics, the replication of old musical instruments, and wave field synthesis.

Prithvi Kantan

Design of musical feedback for post-stroke movement rehabilitation

My goal is to systematically investigate the design of musical feedback for post-stroke movement rehabilitation. To regain physical function after a stroke, prompt and extensive rehabilitation is crucial. An emerging rehabilitation technology is biofeedback, which involves measuring bodily activity and converting it into a perceivable stimulus (vision, sound or touch) to give the individual greater bodily awareness and control. Auditory biofeedback in this context converts movement measurements into sound signals to provide concurrent feedback, which has been found to improve patients’ movement performance. However, the sounds used are typically basic (noise or simple tones), making the feedback annoying and fatiguing over long periods. The use of music has recently emerged as a promising solution to this problem, given its potential as a motivational and movement-inducing tool.

The research will investigate musical biofeedback in the contexts of balance, sit-to-stand and gait training, specifically addressing the following challenges:

  • It is unclear what aspects of movement should be sonified (fed back to patients) in different training contexts.
  • There are infinite ways to convert movement to music, and it is unclear how to do this in a manner that is intuitive for patients, clearly conveys important movement information in a given training context, and also enhances motivation and engagement.
  • Finally, we do not know how impactful such feedback can potentially be in terms of movement benefits.

The work will focus on technology development, collaborative musical biofeedback design and experimental evaluation. This research will provide a necessary theoretical and empirical foundation for the future design and application of musical feedback in stroke rehabilitation protocols.
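
As a purely hypothetical illustration of such a movement-to-music mapping (the scale, range and function below are illustrative assumptions, not the project’s actual design), a movement measurement could be quantized onto a pentatonic scale:

```python
import numpy as np

# Hypothetical movement-to-music mapping: a normalized movement value
# (e.g., trunk inclination during a sit-to-stand exercise) is quantized
# onto a pentatonic scale so larger movement produces higher pitch.
PENTATONIC = np.array([0, 2, 4, 7, 9])   # scale degrees in semitones

def movement_to_pitch(x, base_midi=60):
    """Map a movement value x in [0, 1] to a MIDI note, spanning two octaves."""
    x = float(np.clip(x, 0.0, 1.0))
    step = int(round(x * (2 * len(PENTATONIC) - 1)))
    octave, degree = divmod(step, len(PENTATONIC))
    return base_midi + 12 * octave + int(PENTATONIC[degree])

def midi_to_hz(m):
    return 440.0 * 2 ** ((m - 69) / 12)

# Sonify a simulated slow rising movement.
for x in np.linspace(0, 1, 5):
    print(f"movement {x:.2f} -> {midi_to_hz(movement_to_pitch(x)):.1f} Hz")
```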

Prithvi Kantan holds a bachelor’s degree in Electronics and Telecommunications Engineering from Mumbai University and an MSc in Sound and Music Computing from Aalborg University, Copenhagen, where he has been pursuing a PhD since January 2021. His primary interests lie in the research and development of augmented musical feedback for the healthcare domain, but he has also worked on music information retrieval, rhythm perception and sensorimotor synchronization.



Riccardo Miccini 

A hybrid approach to structural modeling of individualized HRTFs

Individual HRTFs are paramount for a VR experience free from localization and perceptual artifacts. However, acoustic measurements are often strenuous and expensive.
We present a hybrid approach to HRTF modeling using three anthropometric measurements and an image of the pinna contours. A prediction algorithm based on variational autoencoders synthesizes a pinna response from the contours, which is used to filter a measured head-and-torso response. The ITD is then manipulated to match that of the HUTUBS dataset subject that minimizes the predicted localization error.
Although evaluation with a perceptual localization model proves inconclusive, the performance in terms of spectral distortion shows promising results.
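
A structure-only sketch of how such a hybrid response might be assembled (the dummy data and parameter values below are placeholders, not the authors’ code or the HUTUBS measurements):

```python
import numpy as np
from scipy.signal import fftconvolve

fs = 48000  # sample rate [Hz]

def assemble_hrir(torso_ir, pinna_ir, itd_seconds):
    """Filter a measured head-and-torso IR with a predicted pinna response,
    then delay the contralateral ear by the chosen ITD."""
    ear = fftconvolve(torso_ir, pinna_ir)            # cascade the two responses
    delay = int(round(itd_seconds * fs))             # ITD as a sample delay
    contra = np.concatenate([np.zeros(delay), ear])  # far ear arrives later
    ipsi = np.concatenate([ear, np.zeros(delay)])    # pad to equal length
    return ipsi, contra

# Dummy decaying-noise IRs standing in for the measured/predicted responses.
torso_ir = np.random.randn(256) * np.exp(-np.arange(256) / 32)
pinna_ir = np.random.randn(64) * np.exp(-np.arange(64) / 8)
left, right = assemble_hrir(torso_ir, pinna_ir, itd_seconds=0.0004)
```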

Riccardo Miccini is a Sound and Music Computing graduate from Aalborg University in Copenhagen. During his Master’s, he researched applications of deep neural networks for sound synthesis, speech enhancement, and HRTF individualization. His research interests revolve around applications of deep learning, data science, and data visualization.

Adrian Benigno Latupeirissa

Mediating Postphenomenological Relations in Human-Robot Communication With Sound


As robots permeate our lives, postphenomenological perspectives have been used to examine the relations between humans and robots. My research investigates the application of sound as a medium of communication in such relations.

Adrian’s main research interests are sound in interaction, interaction design, and human-robot interaction.
He has been employed as a PhD student at the Division of Media Technology and Interaction Design (MID), School of Electrical Engineering and Computer Science (EECS), since 2018. He is part of the Sound and Music Computing research group, working in the SONAO project to explore new methods for augmenting robot movements with sound.
In 2018 he completed a Master’s in Interactive Media Technology at KTH, exploring the intersection between physical interaction design and sound in interaction, with a thesis on prototyping an e-textile interface for music interaction.

Claudio Panariello

Adaptive Behaviour in Musical Context

The doctoral project proposes the study and implementation of a musical adaptive system, and the investigation of the system’s interaction with the audience through the medium of sound art installations. Adaptive systems are increasingly used in the musical interaction field because of their flexibility in adapting to different and unexpected environmental perturbations, thus creating a connection between the audience and the system itself. Preliminary studies and research have been conducted to build the background required for the project, also exploring connections with other projects. Future work will focus more on the problem of musical style and creativity for such adaptive systems and on the implementation of the results found so far.

Claudio is a PhD student in the Sound and Music Computing group at the School of Electrical Engineering and Computer Science (EECS), Division of Media Technology and Interaction Design.
His main fields of interest as a composer and researcher are feedback and musical adaptive systems, focusing on the problem of emergence and style.


Qichao Lan

Glicol: Graph-oriented Live Coding Language Written in Rust

How can we define and design text-based musical interactions? To investigate this question, a new computer music programming language, Glicol (an acronym for graph-oriented live coding language), has been developed. As its name suggests, Glicol has a unique syntax for representing an audio graph, and it is optimised for live performances. Glicol is written in Rust and runs in web browsers at near-native speed using WebAssembly. User studies of Glicol are in progress, and we are keen to see how this language can help more people without a programming background get started with live coding.
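
As a conceptual analogy only (the sketch below is Python, not Glicol code), the graph-oriented idea of chaining audio nodes with an operator can be expressed like this:

```python
# Conceptual analogy in Python (not Glicol): nodes connected with >> form
# an audio graph, similar in spirit to Glicol's chaining syntax.
class Node:
    def __init__(self, name, **params):
        self.name, self.params, self.upstream = name, params, None

    def __rshift__(self, other):
        other.upstream = self    # feed this node's output into the next one
        return other             # return the tail so chains keep composing

    def describe(self):
        chain, node = [], self
        while node:
            chain.append(f"{node.name}{node.params}")
            node = node.upstream
        return " <- ".join(chain)

# A sine oscillator routed through a gain node, described as a graph.
graph = Node("sin", freq=440) >> Node("mul", gain=0.5)
print(graph.describe())   # mul{'gain': 0.5} <- sin{'freq': 440}
```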

Qichao Lan works as a PhD research fellow at the RITMO centre at the University of Oslo. As a computer musician, he specialises in audio programming, live coding, new instrument design, and music AI. He also publishes open-source software and performs live coding under the name ‘chaosprint’.

https://people.uio.no/qichaol

Ulf Holbrook

The landscape and topographical questions in spatial audio

Spatial audio is a set of techniques and technologies for moving sound around a listener in three dimensions. This presentation discusses spatial audio from discursive and practice-based perspectives with a focus on ambisonics and wave field synthesis. The primary focus is how place and the landscape can be represented through spatial audio.
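
For readers unfamiliar with ambisonics, the textbook first-order (B-format) encoding equations give a flavour of the technique (a generic sketch, not the speaker’s own material):

```python
import numpy as np

# Textbook first-order ambisonic (B-format) encoding: a mono signal is
# panned to a direction by weighting it with the spherical harmonics.
def encode_bformat(s, azimuth, elevation):
    """Encode mono signal s at the given angles (radians, azimuth 0 = front)."""
    w = s / np.sqrt(2)                            # W: omnidirectional
    x = s * np.cos(azimuth) * np.cos(elevation)   # X: front-back
    y = s * np.sin(azimuth) * np.cos(elevation)   # Y: left-right
    z = s * np.sin(elevation)                     # Z: up-down
    return np.stack([w, x, y, z])

# Example: place one second of quiet noise 90 degrees to the left.
fs = 48000
sig = 0.1 * np.random.randn(fs)
b = encode_bformat(sig, azimuth=np.pi / 2, elevation=0.0)
```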

Ulf A. S. Holbrook is a researcher, composer, programmer and artist working with sound in a variety of mediums. His research and artistic practice involve an interest in representations of space and place through sound, spatial audio and software.

https://people.uio.no/ulfah

Dana Swarbrick

Measuring the virtual concert experience: Surveys, social connection, and motion

Virtual concerts grew in popularity during the coronavirus crisis. In a series of studies, we examined the effects of virtual concerts on social connection and motion. The Corona Concerts project gathered 300 survey responses to understand what concert characteristics promote social connection and kama muta (feeling moved). The Experimental Sessions project aimed to manipulate agency, presence, and social context to determine their effects on social connection in a virtual concert. The MusicLab Algorave project leveraged participants’ own mobile phones to measure their motion during a virtual concert. Together, these studies contribute to understanding social connection in virtual environments.

Dana Swarbrick is a doctoral researcher at RITMO whose research interests include embodied music cognition, entrainment, and social psychology. Her Bachelor’s thesis involved using motion capture to measure head movements at a rock concert. In her spare time, she makes folk-rock music with her band Dana & The Monsters.

https://people.uio.no/danasw

Çağrı Erdem

Who Makes The Sound? Agency, Surprise and Embodiment in Performing Music with Artificial Musical Intelligence

Is it possible that humans and machines share agency in music performance? Imagine playing the guitar, for example. It is the skilled player who makes the sound, and surprising rhythms or pitches are often unwanted. What about when you include machines? My PhD focuses on what computational approaches to generating sound and music can afford in terms of control, in contrast to traditional acoustic instruments. In this talk, I will present snippets from my research-creation on designing and performing music with machines, mainly using artificial intelligence methods for embodied interaction.

Çağrı Erdem is a performer and music technologist. His MSc focused on developing new musical interfaces to extract body movements in the form of biosignals. As a PhD fellow at RITMO, he expands his research focusing on embodied cognitive approaches to defining, developing and playing with artificial musical intelligence.

https://people.uio.no/cagrie

Elvar Atli Ævarsson

Enhancing the music listening enjoyment of cochlear implant recipients

Cochlear implant (CI) devices have improved the quality of life of hundreds of thousands of deaf individuals by enabling them to hear and understand speech in quiet environments. However, a necessary reduction in spectral information performed by the CI processor significantly affects the CI recipients’ perception and enjoyment of music. The ACUTE (ACoUstic and Tactile Engineering) group at the University of Iceland aims at circumventing these limitations of the CI by providing an additional tactile stimulation alongside musical playback. This includes designing an encoding scheme to map musical properties to vibrations of various amplitudes and frequencies and designing and building a vibrotactile display to convey this information to the skin at the appropriate body locations. In this talk, I will present our current work and research towards this objective.
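
A hypothetical example of such an encoding scheme (the frequency band and mapping below are illustrative assumptions, not the ACUTE group’s actual design):

```python
import numpy as np

# Hypothetical musical-property-to-vibration encoding: fold pitch down into
# a band the skin can resolve (vibrotactile sensitivity is roughly best
# around 250 Hz) and map note loudness to vibration amplitude.
TACTILE_LOW, TACTILE_HIGH = 50.0, 400.0   # assumed usable band [Hz]

def note_to_vibration(f0, loudness):
    """Map a note's fundamental f0 [Hz] and loudness in [0, 1] to a
    vibrotactile frequency [Hz] and normalized drive amplitude."""
    f = f0
    while f > TACTILE_HIGH:    # transpose down by octaves into the band
        f /= 2.0
    f = max(f, TACTILE_LOW)
    return f, float(np.clip(loudness, 0.0, 1.0))

print(note_to_vibration(880.0, 0.8))   # A5 -> (220.0, 0.8)
```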

Elvar Atli Ævarsson worked as an electronics technician for many years, specialising in professional sound system installation. He completed his MSc degree in electrical and computer engineering at the University of Iceland in 2020, having spent time as an exchange student at the Technical University of Denmark (DTU) taking acoustical engineering classes. He is currently a PhD student in industrial engineering at the University of Iceland, working with the ACUTE group and focusing on audio-tactile integration.

Finnur Pind

Wave-based virtual acoustics 

Wave-based simulation methods offer high-precision renderings of the acoustics of virtual domains, but have historically been limited to small rooms and very low frequencies due to the significant computational cost involved. In this talk I will present our research on wave-based virtual acoustics, where we have leveraged the latest advances in applied mathematics, high-performance computing and acoustic modeling to enable large-scale and highly accurate renderings of the acoustics of spaces.
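
Once a solver has produced a room impulse response, the standard rendering (auralization) step is a convolution; the sketch below is generic and independent of any particular solver:

```python
import numpy as np
from scipy.signal import fftconvolve

# Generic auralization: convolve a dry signal with a simulated room impulse
# response (RIR). Here a decaying-noise stand-in replaces a solver's output.
fs = 48000
rir = np.random.randn(fs) * np.exp(-np.arange(fs) / (0.5 * fs))
dry = 0.1 * np.random.randn(2 * fs)            # two seconds of dry signal

wet = fftconvolve(dry, rir)                    # the signal heard in the room
wet /= np.max(np.abs(wet))                     # normalize to avoid clipping
```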

Dr. Finnur Pind received his MSc in acoustical engineering in 2013 from the Technical University of Denmark (DTU), and his PhD from the same institution in 2020. His PhD research was centered on virtual acoustics and was done in collaboration with the architectural studio Henning Larsen. Between his MSc and PhD studies, Finnur was an acoustic consultant in the building industry for some three years, and before entering the world of acoustics he was a software engineer in the telecom industry. His research interests include wave-based (numerical) acoustic simulations, acoustic virtual reality, room surface modeling, high-performance computing and spatial audio. He is currently a postdoctoral researcher in the ACUTE (Acoustics and Tactile Engineering) group at the University of Iceland and co-founder and CEO of Treble Technologies, which develops state-of-the-art virtual acoustics software.

Karolina Prawda

Towards more accurate estimation of room acoustic parameters

Reverberation time is one of the most important parameters describing the behavior of sound in a space. Techniques for reverberation prediction began emerging over a century ago, and numerous formulas aimed at improving the accuracy of such predictions have since been introduced. None of them, however, provides estimates of reverberation time accurate enough to be used with confidence. My present research aims at determining which of the reverberation prediction models is the most accurate when applied to a wide range of spaces with different sound-absorption conditions. It also addresses the problem of including air absorption in reverberation-time estimation. Additionally, a method for facilitating acoustic measurements is introduced.
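
For context, two of the classical prediction formulas in question, Sabine’s and Eyring’s, including the 4mV air-absorption term, take the following textbook forms (these are the existing models, not the new results of this research):

```python
import numpy as np

# Classical reverberation-time predictors, with the 4mV air-absorption term.
def rt60_sabine(V, S, alpha, m=0.0):
    """Sabine: T = 0.161 V / (S*alpha + 4mV), with V in m^3 and S in m^2."""
    return 0.161 * V / (S * alpha + 4 * m * V)

def rt60_eyring(V, S, alpha, m=0.0):
    """Eyring: T = 0.161 V / (-S ln(1 - alpha) + 4mV)."""
    return 0.161 * V / (-S * np.log(1 - alpha) + 4 * m * V)

# Example: a 200 m^3 room with 210 m^2 of surface and mean absorption 0.2.
V, S, a = 200.0, 210.0, 0.2
print(rt60_sabine(V, S, a), rt60_eyring(V, S, a))   # ~0.77 s vs ~0.69 s
```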

Karolina Prawda received her B.Sc. and M.Sc. in Acoustic Engineering from AGH University of Science and Technology in Kraków, Poland, in 2016 and 2017, respectively. The acoustic panel for variable acoustics described in her Master’s thesis, “Acoustic system with adjustable parameters”, was patented in 2018. She is currently a doctoral candidate at the Department of Signal Processing and Acoustics of Aalto University in Espoo, Finland, researching reverberation and room acoustics. Her research interests include artificial reverberation algorithms, the effect of the sound propagation medium on acoustic parameters, and variable acoustics. She is a member of the Polish Section of the Audio Engineering Society.

Joel Lindfors

Spatial speaker correction

A room’s acoustics are a prominent attribute of the sound experienced through a loudspeaker. The dimensions of the room, as well as its specific surfaces and surface materials, have a defining effect on what a listener hears when listening to audio through speakers in a room. Moreover, the position of the listener affects the sound as well, due to standing waves and the distance to specific surfaces. Current state-of-the-art methods of accounting for the coloration of the room assume a specific listening position, which is then corrected for. In addition to normal speaker room calibration methods, my research aims to take the position of the listener into account via a camera and face tracking. This allows for location-specific calibration of the speakers in real time and arguably a truer representation of the audio.
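
A heavily simplified, hypothetical sketch of the idea (a real system would interpolate between measured positions and update filters smoothly rather than hard-switching):

```python
import numpy as np

# Hypothetical position-dependent correction: filters are measured on a grid
# of listening positions, and the filter nearest the tracked listener
# position (e.g., from camera face tracking) is selected.
measured_positions = np.array([[-1.0, 2.0], [0.0, 2.0], [1.0, 2.0]])      # x, y [m]
correction_filters = [np.random.randn(512) for _ in measured_positions]   # stand-ins

def filter_for_listener(xy):
    """Return the correction FIR measured closest to the listener position."""
    dists = np.linalg.norm(measured_positions - np.asarray(xy), axis=1)
    return correction_filters[int(np.argmin(dists))]

fir = filter_for_listener([0.2, 1.9])   # listener slightly right of centre
```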

Joel Lindfors is a 27-year-old Acoustics and Audio Technology Master’s student with a history in audio production. He has a Bachelor’s degree in electrical engineering. His current research aims to solve problems he has seen and faced in daily practice when working on audio.

Michael McCrea

Acoustic scene analysis and source extraction using sector-based sound field decomposition across multiple high order microphones

This talk will present ongoing work in characterising complex acoustic scenes through the use of sector-based sound field decomposition in order to extract source signals of interest. In particular, the task of localising multiple sound sources in 3-D space is addressed through energetic sound field analysis of spherical harmonic signals produced by distributed, ideal microphone arrays. The technique is evaluated under conditions of concurrent sound sources as well as in noisy and reverberant environments. Potential applications will be discussed briefly, including six-degree-of-freedom sound field navigation in virtual environments, audio coding and transmission, and speech enhancement.
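
As background, a heavily simplified first-order relative of such energetic analysis estimates direction of arrival from the time-averaged intensity of B-format signals (a generic sketch, not the sector-based, higher-order method presented in the talk):

```python
import numpy as np

# Simplified energetic sound-field analysis: with first-order (B-format)
# signals, the time-averaged products of W with X, Y, Z give an intensity-
# like vector pointing toward the source, i.e. a direction-of-arrival estimate.
def estimate_doa(w, x, y, z):
    v = np.array([np.mean(w * x), np.mean(w * y), np.mean(w * z)])
    return v / np.linalg.norm(v)

# Self-test: encode noise arriving from the left (+y) and recover it.
s = np.random.randn(48000)
az = np.pi / 2
w, x, y, z = s / np.sqrt(2), s * np.cos(az), s * np.sin(az), 0.0 * s
print(estimate_doa(w, x, y, z))   # approximately [0, 1, 0]
```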

Michael McCrea is a Master’s student of Acoustics and Audio Technology at Aalto University studying sound field decomposition, head-related acoustics, and machine learning. Prior to attending Aalto, he was a research scientist at the Centre for Digital Arts and Experimental Media (DXARTS) at the University of Washington pursuing technological research motivated by artistic inquiry. Spatial sound has been a common theme in Michael’s work—from designing steerable ultrasonic speakers, to collaborating on multimedia artworks, to building sound field authoring tools for composers. He will complete his Master’s thesis this Spring.