{"id":279,"date":"2021-03-21T16:45:56","date_gmt":"2021-03-21T16:45:56","guid":{"rendered":"https:\/\/nordicsmc.create.aau.dk\/?page_id=279"},"modified":"2022-04-12T09:52:54","modified_gmt":"2022-04-12T09:52:54","slug":"nordic-smc-day-2","status":"publish","type":"page","link":"https:\/\/nordicsmc.create.aau.dk\/?page_id=279","title":{"rendered":"NordicSMC day"},"content":{"rendered":"\n<pre class=\"wp-block-preformatted\"><strong>April 16th from 13:00 CET&nbsp;+ MusicLab 6 from 19:00<\/strong><\/pre>\n\n\n\n<hr class=\"wp-block-separator\"\/>\n\n\n\n<pre class=\"wp-block-preformatted\"><strong>SESSION 1: Sound design<\/strong>\n13:00-13:30\nSession chair: Stefania Serafin<\/pre>\n\n\n\n<p>5-minute presentations plus discussion<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\"><p><strong>Derek Holzer<\/strong><\/p><p>Sounds of Futures Passed: Media Archaeology and Design Fiction as NIME Methodologies<\/p><\/blockquote>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\"><p><strong>Silvin Willemsen<\/strong><\/p><p>Physical Modelling of Musical Instruments for Real-Time Sound Synthesis and Control<\/p><\/blockquote>\n\n\n\n<figure class=\"wp-block-video\"><video controls src=\"https:\/\/nordicsmc.create.aau.dk\/wp-content\/uploads\/2022\/04\/SilvinNordicSMCprez_1.mp4\"><\/video><\/figure>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\"><p><strong>Prithvi Kantan<\/strong><\/p><p>Design of musical feedback for post-stroke movement rehabilitation<\/p><\/blockquote>\n\n\n\n<figure class=\"wp-block-video\"><video controls src=\"https:\/\/nordicsmc.create.aau.dk\/wp-content\/uploads\/2021\/05\/1.3-Prithvi-Kantan.m4v\"><\/video><\/figure>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\"><p><strong>Adrian Benigno Latupeirissa<\/strong><\/p><p>Mediating Postphenomenological Relations in Human-Robot
Communication With Sound<\/p><\/blockquote>\n\n\n\n<figure class=\"wp-block-video\"><video controls src=\"blob:https:\/\/nordicsmc.create.aau.dk\/232b434e-407d-4240-a27d-ac0a58f223b8\"><\/video><\/figure>\n\n\n\n<hr class=\"wp-block-separator\"\/>\n\n\n\n<pre class=\"wp-block-preformatted\"><strong>SESSION 2: Spatial sound and virtual acoustics - Part 1<\/strong><\/pre>\n\n\n\n<pre class=\"wp-block-preformatted\">13:45-14:15\nSession chair: Alexander Refsum Jensenius<\/pre>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\"><p><strong>Dana Swarbrick<\/strong><\/p><p>Measuring the virtual concert experience: Surveys, social connection, and motion<\/p><\/blockquote>\n\n\n\n<figure class=\"wp-block-video\"><video controls src=\"blob:https:\/\/nordicsmc.create.aau.dk\/16416e56-788d-4df9-904e-ae632604e7b8\"><\/video><\/figure>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\"><p><strong>Michael McCrea<\/strong><\/p><p>Acoustic scene analysis and source extraction using sector-based sound field decomposition across multiple high order microphones<\/p><\/blockquote>\n\n\n\n<figure class=\"wp-block-video\"><video controls src=\"https:\/\/nordicsmc.create.aau.dk\/wp-content\/uploads\/2021\/05\/2.2-Michael-McCrea.mp4\"><\/video><\/figure>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\"><p><strong>&nbsp;Finnur Pind<\/strong><\/p><p>Wave-based virtual acoustics&nbsp;<\/p><\/blockquote>\n\n\n\n<figure class=\"wp-block-video\"><video controls src=\"https:\/\/nordicsmc.create.aau.dk\/wp-content\/uploads\/2021\/05\/2.3-Finnur-Pind.mp4\"><\/video><\/figure>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\"><p><strong>&nbsp;Riccardo Miccini&nbsp;<\/strong><\/p><p>A hybrid approach to structural modeling of individualized HRTFs<\/p><\/blockquote>\n\n\n\n<hr class=\"wp-block-separator\"\/>\n\n\n\n<figure 
class=\"wp-block-video\"><video controls src=\"https:\/\/nordicsmc.create.aau.dk\/wp-content\/uploads\/2021\/05\/2.4-Riccardo-Miccini.mp4\"><\/video><\/figure>\n\n\n\n<pre class=\"wp-block-preformatted\"><strong>SESSION 3: Music and cochlear implants<\/strong><\/pre>\n\n\n\n<pre class=\"wp-block-preformatted\">14:30-14:50\nSession chair: R\u00fanar Unn\u00fe\u00f3rsson<\/pre>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\"><p><strong>Elvar Atli \u00c6varsson<\/strong><\/p><p>Enhancing the music listening enjoyment of cochlear implant recipients<\/p><\/blockquote>\n\n\n\n<figure class=\"wp-block-video\"><video controls src=\"https:\/\/nordicsmc.create.aau.dk\/wp-content\/uploads\/2021\/05\/3.1-Elvar-Atli-\u00c6varsson-1.mp4\"><\/video><\/figure>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\"><p><strong>Razvan Paisa<\/strong><\/p><p>Vibrotactile augmentation of music for postlingual cochlear implant users<\/p><\/blockquote>\n\n\n\n<figure class=\"wp-block-video\"><video controls src=\"blob:https:\/\/nordicsmc.create.aau.dk\/d3c1ac77-3c9b-43d4-bf67-7fbd87b35cf3\"><\/video><\/figure>\n\n\n\n<hr class=\"wp-block-separator\"\/>\n\n\n\n<pre class=\"wp-block-preformatted\"><strong>SESSION 4: Spatial sound and virtual acoustics - Part 2<\/strong>\nSession chair: Vesa V\u00e4lim\u00e4ki<\/pre>\n\n\n\n<pre class=\"wp-block-preformatted\">15:00-15:20<\/pre>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\"><p><strong>Ulf Holbrook<\/strong><\/p><p>The landscape and topographical questions in spatial audio<\/p><\/blockquote>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\"><p><strong>Karolina Prawda<\/strong><\/p><p>Towards more accurate estimation of room acoustic parameters<\/p><\/blockquote>\n\n\n\n<figure class=\"wp-block-video\"><video controls
src=\"blob:https:\/\/nordicsmc.create.aau.dk\/5f4fd68d-c570-44f8-bc5f-a676aee8c32d\"><\/video><\/figure>\n\n\n\n<figure class=\"wp-block-video\"><video controls src=\"https:\/\/nordicsmc.create.aau.dk\/wp-content\/uploads\/2021\/05\/4.2-Karolina-Prawda.mp4\"><\/video><\/figure>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\"><p><strong>Joel Lindfors<\/strong><\/p><p>Spatial speaker correction<\/p><\/blockquote>\n\n\n\n<hr class=\"wp-block-separator\"\/>\n\n\n\n<figure class=\"wp-block-video\"><video controls src=\"https:\/\/nordicsmc.create.aau.dk\/wp-content\/uploads\/2021\/05\/4.3-Joel-Lindfors.mp4\"><\/video><\/figure>\n\n\n\n<pre class=\"wp-block-preformatted\"><strong>&nbsp;SESSION 5: Musical applications<\/strong>\nSession chair: Roberto Bresin<\/pre>\n\n\n\n<pre class=\"wp-block-preformatted\">15:30 -16:00<\/pre>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\"><p><strong>Claudio Panariello<\/strong><\/p><p>Adaptive Behaviour in Musical Context<\/p><\/blockquote>\n\n\n\n<figure class=\"wp-block-video\"><video controls src=\"https:\/\/nordicsmc.create.aau.dk\/wp-content\/uploads\/2021\/05\/5.1-Claudio-Panariello.mp4\"><\/video><\/figure>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\"><p><strong>\u00c7a\u011fr\u0131 Erdem<\/strong><\/p><p>Who Makes The Sound? 
Agency, Surprise and Embodiment in Performing Music with Artificial Musical Intelligence<\/p><\/blockquote>\n\n\n\n<figure class=\"wp-block-video\"><video controls src=\"https:\/\/nordicsmc.create.aau.dk\/wp-content\/uploads\/2021\/05\/5.2-\u00c7a\u011fr\u0131-Erdem.mp4\"><\/video><\/figure>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\"><p><strong>Qichao Lan<\/strong><\/p><p>Glicol: Graph-oriented Live Coding Language Written in Rust<\/p><cite><br><\/cite><\/blockquote>\n\n\n\n<hr class=\"wp-block-separator\"\/>\n\n\n\n<pre class=\"wp-block-preformatted\"><strong>MusicLab 6: Human-Machine Improvisation<\/strong><\/pre>\n\n\n\n<pre class=\"wp-block-preformatted\">19:00 - 21:00<\/pre>\n\n\n\n<p>This 6th edition of MusicLab focuses on musical interactions between humans and machines, featuring prominent musicians from Norway\u2019s improvisation scene;&nbsp;Christian Winther (guitar) and Dag Erik Knedal Andersen (drums) will play with an artificial intelligence-enabled interactive music system, CAVI, developed by&nbsp;\u00c7a\u011fr\u0131 Erdem.<\/p>\n\n\n\n<p>More <a href=\"https:\/\/www.uio.no\/ritmo\/english\/news-and-events\/events\/musiclab\/2021\/Human-Machine\/\">Info<\/a><\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Abstracts and Bios               <\/h2>\n\n\n\n<p class=\"has-text-align-right\">   <img loading=\"lazy\" decoding=\"async\" width=\"150\" height=\"206\" class=\"wp-image-288\" style=\"width: 150px;\" src=\"https:\/\/nordicsmc.create.aau.dk\/wp-content\/uploads\/2021\/03\/derek.jpg\" alt=\"\" srcset=\"https:\/\/nordicsmc.create.aau.dk\/wp-content\/uploads\/2021\/03\/derek.jpg 372w, https:\/\/nordicsmc.create.aau.dk\/wp-content\/uploads\/2021\/03\/derek-218x300.jpg 218w\" sizes=\"auto, (max-width: 150px) 100vw, 150px\" \/><\/p>\n\n\n\n<h4 class=\"has-text-align-left wp-block-heading\">Derek Holzer                                                        <\/h4>\n\n\n\n<p><em>Sounds of Futures Passed: Media 
Archaeology and Design Fiction as NIME Methodologies<\/em><br><br>This paper provides a study of a cooperative learning workshop which invited composers, musicians, and sound designers to explore instruments from the history of electronic sound in Sweden. The workshop applied media archaeology methods towards analyzing one particular instrument from the past, the Dataton System 3000. It then applied design fiction methods towards imagining several speculative instruments of the future. Each stage of the workshop revealed very specific utopian ideas surrounding the design of sound instruments. Here, we lay out the intellectual, material, and social conditions of the workshop, along with a selection of the workshop\u2019s outcomes, and some reflections on their significance for the NIME community. By presenting this method-in-progress, our intent is to inspire dialog around the premise that the linked examination of historical electronic sound technology\u2019s affordances and ethics can inform contemporary instrument design practices.<\/p>\n\n\n\n<pre class=\"wp-block-verse\">Derek Holzer (USA 1972) is a PhD researcher in Sound and Music Computing at the Royal Institute of Technology (KTH) in Sweden, focusing on historically informed sound synthesis design.&nbsp;His project, \"Sounds of Futures Passed\",&nbsp;is a cooperation between KTH, the Royal College of Music (KMH), Statens Musikverket, and Elektronmusikstudion (EMS), with support from the Swedish Research Council\/Vetenskapsr\u00e5det.\nHolzer&nbsp;has performed live, taught workshops and created scores of unique instruments and installations since 2002 across Europe, North and South America, and New Zealand.\nhttp:\/\/macumbista.net\/<\/pre>\n\n\n\n<hr class=\"wp-block-separator\"\/>\n\n\n\n<p class=\"has-text-align-right\"><img loading=\"lazy\" decoding=\"async\" width=\"150\" height=\"150\" class=\"wp-image-289\" style=\"width: 150px;\" 
src=\"https:\/\/nordicsmc.create.aau.dk\/wp-content\/uploads\/2021\/03\/SILVIN.jpg\" alt=\"\" srcset=\"https:\/\/nordicsmc.create.aau.dk\/wp-content\/uploads\/2021\/03\/SILVIN.jpg 512w, https:\/\/nordicsmc.create.aau.dk\/wp-content\/uploads\/2021\/03\/SILVIN-300x300.jpg 300w, https:\/\/nordicsmc.create.aau.dk\/wp-content\/uploads\/2021\/03\/SILVIN-150x150.jpg 150w\" sizes=\"auto, (max-width: 150px) 100vw, 150px\" \/><\/p>\n\n\n\n<h4 class=\"has-text-align-left wp-block-heading\">Silvin Willemsen<\/h4>\n\n\n\n<p><em>Physical Modelling of Musical Instruments for Real-Time Sound Synthesis and Control<\/em><\/p>\n\n\n\n<p>The goal of my PhD project is to push the state of the art in the field of physical modelling for real-time musical instrument simulation. Using a physical model rather than, for example, sampling synthesis makes the playability of the digital instrument very flexible.<\/p>\n\n\n\n<p>\u201cWhy not just use the real instrument?\u201d you might ask. Many interesting instruments cannot be played anymore, as they are either too damaged or too rare and valuable as museum pieces. Physical modelling allows these instruments to be resurrected virtually and brought to a wider audience. Furthermore, instrument simulations could potentially extend traditional instruments in ways that would be impossible in the real world. Changing the shape, size or material of the instrument over time can result in interesting sounds and could even extend the musician\u2019s possibilities for expression.<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"><strong>Silvin Willemsen<\/strong> is a PhD student at Aalborg University (AAU) and holds a BSc in Industrial Design from TU Eindhoven and an MSc in Sound and Music Computing from AAU.
He is part of the Multisensory Experience Lab where he combines his interests in music, mathematics and computer science to create real-time simulations of musical instruments, expressively controlled by devices such as the Sensel Morph and the Phantom OMNI.<\/pre>\n\n\n\n<hr class=\"wp-block-separator\"\/>\n\n\n\n<p class=\"has-text-align-right\"><img loading=\"lazy\" decoding=\"async\" width=\"150\" height=\"170\" class=\"wp-image-290\" style=\"width: 150px;\" src=\"https:\/\/nordicsmc.create.aau.dk\/wp-content\/uploads\/2021\/03\/razvan.jpg\" alt=\"\" srcset=\"https:\/\/nordicsmc.create.aau.dk\/wp-content\/uploads\/2021\/03\/razvan.jpg 530w, https:\/\/nordicsmc.create.aau.dk\/wp-content\/uploads\/2021\/03\/razvan-265x300.jpg 265w\" sizes=\"auto, (max-width: 150px) 100vw, 150px\" \/><\/p>\n\n\n\n<h4 class=\"wp-block-heading\">Razvan Paisa<\/h4>\n\n\n\n<p><em>Vibrotactile augmentation of music for postlingual cochlear implant users<\/em><\/p>\n\n\n\n<p>The potential benefits of listening to music have received widespread attention in the academic world as well as the mainstream media. While some anecdotal benefits have been repeatedly disproved (e.g. the link between classical music and academic performance in children), other research has focused on the psychological, emotional and social benefits. These studies indicate that music provides a platform for multi-faceted self-related thoughts focusing on emotions and sentiments, escapism, coping, solace or the meaning of life. Unfortunately, not everybody has access to these benefits, and one group in particular is affected: the hearing impaired.<\/p>\n\n\n\n<p>Hearing impairment is highly prevalent in the population; it can affect personality and social life and is linked to higher levels of isolation, seclusion and anxiety.
According to the World Health Organization, almost half a billion people suffer from disabling hearing loss, with a predicted 900 million by the year 2050.<\/p>\n\n\n\n<p>In cases of severe hearing loss, the most frequent solution is a cochlear implant (CI) \u2013 a neuro-prosthetic device that replaces the damaged inner ear with an electro-acoustic device. These implants electrically stimulate the auditory nerve, bypassing the faulty ear. The quality of post-implant hearing lies on a wide spectrum and is influenced by the size of the electrode array, the health of the patient, neuroplasticity, the quality of the surgery, the period of pre-implant hearing loss and the amount of residual hearing. While the majority of CI users achieve good speech perception in low-noise environments, music perception is particularly affected. Nevertheless, most users can enhance their listening experience to some extent by following visual cues such as the lyrics of a song, or by using haptic devices that convert music to touch.<\/p>\n\n\n\n<p>The aim of this project is to explore the possibility of augmenting music with vibrotactile stimuli so that CI users find more enjoyment in music. The focus will be on designing and evaluating tightly integrated hardware and software systems that aim to improve the perception of music characteristics such as melody and timbre. As a result, the project will develop novel methodologies for multisensory signal processing, focusing on the physiological and perceptual characteristics of touch and hearing.<\/p>\n\n\n\n<p>Development will follow a <em>rapid prototyping<\/em> approach, using tools available in fab labs, including 3D printers, laser cutters, CNC milling machines and vacuum molders.
Electronics prototyping will rely on off-the-shelf development boards and high-fidelity actuators.<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"><strong>Razvan<\/strong> is a PhD student from the SMC programme at AAU Copenhagen, focusing his efforts on improving the music listening experience of cochlear implant users. He is part of the Multisensory Experience Lab team, where he has conducted several studies in musical haptics, the replication of old musical instruments, and wave field synthesis.<\/pre>\n\n\n\n<hr class=\"wp-block-separator\"\/>\n\n\n\n<p class=\"has-text-align-right\"><img loading=\"lazy\" decoding=\"async\" width=\"150\" height=\"150\" class=\"wp-image-291\" style=\"width: 150px;\" src=\"https:\/\/nordicsmc.create.aau.dk\/wp-content\/uploads\/2021\/03\/pritvi.jpg\" alt=\"\" srcset=\"https:\/\/nordicsmc.create.aau.dk\/wp-content\/uploads\/2021\/03\/pritvi.jpg 900w, https:\/\/nordicsmc.create.aau.dk\/wp-content\/uploads\/2021\/03\/pritvi-300x300.jpg 300w, https:\/\/nordicsmc.create.aau.dk\/wp-content\/uploads\/2021\/03\/pritvi-150x150.jpg 150w, https:\/\/nordicsmc.create.aau.dk\/wp-content\/uploads\/2021\/03\/pritvi-768x768.jpg 768w\" sizes=\"auto, (max-width: 150px) 100vw, 150px\" \/><\/p>\n\n\n\n<h4 class=\"wp-block-heading\">Prithvi Kantan<\/h4>\n\n\n\n<p>My goal is to systematically investigate the design of musical feedback for post-stroke movement rehabilitation. To regain physical function after a stroke, prompt and extensive rehabilitation is crucial. An emerging rehabilitation technology is biofeedback, which involves measuring bodily activity and converting it into a perceivable stimulus (vision, sound or touch) to give the individual greater bodily awareness and control. Auditory biofeedback in this context converts movement measurements into sound signals to provide simultaneous feedback, which has been found to improve patients\u2019 movement performance.
However, the sounds are basic (noise or simple tones), making the feedback annoying and fatiguing over long periods. The use of music has recently emerged as a promising solution to this problem, with its potential as a motivational and movement-inducing tool.&nbsp;<\/p>\n\n\n\n<p>The research will investigate musical biofeedback in the contexts of balance, sit-to-stand and gait training, specifically addressing the following challenges:<\/p>\n\n\n\n<ul class=\"wp-block-list\"><li>It is unclear what aspects of movement should be sonified (fed back to patients) in different training contexts.<\/li><li>There are infinite ways to convert movement to music, and it is unclear how to do this in an intuitive manner for patients, which clearly conveys important movement information in a given training context, while also enhancing motivation and engagement.&nbsp;<\/li><li>Finally, we do not know how impactful such feedback can potentially be in terms of movement benefits.<\/li><\/ul>\n\n\n\n<p>The work will focus on technology development, collaborative musical biofeedback design and experimental evaluation. This research will provide a necessary theoretical and empirical foundation for the future design and application of musical feedback in stroke rehabilitation protocols.<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"><strong>Prithvi Kantan <\/strong>holds a bachelor degree in Electronics and Telecommunications Engineering from Mumbai University and an MSc. in Sound and Music Computing from Aalborg University, Copenhagen, where he is now pursuing a PhD since January 2021. 
His primary interests lie in the research and development of augmented musical feedback for the healthcare domain, but he has also worked on music information retrieval, rhythm perception and sensorimotor synchronization.<\/pre>\n\n\n\n<p class=\"has-text-align-right\"><img loading=\"lazy\" decoding=\"async\" width=\"150\" height=\"150\" class=\"wp-image-292\" style=\"width: 150px;\" src=\"https:\/\/nordicsmc.create.aau.dk\/wp-content\/uploads\/2021\/03\/micini.jpg\" alt=\"\" srcset=\"https:\/\/nordicsmc.create.aau.dk\/wp-content\/uploads\/2021\/03\/micini.jpg 200w, https:\/\/nordicsmc.create.aau.dk\/wp-content\/uploads\/2021\/03\/micini-150x150.jpg 150w\" sizes=\"auto, (max-width: 150px) 100vw, 150px\" \/><\/p>\n\n\n\n<hr class=\"wp-block-separator\"\/>\n\n\n\n<h4 class=\"wp-block-heading\"><strong>Riccardo Miccini<\/strong><\/h4>\n\n\n\n<p><em>A hybrid approach to structural modeling of individualized HRTFs<\/em><\/p>\n\n\n\n<p>Individual HRTFs are paramount for a VR experience free from localization and perceptual artifacts. However, acoustic measurements are often strenuous and expensive.<br>We present a hybrid approach to HRTF modeling using three anthropometric measurements and an image of the pinna contours. A prediction algorithm based on variational autoencoders synthesizes a pinna response from the contours, which is used to filter a measured head-and-torso response.
The ITD is then manipulated to match that of a HUTUBS dataset subject, minimizing the predicted localization error.<br>Although evaluation with a perceptual localization model proves inconclusive, the performance in terms of spectral distortion shows promising results.<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"><strong>Riccardo Miccini<\/strong> is a Sound and Music Computing graduate from Aalborg University in Copenhagen. During his Master\u2019s, he researched applications of deep neural networks for sound synthesis, speech enhancement, and HRTF individualization. His research interests revolve around applications of deep learning, data science, and data visualization.<\/pre>\n\n\n\n<hr class=\"wp-block-separator\"\/>\n\n\n\n<p class=\"has-text-align-right\"><img loading=\"lazy\" decoding=\"async\" width=\"150\" height=\"150\" class=\"wp-image-293\" style=\"width: 150px;\" src=\"https:\/\/nordicsmc.create.aau.dk\/wp-content\/uploads\/2021\/03\/adrian.jpg\" alt=\"\" srcset=\"https:\/\/nordicsmc.create.aau.dk\/wp-content\/uploads\/2021\/03\/adrian.jpg 512w, https:\/\/nordicsmc.create.aau.dk\/wp-content\/uploads\/2021\/03\/adrian-300x300.jpg 300w, https:\/\/nordicsmc.create.aau.dk\/wp-content\/uploads\/2021\/03\/adrian-150x150.jpg 150w\" sizes=\"auto, (max-width: 150px) 100vw, 150px\" \/><\/p>\n\n\n\n<h4 class=\"wp-block-heading\">Adrian Benigno Latupeirissa<\/h4>\n\n\n\n<p><em>Mediating Postphenomenological Relations in Human-Robot Communication With Sound<\/em><\/p>\n\n\n\n<p>As robots permeate our lives, postphenomenological perspectives have been used to examine the relations between humans and robots.
My research investigates the application of sound as a medium of communication in such relations.<\/p>\n\n\n\n<pre class=\"wp-block-verse\">Adrian\u2019s main research interests are sound in interaction, interaction design, and human-robot interaction.\nHe has been employed as a PhD student at the&nbsp;<a href=\"https:\/\/www.kth.se\/mid\">Division of Media Technology and Interaction Design (MID)<\/a>,&nbsp;<a href=\"https:\/\/www.kth.se\/en\/eecs\">School of Electrical Engineering and Computer Science (EECS)<\/a>&nbsp;since 2018. He is part of the&nbsp;<a href=\"https:\/\/www.kth.se\/mid\/research\/smc\">Sound and Music Computing<\/a>&nbsp;research group, working in the project&nbsp;<a href=\"https:\/\/www.kth.se\/mid\/research\/smc\/projects\/sonao-1.895500\">SONAO<\/a>&nbsp;to explore new methods for augmenting robot movements with sound.\nIn 2018 he completed his Master\u2019s in Interactive Media Technology at KTH, exploring the intersection between physical interaction design and sound in interaction with a thesis on&nbsp;<a href=\"https:\/\/kth.diva-portal.org\/smash\/record.jsf?pid=diva2:1228061\">prototyping an e-textile interface for music interaction<\/a>.<\/pre>\n\n\n\n<hr class=\"wp-block-separator\"\/>\n\n\n\n<p class=\"has-text-align-right\"><img loading=\"lazy\" decoding=\"async\" width=\"150\" height=\"150\" class=\"wp-image-294\" style=\"width: 150px;\" src=\"https:\/\/nordicsmc.create.aau.dk\/wp-content\/uploads\/2021\/03\/claudio.jpg\" alt=\"\" srcset=\"https:\/\/nordicsmc.create.aau.dk\/wp-content\/uploads\/2021\/03\/claudio.jpg 500w, https:\/\/nordicsmc.create.aau.dk\/wp-content\/uploads\/2021\/03\/claudio-300x300.jpg 300w, https:\/\/nordicsmc.create.aau.dk\/wp-content\/uploads\/2021\/03\/claudio-150x150.jpg 150w\" sizes=\"auto, (max-width: 150px) 100vw, 150px\" \/><\/p>\n\n\n\n<h4 class=\"wp-block-heading\">Claudio Panariello<\/h4>\n\n\n\n<p><em>Adaptive Behaviour in Musical
Context<\/em><\/p>\n\n\n\n<p>The doctoral project proposes the study and implementation of a musical adaptive system, and the investigation of the system\u2019s interaction with the audience through the medium of sound art installations. Adaptive systems are increasingly used in the musical interaction field because of their flexibility in adapting to different and unexpected environmental perturbations, thus creating a connection between the audience and the system itself. Preliminary studies have been conducted to build the background required for the project, also exploring connections with other projects. Future work will focus more on the problem of musical style and creativity for such adaptive systems and on the implementation of the results found so far.<\/p>\n\n\n\n<pre class=\"wp-block-verse\">Claudio is a PhD student in the&nbsp;<a href=\"https:\/\/www.kth.se\/mid\/research\/research-areas\/multisensory-interaction-1.780604\">Sound and Music Computing group<\/a>&nbsp;at the&nbsp;<a href=\"https:\/\/www.kth.se\/en\/eecs\">School of Electrical Engineering and Computer Science (EECS)<\/a>,&nbsp;<a href=\"https:\/\/www.kth.se\/mid\">Division of Media Technology and Interaction Design<\/a>.\nHis main fields of interest as a composer and researcher are feedback and musical adaptive systems, focusing on the problem of emergence and style.<\/pre>\n\n\n\n<p class=\"has-text-align-right\"><img loading=\"lazy\" decoding=\"async\" width=\"150\" height=\"200\" class=\"wp-image-295\" style=\"width: 150px;\" src=\"https:\/\/nordicsmc.create.aau.dk\/wp-content\/uploads\/2021\/03\/qichao-web.jpeg\" alt=\"\"><\/p>\n\n\n\n<hr class=\"wp-block-separator\"\/>\n\n\n\n<h4 class=\"wp-block-heading\">Qichao Lan<\/h4>\n\n\n\n<p><em>Glicol: Graph-oriented Live Coding Language Written in Rust<\/em><br><br>How can we define and design text-based musical interactions?
To investigate this question, a new computer music programming language, Glicol (an acronym for graph-oriented live coding language), has been developed. As its name suggests, Glicol has a unique syntax that can be used to represent an audio graph, and it is optimised for live performance. Glicol is written in Rust and runs in web browsers at near-native speed using WebAssembly. User studies of Glicol are in progress, and we are keen to see how this language can help more people without a programming background get started with live coding.<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"><strong>Qichao Lan<\/strong> works as a PhD research fellow at the RITMO centre at the University of Oslo. As a computer musician, he specialises in audio programming, live coding, new instrument design, and music AI. He also publishes open-source software and performs live coding under the name \u2018chaosprint\u2019.\n\n<a href=\"https:\/\/people.uio.no\/qichaol\">https:\/\/people.uio.no\/qichaol<\/a>\n<\/pre>\n\n\n\n<hr class=\"wp-block-separator\"\/>\n\n\n\n<p class=\"has-text-align-right\"><img loading=\"lazy\" decoding=\"async\" width=\"150\" height=\"150\" class=\"wp-image-296\" style=\"width: 150px;\" src=\"https:\/\/nordicsmc.create.aau.dk\/wp-content\/uploads\/2021\/03\/Ulf-Holbrook-2.jpg\" alt=\"\" srcset=\"https:\/\/nordicsmc.create.aau.dk\/wp-content\/uploads\/2021\/03\/Ulf-Holbrook-2.jpg 512w, https:\/\/nordicsmc.create.aau.dk\/wp-content\/uploads\/2021\/03\/Ulf-Holbrook-2-300x300.jpg 300w, https:\/\/nordicsmc.create.aau.dk\/wp-content\/uploads\/2021\/03\/Ulf-Holbrook-2-150x150.jpg 150w\" sizes=\"auto, (max-width: 150px) 100vw, 150px\" \/><\/p>\n\n\n\n<h4 class=\"wp-block-heading\">Ulf Holbrook<\/h4>\n\n\n\n<p><em>The landscape and topographical questions in spatial audio<\/em><br><br>Spatial audio is a set of techniques and technologies for moving sound around a listener in three dimensions.
This presentation discusses spatial audio from discursive and practice-based perspectives with a focus on ambisonics and wave field synthesis. The primary focus is how place and the landscape can be represented through spatial audio.<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"><strong>Ulf A. S. Holbrook<\/strong> is a researcher, composer, programmer and artist working with sound in a variety of mediums. His research and artistic practice involves an interest in representations of space and place through sound, spatial audio and software.\n\n<a href=\"https:\/\/people.uio.no\/ulfah\">https:\/\/people.uio.no\/ulfah<\/a><\/pre>\n\n\n\n<hr class=\"wp-block-separator\"\/>\n\n\n\n<h4 class=\"wp-block-heading\">Dana Swarbrick                                                   <img loading=\"lazy\" decoding=\"async\" width=\"150\" height=\"225\" class=\"wp-image-297\" style=\"width: 150px;\" src=\"https:\/\/nordicsmc.create.aau.dk\/wp-content\/uploads\/2021\/03\/dana.jpg\" alt=\"\"><\/h4>\n\n\n\n<p><em>Measuring the virtual concert experience: Surveys, social connection, and motion<\/em><br><br>Virtual concerts grew in popularity during the coronavirus crisis. In a series of studies, we examined the effects of virtual concerts on social connection and motion. The Corona Concerts project gathered 300 survey responses to understand what concert characteristics promote social connection and kama muta (feeling moved). The Experimental Sessions project aimed to manipulate agency, presence, and social context to determine their effects on social connection in a virtual concert. The MusicLab Algorave project leveraged participants\u2019 own mobile phones to measure their motion during a virtual concert. 
Together, these studies contribute to understanding social connection in virtual environments.<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"><strong>Dana Swarbrick<\/strong> is a doctoral researcher at RITMO whose research interests include embodied music cognition, entrainment, and social psychology. Her Bachelor\u2019s thesis involved using motion capture to measure head movements at a rock concert. In her spare time, she makes folk-rock music with her band Dana &amp; The Monsters.\n\n<a href=\"https:\/\/people.uio.no\/danasw\">https:\/\/people.uio.no\/danasw<\/a><\/pre>\n\n\n\n<hr class=\"wp-block-separator\"\/>\n\n\n\n<p class=\"has-text-align-right\"><img loading=\"lazy\" decoding=\"async\" width=\"150\" height=\"187\" class=\"wp-image-298\" style=\"width: 150px;\" src=\"https:\/\/nordicsmc.create.aau.dk\/wp-content\/uploads\/2021\/03\/cagri.jpg\" alt=\"\"><\/p>\n\n\n\n<h4 class=\"wp-block-heading\">\u00c7a\u011fr\u0131 Erdem<\/h4>\n\n\n\n<p><em>Who Makes The Sound? Agency, Surprise and Embodiment in Performing Music with Artificial Musical Intelligence<\/em><br><br>Is it possible that humans and machines share agency in music performance? Imagine playing the guitar, for example. It is the skilled player who makes the sound, and surprising rhythms or pitches are often unwanted. What about when you include machines? My PhD focuses on what computational approaches to generating sound and music can afford in terms of control, in contrast to traditional acoustic instruments. In this talk, I will present snippets from my research-creation on designing and performing music with machines, mainly using artificial intelligence methods for embodied interaction.<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"><strong>\u00c7a\u011fr\u0131 Erdem<\/strong> is a performer and music technologist. His MSc focused on developing new musical interfaces to extract body movements in the form of biosignals.
As a PhD fellow at RITMO, he expands his research focusing on embodied cognitive approaches to defining, developing and playing with artificial musical intelligence.\n\n<a href=\"https:\/\/people.uio.no\/cagrie\">https:\/\/people.uio.no\/cagrie<\/a><\/pre>\n\n\n\n<hr class=\"wp-block-separator\"\/>\n\n\n\n<h4 class=\"wp-block-heading\">Elvar Atli \u00c6varsson<\/h4>\n\n\n\n<p><em>Enhancing the music listening enjoyment of cochlear implant recipients<\/em><\/p>\n\n\n\n<p>Cochlear implant (CI) devices have improved the quality of life of hundreds of thousands of deaf individuals by enabling them to hear and understand speech in quiet environments. However, a necessary reduction in spectral information performed by the CI processor significantly affects the CI recipients\u2019 perception and enjoyment of music. The ACUTE (ACoUstic and Tactile Engineering) group at the University of Iceland aims at circumventing these limitations of the CI by providing an additional tactile stimulation alongside musical playback. This includes designing an encoding scheme to map musical properties to vibrations of various amplitudes and frequencies and designing and building a vibrotactile display to convey this information to the skin at the appropriate body locations. In this talk, I will present our current work and research towards this objective.<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"><strong>Elvar Atli \u00c6varsson<\/strong> worked as an electronics technician for many years, specialising in professional sound system installation. He completed his MSc degree in electrical and computer engineering at the University of Iceland in 2020, having spent time as an exchange student at the Technical University of Denmark (DTU) taking acoustical engineering classes. 
He is currently a PhD student in industrial engineering at the University of Iceland, working with the ACUTE group and focusing on audio-tactile integration.<\/pre>\n\n\n\n<hr class=\"wp-block-separator\"\/>\n\n\n\n<p class=\"has-text-align-right\"><img loading=\"lazy\" decoding=\"async\" width=\"150\" height=\"150\" class=\"wp-image-299\" style=\"width: 150px;\" src=\"https:\/\/nordicsmc.create.aau.dk\/wp-content\/uploads\/2021\/03\/Finnur-Pind.jpg\" alt=\"\" srcset=\"https:\/\/nordicsmc.create.aau.dk\/wp-content\/uploads\/2021\/03\/Finnur-Pind.jpg 512w, https:\/\/nordicsmc.create.aau.dk\/wp-content\/uploads\/2021\/03\/Finnur-Pind-300x300.jpg 300w, https:\/\/nordicsmc.create.aau.dk\/wp-content\/uploads\/2021\/03\/Finnur-Pind-150x150.jpg 150w\" sizes=\"auto, (max-width: 150px) 100vw, 150px\" \/><\/p>\n\n\n\n<h4 class=\"wp-block-heading\">Finnur Pind                                                           <\/h4>\n\n\n\n<p><em>Wave-based virtual acoustics&nbsp;<\/em><\/p>\n\n\n\n<p>Wave-based simulation methods offer high precision renderings of the acoustics of virtual domains, but historically have been limited to small rooms and very low frequencies only, due to the significant computational cost associated. In this talk I will present our research on wave-based virtual acoustics, where we have leveraged the latest advances in applied mathematics, high performance computing and acoustic modeling to enable large scale and highly accurate renderings of the acoustics of spaces.<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"><strong>Dr. Finnur Pind<\/strong> received his MSc in acoustical engineering in 2013 from the Technical University of Denmark (DTU), and his PhD from the same institution in 2020. His PhD research was centered on virtual acoustics and was done in collaboration with the architectural studio Henning Larsen. 
Between his MSc and PhD studies, Finnur was an acoustic consultant in the building industry for some three years, and before entering the world of acoustics he was a software engineer in the telecom industry. His research interests include wave-based (numerical) acoustic simulations, acoustic virtual reality, room surface modeling, high-performance computing and spatial audio. He is currently a postdoctoral researcher at the ACUTE (Acoustics and Tactile Engineering) group at the University of Iceland and co-founder \/ CEO of Treble Technologies, which develops state-of-the-art virtual acoustics software. &nbsp;<\/pre>\n\n\n\n<hr class=\"wp-block-separator\"\/>\n\n\n\n<p class=\"has-text-align-right\"><img loading=\"lazy\" decoding=\"async\" width=\"150\" height=\"212\" class=\"wp-image-300\" style=\"width: 150px;\" src=\"https:\/\/nordicsmc.create.aau.dk\/wp-content\/uploads\/2021\/03\/karolina.jpg\" alt=\"\" srcset=\"https:\/\/nordicsmc.create.aau.dk\/wp-content\/uploads\/2021\/03\/karolina.jpg 259w, https:\/\/nordicsmc.create.aau.dk\/wp-content\/uploads\/2021\/03\/karolina-212x300.jpg 212w\" sizes=\"auto, (max-width: 150px) 100vw, 150px\" \/><\/p>\n\n\n\n<h4 class=\"wp-block-heading\"><strong>Karolina Prawda<\/strong>                                                 <\/h4>\n\n\n\n<p><em>Towards more accurate estimation of room acoustic parameters<\/em><\/p>\n\n\n\n<p>Reverberation time is one of the most important parameters that describe the behavior of sound in space. Techniques for predicting reverberation began emerging over a century ago, and numerous formulas have since been introduced to improve the accuracy of such predictions. None of them, however, provides estimates of reverberation time accurate enough to be trusted. My present research aims at determining which of the reverberation prediction models is the most accurate when applied to a wide range of spaces with different sound absorption conditions. 
It also addresses the problem of including the air absorption in reverberation time estimation. Additionally, a method of facilitating acoustic measurements is introduced.<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"><strong>Karolina Prawda<\/strong> received her B.Sc. and M.Sc. in Acoustic Engineering from AGH University of Science and Technology in Krak\u00f3w, Poland, in 2016 and 2017, respectively. The invention of an acoustic panel for variable acoustics described in her Master\u2019s Thesis <em>Acoustic system with adjustable parameters<\/em> was patented in 2018. She is currently a doctoral candidate at the Department of Signal Processing and Acoustics of Aalto University in Espoo, Finland, researching reverberation and room acoustics. Her research interests include artificial reverberation algorithms, the effect of sound propagation medium on acoustic parameters and variable acoustics. She is a member of the Polish Section of the Audio Engineering Society.<\/pre>\n\n\n\n<hr class=\"wp-block-separator\"\/>\n\n\n\n<p class=\"has-text-align-right\"> <img loading=\"lazy\" decoding=\"async\" width=\"150\" height=\"129\" class=\"wp-image-319\" style=\"width: 150px;\" src=\"https:\/\/nordicsmc.create.aau.dk\/wp-content\/uploads\/2021\/03\/Joel_Lindfors-scaled.jpg\" alt=\"\" srcset=\"https:\/\/nordicsmc.create.aau.dk\/wp-content\/uploads\/2021\/03\/Joel_Lindfors-scaled.jpg 2560w, https:\/\/nordicsmc.create.aau.dk\/wp-content\/uploads\/2021\/03\/Joel_Lindfors-300x257.jpg 300w, https:\/\/nordicsmc.create.aau.dk\/wp-content\/uploads\/2021\/03\/Joel_Lindfors-1024x878.jpg 1024w, https:\/\/nordicsmc.create.aau.dk\/wp-content\/uploads\/2021\/03\/Joel_Lindfors-768x659.jpg 768w, https:\/\/nordicsmc.create.aau.dk\/wp-content\/uploads\/2021\/03\/Joel_Lindfors-1536x1317.jpg 1536w, https:\/\/nordicsmc.create.aau.dk\/wp-content\/uploads\/2021\/03\/Joel_Lindfors-2048x1756.jpg 2048w\" sizes=\"auto, (max-width: 150px) 100vw, 150px\" \/><\/p>\n\n\n\n<h4 
class=\"wp-block-heading\"><strong>Joel Lindfors<\/strong><\/h4>\n\n\n\n<p><em>Spatial speaker correction<\/em><\/p>\n\n\n\n<p>A room\u2019s acoustics are a prominent attribute of the sound experienced through a loudspeaker. The dimensions of the room, as well as the specific surfaces and surface materials, have a defining effect on what a listener hears when listening to audio through speakers in a room. Moreover, the position of the listener has an effect on the sound as well due to the effect of standing waves and the distance to specific surfaces. Current state-of-the-art methods of accounting for the coloration of the room assume a specific listening position, that is then corrected for. In addition to normal speaker room calibration methods, my research aims to take the position of the listener into account via the use of a camera and face tracking. This allows for location-specific calibration of the speakers in real time and arguably a truer representation of the audio.<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"><strong>Joel Lindfors<\/strong> is a 27 year old Acoustics and Audio Technology Master\u2019s student with a history in audio production. He has a Bachelor\u2019s degree in electrical engineering. 
His current research aims to solve problems he has seen and faced in daily practice when working on audio.<\/pre>\n\n\n\n<hr class=\"wp-block-separator\"\/>\n\n\n\n<p class=\"has-text-align-right\"><img loading=\"lazy\" decoding=\"async\" width=\"150\" height=\"206\" class=\"wp-image-320\" style=\"width: 150px;\" src=\"https:\/\/nordicsmc.create.aau.dk\/wp-content\/uploads\/2021\/03\/Michael_McCrea.jpg\" alt=\"\" srcset=\"https:\/\/nordicsmc.create.aau.dk\/wp-content\/uploads\/2021\/03\/Michael_McCrea.jpg 1224w, https:\/\/nordicsmc.create.aau.dk\/wp-content\/uploads\/2021\/03\/Michael_McCrea-219x300.jpg 219w, https:\/\/nordicsmc.create.aau.dk\/wp-content\/uploads\/2021\/03\/Michael_McCrea-747x1024.jpg 747w, https:\/\/nordicsmc.create.aau.dk\/wp-content\/uploads\/2021\/03\/Michael_McCrea-768x1053.jpg 768w, https:\/\/nordicsmc.create.aau.dk\/wp-content\/uploads\/2021\/03\/Michael_McCrea-1120x1536.jpg 1120w\" sizes=\"auto, (max-width: 150px) 100vw, 150px\" \/><\/p>\n\n\n\n<h4 class=\"wp-block-heading\"><strong>Michael McCrea<\/strong>                                                       <\/h4>\n\n\n\n<p><em>Acoustic scene analysis and source extraction using sector-based sound field decomposition across multiple high order microphones<\/em><\/p>\n\n\n\n<p>This talk will present ongoing work in characterising complex acoustic scenes through the use of sector-based sound field decomposition in order to extract source signals of interest. In particular, the task of localising multiple sound sources in 3-D space is addressed through energetic sound field analysis of spherical harmonic signals produced by distributed, ideal microphone arrays. The technique is evaluated under conditions of concurrent sound sources as well as in noisy and reverberant environments. 
Potential applications will be discussed briefly, including six-degree-of-freedom sound field navigation in virtual environments, audio coding and transmission, and speech enhancement.<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"><strong>Michael McCrea<\/strong> is a Master\u2019s student of Acoustics and Audio Technology at Aalto University studying sound field decomposition, head-related acoustics, and machine learning. Prior to attending Aalto, he was a research scientist at the Centre for Digital Arts and Experimental Media (DXARTS) at the University of Washington pursuing technological research motivated by artistic inquiry. Spatial sound has been a common theme in Michael\u2019s work\u2014from designing steerable ultrasonic speakers, to collaborating on multimedia artworks, to building sound field authoring tools for composers. He will complete his Master\u2019s thesis this Spring.<\/pre>\n\n\n\n<p><\/p>\n","protected":false},"excerpt":{"rendered":"<p>April 16th from 13 PM CET&nbsp;+ MusicLab 6 from 19:00 SESSION 1: Sound design 13-13:30 Session chair: Stefania Serafin 5 minutes presentations plus discussions Derek Holzer Sounds of Futures Passed: Media Archaeology and Design Fiction as NIME Methodologies Silvin Willemsen Physical Modelling of Musical Instruments for Real-Time Sound Synthesis and Control Prithvi Kantan Design of 
[&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"footnotes":""},"class_list":["post-279","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/nordicsmc.create.aau.dk\/index.php?rest_route=\/wp\/v2\/pages\/279","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/nordicsmc.create.aau.dk\/index.php?rest_route=\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/nordicsmc.create.aau.dk\/index.php?rest_route=\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/nordicsmc.create.aau.dk\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/nordicsmc.create.aau.dk\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=279"}],"version-history":[{"count":28,"href":"https:\/\/nordicsmc.create.aau.dk\/index.php?rest_route=\/wp\/v2\/pages\/279\/revisions"}],"predecessor-version":[{"id":632,"href":"https:\/\/nordicsmc.create.aau.dk\/index.php?rest_route=\/wp\/v2\/pages\/279\/revisions\/632"}],"wp:attachment":[{"href":"https:\/\/nordicsmc.create.aau.dk\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=279"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}