dc.contributor.author | Brandtsegg, Øyvind | |
dc.contributor.author | Tidemann, Axel | |
dc.date.accessioned | 2021-02-09T08:44:46Z | |
dc.date.available | 2021-02-09T08:44:46Z | |
dc.date.created | 2020-12-08T14:46:38Z | |
dc.date.issued | 2020 | |
dc.identifier.issn | 2663-9041 | |
dc.identifier.uri | https://hdl.handle.net/11250/2726763 | |
dc.description.abstract | The development of musical interfaces has moved from static to malleable, where the interaction mode can be designed by the user. However, the user still has to specify which input parameters to adjust and how they affect the generated sound. We propose a novel way to learn mappings from movements to sound generation parameters, based on inherent features in the control inputs. An assumption is that any correlation between input features and output characteristics is an indication of a meaningful mapping. The goal is to make the user interface evolve with the user, creating a unique, tailor-made interaction mode with the instrument. | en_US
dc.language.iso | eng | en_US |
dc.publisher | Zenodo | en_US |
dc.relation.uri | 10.5281/zenodo.3932892 | |
dc.rights | Attribution 4.0 International (Navngivelse 4.0 Internasjonal) | *
dc.rights.uri | http://creativecommons.org/licenses/by/4.0/deed.no | * |
dc.title | Shape: an adaptive musical interface that optimizes the correlation between gesture and sound | en_US
dc.type | Peer reviewed | en_US |
dc.type | Journal article | en_US |
dc.description.version | publishedVersion | en_US |
dc.source.journal | Proceedings of the International Conference on Live Interfaces (Proceedings of ICLI) | en_US |
dc.identifier.doi | 10.5281/zenodo.3928017 | |
dc.identifier.cristin | 1857541 | |
dc.description.localcode | Licensed under a Creative Commons Attribution 4.0 International License (CC BY 4.0). © 2019 Copyright held by the owner/author(s). | en_US |
cristin.ispublished | true | |
cristin.fulltext | original | |
cristin.qualitycode | 1 | |