Files in this item:
File | Description | Size | Format
116.00000052.pdf | | 2.63 MB | Adobe PDF
Full metadata record
DC Field | Value | Language
dc.contributor.author | Wu, Yiming | en
dc.contributor.author | Yoshii, Kazuyoshi | en
dc.contributor.alternative | 呉, 益明 | ja
dc.contributor.alternative | 吉井, 和佳 | ja
dc.date.accessioned | 2023-02-15T09:11:59Z | -
dc.date.available | 2023-02-15T09:11:59Z | -
dc.date.issued | 2022-06-21 | -
dc.identifier.uri | http://hdl.handle.net/2433/279280 | -
dc.description.abstract | This paper describes a deep generative approach to joint chord and key estimation for music signals. The limited amount of music signals with complete annotations has been the major bottleneck in supervised multi-task learning of a classification model. To overcome this limitation, we integrate the supervised multi-task learning approach with the unsupervised autoencoding approach in a mutually complementary manner. Considering the typical process of music composition, we formulate a hierarchical latent variable model that sequentially generates keys, chords, and chroma vectors. The keys and chords are assumed to follow a language model that represents their relationships and dynamics. In the framework of amortized variational inference (AVI), we introduce a classification model that jointly infers discrete chord and key labels and a recognition model that infers continuous latent features. These models are combined to form a variational autoencoder (VAE) and are trained jointly in a (semi-)supervised manner, where the generative and language models act as regularizers for the classification model. We comprehensively investigate three different architectures for the chord and key classification model, and three different architectures for the language model. Experimental results demonstrate that the VAE-based multi-task learning improves chord estimation as well as key estimation. | en
dc.language.iso | eng | -
dc.publisher | Now Publishers | en
dc.rights | © 2022 Y. Wu and K. Yoshii | en
dc.rights | This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence, which permits unrestricted re-use, distribution, and reproduction in any medium, for non-commercial use, provided the original work is properly cited. | en
dc.rights.uri | http://creativecommons.org/licenses/by-nc/4.0/ | -
dc.subject | Automatic chord estimation | en
dc.subject | automatic key estimation | en
dc.subject | variational autoencoder | en
dc.subject | multi-task learning | en
dc.title | Joint Chord and Key Estimation Based on a Hierarchical Variational Autoencoder with Multi-task Learning | en
dc.type | journal article | -
dc.type.niitype | Journal Article | -
dc.identifier.jtitle | APSIPA Transactions on Signal and Information Processing | en
dc.identifier.volume | 11 | -
dc.identifier.issue | 1 | -
dc.relation.doi | 10.1561/116.00000052 | -
dc.textversion | publisher | -
dc.identifier.artnum | e19 | -
dcterms.accessRights | open access | -
datacite.awardNumber | 16H01744 | -
datacite.awardNumber | 19H04137 | -
datacite.awardNumber | 19K20340 | -
datacite.awardNumber | 20K21813 | -
datacite.awardNumber.uri | https://kaken.nii.ac.jp/grant/KAKENHI-PROJECT-16H01744/ | -
datacite.awardNumber.uri | https://kaken.nii.ac.jp/grant/KAKENHI-PROJECT-19H04137/ | -
datacite.awardNumber.uri | https://kaken.nii.ac.jp/grant/KAKENHI-PROJECT-19K20340/ | -
datacite.awardNumber.uri | https://kaken.nii.ac.jp/grant/KAKENHI-PROJECT-20K21813/ | -
dc.identifier.eissn | 2048-7703 | -
jpcoar.funderName | 日本学術振興会 | ja
jpcoar.funderName | 日本学術振興会 | ja
jpcoar.funderName | 日本学術振興会 | ja
jpcoar.funderName | 日本学術振興会 | ja
jpcoar.awardTitle | 統計的文法理論と構成的意味論に基づく音楽理解の計算モデル | ja
jpcoar.awardTitle | 認識・生成過程の統合に基づく視聴覚音楽理解 | ja
jpcoar.awardTitle | 統計学習と進化理論に基づく音楽創作の学習・進化の研究 | ja
jpcoar.awardTitle | あらゆる音の定位・分離・分類のためのユニバーサル音響理解モデル | ja
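
The record above is bibliographic, but the abstract outlines a concrete architecture: a classification model for discrete key and chord labels, a recognition model for continuous latent features, a generative model for chroma vectors, and a language model over keys and chords, trained jointly as a (semi-)supervised VAE. Below is a minimal, hypothetical PyTorch sketch of how those pieces could be wired into a single objective. It is not the authors' code: the layer sizes, the per-frame factorization, the Gaussian chroma likelihood, the Gumbel-softmax relaxation, and the unigram stand-in for the sequential language model are all assumptions made purely for illustration.

import torch
import torch.nn as nn
import torch.nn.functional as F

# Assumed vocabularies: 12-dim chroma input, 24 keys (12 tonics x major/minor),
# 25 chord classes (12 roots x major/minor + "no chord"), 16-dim continuous latent.
N_CHROMA, N_KEY, N_CHORD, N_LATENT = 12, 24, 25, 16

class ChordKeyVAE(nn.Module):
    """Per-frame sketch of the semi-supervised VAE described in the abstract."""

    def __init__(self):
        super().__init__()
        # Classification model: q(key, chord | chroma) -> discrete label posteriors.
        self.classifier = nn.Sequential(
            nn.Linear(N_CHROMA, 64), nn.ReLU(), nn.Linear(64, N_KEY + N_CHORD))
        # Recognition model: q(z | chroma, key, chord) -> continuous latent features.
        self.recognizer = nn.Linear(N_CHROMA + N_KEY + N_CHORD, 2 * N_LATENT)
        # Generative model: p(chroma | z, key, chord).
        self.decoder = nn.Sequential(
            nn.Linear(N_LATENT + N_KEY + N_CHORD, 64), nn.ReLU(), nn.Linear(64, N_CHROMA))
        # Language model, reduced here to a learnable unigram prior over (key, chord)
        # pairs; the paper's sequential language models would replace this term.
        self.key_chord_logits = nn.Parameter(torch.zeros(N_KEY, N_CHORD))

    def forward(self, chroma, key_label=None, chord_label=None):
        logits = self.classifier(chroma)
        key_logits, chord_logits = logits[:, :N_KEY], logits[:, N_KEY:]
        if key_label is None:
            # Unsupervised branch: relax the discrete labels with Gumbel-softmax samples.
            key = F.gumbel_softmax(key_logits, tau=1.0)
            chord = F.gumbel_softmax(chord_logits, tau=1.0)
        else:
            # Supervised branch: condition on the ground-truth annotations.
            key = F.one_hot(key_label, N_KEY).float()
            chord = F.one_hot(chord_label, N_CHORD).float()
        mu, logvar = self.recognizer(torch.cat([chroma, key, chord], -1)).chunk(2, -1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization trick
        recon = self.decoder(torch.cat([z, key, chord], -1))

        # Negative ELBO: reconstruction + KL of z + language-model prior on (key, chord).
        rec = F.mse_loss(recon, chroma)  # Gaussian likelihood assumption
        kl_z = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        log_prior = F.log_softmax(self.key_chord_logits.view(-1), -1).view(N_KEY, N_CHORD)
        lm = -torch.mean(torch.einsum('bk,kc,bc->b', key, log_prior, chord))
        loss = rec + kl_z + lm
        if key_label is not None:
            # Supervised classification loss, added only where annotations exist, so the
            # generative and language terms act as regularizers for the classifier.
            loss = loss + F.cross_entropy(key_logits, key_label) \
                        + F.cross_entropy(chord_logits, chord_label)
        return loss

A training loop would mix annotated and unannotated minibatches through this same forward pass; at inference time the classifier's key_logits and chord_logits alone give the joint key and chord estimates.
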
Appears in collections: 学術雑誌掲載論文等 (journal articles, etc.)

This item is licensed under a Creative Commons License.