Files in This Item:
File | Description | Size | Format |
---|---|---|---|
978-3-030-21935-2_26.pdf | | 3.02 MB | Adobe PDF |
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Hasnine, Mohammad Nehal | en |
dc.contributor.author | Flanagan, Brendan | en |
dc.contributor.author | Akcapinar, Gokhan | en |
dc.contributor.author | Ogata, Hiroaki | en |
dc.contributor.author | Mouri, Kousuke | en |
dc.contributor.author | Uosaki, Noriko | en |
dc.contributor.alternative | 緒方, 広明 | ja |
dc.date.accessioned | 2019-08-06T06:39:59Z | - |
dc.date.available | 2019-08-06T06:39:59Z | - |
dc.date.issued | 2019 | - |
dc.identifier.isbn | 9783030219345 | - |
dc.identifier.issn | 0302-9743 | - |
dc.identifier.uri | http://hdl.handle.net/2433/243253 | - |
dc.description | 7th International Conference, DAPI 2019, Held as Part of the 21st HCI International Conference, HCII 2019, Orlando, FL, USA, July 26–31, 2019, Proceedings | en |
dc.description | Part of the book series: Lecture Notes in Computer Science (LNCS, volume 11587) | en |
dc.description.abstract | Learning context has proven to be an essential part of vocabulary development; however, describing the learning context for each vocabulary item is considered difficult. For the human brain it is relatively easy to grasp a learning context from a picture, because a picture conveys an immense amount of detail at a glance in a way that text annotations cannot. Therefore, in an informal language learning system, pictures can be used to overcome the problems that language learners face in describing learning contexts. The present study aimed to develop a support system that generates and represents learning contexts automatically by analyzing the visual contents of pictures captured by language learners. Automatic image captioning, an artificial intelligence technology that connects computer vision and natural language processing, is used to analyze the visual contents of the learners’ captured images. A neural image caption generator model called Show and Tell is trained for image-to-word generation and for describing the context of an image. The three-fold objectives of this research are: first, an intelligent technology that can understand the contents of a picture and generate learning contexts automatically; second, that a learner can learn multiple vocabulary items from one picture without relying on a representative picture for each item; and third, that a learner’s prior vocabulary knowledge can be mapped to newly learned vocabulary so that previously acquired vocabulary is reviewed and recalled while learning new vocabulary. | en |
dc.format.mimetype | application/pdf | - |
dc.language.iso | eng | - |
dc.publisher | Springer, Cham | en |
dc.rights | This is a post-peer-review, pre-copyedit version of an article published in 'Distributed, Ambient and Pervasive Interactions'. The final authenticated version is available online at: https://doi.org/10.1007/978-3-030-21935-2_26 | en |
dc.rights | The full-text file will be made open to the public on 07 June 2020 in accordance with publisher's 'Terms and Conditions for Self-Archiving'. | en |
dc.rights | This is not the published version. Please cite only the published version. | en |
dc.rights | This is not the publisher's version. Please check and cite the publisher's version. | ja |
dc.subject | Artificial intelligence in education | en |
dc.subject | Automatic image captioning | en |
dc.subject | Learning context representation | en |
dc.subject | Ubiquitous learning | en |
dc.subject | Visual contents analysis | en |
dc.subject | Vocabulary learning | en |
dc.title | Vocabulary Learning Support System based on Automatic Image Captioning Technology | en |
dc.type | conference paper | - |
dc.type.niitype | Conference Paper | - |
dc.identifier.jtitle | Distributed, Ambient and Pervasive Interactions | en |
dc.identifier.spage | 346 | - |
dc.identifier.epage | 358 | - |
dc.relation.doi | 10.1007/978-3-030-21935-2_26 | - |
dc.textversion | author | - |
dc.address | Kyoto University | en |
dc.address | Kyoto University | en |
dc.address | Hacettepe University | en |
dc.address | Kyoto University | en |
dc.address | Tokyo University of Agriculture and Technology | en |
dc.address | Osaka University | en |
dcterms.accessRights | open access | - |
datacite.date.available | 2020-06-07 | - |
datacite.awardNumber | 16H06304 | - |
datacite.awardNumber | 17K12947 | - |
datacite.awardNumber | 18H05745 | - |
jpcoar.funderName | 日本学術振興会 | ja |
jpcoar.funderName | 日本学術振興会 | ja |
jpcoar.funderName | 日本学術振興会 | ja |
jpcoar.funderName.alternative | Japan Society for the Promotion of Science (JSPS) | en |
jpcoar.funderName.alternative | Japan Society for the Promotion of Science (JSPS) | en |
jpcoar.funderName.alternative | Japan Society for the Promotion of Science (JSPS) | en |
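The abstract above outlines a pipeline: caption a learner's photo, extract candidate vocabulary from the caption, and map it against the learner's prior vocabulary. As a rough illustration only, here is a minimal Python sketch of that idea; it is not the paper's implementation. Since the paper's trained Show and Tell model is not available here, a publicly available Hugging Face captioning model (nlpconnect/vit-gpt2-image-captioning) stands in as an assumption, and the stop-word list, learner profile, and input file learner_photo.jpg are hypothetical.

```python
# Hypothetical sketch of the caption-to-vocabulary pipeline described in the
# abstract: caption an image, keep content words as vocabulary candidates,
# then split them into new words vs. words to review (objective 3).
from transformers import pipeline

# Tiny illustrative stop-word list; a real system would use a fuller one.
STOPWORDS = {"a", "an", "the", "of", "in", "on", "at", "and", "with", "is", "are"}

def extract_vocabulary(caption: str) -> set[str]:
    """Tokenize a caption and keep content words as vocabulary candidates."""
    words = {w.strip(".,!?").lower() for w in caption.split()}
    return {w for w in words if w and w not in STOPWORDS}

def split_by_prior_knowledge(candidates: set[str], known: set[str]):
    """Map candidates against prior vocabulary: (new words, words to review)."""
    return candidates - known, candidates & known

if __name__ == "__main__":
    # Stand-in captioning model, not the Show and Tell model used in the paper.
    captioner = pipeline("image-to-text", model="nlpconnect/vit-gpt2-image-captioning")
    caption = captioner("learner_photo.jpg")[0]["generated_text"]  # hypothetical input
    candidates = extract_vocabulary(caption)
    new_words, review_words = split_by_prior_knowledge(candidates, {"dog", "park"})
    print(caption, new_words, review_words)
```

One picture thus yields several candidate words at once, which matches the abstract's second objective of learning multiple vocabulary items from a single image rather than one representative picture per word.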
Appears in Collections: | Journal Articles, etc. |

All items in this repository are protected by copyright.